Content Operations | Scriptorium (https://www.scriptorium.com/)
Scriptorium delivers industry-leading insights for global content operations.

Futureproof your content ops for the coming knowledge collapse
https://www.scriptorium.com/2025/11/futureproof-your-content-ops-for-the-coming-knowledge-collapse/
Mon, 17 Nov 2025

What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape.

Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it?

Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right.

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara and one of the leading voices both in content ops and in understanding the importance of AI in technical content. He’s had a long career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome.

Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity.

SO: Well, I appreciate you being here. I would not describe you as anti-technology, and you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here? 

MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer, as we used to call ourselves. I just started my 45th year in the profession. I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was also fortunate to begin with AI in the early 90s, when it was called expert systems. And then through the evolution of Watson and when generative AI really hit the mainstream, for those of us who had been involved for a long time, there was no surprise; we were already pretty well-versed. What we didn’t expect was acceleration at this speed. So what I like to say sometimes is that the thing changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today. But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the large language model companies spent a fortune on chips and systems to train the big models on everything they could possibly get their hands and fingers into. And they did that originally several years ago. And the assumption, especially for critical knowledge, is that that knowledge is static. Now, they do rescan the sources on the web, but that’s no guarantee that those sources have been updated. Or the new content conflicts with or confuses the old content. How do they tell the difference between the 13 different versions of IBM Db2, and how you do different tasks across those 13 versions? And can you imagine, especially in software, where a lot of us work, the thousands and thousands of changes that are made to those programs, their user interfaces, and their functionality?

MI: And unless that content is kept up to date, and not only the large language models reconsume it but also the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. Especially in many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the people, the knowledge workers, who not only create new content but update it and deal with the technical debt, so that this house of cards doesn’t collapse.

SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation. But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced and has to be removed on an ongoing basis. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. It evolved, entropy happened, the product changed. And now there’s this delta, this gap, between the way it was documented and the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change.

MI: Mm-hmm. Yeah. I think it’s even more immediate than that. I think you’re right, but we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two. And say we start using AI to draft brand-new content in the prototyping phase, setting aside updating the old content. We’re moving that further left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with it in the first place, but now we’re compressed even more. So we now need to look at how AI helps us do even that piece of it, let alone a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, including new content with this compressed development cycle.

SO: So the AI hype says we essentially don’t need people anymore, and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of an agentic workflow. But you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do?

MI: That’s just the question of the day. Those of us working in shops with engineering resources are one end of a continuum. I have direct engineers that work for me and an extended engineering team, and so do the likes of Amazon and other sizable shops with resources. But a lot of shops are smaller. They don’t have access to their own dedicated content systems engineers, or even to their IT team to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So today you build it yourself, and that’s happening. People are developing individual tools using AI, while the more advanced shops are looking at developing entire agentic workflows.

And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use “content creators” a little more loosely, because as we move the process left, we involve our engineers, our programmers, earlier in the phase, like they used to be, by the way. They used to write big specifications in my day. Boy, I want to go into a Gregorian chant: “Oh, in my day!” But they don’t do that anymore. And basically, the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview. We use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication. It turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role, which is why we have a lot of different roles, by the way. And if we’re going to replace or accelerate people with AI, it has to handle all the capabilities of all those roles. So where we are today is that some of the more leading-edge shops are going ahead, and we’re looking at ways to ingest new knowledge and use that new knowledge with AI to draft new or updated content. But there are limitations to that. So I want to be very clear: I am super bullish on AI. I use it every single day. I’m using it to help me write my novel. I’m using it to learn about astrophotography. I use it for so much. But when the tasks are critical, when they’re regulatory, when they’re legal-related, when there’s liability involved, that’s the kind of content that we cannot afford to be wrong. We have to be right. We have to be 100% in many cases.

Whereas with other kinds of applications, we can very well be wrong. I always say AI and large language models are great on general knowledge that’s been around for years and evolves very slowly. But some things move and change very quickly; in my business, it’s tax rates. There are thousands and thousands of jurisdictions. Every tax rate is different, and they change them. So you have to be 100% accurate, or you’re going to pay a heck of a financial penalty for being wrong. So we are moving left. We are pulling knowledge from updated sources: videos that we can record, extract, and capture; Figma designs; even code, to the limited degree that there are assets in there that can be captured; and other collateral. And we’re able to build out initial drafts. It’s pretty simple. Several companies are doing this right now, including my own team. And then the question comes: how good can it be initially? What can we do to improve it, to make it as good as it can be? And then what is the downstream process for ensuring the validity and quality of that content? What are the rubrics that we’re going to use to govern that? And therein is where most of the leading edge, or bleeding edge, or even hemorrhaging edge, is right now.

SO: Yeah, and this is not really a new problem, and it’s not a problem specific to AI either. We’ve had numerous projects where there was a delta between the as-designed documentation, let’s say the product design docs, the engineering content, and the code, and the actual reality of the product walking out the door, the as-built product. That’s all the source material that you’re talking about, right, that we claw and scrape at. And I would like to also give a shout-out to the role of the anonymous source for the investigative journalists, because I feel like there’s some important stuff in there. But you go in there, you get all this as-designed stuff, right? Here’s the spec, here’s the code, here are the code comments, whatever. Or here’s the CAD for this hardware piece that we’re walking out the door. But the thing that actually comes down the factory assembly line or through the software compiler is different from what was documented in the designs, because reality sets in and changes get made. And in many, many, many cases, the role of the technical writer was to ensure that the content they were producing represented reality and not the artifacts they started from. So there’s a gap, and it was their job to close that gap so that the document goes out and it’s accurate, right? And when we talk about any sort of AI or automated workflow, any automation that does not take into account the gap between design and reality is going to run into problems. The level of problem depends on the accuracy of your source materials. Now, I wrote an article the other day and referred to the 100% accurate product specifications. I don’t know about you; I have seen one of those never in my life.

MI: Hahaha that’s absolutely true. That’s really true. 

SO: The promise we have here is that AI is going to speed things up, automate things, and make us more productive. And I think you and I both believe that’s true at a certain level. How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology itself? What does that conversation look like? What are the points that you try to make? What’s the roadmap for somebody that’s trying to, as you said, maybe in a smaller organization, navigate this with people that are all-in on “just get the AI to do it”?

MI: That’s a great question too, because we need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right. AI can still take a collection of collateral and get the order of the steps wrong. It can still include things it shouldn’t, or do too much. As professional writers, we’ve been trained to write minimalistically, and we can control some of that through prompting; some of it can be done with guardrails. But when you think about writing tech docs, some people might think we’re just documenting APIs or documenting tasks. We’ve always been heavily task-oriented, and you can extract all the correct steps, and all the correct steps in the right order, but what doesn’t come along with them, all too frequently and almost universally, is the context behind them, the why part of it. I always say we can extract great things from code for APIs, like endpoints or gets and puts and things like that. That’s great for creating reference documentation for programmers.
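The guardrail idea Iantosca describes, deterministic checks layered over probabilistic output, can be sketched in a few lines. This is a minimal illustration, not anything from Avalara’s actual tooling; the step names and ordering rules are hypothetical examples.

```python
# Minimal sketch of a deterministic guardrail: before publishing an
# AI-drafted procedure, validate the drafted steps against known
# ordering constraints. Step names and rules are hypothetical.

def check_step_order(steps, must_precede):
    """Return violations: pairs (before, after) where `before` is
    required to precede `after` but appears later or is missing."""
    position = {step: i for i, step in enumerate(steps)}
    violations = []
    for before, after in must_precede:
        if before in position and after in position:
            if position[before] > position[after]:
                violations.append((before, after))
        elif after in position and before not in position:
            violations.append((before, after))  # required step missing
    return violations

drafted = ["configure account", "install agent", "verify connection"]
rules = [("install agent", "configure account")]  # install must come first

print(check_step_order(drafted, rules))  # flags the out-of-order pair
```

A check like this cannot make the draft right, but it can deterministically catch one class of wrongness (scrambled steps) before the content ships.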

But the code doesn’t tell you the why, and it doesn’t tell you the exact steps. Now, maybe your Figma does. If your Figma has been done really well, if your design docs have been done really well and comprehensively, that can mitigate it tremendously. But what have we done in this business? We’ve actually let go more UX people than probably even writers, which is counterproductive. And then you’ve got things like the happy path and the alternate paths that could exist through the use of a product, or the edge cases, right? The what-ifs that occur. We should be, and are, able to do better with the happy path, but the happy path is not the only path. These are multifunction beasts that we’ve built. When we built iPhone apps, we often didn’t need documentation, because they did one thing and they did it really well. You take a piece of middleware, and it can be implemented a thousand different ways. You’re going to document it by example and maybe give some variants; you’re not going to pull that from a Figma design. You’re not going to pull that from code. There’s too much of it there, and it takes human judgment to look at it and say, this is important, this is less important, this is essential, this is non-essential, to actually deliver useful information to the end user. And we need to be able to show what we can produce, and continue to iterate and try to make it better and better, because someday we may actually get pretty darn close. With support articles and completed support case payloads, we were able to develop an AI workflow that very often was 70% to 100% accurate and ready to publish.

But when you talk about user guides and complex applications, it’s another story, because somebody builds a feature for a product, and that feature boils down not into a single article but into an entire collection of articles, typed into the kind of breakdown that we do for disclosure, such as concepts, tasks, references, Q&A. So AI has got to be able to do something much more complex, which is to look at content, classify it, and apply structure to separate those concerns. Because we know that when we deliver content in the electronic world, we’re no longer delivering PDF. Well, most of us are hopefully not delivering PDF books made up of long chapters that intersperse all of these different content types, because of the type of consumption, certainly not for AI and AI bots. So maybe the bottom line here is that we need to show what we can do. We need to show where the risks are. We need to document the risks, and then we need the owners, the business decision makers, to see those risks, understand those risks, and sign off on those risks. And if they sign off on the risks, then I, as a technology developer and an information developer, can sleep at night, because I was clear on what it can do today. And that is not a statement that says it’s not going to be able to do that tomorrow. It’s only a today statement, so that we can set expectations. And that’s the bottom line. How do we set expectations when there’s an easy button that Staples put in our face, and that’s the mentality of what AI is? It’s press a button and it’s automatic.
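The classification step described here, typing content into concepts, tasks, and references, can be sketched with a toy rule-based classifier. A real pipeline would use an LLM or a trained model; the heuristics and sample chunks below are illustrative assumptions only.

```python
# Toy classifier that types documentation chunks as concept, task, or
# reference. Real pipelines would use an LLM or trained model; these
# keyword/pattern heuristics are purely illustrative.
import re

def classify_chunk(text):
    if re.search(r"^\s*\d+\.\s", text, re.MULTILINE):
        return "task"        # numbered steps suggest a procedure
    if re.search(r"\b(returns|parameter|endpoint|GET|POST)\b", text):
        return "reference"   # API vocabulary suggests reference material
    return "concept"         # default: explanatory prose

chunks = [
    "1. Open the admin console.\n2. Select Billing.",
    "GET /v1/rates returns the current tax rate for a jurisdiction.",
    "Tax jurisdictions are regions that set their own rates.",
]
print([classify_chunk(c) for c in chunks])  # → ['task', 'reference', 'concept']
```

The point of the sketch is the separation of concerns: once chunks carry a content type, each type can be routed to its own template, review rubric, and delivery channel instead of being interspersed in one long chapter.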

SO: Yeah, and I did want to briefly touch on knowledge base articles, which are a really, really interesting problem, because in many cases you have knowledge base articles that are essentially bug fixes or edge cases: when I hold my finger just so and push the button over here, it blue-screens.

MI: Mm-hmm.

SO: And that article can be very context-specific, in the sense that you’re only going to see it if you have these five things installed on your system. And/or it can be temporal or time-limited, in the sense that once we fixed the bug, it’s no longer an issue. Okay. Well, so you have this knowledge base article and you feed it into your LLM as an information source going forward, but we fixed the bug. So how do we pull it back out again?

MI: I love that question. 

SO: I don’t!

MI: I love it. No, I’ve actually been working for a couple of years on this very particular problem. The first problem we have, Sarah, is we’ve been so resource-constrained that when doc shops built an operations model, the last thing they invested in was the operations and the operations automation. So when I’m at a conference and I have a big room of 300 professional technical doc folks, I love asking the simple question: how do you track your content? And inevitably, I get, yeah, well, we do it in Excel spreadsheets. When I ask who actually has a digital system of record, I get a few hands. And then I ask: does that digital system of record, for every piece of documentation you’ve ever published, span just the product doc, or does it actually span more than product doc, like your developer, your partner, your learning, your support, all these different things? Because the customer doesn’t look at us as those different functions. They look at us as one company, one product. And inevitably, I’m lucky if I get one hand in the audience that says, yeah, we actually are doing that. So the first thing they don’t have is a contemporary digital system of record from which we can know, and automate notifications about, when a piece of documentation should either be re-reviewed and revalidated or retired and taken out.
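The system of record Iantosca is asking for can start very small: a registry that records when each artifact was last validated and flags what is due for re-review or retirement. This is a minimal sketch; the field names, document IDs, and the 180-day review interval are assumptions for illustration, not any real product’s schema.

```python
# Minimal content system-of-record sketch: track when each artifact
# was last validated and flag items due for re-review or retirement.
# Field names and the review interval are illustrative assumptions.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=180)

def review_status(records, today):
    """Partition records into 'ok', 'review', and 'retire' buckets."""
    buckets = {"ok": [], "review": [], "retire": []}
    for rec in records:
        if rec.get("product_retired"):
            buckets["retire"].append(rec["id"])
        elif today - rec["last_validated"] > REVIEW_INTERVAL:
            buckets["review"].append(rec["id"])
        else:
            buckets["ok"].append(rec["id"])
    return buckets

docs = [
    {"id": "install-guide", "last_validated": date(2025, 9, 1)},
    {"id": "v2-migration", "last_validated": date(2024, 1, 15)},
    {"id": "legacy-api", "last_validated": date(2023, 6, 1), "product_retired": True},
]
print(review_status(docs, today=date(2025, 11, 17)))
```

Even a registry this crude, run on a schedule, turns “we track it in spreadsheets” into automated notifications about what needs revalidation and what should come out of the corpus entirely.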

The other problem we have is that almost all of these AI implementations, not completely, but most of them, were based on building vector databases. And what the companies did, often completely ignoring the doc team, was just go out to the different sources they had available: Confluence, SharePoint. If you had a CCMS, they’d ask you for access to your CCMS or your content delivery platform, and they’d suck it in. They may date-stamp it, which is okay but pretty rudimentary. And they may even have methods for rereading those sources every once in a while, but unless they’re rebuilding the entire vector database, what did they do when they ingested the content? They shredded it up into a million different pieces, right? Because the context windows for large language models have limitations on token counts and things like that. Maybe they’re bigger today, but they’re still limited. So how would they even replace a fragment of what used to be whole topics and whole collections of topics? And this is why we wrote the paper, did the implementation, and shared with the world what we call the document object model knowledge graph: we needed a way, outside of the vector database, to say go look over here, and you can retrieve the original entire topic, or a collection of topics, or related topics in their entirety, to deliver to the user. And again, unless we update that content and stop treating it like a frozen snapshot in time, we’ll still have those content debt problems. But it’s becoming a much bigger problem now. It wasn’t as big a problem when we put out chatbots. And we’ve been building chatbots for what, two, three, four years now? And everybody celebrated, they popped the corks: we can deflect X percent of support cases. Customers can self-service.
And I always talk about the precision paradox: once you reach a certain ceiling, it gets really hard to increment above that 70%, 80%, 85%, 90% window. And as you get closer and better, the tolerance for being wrong goes down like a rock. And you now have a real big problem.
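The retrieval idea behind the document object model knowledge graph that Iantosca mentions can be sketched as a side index that maps every vector-store chunk back to its intact source topic, so a hit can be expanded into whole topics with their metadata instead of shredded fragments. The data structures below are illustrative assumptions, not the published implementation.

```python
# Sketch of whole-topic retrieval: a side index maps vector-store
# chunks back to intact topics kept outside the vector database,
# so hits return whole topics (with metadata and related topics)
# rather than shredded fragments. Structures are illustrative.

topics = {  # intact topics, preserved outside the vector database
    "t1": {"title": "Configure tax rates", "body": "...full topic...",
           "metadata": {"product": "AvaTax", "version": "2025.2"},
           "related": ["t2"]},
    "t2": {"title": "Tax rate concepts", "body": "...full topic...",
           "metadata": {"product": "AvaTax", "version": "2025.2"},
           "related": ["t1"]},
}

chunk_to_topic = {"chunk-17": "t1", "chunk-18": "t1", "chunk-42": "t2"}

def expand_hit(chunk_id, include_related=True):
    """Resolve a vector-store hit to its whole topic, plus neighbors."""
    topic_id = chunk_to_topic[chunk_id]
    result = [topics[topic_id]]
    if include_related:
        result += [topics[r] for r in topics[topic_id]["related"]]
    return [t["title"] for t in result]

print(expand_hit("chunk-17"))  # whole topic plus its related topic
```

Because the topics and their taxonomy metadata live outside the vector store, updating or retiring a topic is one write in one place, rather than hunting down a million stale fragments.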

So how do we make these guardrails more deterministic, to mitigate the probabilistic risk and reality that we have? The problem is that people are still looking for fast and quick, not right. When I say right, I mean building out things like ontologies and leveraging the taxonomies we labored over, with all of that metadata that never even gets into the vector database, because they strip it all away in addition to shredding it up. So if we don’t start building things like knowledge graphs and retaining all of that knowledge, we’re compounding the problem. Now we have debt, and we have no way to fix the debt. And now we get into the new world of agentic workflows, which is the true bleeding edge right now, where you have sequences of both agentic and agentive steps. The difference between those two, by the way: agentic is autonomous, there’s no human doing that task, it’s just doing it; agentive has a human in the loop, helping there. When you’ve got a mix of agentive and agentic processes in a business workflow, now you’ve got to worry about what happens if you get something wrong early in the chain of that workflow. And this doesn’t apply just to documentation, by the way. We’ll be seeing companies take very complex workflows in finance and in marketing and in business planning and reporting, and map out: this is the workflow our humans do. And there are hundreds, if not more, steps and many roles involved in those workflows. And as we map those out and ask where we can inject AI, not just as individual tools, like separately using a large language model or a single agent, but strung together to automate a complex business workflow with dependencies upstream and downstream, how are we going to survive and make this work?
And I think that’s why you saw the MIT study come out that said roughly only 5% or so of AI projects are succeeding. And I think that’s because we did the easy stuff first. We did the chatbots, and they could be lossy in terms of accuracy. But when you get to these agentic workflows that we’re building, literally coding as we speak, now you’re facing a whole different experience and ballgame, where precision and currency really matter.
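The agentic-versus-agentive distinction drawn above can be sketched as a pipeline in which autonomous steps run unattended, human-in-the-loop steps wait for sign-off, and a rejection early in the chain halts everything downstream. The step names and the approval callback are illustrative assumptions.

```python
# Sketch of a mixed agentic/agentive pipeline: "agentic" steps run
# autonomously, "agentive" steps require human sign-off, and a
# rejection early in the chain halts all downstream steps.
# Step names and the approve() callback are illustrative.

def run_pipeline(steps, approve):
    """steps: list of (name, mode, fn); approve(name, result) -> bool."""
    results = []
    for name, mode, fn in steps:
        result = fn()
        if mode == "agentive" and not approve(name, result):
            results.append((name, "halted"))
            return results  # stop: downstream steps depend on this one
        results.append((name, "done"))
    return results

steps = [
    ("extract-changes", "agentic", lambda: "42 UI changes found"),
    ("draft-docs", "agentive", lambda: "draft v1"),
    ("publish", "agentic", lambda: "published"),
]

# Reviewer rejects the draft, so publish never runs.
print(run_pipeline(steps, approve=lambda name, result: False))
```

The sketch makes the upstream/downstream dependency concrete: an error that slips past an early agentic step propagates into every later step, which is exactly why the placement of human-in-the-loop gates matters.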

SO: Yeah, and I think we’ve really only scratched the surface of this. Both of the articles that you’ve mentioned, the one that I started with and the one that you mentioned in this context, we’ll make sure we get those into the show notes. I believe they are on your, is it Medium? On your website. So we’ll get those links in there. Any final parting words in the last, I don’t know, fifteen seconds or so?

MI: No, that’s good. I want to tell you the good news and the bad news for tech doc professionals. What I’m seeing in the industry hurts me. I think there’s a lot of excuse-making right now, not just in the tech doc space but in all jobs, where we’re seeing AI being used as an excuse to make business decisions, to scale back. It may take some time until the impact of some of these poor business decisions reveals itself, but reality is going to hit. And the question is, how do we navigate the interim? I’m confident that we will. Those of us that are building the AI, I feel like I’m evil and a savior at the same time. I’m evil because I’m building automation that can speed people up and make them much more productive, meaning you potentially need fewer people. At the same time, I feel like when we do it, rather than an engineer that doesn’t even know the documentation space, we get to redefine our space ourselves and not leave it to the whims of people that don’t understand the incredible intricacy and dependencies of creating what we know as high-quality content. So we’re in this tumult right now. I think we’re going to come out of it. I can’t tell you what that window looks like, and there will be challenges along the way, but I would rather see this community redefine its own future in this transformation, which is unavoidable. It’s not going away. It’s going to accelerate and get more serious. But if we don’t define ourselves, others will. And I think that’s the message I want our community to take away. So when we go to conferences and we show what we’re doing, and we’re open and sharing all the stuff that we’re doing, that’s not “hi, look at us.” That’s: come back to the next conference and the next webinar and show us what you took from us and made better, and help shape and mold this transformative industry that we know as knowledge and content.
And I’m excited because I want to celebrate every single advance that I see as we share. And I think it’s incumbent upon us to share and be vocal. And I think when I write my articles, they’re aimed at not only our own community, they’re aimed at the executives and technologists themselves to educate them, so that if we don’t do it, who will? And it does fall on all of us to do that.

SO: I think I’m going to leave it there, with a call for the executives to pay attention to what you are saying, and to what many in this community are saying. So, Michael, thank you very much for taking the time. I look forward to seeing you at the next conference and seeing what more you’ve come up with. And we will see you soon.

MI: Thank you very much.

SO: Thank you.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

Want more content ops insights? Download our book, Content Transformation.


The five stages of content debt
https://www.scriptorium.com/2025/11/the-five-stages-of-content-debt/
Mon, 03 Nov 2025

Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt, from denial to action. Sarah and Dipo share how to navigate each stage to position your content—and your AI—for accuracy, scalability, and global growth.

The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution.

— Dipo Ajose-Coker

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe and I’m here today with Dipo Ajose-Coker. He is a Solutions Architect and Strategist at RWS, based in France. His strategy work is focused on content technology. Hey, Dipo.

Dipo Ajose-Coker: Hey there, Sarah. Thanks for having me on.

SO: Yeah, how are you doing?

DA-C: Hanging in there. It’s a sunny, cold day, but the wind’s blowing.

SO: So in this episode, we wanted to talk about moving forward with your content and how you can make improvements to it and address some of the gaps that you have in terms of development and delivery and all the rest of it. And Dipo’s come up with a way of looking at this that is a framework that I think is actually extremely helpful. So Dipo, tell us about how you look at content debt.

DA-C: Okay, thanks. First of all, I think before I go into my little thing that I put up, what is content debt? I think it’d be great to talk about that. It’s kind of like technical debt. It refers to that future work that you keep storing up because you’ve been taking shortcuts to try and deliver on time. You’ve let quality slip. You’ve had consultants come in and out every three months, and they’ve just been putting… I mean writing consultants.

SO: These consultants.

DA-C: And they’ve been basically doing stuff in a rush to try and get your product out on time. And over time, those sort of little errors, those sort of shortcuts will build up and you end up with missing metadata or inconsistent styles. The content is okay for now, but as you go forward, you find you’re building up a big debt of all these little fixes. And these little fixes will eventually add up and then end up as a big debt to pay.

SO: And I saw an interesting post just a couple of days ago where somebody said that tech debt or content debt, you could think of it as having principal and interest, and the interest accumulates over time. So the less work you do to pay down your content debt, the bigger and bigger and bigger it gets, right? It just keeps snowballing and eventually you find yourself with an enormous problem. So as you were looking at this idea of content debt, you came up with a framework for looking at this that is at once shiny and new and also very familiar. So what was it?

DA-C: Yeah, really familiar. I think everyone’s heard of the five stages of grief, and I thought, “Well, how about applying that to content debt?” And so I came up with the five stages of content debt. So let’s go into it.

I’m not going to keep referring to the grief part of it. You can all look it up, but the first stage is denial. “Our content is fine. We just need a better search engine. We can actually put it into this shiny new content delivery platform and it’s got this type of search,” and so on and so forth. Basically what you’re doing is you’re ignoring the growing mess. You’re duplicating content. You’ve got outdated docs. You’re building silos, and then you’re ignoring that these silos are actually getting even further and further apart. No one wants to admit that the CMS or whatever system, bespoke system that you’ve put into place, is just a patchwork of workarounds.

This quietly builds your content debt until, actually the longer denial lasts, the more expensive that cleanup is. As we said in that first bit, you want to pay off the capital of your debt as quickly as possible. Anyone with a mortgage knows that. You come into a little bit of money, pay off as much capital as you can so that you stop accruing that debt, the interest on the debt.

SO: And that is where when we talk about AI-based workflows, I feel like that is firmly situated in denial. Basically, “Yeah, we’ve got some issues, but the AI will fix it. The AI will make it all better.” Now, we painfully know that that’s probably not true, so we move ourselves out of denial. And then what?

DA-C: There we go into anger.

SO: Of course.

DA-C: “Why can’t we find anything? Why does every update take two weeks?” And that was a question we used to get regularly where I used to work at a global medical device manufacturer. We had to change one short sentence because of a spec change, and it took weeks to do that. Authors are wasting time looking for reusable content if they don’t have an efficient CCMS. Your review cycles drag on because all you’re doing is giving the entire 600-page PDF to the reviewer without highlighting what’s changed. Your translation costs balloon, and your project managers or leadership get angry because, “Well, we only changed one word. Can’t you just use Google Translate? It should only cost like five cents.” Compliance teams then start raising flags. And if you’re in a regulated industry, you don’t want the compliance teams on your back, and you especially don’t want defects out in the field. So eventually, productivity drops and your teams feel stuck. The cracks are now starting to show across other departments, and you’re putting a bad name on your doc team.

SO: Yeah. And a lot of this, what you’ve got here, is the anger that’s focused inward to a certain extent. It’s the authors that are angry at everybody. I’ve also seen this play out as management saying, “Where are our docs? We have this team, we’re spending all this money, and updates take six months.” Or people submit update requests, tickets, something, the content doesn’t get into the docs, the docs don’t get updated. There’s a six-month lag. Now the SOP, the standard operating procedure, is out of sync with what people are actually doing on the factory floor, which it turns out, again, if you’re in medical devices, is extremely bad and will lead to your factory getting shut down, which is not what you want generally.

DA-C: Yeah, it’s not a good position to be in.

SO: And then there’s anger.

DA-C: Yeah.

SO: “Why aren’t they doing their job?” And yet you’ve got this group that’s doing the best that they can within their constraints, which are, as you said, in a lot of cases, very inefficient workflows, the wrong tool sets, not a lot of support, etc. Okay, so everybody’s mad. And then what?

DA-C: Everyone’s mad, and eventually, actually this is a closed little loop because all you then do is say, “Okay, well, we’re going to take a shortcut,” and you’ve just added to your content debt. So this stage is actually one of the most dangerous of the parts of it because all you end up trying to do without actually solving the problem is just add to the debt. “Let’s take a shortcut here, let’s do this.”

The next stage is the blame stage. “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building that content debt, or they start looking for a scalable solution.

SO: Right. And this is the point at which people look at it and say, “Why can’t we just use AI to fix all of this?”

DA-C: Yep, and we all know what happens when you point AI at garbage in. We’ve got the saying, and this saying has been true from the beginning of computing, garbage in, garbage out, GIGO.

SO: Time.

DA-C: Yeah. I changed that to computing.

SO: Yeah. It’s really interesting though because the blame that goes around, I’ve talked to a lot of executives who, and we’re right back to anger too, it is sort of like, “We’ve never had to invest in this before. Why are you telling us that this organization, this group, this tech writers, content ops,” whatever you want to call it, “that they are going to need enterprise tools just like everybody else?” And they are just halfway astounded and halfway offended that these worker bees that were running around doing their thing…

DA-C: Glorified secretaries.

SO: Yeah, that whole thing, like, “How dare they?” And it can be helpful, sometimes it is and sometimes it isn’t, to say, “Well, you’ve invested in tools for your developers. You wouldn’t dream of writing software without source control, I assume,” although let’s not go down the rabbit hole of vibe coding.

DA-C: Let’s not go down that one.

SO: And the fact that there are already people with the job title of vibe coding remediation specialist.

DA-C: Nice.

SO: Yeah. So that’s going to be a growth industry.

DA-C: Nice work, if you can get it.

SO: But this blame thing is we are saying, “This is an asset. You need to invest in it. You need to manage it. You need to depreciate it just like anything else. And if you don’t invest properly, you’re going to have some big problems.” And to your-

DA-C: A lot of that-

SO: Yeah, they don’t want to do it. They’re horrified.

DA-C: Yeah. A lot of that comes down to looking at docs departments as cost centers. “They’re costing us money. We’re paying all these people to produce this stuff that people don’t read, that the users don’t want.” But if you look at it properly, deeply, the documentation department can be classed as a revenue generator. What are your sales teams pointing prospects at? They’re pointing at the docs. Where are prospects getting the information about how things work? The docs. And what are your support people using, especially when they’re digging through content trying to find a solution?

I know I do this. I go and look at the user manuals. And the first thing I want to see is that they’re properly written. If I see something that does not properly describe the gadget or whatever I’m trying to buy, then I’m like, “Well, if you’ve taken shortcuts there, you’ve probably done the same with the actual thing that I’m going to buy.” So I’m going to walk away.

Then there’s reducing costs for support centers. If your customers can quickly find the information that describes the exact problem they’re trying to solve, then you’ve got fewer calls to your help center. And think about escalation between support levels. I don’t know how the numbering goes, so let’s say level three is the lowest level: if that person cannot find information that is true, clear, and from one source of truth, the case escalates to the level-two person, who you’re paying a lot more. If that person can’t find it, it moves on again. So it’s basically costing you a lot of money not to have good documentation. It’s a revenue generator.

SO: So my experience has been that the blame phase is perhaps the longest of all the phases.

DA-C: Yeah.

SO: And some organizations just get stuck there forever and they blame different people every year. I’ve also, I’m sure you’ve seen this as well, we were talking about reorganizing. “Well, okay, the tech writers are all in one group. Let’s burst them out and put them all on the product team.”

DA-C: Yes.

SO: “So you go on product team A and you go on product team B and you go on product team C.” And I talk to people about this and they say, “This is terrible and I don’t want to do it.” I’m like, “It’s fine, just wait two years.”

DA-C: Yeah.

SO: Because it won’t work, and then they’ll put them all back together. Ultimately, I’m not sure it matters whether they’re together or apart, because we fall into this sort of weird intermediate thing. What matters is that somebody somewhere understands the value, to your point, and makes the investment. I don’t care if you do that in a single group or in a cross-functional matrix, blah, blah, but here we are. All right. So eventually, hopefully, we exit blame.

DA-C: And then we move into acceptance.

SO: Do we?

DA-C: “Okay, we need a better way to manage that.” And this is like when people start contacting you, Sarah, it’s like, “I’ve heard there’s a better way to manage this. Somebody’s talked to me about there’s something called the component content management system or the structured content,” and all of this.

So teams start to acknowledge, one, that they’ve got debt and that debt is growing. Then they start auditing that content and then really seeing that, “Oh, well, yes, things are really going bad. We’ve got 15 versions of this same document living in different spaces in different countries. The translations always cost us a bomb.” So leadership then starts budgeting for a transformation.

This is where they start doing their research and find structured content and component reuse; they enter the conversation. If they look at their software departments, software departments reuse stuff. You’ve got libraries of objects. Variables are the simplest form of that reuse. And they’ve been using this for years. And so, “Well, why aren’t we doing this? Oh, there’s DITA, there’s metadata. We can govern our content better. We can collaborate using this tool.” So there is a better way to do this. And then we know what to do.
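[Editor’s note: the variable-based reuse Dipo mentions can be sketched in a few lines. This is a hypothetical illustration, not the API of any real CCMS; the variable names and placeholder syntax are invented. The point is that a value defined once flows into every deliverable that references it.]

```python
# Minimal sketch of variable-based content reuse (hypothetical, not a real CCMS API).
# A variable is defined once and referenced from many topics, so a product rename
# becomes a one-line change instead of an edit in every document.

VARIABLES = {
    "product_name": "AcmeWidget Pro",
    "support_email": "support@example.com",
}

def resolve(topic_text: str, variables: dict[str, str]) -> str:
    """Replace {var} placeholders with their single-sourced values."""
    for name, value in variables.items():
        topic_text = topic_text.replace("{" + name + "}", value)
    return topic_text

topic = "Contact {support_email} if {product_name} fails to start."
print(resolve(topic, VARIABLES))
```

When the product is renamed, only the `VARIABLES` entry changes; every topic that references `{product_name}` picks up the new value on the next publish.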

SO: I feel like a lot of times the people that reach out to us are in stage four, they’ve reached acceptance, but their management is still back in anger and bargaining and denial and all the rest of that.

DA-C: They’re still blaming and trying to find a reason.

SO: Yeah, blaming and all of it, just, “How dare you?” All right, so we acknowledge that we have a problem, which I think is actually the first step in a different twelve-step process, but okay.

DA-C: Yeah.

SO: And then what?

DA-C: And then there’s action. Let’s start fixing this before it gets totally out of control, before it gets worse. They start investing in structured content authoring platforms like Tridion Docs (I work for RWS, I’ve got to mention it). They start speaking with experts, doing that research, listening to their documentation team leaders, speaking with content strategists to define what the content model is, first of all, and then where they can optimize efficiency with a reuse strategy. Reuse without a strategy is just asking for trouble. You’re basically going to end up duplicating content.

And then you’ve got to govern how that is used. What rules have you got in place and what ways have you got to implement those rules? The old job of having an editor used to work in the good old days where you’d print something off and somebody would sign it off and so on and so forth. Now, we’re having to deliver content really quickly and we’re using a lot of technology to do that. And so, well, you need to use technology to govern how that content is being created.

Then your content becomes an asset. It’s no longer a liability. This is where that transformation happens, and then you start paying down your content debt. You’re able to scale the content that you’re creating a lot faster without raising the number of the headcount, without having to hire more people. And if you want to then really expand, let’s say, because you’ve got this really great operation now and you’re able to create that content that takes hours and not weeks, then you’re able to expand your market. You’re able to say, “Okay, well, now we’re going to tackle the Brazilian market. Now, we can move into China because they’ve got different regulations.”

Again, I speak a lot on the regulatory side of things. That’s where I spent most of my time as a technical writer. Having different content for different regulatory regimes and so on is just such a headache when you don’t have something that helps you apply structure to that content, apply rules to that content, and make sure that your workflows are carried out the way you set them out six months ago, even as people change and start doing their own thing again. If your organization is stuck at stages one to three, as I just described them, it’s basically time to move.

SO: Yeah, I think it’s interesting thinking about this in the larger context of when we talk about writers, the act of writing, right?

DA-C: Yes.

SO: Culturally, that word or that process is really loaded with this idea of a single human in an attic somewhere writing the great American or French or British novel, writing a great piece of literature or creating a piece of art on their own, by themselves, in solitude. And of course, we know that technical writing-

DA-C: Starting at A and going all the way to Z.

SO: And we know that technical writing is not that at all, but it does really feel as though when we describe what it means to be a writer or a content creator in a structured content environment, it is just the 180 degree opposite of what it means to be a writer. It’s not the same thing. You are a creator of these little components. They all get put together. We need consistent voice and tone. You have to kind of subordinate your own voice and your own style to the corporate style and to the regulatory and to all the rest of it. And so it’s just this sort of… I think we maybe sometimes underestimate the level of cultural push and pull that there is between what it is to be a writer and what it is to be a technical writer.

DA-C: Yes.

SO: Or a technical communicator or content creator, whatever you want to call that role. Okay, so we’ve talked about a lot of this and then we’ve not talked a lot about AI, but a big chunk of this is that when you move into an environment where you are using AI for end users to access your content, so they go through a chatbot to get to the content or they’re consulting ChatGPT or something like that, and asking, “Tell me about X.” All of the things that you’re describing in terms of content debt play into the AI not performing, the content not getting in there, not being delivered. So what does it look like? What are some of the specifics of good source content, of paying down the debt and moving into this environment where the content is getting better? What does that mean? What do I actually have to do? We’ve talked about tools.

DA-C: Yeah. So first, you’ve got to understand how AI accesses content and how large language models get trained. AI interprets patterns as meaning. If your content deviates from pattern predictability, then you’re going to get what we call hallucinations. And so if you ask ChatGPT without plugging it in as an enterprise AI that’s really been trained on your own content, you get all sorts of hallucinations. Basically, it’s taken two PDFs that have similar information but two different conclusions. And so you’re looking for the conclusion in document A, but ChatGPT has given you the one in B. It’s mixed and matched those because it does not know how one bit of information relates to the other.

So good source content needs to be accurate. Your facts are correct, and they reflect the current state of the product or subject. It needs to be kept up to date. You need to have single copies of it; that’s what we mean by a single source of truth. You cannot have two sources of truth. It’s either black or it’s white. There are no gray zones with AI; it will hallucinate. And you’ve got to have consistency in style and tone.

How do you get that? Well, you’ve got the brand and the way we speak. In French, you would ask, “Do you vouvoie or do you tutoie?” Do you use the formal voice, the formal tone, or do you speak like you’re speaking with your friends? How do you enforce some of that? Well, you can use controlled terminology. These are special terms that you’ve defined, a special voice. But the gold part of it is having that structured formatting and presentation. There’s always a logical structure and sequence to the way that you present that information. Your headings, subheadings, steps, and lists are always displayed in the same way. You’ve defined an information architecture to provide that pattern. And the way AI then understands or creates relationships within those patterns is from the metadata that you’re adding onto it.

And so good source content is accurate, up to date, consistent in style and tone, uses controlled terminology, and has structure in its formatting. Forget the visual presentation; that comes at the end, what it looks like, how pretty it is. What matters is the structural presentation: I always start with a short description, then I follow up with the required tools, and then I describe any prerequisites. And that is the way every one of my writers contributes to this central repository of knowledge, this single repository of knowledge.

And you can do that as well if you’ve got a great CCMS by using templates, building templates into that CCMS so that it guides the author. And the author no longer has to think about, “Oh, how is this going to look? Should I be coloring my tables green, red, blue? Should they be this wide?” They’re basically filling in a template form. And some of the standards that we’ve developed like DITA allow you to do this, allow you to have a particular pattern for creating that information and the ability to put it into a template which is managed by your CCMS.
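[Editor’s note: the template idea Dipo describes can be approximated even outside a CCMS. As a minimal sketch, with section names and ordering rules invented for illustration, a script can check that every topic follows the agreed pattern of short description, then required tools, then prerequisites, then steps, and flag deviations before they reach the repository. Real systems enforce this through DITA content models and schema validation rather than a script like this.]

```python
# Hypothetical structure check: every task topic must present its sections
# in the agreed order. The section names and rules are illustrative only;
# a real CCMS would enforce this via DITA or schema validation.

REQUIRED_ORDER = ["short_description", "required_tools", "prerequisites", "steps"]

def check_topic(sections: list[str]) -> list[str]:
    """Return a list of problems; an empty list means the topic matches the pattern."""
    problems = []
    for name in REQUIRED_ORDER:
        if name not in sections:
            problems.append(f"missing section: {name}")
    # Sections that are present must appear in the canonical order.
    present = [s for s in sections if s in REQUIRED_ORDER]
    expected = [s for s in REQUIRED_ORDER if s in sections]
    if present != expected:
        problems.append("sections out of order")
    return problems

print(check_topic(["short_description", "required_tools", "prerequisites", "steps"]))  # []
print(check_topic(["steps", "short_description"]))
```

The payoff Dipo describes follows directly: authors fill in a template instead of making layout decisions, and the predictable pattern is what both human readers and AI systems learn to rely on.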

SO: Yeah, and that’s the roadmap, right? We talk about how as a human, if I’m looking at content and I notice that it’s formatted differently, like, “Oh, they bolded this word here but not there,” and I start thinking, “Well, was that meaningful?”

DA-C: Yeah.

SO: And at some point, I decide, “No, it was just sloppy and somebody screwed up and didn’t bold the thing.” But AI will infer meaning from pattern deviations.

DA-C: Yeah.

SO: And so the more consistent the information is in all the levels that you’ve described, the more likely it is that it will process it correctly and give you the right outcome. Okay, so that seems like maybe the place that we need to wrap this up and say, folks, you have content debt. Dipo is giving you a handy roadmap for how to understand your content debt and understand the process of coming to terms with your content debt, and then figuring out how and where to move forward. So any closing thoughts on that before we say good luck to everybody?

DA-C: Most enterprises today have already jumped on the AI bandwagon; they’re already trying to put it in. But at the same time, start taking a look at your content to ensure that it is structured and has semantic meaning. Because the day you start training your large language model on that content, if you’ve not built those relationships into it, it’s like teaching a kid bad habits: they’re just going to keep doing them. Basically, train your AI right the first time by having content that is structured and semantic, and you’ll find your AI outcomes are a lot more successful.

SO: So I’m hearing that AI is basically a toddler? Okay. Well, I think we’ll leave it there. Dipo, thanks, it’s great to see you as always.

DA-C: Thanks for having me.

SO: Everybody, thank you for joining us, and we’ll see you on the next one.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

Want more content ops insights? Download our book, Content Transformation.


Balancing automation, accuracy, and authenticity: AI in localization
https://www.scriptorium.com/2025/10/balancing-automation-accuracy-and-authenticity-ai-in-localization/ (Mon, 20 Oct 2025)

How can global brands use AI in localization without losing accuracy, cultural nuance, and brand integrity? In this podcast, host Bill Swallow and guest Steve Maule explore the opportunities, risks, and evolving roles that AI brings to the localization process.

The most common workflow shift in translation is to start with AI output, then have a human being review some or all of that output. It’s rare that enterprise-level companies want a fully human translation. However, one of the concerns that a lot of enterprises have about using AI is security and confidentiality. We have some customers where it’s written in our contract that we must not use AI as part of the translation process. Now, that could be for specific content types only, but they don’t want to risk personal data being leaked. In general, though, the default service now for what I’d call regular, common translation is post-editing, or human review of AI content. The biggest change is that this has really become the norm.

— Steve Maule, VP of Global Sales at Acclaro

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Bill Swallow: Hi, I’m Bill Swallow, and today I have with me Steve Maule from Acclaro. In this episode, we’ll talk about the benefits and pitfalls of AI in localization. Welcome, Steve.

Steve Maule: Thanks, Bill. Pleasure to be here. Thanks for inviting me.

BS: Absolutely. Can you tell us a little bit about yourself and your work with Acclaro?

SM: Yeah, sure, sure. So I’m Steve Maule, currently the VP of Global Sales at Acclaro, and Acclaro is a fast-growing language services provider. So I’m based in Manchester in the UK, in the northwest of England, and I’ve been now in this industry, and I say this industry, the language industry, the localization industry for about 16 years, always in various sales, business development, or leadership roles.

So like I say, we’re a language services provider. And I suppose the way we try and talk about ourselves is we try and be that trusted partner to some of the world’s biggest brands and the world’s fastest growing global companies. And we see it Bill as our mission to harness that powerful combination of human expertise with cutting edge technology, whether it be AI or other technology. And the mission is to put brands in the heads, hearts, and hands of people everywhere.

BS: Actually, that’s a good lead in because my first question to you is going to be where do you see AI and localization, especially with a focus of being kind of the trusted partner for human-to-human communication?

SM: My first answer to that would be that it’s no longer the future; AI is the now. And I think whatever role people play in our industry, whether you’re like Acclaro, a language services provider offering services to those global brands, whether you’re a technology provider, whether you run localization or localized content in an enterprise, or even if you’re what I’d call an individual contributor, maybe a linguist or a language professional, AI has already changed what you do and how you go about your business. And I think that’s only going to continue and develop. So I actually think we’re going to stop talking about AI at some stage relatively soon. It’s just going to be all-pervasive and all-invasive.

BS: It’ll be the norm. Yeah.

SM: Absolutely. We don’t talk any more about the internet in many, many industries, and we won’t talk about AI. It’ll just become the norm. And localization, I don’t think is unique in that respect. But I do think that if you think about the genesis of large language models and where they came from, I think localization is probably one of the primary and one of the first use cases for generative AI and for LLMs.

BS: Right. The industry started out decades ago with machine translation, which was really born out of pattern matching, and it’s just grown over time.

SM: Absolutely. And I remember when I joined the industry, what did I say? So 2009, it would’ve been when I joined the industry. And I had friends asking me, what do you mean people pay you for translation and pay for language services? I’ve just got this new thing on my phone, it’s called Google Translate. Why are we paying any companies for translation? So you’re absolutely right, and I think obviously machine translation had been around for decades before I joined the industry. So yeah, I think that question has come into focus a lot more with every sort of, I was going to say, every year that passes, quite honestly, it’s every three months.

BS: If that.

SM: Exactly, yeah. Why do companies like Acclaro still exist? And I think there are probably a lot of people in the industry who, if you think about the boom in gen AI over the last two, two and a half years, see it as a very real existential threat. But more and more, what I’m seeing amongst our client base, our competitors, and other actors in the industry, the tech companies, is that a lot more people are seeing it as an opportunity for the language industry and for the localization industry.

BS: So about those opportunities, what are you seeing there?

SM: I think one of the biggest things, it doesn’t matter what role you play, whether you’re an individual linguist or whether you’re a company like ours, I think there’s a shift in roles and the traditional, I suppose most of what I dealt with 16 years ago was a human being doing translation, another human being doing some editing. There were obviously computers and tools involved, but it was a very human-led process. I think we’re seeing now a lot of those roles changing. Translators are becoming language strategists; they’re becoming quality guardians. Project managers are becoming sort of almost like solutions architects or data owners. So I think that there’s a real change.

And personally, and I guess this is what this podcast is all about, I don’t see those roles going away, but I do see them changing and developing. And in some cases, I think it’s going to be for the better. And because there’s all this doubt and uncertainty and sense of threat, people want to be shown the way. They want companies like ours, and others like it, to lead the way in terms of how people who manage localized content can implement AI.

BS: Yeah. We’re seeing something similar in the content space as well. I know there was a big fear, certainly a couple of years ago, or even last year, that, oh, AI is going to take all the writing jobs because everyone saw what ChatGPT could do until they really started peeling back the layers and go, well, this is great. It spit out a bunch of words, it sounds great, but it really doesn’t say anything. It just kind of glosses over a lot of information and kind of presents you with the summary. But what we’re seeing now is that a lot of people, at least on the writing side, yeah, they’re using AI as a tool to automate away a lot of the mechanical bits of the work so that the writers can focus on quality.

SM: We’re seeing exactly the same thing. I had a customer say to me she wants AI to do the dishes while she concentrates on writing the poetry. So it is the mundane stuff, the stuff that has to be done, but it’s not that exciting. It’s mundane, it’s repetitive. Those have always been the tasks that have been first in line to be automated, first in line to be removed, first in line, to be improved. And I think that’s what we’re seeing with AI. 

BS: So on the plus side, you have AI potentially doing the dishes for you while you write poetry or learn to play the piano. What are some of the pitfalls you’re seeing with regard to AI and translation?

SM: I think there are a few, and it depends on whereabouts AI is used in the workflow, Bill. The very act of translation itself is a very common use of AI now, but there are also what I’m going to call translation-adjacent tasks across the entire workflow. So the answer would depend on that. But one of the biggest pitfalls of AI, and it was the same in 2009 when I joined the industry and friends of mine had this new thing in their pocket called Google Translate, is that it’s not always right. It’s not always accurate.

And even though the technology has come on leaps and bounds since then, and you had neural machine translation before large language models, it still isn’t always accurate. And as you mentioned before, it almost always sounds smooth and fluid and very polished; it sounds like it should be right. I’m in sales myself, so it could be a metaphor for a salesperson, couldn’t it? Not always right, but always confident. And I think there’s a danger there. For some types of translation, accuracy doesn’t actually matter that much. If the content is, I don’t know, some frequently asked questions on how to get your speaker to work, as a customer you’re going to be very patient if the language isn’t perfect, as long as it gets your speaker working. But there’s other content where accuracy is absolutely crucial. In some industries it could even be life or death.

But I go back to my first year or two in the industry, and we had a customer that made really good digital cameras, and they had a huge problem because their camera was water resistant, and one of their previous translators had translated it as waterproof. And of course, the customer takes it scuba diving or whatever they were doing with the digital camera, and the camera stops working because it wasn’t waterproof, it was just water resistant.

So sometimes a seemingly innocuous choice of term, it wasn’t life or death, but it was the difference between a thousand-dollar camera working or not. So I think accuracy is really critical, and even though the output sounds confident, it’s not always accurate. That’s one of the biggest pitfalls. Language is subjective; some things are black-and-white right or wrong, but other things are a lot more nuanced. And what we see, especially because a lot of the large language models are trained in English and on English data, is that they don’t always get the cultural or linguistic nuances of different markets.

We’ve seen examples in all sorts of markets. Arabic specifically requires careful handling because of the way certain language comes across. Japanese has its levels of politeness, and, as they say, 50 words for snow. Some things aren’t black or white in terms of whether they’re right or wrong; there are very gray areas in language. And however confident the output sounds, it’s not always culturally balanced or culturally sensitive.

BS: You don’t want it to imply anything or have anyone kind of just take away the wrong message because it was unclear or whatnot.

SM: Absolutely, absolutely. And especially when you’re thinking of branded content. Some of the companies we work with, and I’m sure some of the companies whose people listen to this podcast, spend millions on building their brand, first of all, and then protecting it in different markets. The wrong choice of language, the wrong translation, can put that at risk.

BS: Yeah. With branding, I assume that there’s a tone shift that you need to watch for. There’s certainly what you can and can’t say in certain contexts regarding the brand.

SM: Well, the other thing with GenAI translation is that, as you mentioned before, it’s a pattern-based technology, so the output can be quite inherently repetitive. And whilst it’ll be confident, whilst it’ll be polished, it doesn’t always take into account the creativity or the emotion. And less and less now are we seeing AI properly trained on a specific brand’s content; the models are really too big to be trained just on brand-specific content. So sometimes the messaging can appear quite generic, or not really in step with the identity that a brand wants to portray. I think most of our clients would agree that when it comes to brand, it can’t be left to the machines alone.

BS: And I would think that any use of AI or even machine translation in something with regard to branding, where you want to own that messaging and really tailor that messaging, you really don’t want to have other influences coming in from the wild. So I would imagine that with an AI model that’s trained to work in that environment, you really don’t want it to know that there’s an outside internet, there’s an outside world that it can harvest information from because you might be getting language from your competitors or what have you.

SM: Yeah, absolutely. Absolutely. You’re getting it from too many sources, when it really needs to be on brand. And there are other things we see as well. There are still quite common cases of bias and stereotyping because, like you say, it’s taking content, or data if you like, from all sorts of sources, and if there’s bias in there, you get misgendered language, especially with some target languages. In English it’s kind of fine, really, but in Spanish and French and German, you’ve got to choose a gender for every noun and every adjective in order to be accurate.

BS: Otherwise, it’s wrong.

SM: Yeah, absolutely. Yeah, absolutely. And because the models are built at such scale, it compounds over time. So without that active monitoring and without that human oversight, what might be a problem today will compound and be even worse tomorrow and in the months ahead.

BS: How about the way in which the translation process works? Have you seen AI really shifting a lot of those workflows?

SM: So the short answer is yes. By far the most common workflow with our customers now, if you’re looking at translation, is to start with AI output and have a human being review some or all of that output. When we’re working with enterprise-level companies, it’s now very, very rare that they’d want a fully human translation for most content. That said, one of the pitfalls we have seen, or one of the concerns, if you like, that a lot of enterprises have about using AI is security and confidentiality.

And in fact, we have some customers where it’s written into our contract that we must not use AI as part of the translation process. Now, that could be for some specific content types only, and a lot of the time it’s a factor of that particular customer’s attitude to risk or to confidentiality. But a lot of people are still very, very cautious about it. They don’t want to risk personal data being effectively leaked, or being used to train models and cross-pollinated, like your previous example. But in general, the default service now for what I’d call regular, common translation is post-editing, a human review of AI content. That’s probably the biggest change: that’s now really become the norm.

BS: Okay. We talked a lot about the pitfalls here, so let’s talk about some benefits of using AI in localization.

SM: Well, I think the first thing is scale. It just allows you to do so much more, because it significantly reduces, well, it doesn’t remove, but it significantly reduces those budget and time constraints that the traditional translation process used to have. You can translate content really fast, very affordably, and in huge volumes that you just couldn’t consider if that technology wasn’t there.

So you could argue you’ve always been able to do that since machine translation was available. But large language models do bring more fluency and more contextual understanding than those pattern-based machine translation models. Even though we’ve talked about some of the challenges around nuance and tone, they can improve style and tone. So we’ve seen a lot of benefits, and a really good opportunity, in pairing the two technologies, neural machine translation and large language models, and you can’t get away from the fact that they need to be guided by human expertise.

They can offer a really good balance of scale and quality that you weren’t able to achieve before. And this is what I would say to people who are worried about the existential threat of, oh my gosh, I’m a translator, so AI is taking my job: absolutely, it’s probably changing your job. But we see AI translation not replacing human translation, but replacing no translation. That mountain of content, the majority of content actually, that was never translated before because of time and budget constraints can now be translated to a certain level of quality. So we see the overall volume of localized content exploding, with a similar, or in some cases even greater, level of human involvement than before, but as a proportion of the overall volume, it’s a lot less, if that makes sense.

BS: Yeah. So what about multimedia? So audio and video, I know those have been traditionally a more difficult format to handle in localization, particularly when you may need to change the visuals along the way.

SM: If you ask any project manager in our company, those were traditionally the most expensive, most time-consuming types of projects to deliver. And you’re absolutely right: you make a mistake with terminology on a professional voiceover, the studio’s booked, the actor’s booked, and you want to change three or four terms? Okay, that’s fine. Rebook the studio, rebook the actor. It was traditionally, and I say traditionally, but we’re talking only three or four years ago, one of the most expensive forms of content to translate.

So what we see is that video localization and audio localization have been revolutionized by AI, and this is a great example of it replacing no translation. We had customers who would just say, we don’t want to dub that video, we don’t want to localize the audio, we just can’t afford it, we haven’t got the time. And now, with synthesized voices and synthesized video, the quality is very natural, very expressive, and you can produce training videos and product demos and all those kinds of marketing assets for various markets, content that used to cost lots and lots of money, at a tenth of the cost and probably more than ten times the speed.

BS: Nice. Yeah. One of the things we saw, particularly with machine translation, is that there was a pretty good check for accuracy built into a lot of those systems, but they weren’t quite a hundred percent. How does AI compare, given that it understands language a bit more? With regard to QA, how is it being leveraged?

SM: Well, they can understand more. It’s not just about accuracy and grammatical correctness and spelling errors; that sort of check has always been around, like you say, with machine translation. But LLMs can now evaluate fluency, terminology use, and adherence to brand guidelines and style guidelines. Before LLMs came around, with neural machine translation, unless it was very low-value or less visible content, if it was something the clients cared about, they would want a human review of every single segment, every single sentence effectively. Whereas now, LLMs can help you hone in on and identify the percentage of the content that might need looking at by a human. There’s no real pattern, but an LLM as a first pass can look at a large volume of content and say, actually, 70% of that is absolutely fine; it matches the instructions that we’ve given it.

Not only is it accurate, but it also adheres to fluency and terminology and so on. Why don’t you human beings focus on this 30%? That’s a huge benefit to a lot of companies. It saves a lot of time, saves a lot of cost, and again allows them to localize a lot more of that content than they were ever able to do before. So it’s great as a first pass, an extra technology-led layer, if you like, before any human involvement, focusing the humans on the work that matters and the work that’s going to have the most impact.
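The first-pass triage described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual pipeline: `score_segment` is a hypothetical stand-in for a real LLM quality-evaluation call, and the 0.8 threshold is an arbitrary example value.

```python
# Minimal sketch of an LLM-as-first-pass QA triage for translated segments.
# score_segment is a HYPOTHETICAL stand-in for a real LLM evaluation call
# that would rate accuracy, fluency, and adherence to terminology/style.

def score_segment(source: str, translation: str) -> float:
    """Hypothetical quality score in [0.0, 1.0]; a real version calls an LLM."""
    return 0.9 if translation.strip() else 0.0

def triage(segments, threshold=0.8):
    """Split (source, translation) pairs into auto-approved vs. human review."""
    approved, review = [], []
    for src, tgt in segments:
        bucket = approved if score_segment(src, tgt) >= threshold else review
        bucket.append((src, tgt))
    return approved, review

segments = [("Hello", "Bonjour"), ("Press the power button", "")]
approved, review = triage(segments)
print(f"{len(approved)} auto-approved, {len(review)} for human review")
```

The point of the sketch is the routing, not the scoring: in a real workflow the humans only see the segments the model flags, which is how a team ends up reviewing 30% instead of 100%.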

BS: Nice. So if someone is looking to adopt AI within their localization efforts, what are the first steps for building AI into a strategy that you would recommend?

SM: Just call me. No, I’m kidding. It’s like any new process or any new technology, Bill. It sounds like common sense, but when deciding on any new strategy, be clear about why you’re doing it. You asked earlier how AI is changing the localization industry. One huge thing I see, and I speak to enterprise buyers of localization services every day, that’s my job, that’s what my team and I do, is that all of a sudden the C-suite knows who they are.

All of a sudden, the people with the money know they exist: oh, we’ve got a localization department. Because, as we said, localization and translation were among the earliest use cases for GenAI. So now there’s a lot of pressure from people who previously didn’t even know you existed, or maybe just saw you as a cost of doing business. Now they’re putting pressure on you to use AI. How are you using GenAI in your workflow? What can we as a business learn from it? Where can we save costs? Where can we increase volume? How can we use it as a revenue driver? Those sorts of things. That’s a big opportunity, but where we see it go wrong, more often than not, is where people are doing it just because of that pressure. They think, I have to do it because I’m getting asked to do it, I’m getting asked to experiment.

Again, it sounds really obvious, but they don’t really know what they’re looking for. Are they looking for time to be saved? Are they looking for costs to be removed? Are they looking to increase efficiencies within their overall workflow? It’s like anything, isn’t it? Unless you know how you’re going to measure success, you probably won’t be successful. So that’s the first tip I’d give people: be clear about what you’re looking for AI in localization to achieve. One of the pitfalls is that we see lots of people wanting to experiment, and that’s good, you want to encourage it. As a chief exec, and with our clients, we’d love to see experimentation. But when you see lots of people doing lots of different things just because it looks cool, unless it’s joined up and unless it’s with a purpose, it doesn’t always work well.

So what we see when people do it well is that they have that purpose. They have it documented, they have it agreed, and they have that executive buy-in: this is why we’re doing it, and this is what we’re hoping to see. Not just because it’s cool, but because it might save us X dollars or X amount of time. And what we see work well is when people do that and then embrace small, iterative tests. One of our solutions architects was on a call with me and a customer, advising them not to boil the ocean. I know this isn’t specific to AI, but let’s not do everything all at once. Lots of localization workflows have legacy technology and legacy connectors to other content repositories, and you can’t just rip it all out without a lot of pain and start again.

So you’ve got to decide where you’re going to have that impact. Start small, very small tests, iterate frequently, get the feedback. That’s one of the key things. And then it just becomes any other implementation of technology or implementation of a workflow. One of the things we did at Acclaro is actually publish a checklist to help companies answer that exact same question, but when you read it, there’s not going to be much there about specific AI technologies and this type of LLM is better for this, and that type of LLM is better for that. It’s not prescriptive. It’s just designed as a guide to actually say, okay, well don’t get ahead of yourselves. Just follow a really sensible process, prove that it works, and then choose the next experiment.

BS: Yeah, get people thinking about it.

SM: Absolutely.

BS: We hear a lot from people that it came down from the C-suite that we have to incorporate AI into our workflows in 2025, in 2026. And usually that’s all the directive is. Usually there’s no foresight coming down from above saying, this is what we’re envisioning you doing with AI. So it really does come down to the people managing these processes to take a step back and say, okay, here’s where things are working, here’s where we could make improvements, here are some potential footholds where we can start building with AI and see where it goes. But I think for a lot of people, the answer to how do I use AI is going to be different for every company out there. It might be similar in places, but what they’re actually doing might be very different and very unique from company to company.

SM: That’s what we see. Yeah, that’s what we see. And again, some of those pitfalls we’ve talked about: some companies have a different approach to information security and confidentiality. Some companies are just risk averse. Some companies’ content warrants more sensitivity than others’. With some companies’ content, think finance, life sciences, medical devices, there are real-world problems if it’s not accurate, whereas with other companies’ content, okay, it might take you an extra 30 seconds to get that speaker to work, or it might not. But yeah, that’s no surprise. One of our customers said to me, AI is like tea. You need to infuse it. You can’t just dump it in. You need to let it breathe, let it circulate. You’ve got to decide the strength, decide where you get it from, decide what the human being making it has to do to make a great cup. And it’s just going to be different for every single person.

BS: True.

SM: We have five people in our house and five different types of tea, so whoever’s making the tea has to know what everyone’s preferences are. And I think it’s the same with AI. It’s the same with a lot of technologies, isn’t it?

BS: It is. So let’s say someone’s running a localization department and their CEO says, “We need to incorporate AI. Here’s your mandate. Go run, figure it out, implement it.” Do you have any advice on how to report the results, the findings, the progress back up?

SM: Yeah. My first bit of advice would be, if I were in that situation, to say to that person: listen, we’ve been doing this for 10 years. We just never used to call it AI; we used to call it neural machine translation, or machine translation. But my second bit of advice is that you’ve actually got to do it, because whilst the opportunity is there for localization managers to really drive and shape how AI is implemented, if they don’t, if they pretend it’s something different than it obviously is, that it’s going away, or that it’s a fad people will forget about, then somebody else will be asked to implement AI and they won’t be. And it’s quite interesting: the persona, if you like, of the people we’re working with in those enterprise localization teams is getting wider, more multidisciplinary.

In any decent-sized company, it’s now very, very rare that you’d have a localization manager making decisions about partners, vendors, and technology by themselves. There’d always be a keen eye from the technology team, the IT team, because everyone’s laser-focused on getting this right. So that would be my second piece of advice. But if you define the results you’re looking for, document them, and are able to capture them, it’s not rocket science; it’s really just basic project management. Then try to report on those regularly and quickly, in a way that lets you iterate. An AI pilot shouldn’t be a six-month project with results at the end of six months. If you’ve chosen the right size of pilot, you should know within days or weeks whether it’s likely to bring the benefits you thought it would.

BS: Very true. So you see the return on using it or the lack of return on using it much quicker?

SM: Yeah, well absolutely. From my own personal experience, we’ve done a lot of helping and guiding clients with pilots and experiments. It’s not all great results, and we haven’t manufactured anything to make the results look bad so we stay in a job and people still use the human service. But we have seen really good results. I’m thinking of one quite specific use case to do with translation memories: the client was using GenAI to improve the fuzzy match, if you’re familiar with that term, from a translation memory, a fuzzy match enhancer, and they found that it improved about 80% of the segments in, I think, five languages.

So if I look at that one, they didn’t pick every single language they had. They only picked five, probably five more commonly spoken languages where they could get some quick feedback. And they were able to measure in their tool the post-editing time and the accuracy, and they found it improved 80% of the segments. The other 20% didn’t improve, so not 100% success, but they were able to provide real data to the powers that be to decide whether to extend it to their other language sets or their other content types.
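For readers unfamiliar with the term, a fuzzy match is a translation-memory hit that is similar, but not identical, to the new source segment. The lookup side can be sketched like this, using Python’s standard-library `difflib` for the similarity score; the tiny `translation_memory` dictionary and the 0.7 threshold are invented for illustration, and real TM tools use more sophisticated, usually word-based, metrics. In a pilot like the one described above, a GenAI step would then adapt the matched translation to fit the new source, rather than translating from scratch.

```python
# Sketch of translation-memory fuzzy matching (illustrative only).
# difflib's character-based ratio stands in for the word-based similarity
# metrics real TM tools use. The TM contents are invented examples.
import difflib

translation_memory = {
    "The camera is water resistant.": "La cámara es resistente al agua.",
    "Press the power button.": "Pulsa el botón de encendido.",
}

def fuzzy_match(source: str, threshold: float = 0.7):
    """Return (TM source, TM translation, similarity) for the best match
    at or above the threshold, or None if nothing is close enough."""
    best, best_ratio = None, 0.0
    for tm_source in translation_memory:
        ratio = difflib.SequenceMatcher(None, source, tm_source).ratio()
        if ratio > best_ratio:
            best, best_ratio = tm_source, ratio
    if best_ratio >= threshold:
        return best, translation_memory[best], best_ratio
    return None

# A near-miss of an existing TM entry comes back as a fuzzy match; a GenAI
# "enhancer" step (not shown) would then edit the Spanish to cover "dustproof".
match = fuzzy_match("The camera is water resistant and dustproof.")
```

The appeal of enhancing fuzzy matches rather than retranslating is that the human-approved TM translation stays the backbone of the output, with the model only patching the parts that differ.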

BS: Nice. Well, I think we’re at a point where we can wrap up here. Any closing thoughts on AI and localization? Good, bad, ugly, just do it.

SM: I think the biggest thing for me is that AI is today. It’s not the future; it’s here. I’m in the UK, like I say, and there have been multi-billion-dollar investment announcements, all specifically to do with AI, from companies like NVIDIA and Microsoft. AI is the now. So you don’t have a choice about whether to adapt to it being here; it’s just about how you choose to do it. That’s become our role as a language service provider and as a trusted partner of brands: to help guide and give our opinions. It’ll continue to change, and we’ll have new use cases. If you ask me these same questions in six or 12 months, Bill, I might give you some different answers, because we’ll have found new experiments and new use cases.

BS: And that’s fair. Well, Steve, thank you very much.

SM: Thank you, Bill. I enjoyed the conversation.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

Want to learn more about AI, localization, and the future of content? Download our book, Content Transformation.

The post Balancing automation, accuracy, and authenticity: AI in localization appeared first on Scriptorium.

From classrooms to clicks: the future of training content (October 6, 2025)

AI, self-paced courses, and shifting demand for instructor-led classes: what’s next for the future of training content? In this podcast, Sarah O’Keefe and Kevin Siegel unpack the challenges, opportunities, and what it takes to adapt.

There’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace some training. And when I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. But given that, there are some industries where you can’t get away with that. Healthcare is an example: you’re not going to learn how to do brain surgery that someone could rely on with AI or through a YouTube video.

— Kevin Siegel


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

SO: Hi, everyone, I’m Sarah O’Keefe. I’m here today with Kevin Siegel. Hey, Kevin.

KS: Hey, Sarah. Great to be here. Thanks for having me.

SO: Yeah, it’s great to see you. Kevin and I, for those of you that don’t know, go way back and have some epic stories about a conference in India that we went to together where we had some adventures in shopping and haggling and bartering in the middle of downtown Bangalore, as I recall.

KS: I can only tell you that if you want to go shopping in Bangalore, take Sarah. She’s far better at negotiating than I am. I’m absolutely horrible at it.

SO: And my advice is to take Alyssa Fox, who was the one that was really doing all the bartering.

KS: Really good. Yes, yes.

SO: So anyway, we are here today to talk about challenges in instructor-led training, and this came out of a LinkedIn post that Kevin put up a little while ago, which will include in the show notes. So Kevin, tell us a little bit about yourself and IconLogic, your company and what you do over there.

KS: So IconLogic, we’ve always considered ourselves to be a three-headed dragon, a three-headed beast: we do computer training, software training, vendor-specific. We do e-learning development, and I write books for a living as well. If you go to Amazon, you’ll find me well-represented there. I was actually one of the original micro-publishers on this new platform called Amazon, with my very first book posted there called “All This PageMaker, the Essentials.” Did I date myself with that reference? That led to a book on QuarkXPress, which led to Microsoft Office books. But my bread-and-butter books on Amazon even today are books on Adobe Captivate, Articulate Storyline, and TechSmith Camtasia. I still keep those books updated. So: publishing, training, and development. And the post you’re talking about, which got a lot of feedback, and I really loved that, was about training, and specifically what I see as the demise of the training portion of our business. And it’s pretty terrifying. I thought it was just us, but I spoke with other organizations similar to mine in training, and we’re not talking about a small fall-off in training. 15 or 20% could be manageable. You’re talking about a 90% fall-off in training, which led me to wonder originally, “Is it me?” Because I hadn’t talked to the other training companies. “Is it us? I mean, we’re dinosaurs at this point. Is it the consumer? Is it the industry?”

But then I talked to a bunch of companies that are similar to mine and they’re all showing the same thing, 90% down. And just as an example of how horrifying that is, some of our classes, we’d expect a decent-sized class, 10, a large class, 15 to 18. Those were the glory days. Now we’re twos and threes, if anyone signs up at all. And what I saw as the demise of training for both training companies and trainers, if you’re a training company and you’re hiring a trainer, one or two people in the room isn’t going to pay the bills. Got to keep the lights on with your overhead running 50%, 60%, you know this as a business person, but you’ve got to have five or six minimum to pay those bills and pay your trainer any kind of a rate.

SO: So we’re talking specifically about live instructor-led, in-person or online?

KS: Both, but we went more virtual long before the pandemic. We’ve been teaching more virtual than on-site for 30 years. Well, not virtual for 30 years; virtual wasn’t really viable until about 20 years ago. So we’ve been teaching virtual for 20 years. The pandemic made it all the more important. But you would think that virtual training would improve with the pandemic; it actually got even worse, and it never recovered. The pandemic was the genesis of that downward spiral, and AI has hastened the demise. But this is instructor-led training in both forms, virtual and on-site. I think it’s even worse for on-site.

SO: So let’s start with pandemic. You’re already doing virtual classes, along comes COVID and lockdowns and everything goes virtual. And you would think you’d be well-positioned for that, in that you’re good to go. What happened with training during the pandemic era when that first hit?

KS: When that pandemic first hit, people panicked, went home, and just hugged their families. They weren’t getting trained on anything. So it wasn’t a question of whether we were well-positioned to offer training; nobody wanted training, period. And I think if you poll all training companies, well, there are certain markets where you need training no matter what. Healthcare, for example, needs training. Security needs training. But for the day-to-day operations of a business, people went home and they didn’t work for a long time. They were just like, “The world is ending.” And then, oh, the world didn’t end. So now they’ve got to go back to work, but they didn’t go back for a long time. Eventually people got back to work. Now, are you back on-site or are you at home? That’s a whole nother thing to think about.

But just from a training perspective, when panic sets in, when the economy goes bad, training is one of the first things you get rid of. Go teach yourself. And the teach-yourself part is what has led to the further demise of training, because you realize, I can teach myself on YouTube. At least I think I can. And when you start teaching yourself and you think you can, it becomes good enough. The training was good enough. So if you say, “Let’s focus on the pandemic,” that’s what started the downward spiral. But we saw the downward spiral even before the pandemic, and it was the vendors themselves starting to offer the training that we were offering.

SO: So instead of a third-party, certainly a third-party, mostly independent organization offering training on a specific software application, the vendors said, “We’re going to offer official training.”

KS: Correct. And it started with some of these vendors rolling out their training at conferences. And I attended these conferences as a speaker. I won’t name the software, I won’t name the vendor, but I would just tell you I would go there and I would say, “Well, what’s this certificate thing you’re running there?” It’s a certificate of participation. But as I saw people walking around, they would say, “I’m now certified.” And I go, “You’re not certified after a three-hour program. You now have some knowledge.” They thought they were certified and experts, but they wouldn’t know they weren’t qualified until told to do a job. And then they would find out, “I’m not qualified to do this job.” But that certificate course, which was just a couple of hours by this particular vendor, morphed into a full-day certificate. They were now charging a lot of money for it, which morphed into a multi-day thing, which has destroyed any opportunity for training that we have. And that’s when I started noticing a downward spiral. If you were tracking it like your finances, it would be your investments going down, down, down. It’s like a plane, nose down.

SO: And we’ve seen something similar. I mean, back in the day, and I do actually… So for those of you listening at home that are not in this generation, PageMaker was the sort of grandparent of InDesign. I am also familiar with PageMaker and I think my first work in computer stuff was in that space. So now we’ve all dated ourselves. But back in the day we did a decent amount of in-person training. We had a training classroom in one of our offices at one point.

Now, we were never as focused on it as you are and were, but we did a decent business of public-facing, scheduled two-day, three-day, “Come to our office and we’ll train you on the things.” And then over time, that kind of dropped off and we got away from doing training because it was so difficult. And this is longer ago than you’re talking about. So the pattern that you’re describing where instructor-led in-person training, a classroom training with everybody in the same room kind of got disrupted a while back. We made a decent living doing that for a long time and there was-

KS: Made a great living doing that. Oh, my God. That was the thing.

SO: But we got away from it, because it got harder and harder to put the right people in the right classes and get people to travel and come to us. So then there’s online training; we kind of got rid of training, and you sort of pivoted to online/virtual. And then ultimately, the pandemic has made it such, from my point of view, that the vast majority of what we do in this space is custom. We’re doing a big implementation project. We do some custom training that might be in-person, on-site, but much more often it is online, live online instructor-led, but custom. Because all of the companies that we’re dealing with, even if people did return to office, very much they’re fragmented, right? It’s two people here and five people there, and four people there and one in every state. And so, bringing them all together into a classroom is not just bring the instructor in, but bring everybody in, and it costs a fortune. And that’s before we get into the question of, can they get across the borders and can they travel?

There’s visa issues, there’s admin issues, people have caregiving responsibilities, they can’t travel. There’s a whole bunch of stuff that goes into actually relocating from point A to point B to do a class at point B. So fine. Okay. So along comes the pandemic that really pushes on the virtualization, right? The virtual stuff. And then you’re saying the vendors get into it and they are clawing back some of this revenue for themselves. They’re basically saying, “We’re going to do official vendor-approved stuff,” which then makes it very difficult as a third party, because you have to walk that line, and I’ve been there. You have to walk that line between delivering training on a product which belongs to somebody else and being maybe a little more forthright about the issues in the product because it’s not your product. So we’re just going to say, “Hey, there’s an issue over here. It doesn’t really work. Do it this other way.” Not toeing the official party line. Okay, so we have all of that going on and all of those challenges already. And now along comes AI. So what does AI do to this environment that you’re describing?

KS: It further destroys it. I’ll give you an example. My blog was on Typepad, and we received an email September 1st, 2025, and we’re recording this September 4th, 2025, okay? So three days ago I got an email saying, “Hello, we’re shutting down. Sorry. You’ve got 30 days to get your stuff out of here.” And I’m like, “What?” Basically being kicked out of your apartment or your house. So I went to AI and I asked, “What is the top blog software?” It said, “WordPress.” Love it or hate it, okay. So I went to WordPress. I had no idea how to use WordPress. I had no staff available to help me. So I had to get my stuff out of Typepad, and on and on it went. I went to AI, ChatGPT specifically, and I said, “Teach me how to use WordPress,” and specifically how to get my crap out of Typepad. I say crap, my stuff out of Typepad. In a matter of what? Two days I had everything transferred over.

So, I didn’t need training; otherwise I would’ve had to go to training to learn how to do that, and I didn’t have to. So that’s an example; there’s probably a training company out there that’d be happy to teach me how to use WordPress. I didn’t have the time, I didn’t have the resources, nothing. So I just did it on my own. That’s one example of how you can use AI to replace the training. There’s other examples where that self-serve training is not just good enough, it’s good. It’s not lacking. When I don’t know how to do something these days, I go right to YouTube and look for a video to teach me how to do it. That said, there are some industries where you can’t get away with that. Healthcare as an example: you’re not going to reliably learn how to do brain surgery with AI or a video through YouTube.

SO: We hope.

KS: We hope. “Hey, relax. I know this is your first time, Sarah, I’m your surgeon. I watched a video yesterday, I feel pretty good about it as I grab that saw.” I don’t believe you’re going to be comfortable with that. So listen, it’s bad enough. And you mentioned the vendor that is now offering training. So vendor pullback, they want that for a revenue source. This particular vendor is using it as a revenue tool, but there’s also vendors out there that are actively stopping you from offering training classes, and on it goes.

SO: Yeah, I do want to talk about that one a little bit. I know nothing about the specifics of your situation, but this is a losing battle. Because you were just talking about YouTube, I was doing some research for a very, very, very large company that makes farm equipment and I went looking for their content. And they had content on their website; it was like, type in your product name or product number and it would give you the official user manual, which was of course ugly and terrible. But I discovered that if you typed in something like, “How do I fix the brakes on my X, Y, Z product?” it would take you to YouTube. And it would take you to this YouTube channel that had a lot of subscribers and was in fact not at all the official company YouTube channel.

KS: It was a dude who was working on it?

SO: It was a dude in Warsaw, North Carolina, which is not the same as Warsaw, Poland. It is a tiny, tiny, tiny little place, mostly known for me as being halfway between where I am and the beach. It’s where we stop to get gas and summer peaches and corn from the farm stand and fried chicken on our way to the beach, because that’s the thing we do. That’s where Warsaw is. It has a population of, I don’t know, 3,000 maybe.

KS: Okay, yeah.

SO: I have no idea. But there’s some guy who works for the dealership there who’s making these videos explaining how to do maintenance on these, in this case tractors, and he has got the audience. Not the official website, which by the way does not have a YouTube channel that I’m aware of, or at least that I could find. Now, this was five, 10 years ago. It has been a while. But so, there’s all this third-party content out there and there’s this ecosystem of content because it’s digital. You can’t really control that unless, we were talking about this earlier, unless you’re doing something like nuclear weapons, intelligence work, or maybe brain surgery. You can probably control those things. That’s about it. If your revenue is built on instructor-led, whether in-person or online, it sounds as though things are changing and not for the better in that space specifically, unless we’re training on brain surgery, which most of us are not. So what’s the path forward?

KS: I’m thinking about it, actually.

SO: I am not signing up for you to do my brain surgery.

KS: I need someone to practice on. Sarah, let me know if you’re available.

SO: Oh, I’m so sorry, you’re breaking up. I can’t hear you. Okay, so what does the path forward look like? I mean, what does it mean to be inside this disruption and where do you go from here?

KS: Okay, so every training company that I have contacts in, they’re all down significantly. The ones that are surviving have government contracts.

SO: Mm-hmm.

KS: And that is to develop training in all of its guises, and primarily they’re seeing a call for virtual reality training. That’s really, really hot right now. But not the virtual reality training that you can create with the Captivates and the Storylines of the world. That’s too lowbrow. They’re talking about immersive, almost gamified training, where you build a world. So if that’s your expertise, you can create training in that. That’s what people want. It looks like augmented reality and virtual reality.

I can’t see it. Maybe I’m of a certain age that I’m like, “I’m not putting goggles on to take my training.” But that is pretty popular with other generations. So you can’t ignore it; I think you have to embrace it. So government contracts, if you can get that, you’ll be okay in the training business. Several of my colleagues have actually done that. So that’s a leg up. The other is to embrace asynchronous training and put your materials out there, where they now live forever. So I ignored for years these providers of asynchronous training where you put your content there and they sell it for you. I’ve got five classes on Udemy now, and each of them sells pretty well.

Matter of fact, my Captivate Udemy course is one of their bestsellers. That does not translate into offsetting the revenue lost from your training gigs when you were bringing in $600, $700, $800 a person for a training class. Our prices were between $695 and $895 per person to take a public class, but it certainly does bring in some revenue. So if you have the ability to create the asynchronous training, the video training, and make it really, really good training, really impactful, then that’s going to help you stay in the game as long as you can. I also think you should embrace AI versus hiding under the covers saying, “I don’t want to see it.” That’s not the way to go.

I now use AI as a tool. I don’t think it replaces me; I think that I have more to offer in guiding the course than AI, but it gives me a nice, “Get me started here.” Maybe you’ve got a little writer’s block, maybe you’re having trouble just getting started. It’s a beautiful day out, I can’t get started. Have AI start, and now you’re started. But if you’re going to go that route and you have AI make suggestions, you better fact-check it. Just as an example, I was curious, so I asked ChatGPT to create an exam for Articulate Storyline. That is a tool I know really well; I’ve written exams for Storyline and Captivate and Camtasia. I said, “Write an exam. I want to see what you come up with.” And some of the questions were actually worded better than what I had done. They were very similar questions. And I go, “I kind of like the way you, AI, did that.” Which was kind of a bummer. But I would say a good 30% of what I read, while it was well-written, was completely wrong.

SO: Yes, confidently wrong.

KS: Yes, it was confidently wrong. Asking questions like, “When you do this in Storyline, what is the correct thing? What do you do?” And Storyline doesn’t do that thing. They were talking about Rise, as an example. I’m like, “You’ve gone and combined Rise with Storyline.” So if you’re going to use AI, it’s the way you ask the question, your prompts. So get some training on engineering your prompts and on fact-checking what you get from those prompts. But I use AI every day in my writing to make sure I don’t have grammar issues. So I’ll tell AI, “Check this for clarity and grammar.” So it’s my words, but it now is saying, “Well, there’s a couple of typos, I fixed those. And a couple of dangling modifiers, I fixed those.” So it makes me feel like I’m writing better. But do keep in mind, if you put your stuff into ChatGPT, it’s now part of this mass of stuff that other people are going to get access to.

So you can’t copyright anything that you put in AI. I wrote a book about copyright and training materials and things to think about, because we have a lot of people finding an image of a nice puppy on Google and using it in their training, and that puppy was copyrighted. So anything you do on AI, any photos that get created, any artwork, anything, any writing can’t be copyrighted because only a human can get a copyright. So that’s something to think about. If you have something really, really good, you really didn’t create that, so you can’t copyright it.

You’re going to have to adapt. You’re going to have to adapt or you’re going to fail in the training industry, again, unless it’s very specific niche markets, or as you mentioned, custom training. If you don’t adapt, you’re going to fail. And that adaptation is going to be: embrace AI and asynchronous training to put your training out there, available 24 hours a day, seven days a week, when you can’t be. And that’ll offset getting these onesies and twosies in your class.

SO: And it removes the time-bound, I have to set aside these two hours or these four hours of this day to be in the classroom, whether virtual or not if it’s live. I do think that this idea that we’re going to see a split between things that go higher and higher end that people are willing to pay nearly anything for versus the low-end where the price is going… There’s going to be downward pressure on the price for all the low-end stuff, because the barrier to entry to producing asynchronous training is pretty minimal and it gets lower every single day because there’s so many people out there that can potentially do that.

KS: Anybody can hang out a shingle and say that they’re an expert. So I mean, it’s the credentials of the trainer too, I think. Who is the person that’s teaching this? Is it what we call Chuck with a truck? Or is it someone who has actually done this? I wouldn’t want to get trained on handling my content by someone who hadn’t done it. I’d want you to handle that, right? So a content strategy. “I mean, who came up with that strategy? Oh, Bob. Has Bob ever done it? No, but he feels good about it. No, I want to get a Sarah who’s done it for years and years and years.”

SO: Yeah, I mean that’s an interesting point though, because at the end of the day, if you commoditize/productize training, you’re going to have a product, the asynchronous training, that’s a package, and you get what you get. When it’s live with an instructor, you’re going to get that instructor on that day in that context. They’re feeling good, they’re feeling bad. The classroom dynamics are good or bad or weird. Every experience is going to be different. Whereas with async, it’s always going to be the same. I mean, barring internet connectivity or something, as the learner, you’re going to get a consistent experience. Now, it’s not going to be the best possible experience, right? Because the best possible experience is you’re in a group with some other people in a room with an amazing instructor.

KS: That is the best.

SO: That is the best.

KS: There’s good too-

SO: It costs the earth.

KS: Yeah, there’s good in asynchronous training too: because it’s always the same, it’s going to be consistent. How many times have you led a live class and one of the attendees just spoiled the sauce? And you’re reminding me now, a colleague of mine, they were doing their certification as a certified technical trainer, CTT, and back in those days, you actually had to record yourself teaching.

SO: Oh, yes, there was a VHS tape of me. And kids, that is video, pre-digital video.

KS: That is correct. VHS tape. And I had to do the same thing, but I remember for this one colleague of mine, the students in this classroom, fake classroom, were other trainers that were also getting the recording done. And I remember she was being recorded and it was over her shoulder looking at the students, because she had to show the students. And one of these students, she made a comment that she knew was correct, and the student shook her head, “Nope, nope. That’s not right. Nope.” And the trainer is now, “What are you doing? Why are you shaking your head no and contradicting one of us? How about just nod?” And so, at some point she turned around, saw the students shaking their heads, and realized, “Oh my God, you’re defeating all of us in this room.”

So yes, that was to your point, that the training can vary wildly in a live class, whether it’s virtual or on-site, based on the attendees. Because listen, I’ve been teaching Captivate since it was called RoboDemo, so years and years and years and years, and no class has ever been the same. No two classes are the same and it’s all based on the dynamics of the students in my live class. And you get one person in there who is stuck, can’t move forward, file open is a mystery. Go to the file menu, choose open. How do you do that? Okay, mouse skills. All of that can either derail or can help your class. Funny moments, whatever they may be. But asynchronous training, if you do it right, is always consistently good. The problem is there’s no live interaction. So you can’t ask that instructor, “Well, what do you think about this? What do you think about that?”

So yeah, you made me laugh when you mentioned that, that the dynamics of your live class, you better be fast on your feet to be a live trainer. And if you’re going to teach virtually, you should know how to do it. Because listen, I think you’ll agree, there is a vast difference between teaching a class live on-site versus live online, or God forbid, live online and live on-site, where you’re doing both at the same time. Or if you’re going to do blended learning, you’ve got to mix all three. You better know what you’re doing as a facilitator and a trainer to do that or you’ll fall flat on your face.

You’ll hear all kinds of complaints about people who teach these live classes on-site that now incorporate virtual, and they ignore the virtual audience completely. So the virtual audience is not included in the training; they feel like they’re watching a recording. So you’ve got to know how to engage this audience. I’m actually really stunned, Sarah, that conferences still survive on-site. We mentioned a couple of times before we turned on this recording: why are those conferences live on-site? People are going there to network face-to-face. I guess that’s the big one, but not the content that you’re learning. That content could have been taught virtually.

SO: Yeah, I’ve had the position for a long time that the most important part of a conference is the hallway track, right? The conversations at lunch, in the hallway, and in the exhibit hall and everywhere else. There’s a couple that are doing online in addition to in-person, and typically the-

KS: ATD does that. Yeah, does a good job at that. Yeah.

SO: Yeah, LavaCon is doing that, they’re coming up. But yeah, they have an online track with a chat, a pretty lively chat, and then they also have the in-person version if you can get there in-person.

KS: Which is successful only if the facilitator addresses the online chat, if the facilitator addresses someone who’s virtual. Yeah.

SO: And fun fact, Phylise Banner has been running that for years and years and years and has done a fantastic job of exactly that, of making sure that the online people get into the conversation, even when there’s 200 people in the room and another couple hundred on the chat, and she’s making sure that they get their questions into the discussion. Okay, so that was cheerful, and that made me feel better, because the first half hour of this was super not encouraging. So I think I’m going to close us out there because I’m pretty sure we could go on forever, but let’s leave it there. Kevin, thank you for coming and for giving us the inside information on what’s happening in training land. And hopefully I’ll see you again somewhere in-person at a conference.

KS: Or virtual, with the camera is fine. So yeah, great working with you, Sarah. Thanks for having me.

SO: Great to see you. Bye. 

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.


The post From classrooms to clicks: the future of training content appeared first on Scriptorium.

From PowerPoint to possibilities: Scaling with structured learning content
https://www.scriptorium.com/2025/09/from-powerpoint-to-possibilities-scaling-with-structured-learning-content/ Mon, 22 Sep 2025

What if you could escape copy-and-paste and build dynamic learning experiences at scale? In this podcast, host Sarah O’Keefe and guest Mike Buoy explore the benefits of structured learning content. They share how organizations can break down silos between techcomm and learning content, deliver content across channels, and support personalized learning experiences at scale.

The good thing about structured authoring is that you have a structure. If this is the concept that we need to talk about and discuss, here’s all the background information that goes with it. With that structure comes consistency, and with that consistency, you have more of your information and knowledge documented so that it can then be distributed and repackaged in different ways. If all you have is a PowerPoint, you can’t give somebody a PowerPoint in the middle of an oil change and say, “Here’s the bare minimum you need,” when I need to know, “Okay, what do I do if I’ve cross-threaded my oil drain bolt?” That’s probably not in the PowerPoint. That could be an instructor story that’s going to be told if you have a good instructor who’s been down that really rocky road, but again, a consistent structure is going to set you up so that you have robust base content.

— Mike Buoy

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe. I’m here today with Mike Buoy. Hey, Mike.

Mike Buoy: Good morning, Sarah. How are you?

SO: I’m doing well, welcome. For those of you who don’t know, Mike Buoy has been the Senior Solutions Consultant for AEM Guides at Adobe since the beginning of 2025. And before that, he had a, we’ll say, long career in learning.

MB: Long is accurate, long is accurate. There may have been some gray hair grown along the way, over the 20-plus years.

SO: There might have been. No video for us, no reason in particular. Mike, what else do we need to know about you before we get into today’s topic, which is the intersection of techcomm and learning?

MB: Oh gosh, so if I think just quickly about my career, my background’s in instructional design, consulting, instructor, all the things related to what you would consider a corporate L&D, moving into the software side of things into the learning content management space. And so what we call now component content management, we, when I say we, those are all the different organizations I’ve worked for throughout my career, have been focused in on how do you take content that is usually file-based and sitting in a SharePoint drive somewhere, and how do you bring it in, get it organized so it’s actually an asset as opposed to a bunch of files? And how do you take care of that? How do you maintain it? How do you get it out to the right people at the right time and the right combination, all the rights, all the right nows, that’s really the background of where I come from.

And that’s not just in learning content; at the end of the day, learning content is often the technical communication-type content with an experience wrapped around it. So it’s really a very fun retrospective when you look back on where both industries have been running in parallel and where they’re really starting to intersect now.

SO: Yeah, and I think that’s really the key here. When we start talking about learning content, structured authoring, techcomm, why is it that these things are running in parallel and sitting in different silos? What’s your take on that? Why haven’t they intersected more until maybe now, when we’re seeing some rumblings of maybe we should consider this? Until now it’s been straight up, we’re learning and you’re techcomm, or vice versa, and never the twain shall meet. So why?

MB: Yeah, and it’s interesting, when you look at most organizations, the two major silos that you’re seeing, one is going to be product. So whether it’s a software product, a hardware product, an insurance or financial product, whatever that product is, technical communication, what is it? How do you do it? What are all the standard operating procedures surrounding it? That all tends to fall under that product umbrella. And then you get to the other side, the other silo, and that’s the, hey, we have customers, whether those customers are our customers or the internal customers, our own employees that we need to train and bring up to speed on products and how to use them, or perhaps even partners that sit there. And so, typically, techcomm is living under the product umbrella, and L&D is either living under HR or customer success or customer service of some sort, depending on where they’re coming from.

Now in the learning space, over the last decade or so, you’re seeing a consolidation between internal and external L&D teams and having them get smarter about: what are we building, how are we building it, who are we delivering it to, and what are all those delivery channels? And then when I think about why they are running in parallel, well, they have different goals in mind, right? Techcomm has to ship with the product and service, and training ideally is doing that too, but often there’s a little bit of a lag: “Okay, we shipped the thing; how long is it before we start having all the educational frameworks around it to support the thing that was shipped?”

And so I think leadership-wise, very different philosophies, very different principles on that. techcomm, very much focused on the knowledge side of things. What is it? How do you do it? What are all the SOPs? And L&D leans more towards creating a learning experience around, “Okay, well here’s the knowledge, here’s the information, how do we create that arc going from I’m a complete novice to whatever the next level is?” Or even, I may be an expert and I need to learn how to apply this to get whatever new changes there are in my world and help me get knowledgeable and then skilled in that regard.

So I think those are kind of the competing mindsets and philosophies, as well as, I won’t say competing, but parallel business organizations, and why we don’t usually see those two together. And if we think about it from a workflow perspective, you have engineering or whoever’s building the product handing over documentation of what they’re building to techcomm, and techcomm is taking all of that and then building out their documentation, and then that documentation gets handed to L&D for them to say, “Well, how do we contextualize this and build all the best practices around it and recommendations and learning experiences?” So there is a little bit of a waterfall effect for how a product moves through the organization. I think those are the things that really contribute to it being siloed and running in parallel.

SO: Yeah. And I mean, in many, many organizations, the presence of engineering documentation or product design documentation is also a big question mark, but we’ll set that aside. And I think the key point here is that learning content, and you’ve said this twice already, learning content in general and delivery of learning content is about experience. What is the learning experience? How does the learner interact with this information, and how do we bring them from, they don’t understand anything, to, they can capably do their job? The techcomm side of things is more of a point of need. You’re capable enough, but you need some reference documentation or you need to know how to log into the system or various other things. But techcomm, to your point, tends to be focused much less on experience and much more on efficiency. How do we get this out the door as fast as possible to ship it with the product? Because the product’s shipping, and if you hold up the product because your documentation isn’t ready, very, very bad things will happen to you.

MB: Bad, bad, very bad.

SO: Not a good choice.

MB: It’s not a good look. It’s not a good look.

SO: Now, what’s interesting to me is, and this sort of ties into some of the conversations we have around pre-sales versus post-sales marketing versus techcomm kinds of things, as technical content has moved into a web experience, online environment, and all the rest of it, it has shifted more into pre-sales. People read technical documentation, they read that content to decide whether or not to buy, which means the experience matters more.

And conversely, the learning content has fractured into classroom learning and online instructor-led and e-learning and a bunch of things I’m not even going to get into, and so it has fractured into multi-channel. So learning evolved from the classroom into lots of different channels, where techcomm evolved from print into lots of different channels, but online. And so the two are kind of converging, where techcomm needs to be more interested in experience and learning content needs to be more interested in efficiency, which brings us to: can we meet in the middle, and what does it look like to apply some of the structured authoring principles to learning content? We’ve talked a lot about making techcomm better and improving the experience. So now let’s flip it around and talk about how we bring learning content into structured authoring. Is that a sensible thing to do? I guess that’s the first question: is that a sensible thing to do?

MB: Yeah, and here’s the thing that I like to keep in mind when talking about structured authoring, the context for why in the world would we even consider it? And when I think of traditional L&D training courses, whether it’s butts in seats at an instructor-led training event, whether I’m actually in a physical classroom or I’m sitting virtually in a Zoom class for example, or it’s self-paced e-learning, so much great content is built and encapsulated in that experience and is not able to be extracted out.

My favorite example of talking about this is I’ve got a big truck sitting in my driveway, I need to change the oil on it, it’s time. If it’s the first time I’ve ever changed oil, absolutely, I want all the learning. I want the scaffolding. I want the best practices, how I’m going to set up my work environment, the types of tools. How I’m going to need to deal with all the fluids, what I need to purchase. I’m going to dive into all that. In the real world, university of YouTube, I’m going to go watch videos on this and there’s going to be some bad content, there’s going to be some gems, and I’m going to pay attention to the ones that are good.

Now as I go from a novice, I’m going to build that knowledge of how to do it, I’m going to apply that knowledge. I’m actually going to go do it, now I’m probably going to make a mess and make mistakes my first time through, but that’s also building experience. So I’m moving from novice to knowledgeable to building skills to as I do it more and more, I move into that realm of being experienced.

Now as you move further up that chain, you need less and less support to the point where I’m like, “Crap, which oil do I need to buy? What are the torque specs on my drain plug?” I really only need three or four data points to do the job now. So that’s where as I move from a novice to an expert, I need to be able to skim and find exactly what I need in the moment of need, the just enough information. And so I’ll take the oil changing experience and let’s take that to any product or service training your customers, the people who are consuming your content are going through the same thing.

So learning-wise, why structured? Once I get to the expert level of things, I am not going to log into the LMS and I’m not going to launch that e-learning course, and I’m not going to click next 5 to 10 to 20 times to get to the answer that has the specification tables of, here’s what I need and what I need to do in order to accomplish the task at hand. Everybody’s nodding their head. Every time I ask, “When was the last time you logged into the LMS to get an answer to a question?” The only time I’ve ever had somebody go, “Oh, me,” it was actually an LMS administrator.

So learning is great at creating that initial experience, but their content’s trapped. It is stuck inside that initial learning experience. So getting back to the question, why structured authoring? Well, if you move to a structured authoring where you’re taking your content and building it in chunks, yes, you can create that initial learning experience where you’ve assembled that very crafted, we’re taking you from novice, getting you the knowledge, giving you the opportunities to practice the skill in a safe environment and fail well and learn from that and get you to a place where you move from novice to skilled. And then over time, this is where a lot of the L&D in general, because their content’s trapped in that initial learning experience, they can’t easily extract that information out and provide the things people need to move from skilled to experienced and experienced to mastery.

So that’s where when I think about, “Well, what does techcomm do really well?” Techcomm supports that: I’ve got enough skills to do the job and I need to reference the very specific information, or the SOP, I’m on step four, I forget what I need to enter to get through step four, and I can hop over to the documentation and find that. So techcomm has figured out the structured authoring part. You mentioned creating new varied experiences for getting to the technical communication. Multi-channel delivery: I want to hop on and hit my search or hit my AI chatbot, pull up the information, and get just enough to get through the tasks that I’m doing.

Learning’s still often stuck. If we equate it to the tech communication side, they’re still stuck in, “I’m hand-building a Microsoft Word-based 500-page user guide.” It’s a lot of work to build it, it’s a lot of work to maintain it, and it’s not easy to extract that information out to use it for other things.

So why structured authoring? Future-proof your content, make it more flexible. You’ve invested so much time and energy creating great content, great experiences, so why not make it modular so you can pull things out and create new and different ways of consuming that content, delivering it in different bite-size bits and pieces along the way?

SO: And I guess we have to tackle the elephant in the room, which is PowerPoint. So much learning and training, in particular, especially classroom training, is identified with an instructor standing at the front, running through a bunch of slides. And we like to say that PowerPoint is the black hole of content, that’s where content goes to die, and once it goes in, you never get it back out. So what do we say to the people that come in and they’re like, “You will pry PowerPoint from my cold, dead hands.”

MB: Such a great question. I’ll jokingly refer to PowerPoint as “My precious.” Here’s the reality: PowerPoint is not the knowledge chunk. That knowledge is actually sitting in the head of the instructor, the PowerPoint is providing the framework for them to deliver and impart that knowledge and impart those best practices. It’s there to provide guardrails so that it’s done in a consistent fashion, and there’s a bare minimum amount of structure that… There’s a bullet point there, they’re going to talk about it. The degree to the quality of how they’re going to talk about it and present it is going to vary based on the person delivering the content. So if you’ve got a bunch of PowerPoint slides, you don’t necessarily have all of your training material well documented. Now, if you’ve got parallel instructor guides and student guides that talk about the details of what should be said behind those bullet points, you’re a lot closer to having that information.

So why structured authoring? Well, it’s kind of, again, the good thing about structured authoring is you have a structure. You have a, if this is the concept that we need to talk about and discuss, here’s all the background information that goes with it. So with that structure comes consistency, and with that consistency, that means that you have more of your information and knowledge documented so that it can then be distributed and repackaged in different ways. Because if all you have is a PowerPoint, you can’t give somebody a PowerPoint when they’re in the middle of an oil change and say, “Here’s the bare minimum you need.” When I need to know, “Okay, what do I do if I’ve cross-threaded my oil drain bolt?” That’s probably not in there. That may be an instructor story that’s going to be told if you have a good instructor who’s been down that really rocky road. But again, structure and being consistent about it is going to set you up so that you have robust base content. 

We’ve got Legos in the house, I got two boys. Gosh, I’ve stepped on so many Legos in my life, it’s ridiculous. But the Lego metaphor works because you have a more robust batch of Legos that you can create new creations from, rather than a limited set if you’re only doing PowerPoint.

SO: And because you’re nice, and I’m not, I’ll say this, we can produce PowerPoint out of structured content, that is a thing we can do. I’m not saying it’s going to be award-winning, every page is a special snowflake PowerPoint, but we can generate PowerPoint out of structured content. And if you’re using it as a little bit of an instructor support in the context of a classroom or live training, that’s fine.

A lot of the PowerPoint that we see that people say, “This is what I want, and if you don’t allow me to do this,” and there’s this rainbow unicorn traipsing across the side of the page kind of thing, and no, we can’t do one-off slides, we can’t do crazy every slide is different stuff, but the vast majority of the content that I see that is PowerPoint based and kind of all over the place is not actually effective. So it’s like, this is not good. We have the same issue with InDesign. We see these InDesign pages that are highly, highly laid out, and it’s like, “We need this.” Well, why? It’s terrible. I mean, it’s awful. What are you doing here? No, we can give you a framework.

MB: Now, you’re telling somebody that their baby’s ugly when you say that, that’s somebody’s baby.

SO: I would never tell somebody that their baby is ugly, but I have seen a lot of really bad PowerPoint. Babies are wonderful.

MB: Yes.

SO: It’s so bad. So why does the PowerPoint exist, and how do we work around that? And also, are you delivering in multiple languages? Because if so, we need a way to localize this efficiently, and we’re right back to the structured content piece.

MB: And as soon as you’re talking about translation with PowerPoint, it is the poster child of pixel-perfect placement. As soon as I take a perfectly placed pixel product and have to translate it from English to, let’s just say, French, just the growth of the text alone, now what was a perfectly placed pixel layout, my beautiful slide, is now a jumbled mess. So just because you can doesn’t mean you should. And the thing is, PowerPoint and Microsoft Excel are the duct tape that runs business. Everybody has it. Everybody uses it. That’s the reality.

Now, the thing is, does everything have to be structured? I don’t believe it has to be. There are absolutely the one-off snowflake instances where, you know what? PowerPoint is the exact right tool for the job. Maybe it’s the one-off presentation that really is not going to see any reuse, it’s expendable, it’s disposable. We need to get the information communicated quickly. I’m going to fire up PowerPoint. I’m going to use it as, I’m going to do air quotes, “my throwaway content,” because it’s something that is short, sweet, and needs to be communicated, absolutely. I’m not, and I don’t think you are either, saying that PowerPoint has to go away. It’s the when is it appropriate and when is it not?

SO: I mean, I am the queen of the one-off can never be reused content being developed in, now I refuse to use PowerPoint, but in slideware for a short presentation, so the next one of you that’s listening to this and walks up to me at a conference and says, “Oh, is your presentation structured content?” No, it is not. Thank you for asking. Why isn’t it structured? Because I don’t reuse it at scale. Because in fact, every presentation at every conference is a special snowflake and has been lovingly handcrafted by me to deliver the message that I need, the context that I need, potentially the language, but to your point, even if I’m not localizing the presentation itself, the cultural context matters. So if my audience is largely English-speaking or primarily English, or… I mean, we’re going to Atlanta for LavaCon, that is going to be mostly a US-based audience, and maybe we get some Canadians, eh. And other than that… But mostly US and a US context. Will I be using excessive amounts of images from the Georgia Aquarium? Yes, I will.

Now, when I go to conferences elsewhere, so let’s take tcworld in Germany in November, that audience is, we’re delivering content in English, and the audience ranges from perfect English speakers to sort of barely hanging on. And so my practice at a conference like that is to include more text on my slides because if I include some additional text, it gives the people that are not quite as comfortable in English, a little bit more scaffolding to hang onto as they’re trying to follow my ridiculous analogies and insane references to cultural things. I also do try to pay attention to the kinds of words that I’m using and the kinds of idioms that I’m using so that they’re just not completely lost in space or things are not coming from left field or whatever. So the context matters, and no, my presentations are not structured.

But pulling this back, let’s talk about the potential. So when we look at learning content and you think about saying, okay, we’re going to structure our learning content or we’re going to structure some of our learning content, what does that mean in terms of what gets enabled? What are the possibilities? What are the things that you can do with structured learning content that you cannot do in unstructured, by which I mean PowerPoint, but unstructured, locked-in content? If we break this stuff into components and we deliver on structured learning content, what are the ideas there? What are the possibilities?

MB: Well, as you’re explaining the PowerPoint point of view, a word that came up a few times was scale. I’m not having to do it at scale. Effectively, it is a one-off. Yes, I’m going to personalize it for the audience, and the degree of personalization and customization that you’re doing per conference, per audience, per default language that they’re speaking, you’re able to scale that to the degree that you need to. There’s no need for you to put your content in DITA and localize it and do all the things that you need to do. So it’s really that phrase, at scale, that I think is the key.

It’s when you hit that tipping point where the desktop tools that you’re using today, and we can say this with tech communications as well, I was using Word and Excel and copy and pasting and keeping things in sync, it works until you get to a tipping point where the scale no longer is sustainable. That same exact problem exists in training. So when you’re looking at things like, I have my training content that when I deliver it in California, I have to put my Prop 65 note in everything because Lord forbid, as soon as I step across the state line into California, everything that’s around me is going to give me cancer. Prop 65 is the default thing that you see plastered everywhere.

So do I need to customize my content for delivering in California? Perhaps. Maybe different states have different regional laws or policies that apply to only that audience. That’s where that mass customization and mass personalization are really hard to scale because now you don’t have just one course, you have potentially 50 courses, if I’m just talking about the US, 50 states, 50 courses, and I have to have 50 different variations, which means that not if something changes, but when something changes, now I have to open up and change 50 different courses, and it’s not, did I miss anything? It’s, “What did I miss?” That’s the thing that you wake up in the morning in a cold sweat of, “Oh my God, what did I miss?”

So why structured for learning? Largely when you get to that tipping point where you’re copy/pasting, and I call it the copy/paste/publish treadmill, when you are on that hamster wheel of copy/paste/publish, copy/paste/publish, and that is the majority of what you’re doing, and you’re looking at a pie chart of how much time is spent maintaining your courses or taking a base course and creating all the variations, that precious PowerPoint that is the handcrafted bespoke one-off, you can’t do that anymore. That’s the equivalent of, you look at a Lamborghini, how many do they make a year? They can afford to make a very small number per year because they’re really expensive to make. When you look at a Ford Mustang, which probably gives you 80% of the performance at a fraction of the cost and scales exponentially well beyond, it’s because they’ve taken that structured approach of, every frame’s the same, every hood’s the same, very few handcrafted things. And the things that are going to be handcrafted, that’s when I go order the special edition Shelby Cobra that has some handcrafted components put onto the basic structure. That’s that same metaphor applied.

So why structured content? Because I want to have modular content that can be reassembled really quickly, that I may have chunks that are reused so that when I need to slip in my Prop 65 disclaimers, I can do that at scale and have 50 variations of a course, but when it comes time to update it, I’m literally updating one or two things and it’s automatically updating all 50 courses and of course all the efficiencies of publishing things out in a structured format.
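
As an aside for readers who want to see the mechanic, the reuse model Mike describes can be sketched in a few lines. The chunk names, course contents, and three-state list below are all invented for illustration; real structured-content systems (DITA-based CCMSs, for example) implement this idea with maps and content references rather than Python dictionaries.

```python
# Toy illustration of modular reuse: course variants reference shared
# chunks by ID instead of copying the text, so one edit to a chunk
# propagates to every variant that references it.

chunks = {
    "oil-change-steps": "Drain the oil, replace the filter, refill.",
    "prop-65-notice": "WARNING: California Prop 65 disclosure goes here.",
}

# Each state's course is a list of chunk references, not copied text.
courses = {
    state: ["oil-change-steps"] + (["prop-65-notice"] if state == "CA" else [])
    for state in ["CA", "GA", "FL"]  # abbreviated; imagine all 50 states
}

def render(state):
    """Resolve chunk references into the delivered course text."""
    return "\n".join(chunks[ref] for ref in courses[state])

# When something changes, you update the one shared chunk...
chunks["oil-change-steps"] = "Warm the engine, drain the oil, replace the filter, refill."

# ...and every course that references it picks up the change automatically.
assert all("Warm the engine" in render(s) for s in courses)
```

The point of the sketch is the inversion: the update happens in one place, and the 50 variants are assembled views, so "what did I miss?" stops being a question.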

So that pixel-perfect placement, I’m going to give that up to stay sane so I can get home and have dinner with my family, because the amount of time that I’ve spent in my life doing pixel-perfect placement and updating things, God, I wish I could hit the way-back machine and reclaim all that time in my life. Guilty as charged. Show of hands of anybody who’s listening, how many times have you sat there and fiddled with a slide or a text box in InDesign to get it just right, and then two days later, something changes and you’re back there spending 10, 15 minutes fiddling it in just right again? So, as I affectionately like to say, I’m a recovered FrameMaker, InDesign, PowerPoint, and Word user, because I want to author in a structured format so that I am giving up the responsibility of layout and look and feel.

SO: I like to tell people, “I’m not lazy, I’m efficient.” The fact that I don’t want to do it is just a bonus; I can get out of doing all this work.

MB: That’s right, that’s right.

SO: Because we are not allowed to leave any podcast without covering this topic, what does it look like to have AI in this context?

MB: There are two sides of the AI coin from a content perspective, I think. One is, “How can AI help me do my job better to create content?” When we’re looking at duplication of content, something AI can do really well, working smart, not hard, is help me find things that already exist in my repository of structured content that look like this, that are really close. The human in the loop: helping me deduplicate, or helping me not create new unnecessary variations of content. I think that’s one area of AI-based assistance for content creation that people may not necessarily be thinking about. Because right now, the easy one is, “Hey, ChatGPT, help me write an introduction or an overview for the following,” and it spits that out. That’s great, but that overview and that content may have already been written by somebody else, and so what ends up happening is you start generating content drift, where it’s almost exactly the same but just slightly different. And in reality, yes, I could have used the one that was already there.

So I think that’s one of the areas where AI from a content authoring perspective is one that I’m really excited about. Because at the end of the day, and this leads us into the second part of AI, AI is only as good as what you feed it, and if you feed it junk food, you’re going to get junk results. So it’s that whole thing of do you eat healthy food or are you going to eat Cheetos? If you’re pointing your AI at a SharePoint repository and saying, “Hey, read all of this,” and all the content shifts and variations and content drift and out-of-date and perhaps out-of-context content that exists inside of that repository, your results are not going to be as accurate as they need to be. So, how do you ensure that AI is providing good results? Well, you feed good content.

And so within an organization, I think the two silos that we started our conversation with, technical communications and L&D, tend to have some of the most highly vetted, highly accurate, up-to-date content in an organization. And so this is my encouragement to everybody who’s in this space: you are the owners of the good, highly nutritious food that you can feed your AI. So taking it back to the structured content perspective, if I’m authoring in structured content, publishing it out in a format that is AI-ready, all of your tags, all of your enrichments, all of your, here’s the California version of the content versus the Georgia or Florida version of the content, all of that context and enrichment and tagging that’s gone on, you’re now feeding AI all of that context so that AI can provide the proper answer. So that’s my short and sweet for the AI side. We could talk for probably days on all sorts of other variations, but right now, that’s where I’m seeing the biggest impact that it’s going to have on techcomm and L&D.

SO: I think that’s a great place to wrap it up. And I want to say thank you for being here and for a great conversation around all of these issues, and we will reconvene at a future conference somewhere to cause some more trouble and talk some more about all of these things. So Mike, thank you.

MB: You are welcome. And yeah, I think the next conference where we’re going to see each other is going to be LavaCon, so I’ll be talking in and around the convergence of L&D and techcomm and what life can look like with that. So certainly a deeper dive and continuation of what we started here, and super excited to sit in on your session as well.

SO: Yep, super. I will see you there. I’m pretty sure I’m doing one on the same topic, but it will be more complaining and less positive, so that seems to be my role. Okay, with that, thank you everybody, and we’ll see you on the next one. 

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

Questions about this episode? Ask Sarah!

The post From PowerPoint to possibilities: Scaling with structured learning content appeared first on Scriptorium.

Every click counts: Uncovering the business value of your product content (Mon, 11 Aug 2025) https://www.scriptorium.com/2025/08/every-click-counts-uncovering-the-business-value-of-your-product-content/ Every time someone views your product content, it’s a purposeful engagement with direct business value. Are you making the most of that interaction? In this episode of the Content Operations podcast, special guest Patrick Bosek, co-founder and CEO of Heretto, and Sarah O’Keefe, founder and CEO of Scriptorium, explore how your techcomm traffic reduces support costs, improves customer retention, and creates a cohesive user experience.

Patrick Bosek: Nobody reads a page in your documentation site for no reason. Everybody that is there has a purpose, and that purpose always has an economic impact on your business. People who are on the documentation site are not using your support, which means they’re saving you a ton of money. It means that they’re learning about your product, either because they’ve just purchased it and they want to utilize it, so they’re onboarding, and we all know that utilization turns into retention and retention is good because people who retain pay us more money, or they’re trying to figure out how to use other aspects of the system and get more value out of it. There’s nobody who goes to a doc site who’s like, “I’m bored. I’m just going to go and see what’s on the doc site today.” Every person, every session on your documentation site is there with a purpose, and it’s a purpose that matters to your business.

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi, everyone, I’m Sarah O’Keefe and I’m here today with our guest, Patrick Bosek, who is one of the founders and the CEO of Heretto. Welcome.

Patrick Bosek: Thanks, Sarah. It’s lovely to be here. I think this may be my third or fourth time getting to chat with you on the Scriptorium podcast.

SO:  Well, we talk all the time. This is talking and then we’re going to publi- no, let’s not go down that road. Of all the things that happen when we’re not being recorded. Okay. Well we’re glad to have you again and looking forward to productive discussion here. The theme that we had for today was actually traffic and I think web traffic and why you want traffic and where this is going to go with your business case for technical documentation. So, Patrick, for those of you that have not heard from you before, give us a little bit of background on who you are and what Heretto is and then just jump right in and tell us about web traffic.

PB: No small requests from you, Sarah.

SO: Nope.

PB: So I’m Patrick Bosek. I am the CEO and one of the co-founders of Heretto. Heretto is a CCMS based on DITA. It’s a full stack that goes from the management and authoring layer all the way up to actually producing help sites. So as you’re moving around the internet and working with technology companies, primarily help.yourproduct.com or help.yourcompany.com, it might be powered by Heretto. That’s what we set out to do. We set out to do it as efficiently as possible, and that gives me some insight into traffic, which is what we’re talking about today, and how that can become a really important and powerful point when teams are looking to make a case for better content operations, showing up more, producing more for their customers, and being able to get the funding that allows them to do all those great things that they set out to do every day.

SO: So here we are as content ops, CCMS people, and we’re basically saying you should put your content on the internet, which is a fairly unsurprising kind of priority to have. But why specifically are you saying that web traffic and putting that content out there and getting people to use the content helps you with your sort of overall business and your overall business case for tech docs?

PB: Yeah. So I want to answer that in a fairly roundabout way because I think it’s more fun to get there by beating around the bush. But I want to start with something that seems really obvious, but for some reason isn’t in tech pubs. So first of all, if you went to an executive and you said, “I can double the traffic to your website,” and then you put a number in front of them, say a hundred thousand dollars, almost any executive at any major organization is like, “A hundred thousand dollars? Of course I’ll double my web traffic. That’s a no-brainer.” Right? And when they’re thinking of the website, they’re thinking of the marketing site and how important traffic is to it. So intrinsically, everybody pays quite a bit of money and, by transference, puts a lot of value on the traffic that goes to the website, as they should. It’s the primary way we interact with organizations asynchronously today.

Digital experience is really important. But if you went to an executive and you said, “I can double your traffic to your doc site,” they would probably be like, wait a second. But that makes no sense, because nobody reads the docs for no reason. I want to repeat that, because I think that’s a really important thing for us as technical content creators to not only understand, I think we understand it, but to internalize and start to represent more in the marketplace, to our businesses, and to the other stakeholders. People might show up at your marketing site because they misclick an advertisement. They might show up at your marketing site because they Googled something and a marketing blog post caught them and they looked at it. So there’s probably a lot of traffic where people are just curious. They’re just window shopping. Maybe they’re there by mistake. But nobody shows up at your documentation site by accident.

Nobody reads a page in your documentation site for no reason. Everybody that is there has a purpose and that purpose always has an economic impact on your business. People who are on the documentation site are either not utilizing your support, which means that they’re saving you a ton of money. It means that they’re learning about your product, either because they’ve just purchased it and they want to utilize it, so they’re onboarding, and we all know that utilization turns into retention and retention is good because people who retain pay us more money, or they’re trying to figure out how to use other aspects of the system and get more value out of it. There’s nobody who goes to a doc site who’s like, I’m bored. I’m just going to go and see what’s on the doc site today. So every person, every session on your documentation site is there with a purpose and it’s a purpose that matters to your business. So that’s why I want to start. That’s why it matters. That’s why I think traffic is important, but you look like you want to contribute here, so.

SO: We talk about enabling content. Right? Tech docs are enabling content. They enable people to do a thing, and this is what you’re saying. People don’t read tech docs for fun. I know of, actually, I do know one person. One person I have met in my life who thought it was fun to read tech docs. One.

PB: Okay. So to be fair, I also know somebody who loves reading release notes.

SO: Okay. So two in the world.

PB: But hang on, hang on. Part of the thing is this person is an absolute, can I say fanboy? They’re a huge fan of this product and they talk about this product in the context of the release notes. So even though this person loves the release notes, the release notes are a way that they go and generate word-of-mouth, and they’re promoting your product because of the thing they saw in the release notes. The release notes are a marketing piece that goes through this person. All the people who are your biggest fans are going to tell people about that little thing they found in your release notes. Sorry. Anyways.

SO: So again, they’re trying to learn. Okay. But, so two people in the universe that we know of read docs for fun. Cool. Everybody else is reading them, as you said, for a purpose. They’re reading them because they are blocked on something or they need information, usually it’s they need information. And then you slid in that when they do this, this is producing, providing value to the organization or saving the organization money. So what’s that all about?

PB: Well, I mean there’s a number of ways to look at this. You want to start with the hard numbers, the accounting stuff, the stuff you can take the CFO. That stuff is actually, it’s pretty easy to do. You can do it in just a couple of lines. So every support ticket costs a certain amount of money. Somebody in your organization knows that number, if your organization is sufficiently large and sufficiently large is like 20 people probably. Maybe that’s not that small, but if you’re a couple hundred people, everybody knows what that number is. So it’s very easy to figure out how much it costs when somebody actually goes to the support.

SO: Somewhere between $20 and $50 is kind of the industry average per call. You may have better numbers internally in your organization, but if you don’t, or you don’t know where to start, assume every call is $25.

PB: Yeah. $20, $25. A little more if you’re in a complex industry. The reality is that when you start comparing it to how much you spend answering a question with content, it’s kind of like, oh, is it a thousand times cheaper or is it 2,000 times cheaper? So it’s not really that big of a difference. The cost of answering a question with content is also pretty straightforward. All you really need to know is how much you are spending on your content, which is typically just the combination of the people and the tools, the content operations stack that you’re using to get that content out in front of people. And then the page views. I mean, fundamentally, exclude search from your page views, take the home page out of your page views, filter out section pages if you can, so you’re just looking at actual content pages. And then you have to pick a resolution rate.

Obviously, if you don’t have any better metrics, you could say 100%. That’s probably too high, maybe unreasonable, but it’s very simple and it makes the equation easier. If you want to say that 50% of people who read what you’d consider to be a content page resolve their issue, that’s probably too low. So pick a number between those two, run the multiplication, and you’re going to find that in most situations it costs you less than a penny, typically way less than a penny, to answer a question with content as opposed to the $25. That’s the pure economic math of it. There’s more though.
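The back-of-the-envelope math described above can be written out explicitly. Every number below is an illustrative assumption (the spend, page views, and resolution rate are made up to show the shape of the calculation, not real benchmarks):

```python
# Back-of-the-envelope comparison from the discussion above.
# Every figure here is an illustrative assumption, not a real benchmark.

annual_content_spend = 500_000          # people + content ops tools, per year
annual_content_page_views = 75_000_000  # page views after filtering out search,
                                        # the home page, and section pages
resolution_rate = 0.75                  # assumed: between the 50% floor and the
                                        # 100% ceiling mentioned above

questions_answered = annual_content_page_views * resolution_rate
cost_per_content_answer = annual_content_spend / questions_answered

cost_per_support_call = 25.0            # rough industry average per call

print(f"Cost per question answered by content: ${cost_per_content_answer:.4f}")
print(f"Cost per support call: ${cost_per_support_call:.2f}")
print(f"Support is roughly {cost_per_support_call / cost_per_content_answer:,.0f}x more expensive")
```

With these made-up inputs, a content-answered question comes in under a penny and a support call is thousands of times more expensive; your own numbers will differ, but the ratio tends to stay lopsided.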

SO: Okay. So yeah, we did some math and we’re basically saying, looking at this in a tech support centric way, usually we talk about call deflection. Right? So the idea is that every time somebody does not call tech support, you save $25 and spend a penny, a fraction of a penny instead, which seems good. Now interestingly to me, I think, you can look at this as the first time somebody hits that site and hits a content page, costs really a lot of money. Right? Because the people and the tools and the setup and the publishing, but then the next one is zero.

So you’re replacing sort of an upfront planned cost with a recurring cost because every time somebody calls, it’s another 25 or 50 or whatever dollars. So there’s a huge scalability argument here, and I can make a decent case for if you are a startup, a day one startup, you have no content, you have nothing, you have no infrastructure, cool. Hire a tech support person. Let them do their thing for maybe a year, and then look at the top 10 queries that they had and write some docs and deflect off those top 10 queries and handle it that way. But most of our customers, speaking for both of us, are medium to large to incredibly large organizations that have content. We’re not talking about the you have nothing start from scratch scenario.

PB: A hundred percent. When you’re really thinking about where you get the value, both on the accounting side, like saving money, so bottom line stuff, and then also the customer experience, which I think is worth getting into in a minute, that’s really going to take place when you start scaling up. I agree with you that a startup style organization should write content. Even small organizations benefit from it. I think small organizations actually benefit in a slightly different way than the deflection, which is the word you’re using. And I’m going to come back to that because I have a pet peeve with that word, but I’ll use it for now because we’ve been using it. I think that the value a smaller organization gets is not in the deflection, but actually in the presence. So if you’re trying to show up and you’re trying to compete with larger organizations and you’re doing something technical or considered to be highly important, so you’re in a high technology industry, your buyer is going to go look at your documentation. They’re going to look at your competitor’s documentation as well.

And if your documentation appears to be not that great, it’s very thin, there’s not a lot there, that’s going to be a factor in a buying decision. And I know everybody kind of like, yeah, but it really is, and I can tell you because we’re not a huge organization, that we’ve won deals because our docs were better. We invest in it, as we should. We’re a documentation tool. You know? So it does matter at the smaller end, even if you can’t build a really scalable content operation stack that you probably don’t need.

SO: Now, personally, I’m okay with deflection, and I’ll also say that the key thing here is that if you’re doing additional research on this as a listener, call deflection is sort of the industry term that will help you in your Google/AI searches. But tell us about why call deflection is bad and evil.

PB: Okay. So that is true. If you are talking to executives, you should probably say deflection, but maybe forward-thinking executives would appreciate why I think deflection is a bad term. I think we should use, you’re shaking your head at me. Fine. I think we should be talking about call avoidance. And the reason I think this is that when most people think about deflection, they’re thinking about it as being very reactive, and it’s that box that pops up when you’re trying to put a support ticket in that’s like, well, have you already looked at this? And by the time someone has arrived at your support site and they have decided that they want to interact with a human, they are annoyed. They don’t want to be there. Nobody visits the support site because they want to. They have made the emotional commitment that they’re going to go and deal with one of your human beings to solve their problem, which is not something they planned on doing today. Nobody wanted to do this when they got up in the morning. So you’ve already failed. And at that point in time, the best thing you can do is get them to a human efficiently without sticking things in front of them and trying to deflect them. So that’s why I don’t like deflection.

Avoidance means that call never happened. They Googled it because Google is tier zero support for everybody, even if yours is bad. They got an answer, or they ChatGPT-ed it, different topic, there’s problems there. But probably they Googled it. They got an answer very quickly. They solved their issue. You never heard about it. It cost you a fraction of a penny. They had a great experience. It’s how they prefer to get their information, and you avoided the support call rather than trying to deflect them to save yourself a couple of bucks after they were already annoyed, which breaks their customer experience.

SO: Yeah. I’m on board with that. It’s just that terminology-wise, we’ve got to work with what we’ve got. But I would agree that avoiding the call in the first place is better, and I talk about how when people call tech support, they’re mad. If you think about the emotional state of your customer, the person calling tech support is angry. There’s also the issue that they asked ChatGPT and it said something wrong, and then they call up tech support and yell at you because ChatGPT was wrong, which is, that’s a whole other podcast. So let’s just set that aside for a moment, but okay.

PB: Maybe you’ll invite me back. We can talk about that.

SO: Yeah. So the, it’d be a long podcast and we’ll have to lift our no profanity rule for that one just to get through the topic.

PB: Oh. Special edition.

SO: Special edition. Okay. So you were talking about the value though of a documentation site and we’ve sort of paired it with tech support and with this avoidance, deflection, get them the answers that they need before they get angry at the product. Right?

PB: Yeah. For sure.

SO: How does the customer experience tie into that? And what is the value of the customer experience?

PB: So the value of the customer experience is subjective, but every organization already has an opinion on it. Some organizations place a lot of value in customer experience, have done a lot of work to tie customer experience to the metrics and analytics and things like that they use to track financial performance. Other organizations less. So the first thing I would say is go and see where your organization is relative to their thinking on customer experience. But, as you’re talking about customer experience, other than the support, which I think we’ve covered that quite a bit, for someone who’s showing up your documentation site, really what it’s touching on is a couple of things, what they’re trying to get to. So there’s the discovery aspect of it. And this can be very, very simple or it can be very, very complex. The simple one I like is like let’s say you sell gym equipment and that gym equipment goes out to people who own gyms, as it would make sense, and they’re going to go and they’re thinking of buying a new treadmill or something from you.

They’re going to want to know, is this going to fit in my gym? Can the power I have set up work with it? What are the other details of this product? And then how much information is there to service it? So somebody, once they get past the whole like, okay, I kind of like this brand, maybe this is a good thing, it’s kind of cool, they’re going to go into the documentation because they’re making a purchase that matters to them. And having confidence and trust in the product based on the depth of information that they get prior to purchasing it is a major factor. And this only increases as the economic value increases and as the implementation becomes more system critical. So there’s a discovery, evaluation, and confidence aspect, those are the three things I think of, to your documentation or your help site that is there, even if you’re not thinking about it, even if it’s not coming up directly in sales conversations. I promise you, because I have the data, that people are doing this during the process of deciding if they want to work with your organization. And that’s the kind of pre-customer experience that’s really, really critical, that most organizations are just not thinking about, and they’re probably leaving a lot on the table relative to their competitors, where they could either have an advantage or be behind.

SO: There was a study. It was a while back, maybe five or 10 years ago that came out from, I always have trouble finding it. It was either PwC or IBM. The gist of it was that 80% of people that were buying consumer products were doing pre-buying research, the technical research. So they were looking at specs and they were looking at how do I install this thing and various other things that we consider to be not marketing information. They were looking at what is traditionally labeled post-sales documentation.

PB: Yeah. Because people care. And the other thing too is like as we move into an economic environment where people are more careful about what they’re spending on, they’re only going to do more research to make sure the things they’re buying are things that are going to last and be supported. I bought a pair of headphones a year ago, and I have an issue with one of them. They’re like the ones that go in your ears, one of them’s not working. I ended up going to the documentation to try to figure it out, and the documentation was so bad I could not make heads or tails of it. And I just gave up and I was like, okay, if I had spent hundreds of dollars on these, I’d go through the process, but they were like 30 bucks or whatever. But I’m never going to do business with that company again. Ever.

If I see another one of the products, I will never buy it. So they don’t know about that experience. But if you have, not even just bad content operations, because frankly their site was, it was kind of nice, it wasn’t bad. I think it could have been better, but you know, funny that I would have that opinion, but it was really the information architecture, so it was kind of the stuff that Scriptorium, Sarah, you guys would help them with. It wasn’t so much they had bad tools. They had terrible organization, and the content was, I’m not allowed to swear, which I wouldn’t anyways…

SO: Sorry.

PB: … but the content was, think of your word, it was bad. It was completely unhelpful. So you can have the best content operations in the world, but without the right information architecture, who cares?

SO: Yeah. This is the infamous if a tree falls in the forest. You know, and to your point, A, the company doesn’t know that your headphones are broken and you’re unhappy, but you just told me, and the next time I’m in the market to buy a pair of headphones, I’m going to remember this story and I’m going to call you, I won’t call you. I’ll send a text and say, hey, what brand was that? Right? And you’re going to tell me and then I’m going to not buy them. So the impact of this failure of documenting, well, actually it’s a product fail, right, but also of support. Because if they had come back with, oh, we’re so sorry, send them in or we’ll send you a new pair, whatever, they could have rescued this encounter, but they didn’t. So the next thing that’s going to happen is that you and every single one of your friends that hears the story will never buy that brand.

PB: Right.

SO: So as we talk about this, the really critical point here though is, I think, there are a bunch of really critical points, but the one that I really want to zoom in on is that the content has to be there. Right? You have to have helpful content that solves the problem that a person is on the website for. And, in your case, it might have been, oh, sometimes this happens and you have to repair them, or you have to this or you have to that, you know, press all these weird buttons in this weird sequence and sacrifice the chicken and stand on your head. Cool.

PB: Right. Which I would’ve done.

SO: Which you would’ve done. But the bigger problem is that you went to their website and we don’t actually know whether or not this problem is fixable because you didn’t find it. Right? You didn’t find the answer. And that means that it’s sort of like a last mile problem. I can write all these really good procedures, they can be super accurate, they can be amazing, blah, blah, blah, blah, blah. You come onto my website and you can’t find the answer to your question, it exists, but you can’t find it, right, you, the customer, and so it fails. And now you either, A, tell all your friends that company XYZ is terrible, or, B, you call tech support and you’re mad. Right?

PB: Yeah.

SO: That’s actually the best outcome.

PB: It is. Yeah.

SO: Yeah. And interestingly, we’ve got some, I’ll be very non-specific, but we have a project right now where one of the top tech support topics, you know how you look at what are the top 10 things that people call and ask about, and it’s like, my headphones aren’t working, or how do I return this or whatever. One of the most common reasons that people call their tech support is to ask, where is the documentation? I can’t find it.

PB: Do you have any idea how common that is? I mean, you probably do, but it’s so common.

SO: Yeah.

PB: And we’ve started doing this thing in the process of helping people think through this where we have a very simple tool that we use. It’s a sheet, happy to share it with anybody, and a process where you effectively go through and you just do a very simple 15 to 30 minute interview with X number of support people. You know, we recommend three to five. Some people do more. And you just go through the last 10 support cases, the ones that they worked on, and there’s a few things you mark off, but the idea is to do lightning round, very, very quick. And could this be solved by documentation? And the amount of it that is just looking for documentation, I can’t find it, is so funny. And you’re like, I think that’s a problem. And people are like, wow. But you can’t blame them because people don’t think about these things and it doesn’t make sense that you would because it’s non-obvious, and I think that’s one of the really critical things I want to leave people with.

And I have one other thing that I, you know, we’ve been talking for a while that I want to let people go soon, but this one, I want to zoom in on this for a second. People shouldn’t feel bad that they haven’t thought about this. They shouldn’t feel bad that they haven’t thought about the value of the traffic, the impact of the traffic, the customer experience side of it, the cost ratio of the traffic relative to support people. It really isn’t that obvious. And there’s so much momentum around the way that we’ve done business in having people solve problems for other people in direct communications that even if that isn’t ideal, that’s just the way it’s done and that’s what feels obvious. So don’t feel bad about not having thought about this if you haven’t. Your colleagues shouldn’t either. But it is the way the world is moving, and I think it’s critical to start thinking about it now.
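The lightning-round audit Patrick describes (interview a few support agents, review each agent’s last 10 cases, mark whether documentation could have solved each one) boils down to a very small tally. Here is a minimal sketch; the field names and sample rows are entirely hypothetical:

```python
# Tally for the lightning-round support-case audit described above.
# Sample data is hypothetical, not real support cases.

reviewed_cases = [
    # (agent, case_id, solvable_by_docs, caller_just_could_not_find_docs)
    ("agent1", "C-101", True, True),
    ("agent1", "C-102", True, False),
    ("agent1", "C-103", False, False),
    ("agent2", "C-201", True, True),
    ("agent2", "C-202", True, True),
]

total = len(reviewed_cases)
doc_solvable = sum(1 for _, _, solvable, _ in reviewed_cases if solvable)
find_docs = sum(1 for _, _, _, cant_find in reviewed_cases if cant_find)

print(f"{doc_solvable}/{total} cases could have been solved by documentation")
print(f"{find_docs}/{total} cases were just 'where is the documentation?'")
```

Even at this toy scale, the point of the exercise shows up: the share of cases that are really documentation findability problems is often surprisingly large.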

SO: Yeah. And you started this by talking about customer experience needing to be asynchronous. People can get the stuff, self-service when they want it and digital as opposed to call somebody on the phone. So let’s sort of wrap this up and say, what’s your advice to people that know that they’re struggling with this? They know that they have huge tech support volumes and nobody’s happy. And I mean, we know we have a problem. So where should they start? What’s the first step that they can take to begin attacking this thing in a way that will lead to forward progress within a large organization that has as their informal motto, oh, they can just call tech support.

PB: Yeah. So I would say buy-in is always step one, and that means that there’s going to be some selling that has to happen at the organization. You have to get people to recognize the value, the potential, and also the ability to achieve it. So it’s those things when they come together, there can be a ground swell where people are going to actually support these projects and fund them and get involved, and then you’ll have really successful projects. One of the big challenges with getting that buy-in historically has been that there’s no precedent. So when you’re looking for a better website, you already have a website. What if you increase traffic by 10%? You know, people can start to draw some lines between that and sales or the bottom line or value, those types of things. And oftentimes, even organizations that I would say are somewhat up the maturity curve in terms of tech pubs, they don’t have any metrics about their site, like how many people come to it? I don’t know.

They just don’t track it. So there’s not this historical precedent of metrics that can be tied back to results, and that can create some issues. So the advice that I give organizations in that situation is: if you are in a technology field and you have a relatively complex product, so something that breaks, that isn’t always obvious how to use, that people would need to learn about for some reason, what our data shows from having done this many times with organizations that fit that profile is that a well-implemented documentation help site, whatever you want to call it, gets about as much traffic as the dot com, the primary marketing site. It tends to be plus or minus 15%. We’ve actually seen as high as 65% of the total traffic between the two sites being on documentation.

That’s a bit of an outlier, but so is 30%. You know, we’ve seen that too. So if you want to be conservative, say you’ll get 40% of the total traffic. So four sessions for every six on a marketing site. If you want to use what we tend to see on average, just say it’s one for one. If it’s one for one and you don’t have metrics, that’s a target. And you have to ask the internal question, what’s the value of that? If we get a hundred thousand sessions per month or per year or whatever on the marketing site, what if we had a hundred thousand sessions on the help content? Well, those people are there for a reason. Remember? They’re there because they’re not calling support. They’re there because they’re onboarding and using our system better, or they’re there because they’re trying to figure out if our stuff’s going to work for them.

So like how valuable would that be? And once you get the organization to a place where they’re like, oh, that would actually be quite valuable, could we get that, I think 80% of the work is done. Well, 80% of the work of getting started is done. And then you probably call somebody like Scriptorium, or Scriptorium specifically, if you’re not familiar with this, and you start the process of actually thinking through how to do it. But I do think that organizational buy-in, and getting people in the right head space to think about the value of this, is step one, and that’s the process I use for it.
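The sizing exercise above takes one line of arithmetic. The marketing-site session count below is hypothetical; the 40% conservative share and the one-for-one average are the heuristics from the discussion:

```python
# Docs-site traffic target derived from marketing-site sessions, per the
# heuristics discussed above. The session count is a hypothetical input.

marketing_sessions_per_month = 100_000

def docs_sessions_target(marketing_sessions: int, docs_share: float) -> int:
    """Docs sessions implied by docs_share of combined (docs + marketing) traffic."""
    return round(marketing_sessions * docs_share / (1 - docs_share))

conservative = docs_sessions_target(marketing_sessions_per_month, 0.40)  # 4 : 6 ratio
average = docs_sessions_target(marketing_sessions_per_month, 0.50)       # one for one

print(f"Conservative docs target: {conservative:,} sessions/month")
print(f"Average docs target: {average:,} sessions/month")
```

At a 40% share of combined traffic, 100,000 marketing sessions imply roughly 67,000 docs sessions; at one for one, the target is simply another 100,000 sessions to put a value on.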

SO: Yeah. I think I would agree with all of that, especially the part where they should call us.

PB: Go figure.

SO: But the key thing in here is, and you said this a different way, but changing the momentum, right, getting organizational buy-in, getting people on board with this concept. The other thing I’ll say is that ultimately one of the biggest problems we face in content ops is that so much of it is invisible in the sense that we’re going to refactor this and we’re going to do it better, and we’re going to produce it faster and we’re going to automate, okay, great, but you’re still producing the same thing. One of the most powerful things we can do early in the process is say to people, look at this portal that we can deliver. Look at this experience that we can deliver. It’s not the first thing or the only thing or even necessarily the most important thing we need to do, because the portal has to have content. Right?

PB: Yeah.

SO: I mean, it’s kind of a chicken and egg thing, but showing people the vision of what can be works typically much, much better than saying we should do structured content because it will help automate things and speed up time to market. That’s all behind the scenes, and it’s not visual and nobody cares. I mean, people care, but it’s hard to visualize. So, okay, I think we’ve promised people a whole bunch of resources. We will put those in the show notes. I’m quite certain that we could go on for a very long time about this topic, but I am going to wrap it up there ’cause I feel like we hit a good starting point for people.

PB: Yeah.

SO: So if there are other questions, I would say reach out to me or to Patrick, because I know we’ve only scratched the surface on this thing. Patrick, thank you for being here.

PB: Of course. Always a blast.

SO: Always good to see you. And we will wrap this thing up, and thanks for being here. Feel free to reach out if you have any other questions.

The post Every click counts: Uncovering the business value of your product content appeared first on Scriptorium.

AI in localization: What could possibly go wrong? (podcast)

https://www.scriptorium.com/2025/08/ai-in-localization-what-could-possibly-go-wrong-podcast/

Mon, 04 Aug 2025

In this episode of the Content Operations podcast, Sarah O’Keefe and Bill Swallow unpack the promise, pitfalls, and disruptive impact of AI on multilingual content. From pivot languages to content hygiene, they explore what’s next for language service providers and global enterprises alike.

Bill Swallow: I think it goes without saying that there’s going to be disruption again. Every single change, whether it’s in the localization industry or not, has resulted in some type of disruption. Something has changed. I’ll be blunt about it. In some cases, jobs were lost, jobs were replaced, new jobs were created. For LSPs, I think AI is going to, again, be another shift, the same that happened when machine translation came out. LSPs had to shift and pivot how they approach their bottom line with people. GenAI is going to take a lot of the heavy lifting off of the translators, for better or for worse, and it’s going to force a copy edit workflow. I think it’s really going to be a model where people are going to be training and cleaning up after AI.


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe, and I’m here today with Bill Swallow.

Bill Swallow: Hey there.

SO: They have let us out of the basement. Mistakes were made. And we have been asked to talk to you on this podcast about AI in translation and localization. I have subtitled this podcast, What Could Possibly Go Wrong? As always, what could possibly go wrong, both in this topic and also with this particular group of people who have been given microphones. So Bill.

BS: They’ll take them away eventually.

SO: They will eventually. Bill, what’s your generalized take right now on AI in translation and localization? And I apologize in advance. We will almost certainly use those two terms interchangeably, even though we fully understand that they are not. What’s your thesis?

BS: Let’s see. It’s still early. It is promising. It will likely go wrong for a little while, at least. Every new model that translation has adopted has first gone wrong before it was corrected and went right. But it might be good enough. I think that pretty much sums up where I’m at.

SO: Okay. So when we look at this … Let’s start at the end. So generative AI, instead of machine translation. Let’s walk a little bit through the traditional translation process and compare that to what it looks like to employ GenAI or AI in translation.

BS: All right. So regardless of how you’re going about traditional translation, there is usually a source language that is authored. It gets passed over to someone who, if they’re doing their job correctly, has tools available to parse that information, essentially stick it in a database, perhaps do some matching against what’s been translated before, fill in the gaps with the translation, and then output the translated product. On the GenAI side, it really does look like you have a bit of information that you’ve written. And it just goes out, and GenAI does its little thing and bingo, you got a translation. And I guess the real key is what’s in that magic little thing that it does.

SO: Right. And so when we look at best practices for translation management up until this point, it’s been, as you said, accumulate assets, accumulate language segment pairs, right? This English has been previously translated into German, French, Italian, Spanish, Japanese, Korean, Chinese. I have those pairs, so I can match it up. And keeping track of those assets, which are your intellectual property, you as the company put all this time and money into getting those translations, where are those assets in your GenAI workflow?

BS: They’re not there, and that’s the odd part about it.

SO: Awesome. So we just throw them away? What?

BS: I mean, they might be used to seed the AI at first, just to get an idea of how you’ve talked about things in the past. But generally, AI is going to consume its knowledge, it’s going to store that knowledge, and then it’s going to adapt it over time. When it’s asked for something, it’s going to produce it with the best way it knows how, based on what it was given. And it’s going to learn things along the way that will help it improve or not improve over time. And that part right there, the improve or not improve, is the real catch in why I say it might be good enough but it might go wrong as well, because GenAI tends to … I don’t want to say hallucinate because it’s not really doing that at this stage. It’s taking all the information it has, it’s learning things about that information, and it’s applying it going forward. And if it makes an assumption based on new information that it’s fed, it could go in the wrong direction.

SO: Yeah. I think two things here. One is that what we’re describing applies whether you have an AI-driven workflow inside your organization where you’re only allowing the AI to access your, for example, prior translation. So a very limited corpus of knowledge, or if you’re sending it out like all of us are doing, where you’re just shoving it into a public-facing translation engine of some sort and just saying, “Hey, give me a translation.” In the second case, you have no control over the IP, no control over what’s put in there and how it’s used going forward, and no control over what anyone else has put in there, which could cause it to evolve in a direction that you do or do not want it to. So the public-facing engines are very, very powerful because they have so much volume, and at the same time, you’re giving up that control. Whereas if you have an internal system that you’ve set up … And when I say internal, I mean private. It doesn’t have to be internal to your organization, but it might be that your localization vendor has set up something for you. But anyway, gated from the generalized internet and all the other people out there.

BS: We hope.

SO: Or the other content. You hope. Right. Also, if you don’t know exactly how these large language models are being employed by your vendors, you should ask some questions, some very pointed questions. Okay, we’ll come back to that, but first I want to talk a little bit about pivot languages. So again, looking at traditional localization, you run into this thing of … Basically many, many, many organizations have a single-language authoring workflow and a multi-language translation workflow. So you write everything in English and then you translate. So all of the translations are target languages, they are downstream, they are derived from the English, et cetera. Now let’s talk a little bit about… First of all, what is a multilingual workflow? Let’s start there. What is that?

BS: Okay. So yeah, the traditional model usually is author one language, which maybe 90% of the time is English, whether it’s being authored in an English-speaking country or not, and then it’s being pushed out to multiple different languages. In a multilingual environment, you have people authoring in their own native language, and it should be coming in and being translated out as it needs to be to all the other target languages. Traditionally, that has been done using pivot languages because infrastructures were built. It is just the way it is. It was built on English. English has been used as a pivot language more than any other language out there. There are some outliers that use a different pivot language for a very specific reason, but for the sake of this conversation, English is the predominant pivot language out there.

SO: So I have a team of engineers in South Korea. They are writing in Korean. And in order to get from Korean to, let’s say, Italian, we translate from Korean to English and then from English to Italian, and English becomes the pivot language. And the generalized rationale for this is that there are more people collectively that speak Korean and English and then English and Italian than there are people that speak Korean and Italian.

BS: With nothing in between, yeah.

SO: With nothing in between. Right. Directly. So bilingual in those two languages is a pretty small set of people. And so instead of hiring the four people in the world that know how to do that, you pivot through English. And in a human-driven workflow, that makes an awful lot of sense because you’re looking at the question of where do I find English … Sorry, not English, but rather Italian and Korean speakers that can do translation work for my biotech firm. So I need a PhD in biochemistry that speaks these two languages. I think I’ve just identified a specific human in the universe. So that’s the old way. What is a multilingual workflow then?

BS: So yeah, as we were discussing, the multilingual workflow is something where you have two, three, four different language sources that you’re authoring in. So you’re authoring in English, you have people authoring in German, you have people authoring in Korean and, let’s say, Italian. And they’re all working strictly in their native language, and those would go out for translation into any other target language. It’s tricky because the current model still uses a pivot language, but I think when we talk about generative AI, it’s going to avoid that completely. It’s going to skip that pivot and just say, “Okay, I know the subject matter that you’re talking about and I know the language that you’ve presented it in. Let’s take this subject and meaning and just represent it in a different language and not even worry about trying to figure out what does this mean in English. It doesn’t matter at this point.”

SO: Right. And so I think the one caveat here as we’re looking at this issue is to remember that GenAI in general is going to do better when it has larger volumes of content. And a lot of the generative AI tools are tuned for English. That’s kind of where they started. But it’s also useful to remember that GenAI is math. GenAI doesn’t really have a concept of knowledge or learning or any of these other things. It’s just math. So math is a language of its own, and we should be able to express mathematical concepts in a human language of choice. So there’s some really interesting stuff happening there. Okay. So stepping back a little bit from this, let’s talk about where this is coming from and the history of machine translation in translation localization. Where did we start? And isn’t it true that localization really was one of the leaders in adopting AI early on?

BS: It really was. So way, way, way back, you had essentially transcription in a different language. So people were given a block of text and asked to reproduce it in a different language, and they went line by line and just rewrote it in a different language. Then you start getting into the old-school machine translation or statistical machine translation. What this did was it kept, essentially, a corpus of the translations that you’ve done in the past, and it also broke down the information that you were feeding it into small segments. And it would do a statistical query, taking one segment from what your source said and throwing it out into its memory and say, “Okay, is there anything out here? Was this translated before? And give me a ranking of these results of what was done before.” And essentially, the highest result floated to the top, and it used that. Translators could modify those results over time based on actual accuracy versus systematic or statistical accuracy. But that is forever old. Over the past 10, 15 years, we’ve seen neural machine translation come out, which is getting a lot closer to AI-based translation. So it takes away the text matching and replaces it with more pattern matching. So it’s better at gisting. It will find, let’s say, a 95% match and can fill in those gaps for the most part, or at least say, “Hey, this gets us 95% of the way there. I’m going to put this out over here, and then the translator will essentially verify that translation going forward.” It’s a bit more accurate, but it still relies on this corpus of translation memory that you build over time. And now we’ve got generative AI machine translation, which completely takes everything that was done before, and it doesn’t necessarily throw it away, but it says, “Thank you for all the hard work you did. I will absorb that information and move forward.”
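The statistical machine translation step Bill describes, taking one segment from the source, querying the translation memory, and letting the highest-ranked result float to the top, can be reduced to a toy sketch. This is an illustration only: a simple string-similarity ratio stands in for the real statistical model, and all names and segments here are invented.

```python
from difflib import SequenceMatcher

# Toy translation memory: previously approved source/target segment pairs.
translation_memory = {
    "Press the power button.": "Drücken Sie den Netzschalter.",
    "Remove the battery cover.": "Entfernen Sie die Batterieabdeckung.",
}

def best_match(segment, memory, threshold=0.75):
    """Score every stored segment against the new one and return the
    highest-ranked candidate, or None below the fuzzy-match threshold."""
    scored = [
        (SequenceMatcher(None, segment, source).ratio(), source, target)
        for source, target in memory.items()
    ]
    score, source, target = max(scored)  # highest result floats to the top
    if score < threshold:
        return score, None, None  # no usable match; translate from scratch
    return score, source, target
```

An exact match scores 1.0 and reuses the stored translation; a near match (say 95%) gets handed to a translator to verify and correct, which is essentially the workflow Bill outlines.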

SO: Does it actually say thank you?

BS: It could. It depends on the prompt you use. But I mean, really, you’re looking at a situation where the generative AI model uses a transfer learning model to do the translation work. So it takes everything that it knows, applies it to what you feed it for translation, produces an output, learns a lot of things along the way in getting that translation to a point where you say, “Okay, great, thank you. This was good,” and then applies what it learned to the next time you ask. And it keeps doing that and doing that and doing that. The plus side is that, yes, you can train your generative AI to get really, really, really good if you train it the right way. If someone … And I am not saying it’s malicious or anything, but if you train your GenAI translation model to start augmenting how it translates, then you’ll start getting these mixed results over time because it’s going to learn a different way to apply your request to provide an output.

SO: So the question that I actually have, which I’m not going to ask you to answer because that would be mean, is whether AI is actually storing content in a language, like in English, or is language, in the case of GenAI, just an expression of the math that underlies the engines? You don’t want to tackle that, no. Moving on.

BS: Well, it’s worth poking at, at least, because… does GenAI actually do anything with the language that we give it, beyond producing answers? Whether we’re asking it to write a stupid limerick about a news event or asking it to summarize a document, does it care that it’s written in any particular language? I honestly don’t know.

SO: As meta as it is to ask the question, what is the math that underlies it, the other thing that’s helpful to me, and again, we’re grossly oversimplifying what’s going on, but what is very helpful to me is to think of AI as autocorrect, or autocomplete, actually, on steroids. It’s more than that, but not a lot more. It has just learned that every time I type certain words in my text app, certain other words are likely to follow and it helpfully suggests them. And sometimes it’s right and sometimes it’s wrong, but it’s just doing math, right? Autocorrect learns that there are certain words I habitually misspell, or that I do not wish to have corrected, or perhaps it decides that a word needs to be corrected to the word that I use more commonly, which can be extremely embarrassing. We had some questions about this. We’ve done some prior localization AI conversations, and I wanted to bring in a question that came from one of our audience members. Their question was, “Will we get to the point where we can effectively ask an AI help system a question in a foreign language, the AI system will parse the source language content, and then return the answer in the user’s language? Will translating documentation eventually be no longer necessary?” And what’s your take on that?
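Sarah’s “autocomplete on steroids” framing can be stripped down to a toy bigram counter. This is a gross oversimplification of an LLM, as she says of the framing itself, but it shows the “certain words are likely to follow” math in its crudest form:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word):
    """Suggest the most frequent follower, the way autocomplete does."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None
```

Trained on “the sky is blue and the sky is blue and the sky is gray”, it suggests “blue” after “is”, because that is the pattern the counts favor, right or wrong. Real models do this with vastly richer context, but it is still just math over co-occurrence.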

BS: Well, I think the answer is yes, and my take is that we are nearly there already. We already have apps that you can run on your phone that can translate on the fly from spoken language. And I have used them when I travel abroad and I don’t know the language very well, to be able to speak into my phone and it essentially translates the text for the person I’m trying to communicate with. There are other apps that take it a step further and use a synthetic AI voice to read it so that they don’t have to look at my screen. They can just hear what the phone has to say because obviously I’m unable to say it myself.

SO: There’s also a version that does that through the camera. So you point the camera at a sign or a menu, more importantly, and it magically translates the menu into your language while you’re looking at it through your phone, or through your camera.

BS: That has been so helpful.

SO: Yes. Now that is actually a really good example, though, of a place where this kind of translation is hard because there’s very little context, and there’s a tendency in food culture to have very specific terms for things that maybe are not part of the AI’s daily routine. We were talking not too long ago about … What was it? We came up with half a dozen different words in German for dumpling. And we got into a big argument about which one was what and which one is correct for this type of dumpling and all the rest of it. So yeah. The thing I would point out here is that the question was, if someone comes in and asks the AI help system a question in, let’s say, French, but the underlying system is in, let’s say, English, but it would then return French. It’s a very English-centric perspective, to say, “Well, the French people … Our AI is going to be in English, essentially. Our AI database.” And that is a really interesting question to me. Is the AI database actually going to be in English? And maybe not.

BS: Probably not.

SO: I tried this about a year ago with ChatGPT. And you might experiment with this if you speak another language, or combine it with machine translation, which should work as well. I asked ChatGPT a specific question, and I got an answer. Cool. And then I asked the same question again and added, “Respond in, in this case, German.” The answer that I got in German was, obviously, it was in German, step one, which I wasn’t actually sure it could do. But step two, the reply that I got in German, the content was different. It wasn’t just a translated version of the English content. It was functionally a different answer. So it’s like in English, I said to ChatGPT, “What color is the sky?” And it said, “The sky is blue.” And then I said the same thing, “What color is the sky? Respond in German,” and it came back with, “The sky is green.” Now, it was actually a DITA-related question, which kind of explains what happened here. But what happened was that ChatGPT, even though the prompt was in English, pretty clearly used German language sources to assemble the answer. And those of you who know that DITA is more popular in the US than it is in Germany would not be too surprised that the answer I got regarding something DITA-specific in German was very much culturally bound to what German language content about DITA looks like. So it was processing the German content to give me my answer, not the English content. Now, if you ask an AI help system, the next question is what’s sitting in that corpus? Because if you ask it a question in French and it has no French in the corpus, then it’s probably going to generate an answer in English and machine translate. But if it has four topics in French and you ask it something in French, it is probably going to try and assemble an answer out of that French content, which could be…

BS: Before it falls back, yeah.
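That retrieval decision can be sketched as pseudocode made runnable. All function names here are invented for illustration: prefer topics in the asker’s language, even if there are only a few, and fall back to the English corpus plus machine translation only when nothing matches.

```python
def answer(question, lang, corpus, generate, translate):
    """Assemble an answer from same-language topics when they exist;
    otherwise generate from the English corpus and machine-translate."""
    same_language = [t for t in corpus if t["lang"] == lang]
    if same_language:
        # A handful of topics in the query language will dominate the
        # answer, which is exactly the skew Sarah describes.
        return generate(question, same_language)
    english = [t for t in corpus if t["lang"] == "en"]
    return translate(generate(question, english), target=lang)

# Stand-in implementations so the sketch runs end to end.
def generate(question, topics):
    return " ".join(t["text"] for t in topics)

def translate(text, target):
    return f"[machine-translated to {target}] {text}"

corpus = [
    {"lang": "en", "text": "DITA is a topic-based XML architecture."},
    {"lang": "fr", "text": "DITA est une architecture XML."},
]
```

With this toy corpus, a French question gets answered from the lone French topic, while a German question falls back to English plus machine translation, so the two askers can get materially different answers.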

SO: Fascinating, which brings me to my next meta question that we’re not going to answer, which is can we capture meaning and separate it from language? And a knowledge graph is an attempt to capture relationships and meaning. And that can be rendered into a language, but it is not itself specifically English. It’s a database entry of person, which has a relationship with address, and you can say, “Person X lives at address Y,” but that sentence is just an expression of the mathematical or the database relationship that’s sitting inside the knowledge graph. I want to talk about the outlook for LSPs, for localization services providers. What does it look like to be an LSP, to be a translation service provider, in this AI world? What do you think is going to happen?
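Sarah’s “person X lives at address Y” example can be made concrete: a tiny triple store holds the language-neutral relationship, and per-language templates (entirely hypothetical here) render the same fact into a sentence on demand, so no single language is the “source”.

```python
# Language-neutral facts as (subject, relation, object) triples.
facts = [("person:alice", "lives_at", "address:42_main")]

# Per-language surface forms for relations and entity labels.
templates = {
    ("lives_at", "en"): "{subject} lives at {obj}.",
    ("lives_at", "de"): "{subject} wohnt in der {obj}.",
}
labels = {
    ("person:alice", "en"): "Alice",
    ("person:alice", "de"): "Alice",
    ("address:42_main", "en"): "42 Main Street",
    ("address:42_main", "de"): "Hauptstraße 42",
}

def render(fact, lang):
    """Turn one stored relationship into a sentence in the given language."""
    subject, relation, obj = fact
    return templates[(relation, lang)].format(
        subject=labels[(subject, lang)], obj=labels[(obj, lang)]
    )
```

The sentence in each language is just a rendering of the same database relationship, which is the sense in which meaning lives in the graph rather than in English.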

BS: I think it goes without saying that there’s going to be disruption again. Every single change, whether it’s in the localization industry or not, has resulted in some type of disruption. Something has changed. I’ll be blunt about it. In some cases, jobs were lost, jobs were replaced, new jobs were created. And I think that for LSPs, AI is going to, again, be another shift, the same as happened when machine translation came out, when neural machine translation came out, all of this. They’ve had to shift and pivot in how they approach their bottom line with people. GenAI is going to take a lot of the heavy lifting off of the translators, for better or for worse, and it’s going to force a more copy-edit workflow. And perhaps, I guess, a corpus editing role, basically an information keeper who will go in and make sure that the information that the AI model is being trained on is correct and accurate for very specific purposes, and start teaching it that when you talk about this particular subject matter, this is the type of corpus we want you to consume and respond with, versus someone who actually does the translation work and pushes all the buttons and writes all of the translations. It’s really going to be a model where I think people are going to be training AI and cleaning up after it, essentially. And I don’t know any further than that. I mean, it’s still pretty young. I think also you will see LSPs turning more into consultative agencies for companies, rather than just language service providers. So they will help companies establish that corpus and train their AI and work with their corporate staff to make sure that they are writing better queries, that they are providing better information out of the gate, and so forth. So I think it’s going to be a complete shift in how these companies function, at least between now and what’s to come.

SO: Yeah. The cost of a really bad translation getting into your database was contained when the process was human-driven, but this AI thing is going to scale. There’s going to be more and more of it, everything’s going to go faster and faster. And we already have these conversations about AI slop and the internet degrading into just garbage because there’s all this AI-created stuff. And so if you apply that vision to a multilingual world, it’s quite troubling, right? So I think you’re right. I mean, this idea of content hygiene, how we keep our content databases good, such that they can do all this interesting math processing instead of becoming more and more error-riddled, is really interesting. We started by saying clearly this is a disruptive innovation. Disruptive innovations start out bad, clearly of lower quality than the thing they’re disrupting, but they’re cheaper and/or faster and/or have some aspect that they can do that the original thing cannot. So mobile phones are a great example. They were worse than landlines in every possible way, but they were mobile, right? They were not tethered to a cord in the wall. And then over time, a mobile phone turned into something that really is a computer that is context and location-aware and can do all sorts of nifty things. It bears little resemblance anymore to POTS, to plain old telephone service. And we hear people say, “Oh, I don’t use my phone to make phone calls. Why would I do that? That’s terrible, because we have all these other options.”

So from a localization point of view, any organization that is using person-driven, manually-driven, inefficient, fragmented processes is going to be in trouble. And that stuff’s all going to get squeezed out. And I think it’s actually helpful to look at the structured authoring concept and how it eliminated desktop publishing, right? It just got squeezed right out because it all got automated. The same thing is happening with localization. I think AI is going to have a similar impact on content creation in any language: it’s going to remove that manual labor over time. And I think that maybe we’re going to reach a point where content creation is just content creation. It’s not creating content in English so that I can translate it into the target languages. I think that that distinction between source and target is really going to evaporate. It’ll just be somebody created content, and then we have ways of making that available in other languages, and that’s where this is going to go. I’ve talked to a lot of localization service providers recently, and certainly this is one of the things that they are thinking about and looking at, is the question of what it means, to your point, to be a localization service provider in a universe where language translation specifically is automatable, maybe. Okay. Bill, any closing thoughts before we let you go here?

BS: I think this is a good place to end this one.

SO: We’ll wrap it up, and they will come and take away our microphones and put us back in the corner. Good to see you, as always.

BS: Good to see you.

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post AI in localization: What could possibly go wrong? (podcast) appeared first on Scriptorium.

Help or hype? AI in learning content
(Published Mon, 21 Jul 2025; https://www.scriptorium.com/2025/07/help-or-hype-ai-in-learning-content/)

Is AI really ready to generate your training materials? In this episode, Sarah O’Keefe and Alan Pringle tackle the trends around AI in learning content. They explore where generative AI adds value—like creating assessments and streamlining translation—and where it falls short. If you’re exploring how AI can fit into your learning content strategy, this episode is for you.

Sarah O’Keefe: But what’s actually being said is AI will generate your presentation for you. If your presentation is so not new, if the information in it is so basic that generative AI can successfully generate your presentation for you, that implies to me that you don’t have anything interesting to say. So then, we get to this question of how do we use AI in learning content to make good choices, to make better learning content? How do we advance the cause?


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Alan Pringle: Hey everybody, I am Alan Pringle, and today I’m talking to Sarah O’Keefe.

Sarah O’Keefe: Hey everybody, how’s it going?

AP: And today, Sarah and I want to discuss artificial intelligence and learning content. How can you apply artificial intelligence to learning content? We’ve talked a whole lot, Sarah, about AI and technical communication and product content, let’s talk more about learning and development and how AI can help or maybe not help putting together learning content. So how is it being used right now? Let’s start with that. Do you know of cases? I know of one or two, and I’m sure you do too.

SO: Yeah. So the big news, the big push, is AI in presentations. So how can I use AI to generate my presentation? How can it help me put together my slides? Now, the problem with that from our point of view, for those of you that have been listening to what we’re saying about AI, this will be no surprise whatsoever, I think this is all wrong. It’s the wrong strategy, it’s the wrong approach. If you want to take AI and generate an outline of your presentation and then fill in that outline with your knowledge, that’s great, I think that’s a great idea. Also, if you have existing really good content and you want to take that content and generate slides from it, I don’t have a problem with that. But what’s actually being said is AI will generate your presentation for you. If your presentation is so not new, if the information in it is so basic that generative AI can successfully generate your presentation for you, that implies to me that you don’t have anything interesting to say.

AP: And you’re going to say it with very pretty generated images and a level of authority that makes it sound like there’s something that’s actually there when it’s not.

SO: Oh, yeah. It’ll look very plausible and authoritative and it will be wrong, because that’s how this generative stuff-

AP: Or not even wrong, surface-skimmy, just nothing of any real value there.

SO: Yeah. So then, we go into this question of, how do we use AI in learning content to make good choices, to make better learning content, how do we advance the cause?

AP: Well, there’s that one case where we have done it, because we have our own learning site, LearningDITA.com, and we were trying to think about ways to apply AI to our efforts to create courses, to tell people how to use the DITA standard for content. And I think you and I both agree, one of the strengths of artificial intelligence is its ability to summarize and synthesize things, I don’t think that’s controversial. So if you think about writing assessments from existing content in a way that’s summarizing, so one of us suggested to our team, why don’t y’all try that and see what these AI engines can do to generate questions from our existing lesson content. And then, of course, we suggested that they—the people who were creating the courses—review them. So our folks reviewed them, and I think some of the questions were actually quite usable, decent.

SO: And some of them were not.

AP: True, this is true.

SO: But the net of it was they saved a bunch of time, because they said, “Generate a bunch of assessment questions,” they went through them, they fixed the ones that were wrong, they improved the ones that were maybe not the greatest, they got a couple that were actually pretty usable. And so, it took less time to write the assessments than it would’ve taken to do that process by hand, to slowly go through the entire corpus to say, “Okay, what are the key objectives and how do I map that to the assessments?” So that’s a pretty good example, I think, of using generative AI, as you said, to summarize down, to synthesize existing content. On the LMS side, so when we start looking at learning management systems and how the learning content goes into the LMS and then is given or delivered to the learner, there are some big opportunities there, because if you think about what it means for me as a learner, as a person taking the course, to work my way through course material, maybe the assumptions that the course developer made about my expertise were too optimistic. I’m really struggling with this content, it’s trying to teach me how to use Photoshop and I am just not good at Photoshop. There’s this idea of adaptive learning. This is not an AI concept; the idea behind adaptive learning is that if you’re doing really well, it goes faster. If you’re struggling, it goes deeper, or maybe you do better with videos than you do with text, or vice versa. It’s about adapting to the learner and to the learner’s needs in order to make the learning more effective. Now, if you think about that, that is a matter of uncovering patterns in how the learner learns and then delivering a better fit for those patterns. Well, that’s AI.
AI and machine learning do a great job of saying, “Oh, you seem to be preferring video, so I’m going to feed you more video.” Now, we can do this by hand or we can build it in with personalization logic, but you can also do this at scale with AI and machine learning. So there are definitely some opportunities to improve adaptive learning with an AI backbone.
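The adaptive behavior Sarah describes can be reduced to a toy rule-based policy. A real system would learn these thresholds and preferences from data at scale rather than hard-code them; this sketch just makes the "go faster, go deeper, lean toward the format that works" logic explicit.

```python
def next_item(history):
    """Pick pacing and format for the next lesson item from past results.

    history: list of {"score": float in 0..1, "format": "video" or "text"}.
    Strong learners get a faster pace; struggling learners go deeper.
    Either way, lean toward the format the learner has done best with.
    """
    average = sum(item["score"] for item in history) / len(history)
    by_format = {}
    for item in history:
        by_format.setdefault(item["format"], []).append(item["score"])
    best_format = max(
        by_format, key=lambda f: sum(by_format[f]) / len(by_format[f])
    )
    pace = "faster" if average >= 0.8 else "deeper"
    return {"pace": pace, "format": best_format}
```

You can do this by hand with personalization logic, as Sarah notes; the machine-learning version replaces the hard-coded rules with learned patterns but makes the same kind of decision.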

AP: I think it’s worth noting at this point, when you’re talking about gathering the data so that AI, and I hate to personalize AI like this, can make these decisions or do the synthesis, there’s got to be intelligence that’s built into your content, and that goes all the way back to content creation, going back from the presentation layer to how you’re creating your content. And again, this loops back, in my mind, to the idea of building in that intelligence with structured content; that is your baseline.

SO: Yeah. I know we’re just relentless on this drum of you need structured content for learning content, but it’s because of all these use cases, because as you try to scale this stuff, this is what you’re going to run into. I also see a huge opportunity for translation workflows specifically for learning content. So if you look at translation and multilingual delivery, there’s a lot of AI and machine learning going on in machine translation. So now, we think a little bit about what that means for learning content, and of course, all of the benefits that you get just in general from machine translation still apply, but the one that I’m looking at that I think would be really, really interesting to apply to learning is learning has a lot of audio in it, audio and video, but specifically audio, and audio typically is going to be bound to a language. You’re going to have a voiceover, you’re going to have a person saying, “Here’s what you need to know, and I’m going to show you this screenshot,” or, “I’m going to show you how to operate this machine.” And so, you’ve got audio and potentially captions that are giving you the text or the audio that goes with that video. Okay, well, we can translate the captions, that’s relatively easy, but what about the voiceover? And the answer could be that you do synthetic voiceovers. So you take your original, let’s say, English audio and you turn it into French, Italian, German, Spanish or whatever else you need, but you synthesize the voice instead of re-recording. Now, is it going to be as good as a human, an actual human person who has expression and emotion in their delivery? No. Is it better than the alternative where you don’t provide it in the target language at all? Probably, yes. 
And when we start talking about machines, “Here is how to safely operate this machine,” the pretty good synthetic voice in target language is probably better than, “Here it is in English, deal with it,” or, “Here it is in English with a translated caption in German, but no audio.” I think that’s what we’re looking at is, is the synthetic audio good enough that it will improve the learner experience, and I think the answer is yes.

AP: I’m turning this over in my mind, and there’s part of me that’s very resistant to the idea of these synthesized voices. For example, and this is bias on my part, when I am downloading audiobooks from the library, they now, in the app that I use that’s connected to the local library, a lot of the narration, it will say, “This is an AI-generated voice.” I tend to avoid those, I do, because sometimes the inflection’s a little odd, there’s no personality there. However, I can buy that having that slightly robotic-esque voice in another language is better than not having it at all, I can buy that.

SO: Right. And I think the audiobooks that we listen to for fun are different than I need to figure out how to use this machine without hurting myself; those are different. It wouldn’t hurt, but I don’t need a personable, obviously human voice to voice over the video that helps me figure out how to use this thing on the factory floor. I wouldn’t object, but I would prefer to get something in my language. That’s really the key, because when we start asking the question, the question is less about whether you would prefer a really good artistic performance voiceover versus a robotic voice. That’s the choice you’re facing at the library, and you’re saying, I am not going to consume entertainment content that is like this, and I think a lot of people are on board with that. But what about technical product and learning content that you need? You’re not making a choice that this is something I want to do in my downtime, but rather, if I can’t figure out how to do this, bad things will happen.

AP: Yeah. There is a legitimate use case there, and they’re two different things, and I do think, based on some of the synthetic voices I’ve heard, they are getting better, quite a bit better, and sounding a little more realistic as well.

SO: Right. We’ve already experimented with this. We have a podcast episode where we actually generated a synthetic voice, but it was based on a person’s voice print. So it wasn’t a generic AI voice; it was a fake AI voice, but one generated off of a specific person. The audio is quite good. Every once in a while between paragraphs, it shifts weirdly as you’re listening, as a new thought is introduced, and it shifts in ways that a human would not, but all in all, I thought it was pretty acceptable. So I think that what I’m trying to say in an extremely long-winded way is that when you have scalability issues in your content production, learning content or otherwise, AI has the potential to help you with productivity across multichannel workflows: repurposing learning content into assessments, language A into language B, audio into video. There are things that we can do there to use the AI tools for productivity to support these workflows and scale them, and to your point, therefore, we need underlying structured content. We can’t do this with a slapped-together, one-off formatted mess.

AP: Yeah. The intelligence has to be built in at the very foundation, and that is when you are creating the content. That intelligence really can’t be a layer that’s put on when you transform things or you connect to an LMS, it’s not a presentation layer thing. The presentation layer needs to pull that intelligence from your source content. Again, this is why you need structured content, the metadata built in, to help drive the way you transform and distribute your learning content.

SO: Yeah. I’m, again, very skeptical of GenAI in the process of generating net-new content, new information, nobody’s ever written it before, it’s a new product, it needs to be explained, taught, whatever. Maybe an outline, this is what a typical intro course looks like, now go fill in the details, okay, maybe even a first draft, especially if product A is based on product B, or I guess the other way around. But our world is structured content, obviously, but also our world is content where it matters that the content is accurate, because when the content is wrong, bad, bad things happen, people get hurt, people die, companies get shut down for compliance reasons, that type of thing. So the content has to be accurate, and at the end of the day, it’s actually quite difficult to get GenAI to gen accurate content. That’s not what it does; that’s not its function. So I’m very interested in applying AI to various product and content roadmaps to enable productivity, to enable new deliverables, to enable new synthesis summaries, et cetera, but I’m very, very worried about what happens if you apply it on top of bad content or you apply it to the wrong use case in an effort to just get your stuff for free, essentially.

AP: So what I’m hearing in summary is that for content creation for learning, AI is probably not a good fit right now. To support you and help you possibly develop on the edges of that content, or give you outlines and ideas, and also to augment and support delivery channels, it could be helpful. So it’s a support mechanism for the development of the content and distribution of the content, but not necessarily for the direct creation of that content.

SO: Yeah, I think that’s fair, and I think that’s where we land. I’d be quite curious to hear from our listeners, what they’re doing with this and where they’re going with it.

AP: And I’m sure people are having the same struggles right now over the best way to apply it. But I think right now, as of the moment that we’re recording this, AI is in no way ready for prime time to basically take the place of a learning content person. It should be there to support them, not to replace them.

SO: Yeah, for meaningful content.

AP: Exactly.

SO: And if it’s not meaningful, what are you even doing?

AP: Right.

SO: Well, that’s cheery, okay.

AP: And on that very cheerful note, we’re going to wrap up. So thank you, Sarah. And folks, do get in contact with us to let us know how you’re using AI, because that is of great interest to us. So thank you. Thanks, Sarah.

SO: Thank you. And maybe let us know how you’re being made to use AI.

AP: That too. Thanks, everyone.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

Questions about this podcast? Let’s talk!


The post Help or hype? AI in learning content appeared first on Scriptorium.

Tool or trap? Find the problem, then the platform
https://www.scriptorium.com/2025/06/tool-or-trap-find-the-problem-then-the-platform/
Mon, 02 Jun 2025 11:31:48 +0000

Tempted to jump straight to a new tool to solve your content problems? In this episode, Alan Pringle and Bill Swallow share real-world stories that show how premature solutioning without proper analysis can lead to costly misalignment, poor adoption, and missed opportunities for company-wide operational improvement.

Bill Swallow: On paper, it looked like a perfect solution. But everyone, including the people who greenlit the project, hated it. Absolutely hated it. Why? It was difficult to use, very slow, and very buggy. Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. So everyone bypassed using it at every opportunity.

Alan Pringle: It sounds to me like there was a bit of a fixation. This product checked all the boxes without actually doing any in-depth analysis of what was needed, much less actually thinking about what users needed and how that product could fill those needs.


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Bill Swallow: Hi, I’m Bill Swallow

Alan Pringle: And I’m Alan Pringle.

BS: And in this episode we’re going to talk about the pitfalls of putting solutioning before doing proper analysis. And Alan, I’m going to kick this right off to you. Why should you not put solutioning before doing proper analysis?

AP: Well, it’s very shortsighted, and oftentimes it means you’re not going to get the funding that you need to do the project to solve the problems that you have. And with that, we can wrap this podcast up, because there’s not a whole lot more to talk about here, really. But no, seriously, we do need to dive into this. It is very easy to fall into the trap of taking a tools-first point of view. You’ve got a problem, it’s really weighing on you, so it’s not unusual for a mind to go, “this tool will fix this problem.” But it’s really not the way to go. You need to go back many steps, shut that part of your brain off, and start doing analysis. And Bill, you’ve got an example, I believe, of how taking a tools-first point of view didn’t help back in a previous job you had.

BS: I do, and I’m not going to bury the lede here: they didn’t do their homework upfront to see how people would use the system. I worked for a company many, many, many years ago that decided to roll out, and I will name the product, Lotus Notes.

AP: You’re killing me. That’s also very old, but we won’t discuss that angle.

BS: But they did so because it checked every single box, every single box on the needs list, it did email, it had calendar entries, it did messaging, notes, documents, linking, sharing, robust permissions, and you even had the ability to create mini portals for different departments and projects. So on paper, it looked like a perfect solution. And everyone, including the people who greenlit the implementation of Lotus Notes, hated it. Absolutely hated it. Why did they hate it? It was difficult to use. It was very slow. It was very buggy. Sometimes it would crash and leave processes running, so you couldn’t relaunch it. There was no easy way to use it. Back at that point, we had PDAs, personal digital assistants, and very soon after that we had the birth of the smartphone. There was no easy way to use it in these mobile devices except for maybe hooking up to email. It didn’t fit how we were working at all. While it shouldn’t count, it really wasn’t very pretty to look at either. So everyone bypassed using it at every opportunity. They would set up a Wiki instead of using the Lotus Notes document or notes portal that they had. They would use other messaging services. This is back during Yahoo Messenger and ICQ. But yes, we had that going on and in the end it was discontinued after its initial three-year maintenance period ended because nobody liked it.

AP: Yeah, so it sounds to me like there was a bit of a fixation. This product checks all the boxes, without actually doing any in-depth analysis of what you needed, much less actually thinking about what users needed and how that product could fill those needs. And I think it’s worth noting, too: think about this from an IT department point of view, because they’re often a partner on any kind of technology project, especially if new software is going to be involved. They’re going to be the ones, a lot of times, who say yea or nay: this tool is a duplicate of what we already have, or no, you have some special requirements and we do need to buy a new system. So if I as an IT person, the person who vets tools, hear from someone, and let’s get back into the content world, “I need a way to do content management, I need a single source of truth, and I need to be able to take the content that is my single source of truth and publish it to a bunch of different formats,” that’s a very common use case. I would be more interested as an IT person in hearing that than hearing “I have to have a component content management system.” There’s a subtle difference there. And I think, and this is possibly unfair and grouchy of me, but that is me, grouchy and unfair: if I hear someone come to me with “I need this tool” instead of “I have these issues and I have these requirements,” it sounds selfish and half-baked.

BS: It does.

AP: And again, I am thinking about this from the receiving end of these queries, of these requests, but I also want to step back into the shoes of the person making a request. You can be so frustrated by your inefficiency and your problems, you latch onto the tools. So I completely understand why you want to do that, but you are basically punching yourself in the face when you go and make a request that is, I need this tool instead of I have these issues, these requirements, and I need to address these things. It’s subtle, but it’s different.

BS: It’s very different. And also if you do take that approach of looking at your needs, you find that there’s more to uncover than just fixing the technological problem itself.

AP: Yes.

BS: There might be a workflow problem in your company that you may acknowledge, you may not know it’s quite there. Once you start looking at the requirements and looking at the flow of how you need to work, and how you need any type of new system to work, you start seeing where the holes are in your organization. Who does what? What does a handoff look like? Is it recorded? What does the review process look like? When does it go out for formal review? What does the translation workflow look like? And you start seeing that there may be a lot of ad hoc processes in place currently that could be fixed as well.

AP: True. And I also think, when you’re talking about solving problems and developing your requirements from that problem solving, you are potentially opening up the solution to more than just your department, your group. It can possibly be a wider situation there, too. And also, by presenting it as a set of problems and requirements to address those problems, there may already be a tool in-house at your company that you don’t know about, or it may be part of a suite of tools where, if you add another component, it will address your problem, instead of buying something completely new outright. And we’ve seen this before, where it turned out there was an incumbent vendor that had some related tools already at the company, and that vendor also had a tool that could solve the problems that our client or our prospect had. We’ve had both prospects and clients have this issue, so it doesn’t make sense, in that situation, to go and say, “I need this tool,” which is essentially a competitor of what’s already in place. You’re going to have a very uphill battle trying to get that in place. It is also very easy, as someone who has already done a content ops improvement project, to think, “this tool is good, it saved me at that company.” But you’ve got to be careful of thinking that just because it helped you over at company A, it will fit now that you’re at company B. It may not be a fit for company B culturally, and there may already be something in-house. So you’ve got to let go of those preconceived notions. I am not saying that the tool you used before was bad. It may be the greatest thing ever, but there may be cultural issues, political issues, and even IT tech issues that mean you cannot pick that tool. So why are you pushing on it when you have got all of these things against you? Again, it is easy to fall into these traps. Don’t do it.

BS: Yep. On the flip side of that, we had a situation where a customer of ours years ago was looking for a particular system, a CCMS, component content management system, and they had what they perceived to be a very hard requirement of being able to connect to another very specific system.

AP: Yes, I remember this. It was about 10 or 11 years ago.

BS: And it was such a hard requirement that it basically threw out all of their options except for one. And we got the system working the way they needed it to. It needed quite a bit of customization, especially over the years as their requirements grew. But in the end, they never connected to that required system, the one everyone said would be a showstopper. They never connected to it, because after X many years they just decided it wasn’t a requirement. And that just kills me, because there could have been three or four other candidate systems that would’ve easily fit the bill for them as well, and probably would’ve cost them a little bit less money. But there we are.

AP: In fairness, all parties involved, including us, were working on the information that we had at the time. And I think this is a case where a requirement that we thought was a hard requirement turned out not to be. However, just because this happened in this case, folks out there listening to us, that does not mean you can decide that a requirement pointing at a particular system isn’t real just because you want another system really badly and want to ignore it. That’s not how that works; it’s not how that should work. So I think there is a balance here that needs to be struck, and I think this is probably a good closing message. Don’t follow your knee-jerk instinct of “I need this tool.” Really look at the requirements; do an analysis. And because we’re humans, sometimes that analysis is not going to catch everything it should, or you may end up with, like you just mentioned, a requirement that’s not as real as you thought it was. But I think your chances of project success, and of getting a tool purchased, configured, and up and running, are much higher when you start with those requirements than when you start off with “I need tool Y.”

BS: Well said. Do the homework before the test.

AP: And don’t put the cart before the horse.

BS: Well, thank you, Alan.

AP: Thank you. This was shorter, but it’s an important thing, and I think, again, this points to any kind of operational change being a human problem and dealing with people’s emotions and their instincts as much or more than an actual technological issue.

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

Need to talk about content solutioning? Contact us!

The post Tool or trap? Find the problem, then the platform appeared first on Scriptorium.

Deliver content dynamically with a content delivery platform
https://www.scriptorium.com/2025/05/deliver-content-dynamically-with-a-content-delivery-platform/
Mon, 19 May 2025 11:37:21 +0000

Struggling to get the right content to the right people, exactly when and where they need it? In this podcast, Scriptorium CEO Sarah O’Keefe and Fluid Topics CEO Fabrice Lacroix explore dynamic content delivery—pushing content beyond static PDFs into flexible platforms that power search, personalization, and multi-channel distribution.

When we deliver the content, whether it’s through the APIs or the portal that you’ve built that is served by the platform, we render the content in a way that we can dynamically remove or hide parts of the content that would not apply to the context, the profile of the user. That’s the magic of a CDP. It’s delivering that content dynamically.

— Fabrice Lacroix


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe, and I’m here today with the CEO of Fluid Topics, Fabrice Lacroix. Fabrice, welcome.

Fabrice Lacroix: Hey. Hi Sarah. Nice being with you today. Thanks for welcoming me.

SO: It’s nice to see you. So as many of you probably know, Fluid Topics is a content delivery portal, or possibly a content delivery platform, and we’re going to talk about the difference between those two things as we get into this. So Fabrice, tell us a little bit about Fluid Topics. Is it a content delivery portal or a content delivery platform? Which one is it? What do you prefer?

FL: For us, it’s platform, definitely. But you’re right, it depends on where people are in this evolution process, on how they deliver content. And for many, many customers, the P stands for portal. You’re right, because that is the first need. That’s how they come to us, because they need a portal.

SO: Okay, so in your view, the portal is a front end, an access point for content, and then what makes it a platform rather than a portal?

FL: Probably because the goal that many companies have to achieve is delivering that content where it’s needed, and that’s many places most of the time. So it’s not just the portal itself. To solve the problem of being able to disseminate this content to many touchpoints, you need a platform. The portal is one touchpoint only. But when you start having multiple touchpoints, like doing in-product help, or you want to feed your helpdesk tool or field service application or some sort of chatbot somewhere else, whatever use case you have that is not just the portal itself, then that becomes a platform thing.

SO: So looking at this from our point of view: so many of our projects start with component content management systems, CCMSs, which are the back end. This is where you’re authoring and managing and taking care of all your information, and then you have to deliver it. And one of the ways that you could solve your delivery front end would be with a content delivery platform such as Fluid Topics. Okay. So then, what are the prerequisites, when you start thinking about this? Our hypothetical customer has content, obviously, and they probably have a back-end content management system of some sort.

FL: Most of the time.

SO: Most of the time.

FL: It depends where you go, depends on the maturity and the industry. If you go to some manufacturer somewhere, they may still be on Word and FrameMaker, or something like InDesign, and then they generate PDFs.

SO: So maybe we have a backend authoring, well, we have an authoring environment of some sort on the back-end. Maybe it’s a CCMS, maybe it’s something not like that. And now we’re going to say, all right, we’re going to take all this content that we’ve created and we’re going to put it into the CDP, the content delivery platform. Now, what does success look like? What do you need from that content or from the project to make sure that your CDP can succeed in doing what it needs to do?

FL: The first answer to that question that comes to my mind is: no PDFs. I mean, don’t laugh at me, but if you look at it from an evolutionary perspective, regardless of how people were writing before, it was not a CCMS; it was mostly unstructured. And at the end of the day, people were pressing a button, generating PDFs, and putting the PDF somewhere: CRM, USB key, website for download. But managing the content unstructured was painful. That’s why you start working with a CCMS: because you have multiple versions, variants, you want to work in parallel, you want to avoid copy-paste, translation, the whole story around that. So companies start moving their content into a CCMS, all of the content or part of the content, and they start investing in a modern way of managing and creating their content. But again, once they have made that move, most of those companies 10, 15 years ago were probably still pressing a button and still generating PDFs. And then they realized that they had solved one problem for themselves, which is streamlining the production capability and managing the content in a better way. But from a consumption perspective, regardless of whether you work with Word or FrameMaker, or in DITA with the most advanced CCMS on the market, if you still deliver PDF, you are not improving the life of your customers. And then people started realizing, oh yeah, we should do better. So let’s try to output that content in another way than PDFs. And then, “What else than PDF do we have? HTML.” Okay, let’s output HTML. But HTML that is pretty much the same as the PDF, you see what I mean? It’s a static document; each document was a set of HTML pages. And then they started realizing that they need to reassemble the set of HTML pages into a website, which is even more painful than just putting PDFs on the website: reassembling zip files of HTML pages on the website, and then it’s static HTML.
And then you have to put a search on top, and you have to create consistency. And that’s why CDPs have emerged: to solve this need, which is, how do we transition from PDF, to static HTML, to something that is easier, that ingests all this content, comes with search capabilities, comes with configuration capabilities, and at the same time has APIs, so that, back to the platform thing, it’s not just a portal but can serve other touchpoints. And because we are in the DITA world, and DITA is the Darwin Information Typing Architecture, it’s fitting that a very Darwinian process led to the creation of the CDP; the CDP is the next step in the process. And many companies really follow that process: I have to move from my old ways of writing, which are painful and not working, to a CCMS, but then I realize that it doesn’t solve the real problem of the company, which is, how can I help my customer, my support agent, my field technician better find and better use my content? And that’s where it hits: oh, okay, that’s where we need a CDP.

SO: Yeah, and I think, I mean, we’ve talked for 20 years about PDFs and all the issues around them, but it’s probably worth remembering that PDF in the beginning was a replacement for a shelf of books, paper books that went out the door. And the improvement was that, instead of shipping 10 pounds, or, I’m sorry, what, four kilos of books, you were shipping, as you said, a CD-ROM, or, this was before USB, a zip drive. Remember those?

FL: Zip drive.

SO: A zip drive. But you were shipping electronic copies of your books and all you were really doing was shifting the process of printing from the creator, the software, hardware, the product company to the consumer. So the consumer gets a PDF, they print it, and then that’s what they use. Then we evolved into, oh, we can use the PDF online, we can do full-text search, that’s kind of cool, that was a big step forward. But now to your point, the way that we consume that information is not printed and it’s for the most part, and it’s not big PDFs, but rather small chunks of information like a website. So how do we evolve our content into those websites? So then what does it look like to have a, and I think here we’re talking about the portal specifically, but what does it look like to have a portal for the end user that allows them to get a really good experience in accessing and using and consuming the content that they need to use the product, whatever it may be. What are some of the key things that you need to do or that you can do?

FL: Yeah. I would say that the main thing a CDP achieves compared to static HTML, because now we have to compare not with PDFs, which are probably still needed if you want to print. I’m not saying that PDF is dead and we should get rid of all PDFs; it’s just that when you need to print, you can get the PDF version of a document. But if we compare static HTML with what a CDP brings, we’re trying to make content personalized and contextual. If you pre-generate static HTML pages, it’s one size fits all: the same HTML pages for everyone. And if you have two versions of your product and one variant, and then you translate, the same zip file exists in 20 versions, so to say, and you have to assemble that and let people understand how to navigate it, and that becomes super complex. What a CDP solves is: give me everything, and I will sort out this notion that the same document can exist in 20 variants, whether it’s product version, document version, capabilities of product version A or version B, Asian market, European market, American market. And then you have subtleties, and some paragraphs are here, some paragraphs are removed or added. So we are adapting the content so that it fits the profile of the user. And if you ask me what’s needed to make a CDP work, it’s mostly metadata, metadata, metadata. And I can tell you a story, which was fun. A few years ago, some years ago, more than a few, we had customers or prospective customers reaching out and saying, “Oh, show us Fluid Topics.” And then we’d show the capability, and they’d say, “Oh my God, it’s exactly what we need.” And then those guys disappeared for two years. And in fact, what they did during those two years was add metadata to their content.
It was not about the product. Through this discussion we had with them, showing that you can put facets on the search, variant the content, and let people switch between variants and versions of the content through metadata and all that, they realized, oh my God, that’s exactly what we need. And then, through their questions, they understood that they needed to have that metadata on the content, and that metadata did not exist, even though they were working with a CCMS. Because if your output channels are PDFs, you don’t care about putting this metadata on the content inside the CCMS. That’s a lot of work, maintaining that metadata. And if at the end of the day you press a button and generate a PDF, that metadata is lost; it’s not used, not leveraged by the PDF. It becomes flat pages of content. So they had transitioned to a CCMS but never made this investment of tagging content. And when I say tagging content, it’s not just the map; it’s the section, the chapter: this is for installing, this is for removing, this is for configuring, this is for troubleshooting, this chapter is about this, this topic is about that, for this version of the product. You know what I mean? Fine-grained tagging at different levels of the document. And because they were generating PDFs, they didn’t see the need to tag at the right level, and they realized that the sheer value they could get from the CDP comes when the content is tagged, because it’s by using those tags and those metadata schemes that the CDP can adapt the content to the context and profile of the user. So I would say, what’s needed to leverage the capabilities of a CDP? It’s mostly granularity of content, and tags, metadata, designed from a user perspective: as an end user, how would I like to filter the content? What are the tags I need for filtering the content?
It’s like, if I run a search, I have these facets on the left side of the search result page, what would I like to click on to refine my search and spot the content that fits my needs?
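The faceted refinement Fabrice describes can be sketched in a few lines. This is a minimal illustration, not Fluid Topics’ actual data model: the topic records, field names (“task”, “version”, “market”), and facet behavior are all hypothetical.

```python
from collections import Counter

# Hypothetical topic records with fine-grained metadata, as tagged in a CCMS.
topics = [
    {"title": "Install pump", "task": "installing", "version": "2.0", "market": "EU"},
    {"title": "Remove pump",  "task": "removing",   "version": "2.0", "market": "EU"},
    {"title": "Install pump", "task": "installing", "version": "3.0", "market": "US"},
]

def refine(results, **filters):
    """Apply facet clicks as metadata equality filters on a result set."""
    return [t for t in results if all(t.get(k) == v for k, v in filters.items())]

def facet_counts(results, field):
    """Counts shown next to each facet value in the search sidebar."""
    return Counter(t[field] for t in results if field in t)

hits = refine(topics, task="installing")   # two installation topics remain
hits = refine(hits, version="3.0")         # narrowed to the v3.0 variant
```

Each click on a facet value just adds one more metadata filter, which is why the tagging has to exist on the content before delivery: with untagged topics there is nothing to refine on.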

SO: And I think, going back to our flat file PDF or static HTML, if we need to do this kind of thing, if you need context in a flat file, what you have to do is say something like, if you have product variant A, do this. And if you have product variant B, do this. Or if you are installing and the temperature, the ambient local temperature is greater than X, then do these extra steps. If you are baking and you are at high altitude, you have to adjust your recipe in these ways. So you end up with all these sort of if statements that are, hey, if this is you do these things, but it’s all in the text, because I have no way, maybe I can do two variants of the PDF like variant A for regular altitude and variant B for high altitude. But I can’t do one per country, right? I mean, I guess I could, but ultimately, what you’re describing is that instead of putting it into the text explicitly, “Hey Fabrice, if you meet these conditions, do these things or don’t do these things or do these extra things,” the delivery portal, platform is going to say, “Okay, what do I know about this end user? What do I know about Fabrice? I know he is in a certain location with a certain preferred language and a certain product. I know which products you bought.” So therefore you don’t get an if, if, if, if, you just get, here’s what you need to do in your context with your product.

FL: Exactly. When we deliver the content, whether it’s through the APIs or through the portal that you’ve built and that is served by the platform, we render the content in a way that we can dynamically remove or hide parts of the content that would not apply to the context, the profile of the user. That’s the magic of a CDP: it’s delivering that content dynamically. It’s also called dynamic content delivery; you remember we had this concept. The dynamic part is: how can I dynamically leverage the metadata on the content side, or the conditions expressed through metadata schemes, and make that applicable to the situation and the user profile? So that’s the magic part of it, and that’s a huge improvement compared to a static document that lists all the conditions and puts the burden on the reader to figure out, to sort out inside the document, what should be skipped and what to do depending on the product configuration.
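The shift from inline “if you have product A, do this” prose to dynamic delivery can be sketched as follows. This is an illustrative toy, not how any particular CDP implements it: the fragment structure, condition keys, and profile fields are all hypothetical.

```python
# Each fragment declares the conditions under which it applies;
# delivery filters fragments instead of printing "if..." instructions.
fragments = [
    {"text": "Mount the unit.",      "conditions": {}},                    # applies to everyone
    {"text": "Add coolant.",         "conditions": {"climate": "hot"}},
    {"text": "Reduce baking time.",  "conditions": {"altitude": "high"}},
]

def render(fragments, profile):
    """Keep a fragment only if every condition it declares matches the user profile."""
    return [
        f["text"] for f in fragments
        if all(profile.get(k) == v for k, v in f["conditions"].items())
    ]

render(fragments, {"climate": "hot", "altitude": "normal"})
# → ['Mount the unit.', 'Add coolant.']
```

The conditional logic that a flat PDF would spell out in the text becomes metadata evaluated at delivery time, so each reader sees only the steps that apply to them.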

SO: Which can of course get very complicated. Now you mentioned product help, in-app help, context sensitive help. So what does it look like to use a Fluid Topics or this class of tool to deliver context sensitive help or in-app help?

FL: We are back again to this granularity and the metadata. So imagine you are a software vendor: you’ve designed a web application, and you want to do the inline help for your application, your web product. What would you do? You would say, in that page, when people click on that question mark or help button, we should open a pane and display that information. That information needs to be a topic; it needs to be written, and the granularity should be a topic, because that’s what you pull from the system. So we need granularity that matches what you want to display inside your app. Maybe it’s a small tooltip when you hover over something in the app, and that becomes some fragment of content you need to get from the CDP dynamically. Or it can be one page of explanation that you display in a pane that opens in your app, but you need to pull that content. It’s the same way you would do it if you were embedding the content inside the application itself: you would write each part of the explanation, the help that you want to display, as fragments of information. But the problem with doing it statically inside the application is that if you want to fix something or enhance the content, you have to edit the application, change the code; it’s part of the development. Here, you want the app to pull the content dynamically, because the same content can be used not only to be displayed live in the screen, real time, but also on the doc portal, or in a PDF you print on how to do this. It’s the same content. You don’t want to maintain the same explanation in the application, in the portal, and in PDFs. One source. So it’s exactly that. And then you’re pulling through metadata: the app will say, “Oh, give me what goes into that page.” So it’s metadata-driven as well.

SO: Right? So there’s an ID on the software or something like that, and it says, “Give me the content that belongs with this unique label.”

FL: Exactly. Behind each button you put an ID; the question mark on that page has an ID. When people click it, the app pulls the inline help content for, say, ID 1234. And in your CCMS, you have a metadata field, call it "content ID for inline help" or whatever, and you tag that piece of content with 1234, and that's it. The magic is done. It's that simple.
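To make the mechanism concrete, here is a minimal sketch of that content-ID lookup. All the names, IDs, and fragments are hypothetical, and a real CDP would expose this as an HTTP API rather than an in-memory dictionary, but the contract is this small:

```python
# Minimal sketch of metadata-driven inline help; names and IDs are hypothetical.
# The CDP indexes content fragments by a "content ID" metadata field;
# the app asks for whatever fragment carries the ID wired to its help button.

HELP_FRAGMENTS = {
    # content_id -> fragment published from the CCMS
    "1234": "<p>To export a report, open the File menu and choose Export.</p>",
    "1235": "<p>Use filters to narrow the result list before exporting.</p>",
}

def get_inline_help(content_id: str) -> str:
    """Return the help fragment tagged with this content ID, or a fallback."""
    return HELP_FRAGMENTS.get(content_id, "<p>No help available for this page.</p>")

# The help button wired to ID 1234 pulls its pane content dynamically:
print(get_inline_help("1234"))
```

Because the app only knows the ID, writers can fix or enhance the fragment in the CCMS and republish without touching the application code.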

SO: So what I'm hearing, and in fairness this is exactly what you started with, is that you have to have metadata on the content, right?

FL: You have to have metadata.

SO: And without the metadata there is, well, let's talk about magic. If you have a front end that is some sort of large language model, a chatbot, what does that mean in terms of this content delivery platform? I mean, can't you just use ChatGPT and call it a day?

FL: Yes, that's a good one. I think most of the AI projects we've seen in large companies started with, "Oh, let's build a chatbot." That's the magic dream of any company: the chatbot that replies to any question. Okay, so how does the project usually start? You have some people on the IT team, or the IT team hires external people specialized in AI, and they realize that they need content. So the first thing they do is come to the techdoc team and say, "Give me all the content that you have." And the techdoc team says, "Okay, we have all this DITA content." And they say, "No, I don't want DITA, I want PDFs." We see that a lot. Why? Because they use technology like something from Microsoft where you can build your chatbot in five minutes, but the only content types you can feed into this ready-to-use platform are PDFs and Word. So all the magic you've put into your content, all the tags, are lost, and you see people wanting PDFs generated from your content, which is the exact opposite of the investment you've made. They put PDFs somewhere in storage and tell the Microsoft chatbot, "This is the content, this is the knowledge of the company." And when you have 20 variants of the same product, there's no metadata anymore, so the chatbot is always mixing all the content. When you start asking real questions, how to do this or that with this version of the product, everything is lost. And then the chatbot starts hallucinating, not because the LLM is hallucinating, but because the system, the chatbot, does not know which PDF to use; it's implicit knowledge that this PDF applies to that version of the product. It's even worse when the content says, "If you have product A, do this; if you have product B, do that," and starts mixing conditions. Then the knowledge becomes barely readable even by humans, who make mistakes reading it.
So can you imagine how an LLM can make sure that it's pulling the right information from that complex text structure?

SO: Okay, so make PDFs out of DITA, dumb it down, send it to the chatbot, that’s bad.

FL: And then it’s guaranteed failure.

SO: So what’s the good version of this?

FL: But that's how it works. I'd guess you've seen these sorts of projects, where people were asking for the content, thinking that the more they have, the better it's going to be. And suddenly they realize that the chatbot is not working and is making many mistakes. They call that hallucination, as if the LLM were hallucinating, but it's not; the system just isn't able to feed the LLM dynamically, with the right retrieval-augmented generation scheme, to provide the information for replying to the question, because it's difficult to pull from the PDF the right information that applies to the context. And we are back to: what is the context? What is the machine? What is the profile of the user? What is the variant, the version, whatever you have in front of you? That's the complex part. So what makes a successful AI project, and what's the relationship between the CDP and AI? All the AI projects I've seen, regardless of us, regardless of Fluid Topics, start with: we need to gather content. We need to take the content that we have, put it in one place, and create this sort of unified repository of content. Usually, as I said, they do it using static documents, PDFs. But if you look at what a CDP is, that's exactly what it is. It's already your repository of content, at least everything around the product, because we've been talking about the CCMS publishing to the CDP. What also makes a CDP very special is that not only can it ingest this DITA content, but also legacy PDF and Markdown content, API documentation, knowledge bases. The CDP is there to ingest all the knowledge that you have around your product, not just the formal techdoc that has been well written and validated. So the CDP is exactly that; building that unified repository is its purpose, and that's where you should start.
FL: And it's fine-grained, and we have the metadata and we have everything, so we know how to feed the LLM. There are two things in an AI project. One is the LLM, but now people use a generic LLM; you don't fine-tune or train an LLM anymore for this sort of use case, which is just a chatbot for replying to questions and solving cases automatically. You use a generic LLM and you feed it dynamically with the fragments of content, of knowledge, that you have in your repository. It's just like when you, as a human, run a search: you look for content, and you know which fragments, which topics, which chapters contain the knowledge for replying to that question. The tough part is extracting that from the repository. Am I extracting the two, three, four pages around the question that match the version, the situation that I'm in? So that I can then feed the LLM and say, "This is the 10, 20, or 50 pages of knowledge that we have. This is the question; reply to the question using that knowledge." That's exactly what a chatbot does. You give it the question of the user, you give it 5, 10, 20, whatever number of pages of knowledge that you have in your repository, and you ask the LLM: "This is the question, this is the knowledge, please reply." So the tough part is extracting the 5, 10, 20 pages that are really adapted to the situation, to the context.
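A rough sketch of that retrieval step, with invented data: a deterministic metadata filter narrows the repository to the user's product and version before any relevance ranking runs, so the prompt handed to the LLM only ever contains fragments that apply to the context. Real systems would use vector search for the ranking step; the word-overlap scoring here is just a stand-in.

```python
# Sketch of metadata-filtered retrieval (the "tough part"); data is invented.
# The metadata filter runs *before* relevance ranking, so the LLM never
# sees fragments from the wrong product variant or version.

FRAGMENTS = [
    {"text": "To reset product A, hold the button for 5 seconds.",
     "product": "A", "version": "2.0"},
    {"text": "To reset product B, use the admin console.",
     "product": "B", "version": "2.0"},
    {"text": "Product A 1.0 reset: power-cycle the device.",
     "product": "A", "version": "1.0"},
]

def retrieve(question, product, version, limit=5):
    # 1. Deterministic metadata filter: keep only fragments for this context.
    candidates = [f for f in FRAGMENTS
                  if f["product"] == product and f["version"] == version]
    # 2. Naive relevance ranking by word overlap (stand-in for vector search).
    words = set(question.lower().split())
    candidates.sort(key=lambda f: -len(words & set(f["text"].lower().split())))
    return [f["text"] for f in candidates[:limit]]

context = retrieve("How do I reset it?", product="A", version="2.0")
prompt = "Answer using ONLY this knowledge:\n" + "\n".join(context)
print(context)  # only the product A, version 2.0 fragment survives the filter
```

Without the metadata on each fragment, step 1 disappears, and the chatbot mixes variants exactly as described above.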

SO: And the metadata helps you do that.

FL: And the metadata. Nothing but metadata can do that.

SO: Right. Okay. So we've talked a lot about metadata as, I guess, a precondition, right? A prerequisite. If you don't have metadata, none of these other things are going to work. And I wanted to ask you about other challenges or prerequisites. Other than people coming in and saying, "Oh, right, we need metadata," and then going away for two years and coming back with some metadata, what are the other issues that you run into when you're trying to build out a CDP like this? What are the top challenges you run into, other than, clearly, metadata? We'll put that one at number one.

FL: Oh yeah, clearly number one. I would say the second one now is the UX, the UI people want to design. Because modern platforms have unlimited capabilities for designing the front end, the UI that you want, the question becomes: what do you want? What makes sense based on your product types, the users that you have, the content that you have? What is the UX you want to build? That's interesting, because five, no, let's say 10 years ago, we were providing default interfaces out of the box with the product to build your portal. You could just brand that, put in your colors and logo, tweak it a bit, and everybody was happy with that. And then we've seen a big evolution, because now, in many companies, marketing weighs in on UX; you now have UX directors and VPs of user experience, roles that didn't exist five or 10 years ago. See what I mean? Everybody was working in swim lanes. The techdoc department was in charge of writing the content, probably generating the PDFs, and then setting up a doc portal. But many companies have realized that this techdoc portal is instrumental to the performance of the company, and now they say, "Oh, we need to have a look at that." So it becomes a shared place. You've seen that, I guess, in your projects.

SO: Yeah. Yeah.

FL: Five years ago, 10 years ago, the only people you had to work with and educate and discuss with were probably the techdoc team. And now you've got marketing, and you've got customer support, and you've got customer experience people, because they've realized the value there is in this content, but also how important it is to design a reader's experience that fits with the other touchpoints of the company, to create a seamless journey when you go from the corporate website to the documentation website to the help desk tool to the LMS. And you need some consistency around that, not only in terms of branding, colors, and logos; you go beyond that. And we see this as a new place where our customers struggle a bit: what do we want? They know that marketing says we need something more modern, more like this, more like that. But when we start opening the discussion: what is it really that you want? Some companies are very mature; they've got the Figma mockups and they come to us: "This is what we need you to implement. We've spent two years with UX designers crafting the UX of our portal." And some come and say, "Oh my God, you're right. We don't know what we need. Give us a default, something to start with, and we'll see."

SO: Well, you'll appreciate this. I had a call not too long ago with a very, very, very, very large company, very large. And they said, "We need a front end for our content, this tech content that needs to go out into the world. We need a design for it." And because it's a very large company, I said, "Great, where's your UX team? And do you have a design system?" Because presumably they do. And the person I was talking to said, "I don't know. I don't think so." And so I consulted the almighty search engine and discovered that not only did this particular company have a design system, they had something publicly available as their design system, where you can go get all the pieces and parts and all the logos and all the behaviors and everything. It is all out there in the world. And yet the people who work at this organization (and in their defense, there are many tens of thousands of them) did not know that this thing existed. And so all of their requirements for their portal design were right out there in the world, accessible to me.

FL: They didn’t even know about it.

SO: And they had no idea that it existed. And so we had to be the ones to make that connection and say, okay, we have to talk to those people, or at least download all these assets, figure out what to do with them, and then make sure that we're following the rules and all the rest of it. So to your point about enterprise issues, we also run into this with metadata and taxonomy: that is typically an enterprise problem, not a departmental problem. And actually making those connections across the departments for the first time is a task that very often falls to us as the consultants on the outside, who are asking, "Do you have a taxonomy project? Do you have design systems? Do you have these enterprise assets that we need to align with and be consistent with?" And they're not ready for that question, because until recently the job was: put a pile of PDFs somewhere.

FL: That's just unknown, and you don't know what you don't know. When they start moving up to more capable tools, they discover that it comes with more capabilities, but they have to make choices; they have to invest in metadata, UX design, and all that. And some of those companies are probably not ready yet. I mean, they didn't foresee that coming. That's where the projects lag a bit in terms of complexity as well, because they realize it's not just buying the tool; it's also making the investment in their content, their UX strategy, their design system, and all that. That may be missing in some cases.

SO: And I think that probably saying it’s not just about buying the tool is really a good summary of this whole situation. Because we started with you’re really going to need metadata, and if you don’t have metadata, that’s a huge problem. And we’ve landed on, and there are all these other connections and pieces and parts that you have to think about. So Fabrice, thank you very much. This was a great discussion and I appreciate all your information and we will wrap this up there. Are there any parting thoughts that you want to leave people with?

FL: It was an absolute pleasure having this discussion with you, Sarah. I think it could have lasted another hour easily, so we need to stop somewhere. Maybe we'll have other opportunities to keep on chatting about some of these subjects.

SO: Yep. Sounds good. And thank you again, and we will see you soon. 

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Deliver content dynamically with a content delivery platform appeared first on Scriptorium.

Scriptorium - The Content Strategy Experts full false 32:58
LearningDITA: DITA-based structured learning content in action https://www.scriptorium.com/2025/04/learningdita-dita-based-structured-learning-content-in-action/ Mon, 21 Apr 2025 11:27:48 +0000 https://www.scriptorium.com/?p=23009 https://www.scriptorium.com/2025/04/learningdita-dita-based-structured-learning-content-in-action/#respond https://www.scriptorium.com/2025/04/learningdita-dita-based-structured-learning-content-in-action/feed/ 0 Are you considering a structured approach to creating your learning content? We built LearningDITA.com as an example of what DITA and structured learning content can do! In this episode, Sarah O’Keefe and Allison Beatty unpack the architecture of LearningDITA to provide a pattern for other learning content initiatives.

Because we used DITA XML for the content instead of the actual authoring in Moodle, we actually saved a lot of pain for ourselves. With Moodle, the name of the game is low-code/no-code. They want you to manually build out these courses, but we wanted to automate that for obvious reasons. SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don’t have to do all the painful things of, you know, “Let’s put a heading two here with this little piece of content.” And the key thing is that allowed us to reuse content.

Allison Beatty

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Sarah O’Keefe: Hi everyone, I’m Sarah O’Keefe.

Allison Beatty: And I’m Allison Beatty.

SO: And in this episode, we’re focusing in on the LearningDITA architecture and how it might provide a pattern for other learning content initiatives, including maybe the one that you, the listener, are working on. We have a couple of major components in the learningDITA.com site architecture. We have learner records for the users. We have e-commerce, the way we actually sell the courses and monetize them. That is my personal favorite. And then we have the content itself and assorted relationships and connectors amongst all those pieces. So I’m here with Allison Beatty today, and her job is to explain all those things to us because Allison did all the actual work. So Allison, talk us through these things. Let’s start with Moodle. What is Moodle and what’s it doing in the site architecture?

AB: Okay. So Moodle is an open-source LMS that we-

SO: What’s an LMS?

AB: Learning management system, Sarah.

SO: Thank you.

AB: And we installed Moodle, our own instance of Moodle and customized it as we saw fit for our needs. And that is the component that acts as the layer between the content and the learning experience. So without the Moodle part, it’s just a big chunk of content that you can’t really interact with. And Moodle gives that a place to live.

SO: And then Moodle has the learner records, right?

AB: Yes.

SO: And what about groups? What does that look like?

AB: In Moodle, there’s a cohort functionality which allows us to use groups so that a manager can buy multiple seats and assign them to individuals and keep track of their course progress through group registration rather than individual self-service signups.

SO: So if I were a manager of a group that needs to learn DITA, instead of having to send five or 10 or 50 people individually to our site, I could just sign up once and buy five or 10 or 50 seats in a given course and then assign those via email addresses to all of my people, right?

AB: Exactly.

SO: Okay. So then speaking of buying things, we had to build out this e-commerce layer, which I was apparently traveling the entire time that this was going on, but I heard a lot of discussion about this in our Slack. So what does it look like? What does the commerce piece look like?

AB: Yeah. So it is a site outside of the actual learningDITA.com Moodle site that has a connector into Moodle so that you can buy a course or a group registration in the store, and then you get access to that content in Moodle.

SO: So we have this site, this actually separate site, and if you’re in there, you can do things like buy a course or buy a collection of courses or a number of seats. And then what were some of the fun complications that we ran into there?

AB: Oh yeah. So the fun complications there were figuring out how to set up an e-commerce site that, A, connected to Moodle so that we could sell the courses, and B, was able to process taxes and payments and all of that fun stuff. Moodle has PayPal as a feature out of the box in the base Moodle source code, but we wanted to accept credit cards directly, and that meant some additional layers, which is how we ended up with the store.scriptorium.com site, which is built on WordPress and uses the aforementioned connector to make those two sites talk to each other. So the LMS and the e-commerce piece are actually totally separate websites, but they exist within the same system environment.

SO: And most of you listening to this probably don’t care, but one of the things we learned was that digital training, downloadable training content is sometimes subject to sales tax and sometimes not, depending on the particular state or the particular jurisdiction. So it’s not just, what is sales tax in North Carolina versus what is sales tax in Washington state versus what is it in Oregon? But additionally, in each jurisdiction is this type of training subject to sales tax or not. So we spent a more than optimal amount of time on figuring out all of those things and making sure we get it right, because I’m extremely interested in making sure that those taxes are done correctly and keep us out of trouble.

AB: And the basic PayPal and Moodle wasn’t going to give us that level of granular control and specification.

SO: And typically our customers are looking to pay via credit card. So we’ve got the LMS piece with the learner experience, the actual learning platform. We’ve got the e-commerce piece with the Let’s Take Money piece. And then finally we have the content piece. So what does it look like to actually create these courses and create and manage the content that then eventually goes into Moodle?

AB: Yeah. So the content does have a single source of truth. It is all authored in DITA XML and stored in a central repository. You can see that content in GitHub; it's open source. We took the DITA XML and developed a SCORM transform that we could use to hook the content up to Moodle and be able to use all of the grading, progress, and prerequisite-type things that we needed to flesh out the actual learning platform. We learned a fun lesson along the way: Moodle does not support SCORM 2004. That required a little bit of backtracking to make sure that we were getting the data into the correct SCORM version to get into Moodle. And because we used DITA XML for the content instead of authoring directly in Moodle, we actually saved a lot of pain for ourselves. The name of the game with Moodle is low-code/no-code, and they want you to manually build out these courses. But we wanted to automate that for obvious reasons, and SCORM allowed us to do that by having a transform that would take our DITA XML, put it in SCORM, and then we just upload the SCORM package to Moodle and don't have to do all the painful things of "let's put a heading two here with this little piece of content." And the key thing is that allowed us to reuse content as well. Then if we need to update the content, all we have to do is replace the SCORM package in Moodle.
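For readers who haven't peeked inside one, the SCORM package Allison describes is essentially a zip file whose imsmanifest.xml tells the LMS the course structure and which file each lesson launches. This is an illustrative, simplified SCORM 1.2 manifest; the identifiers, titles, and file names are made up:

```xml
<!-- Illustrative SCORM 1.2 manifest (identifiers and file names are made up).
     Moodle reads this file to build the course navigation and to know
     which HTML file each lesson launches. -->
<manifest identifier="learningdita-course" version="1.2"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="org1">
    <organization identifier="org1">
      <title>Introduction to DITA</title>
      <item identifier="lesson1" identifierref="res1">
        <title>What is structured authoring?</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent"
              adlcp:scormtype="sco" href="lesson1/index.html"/>
  </resources>
</manifest>
```

The transform's job is to generate this manifest plus the lesson HTML from the DITA source, so updating a course means regenerating and re-uploading one zip rather than editing pages inside Moodle.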

SO: So currently we have DITA 1.3 content out there. The DITA 2.0 content is under development, and I would say mostly done; we're mainly waiting for the actual release for those two chunks of content. Those courses are going to be in GitHub in the DITA training project, or I think it's called the Learning DITA project now.

AB: Yep.

SO: Separately from that, we're working on some new courses which are not going to be open-sourced, but will be available on Moodle or… Sorry, on learningDITA.com. And so for those of you who are wondering, we've got a number of things on our roadmap. I'd love to hear more from people listening to this about what they need out of it. What more advanced courses are you looking for? One thing that we've heard a lot of requests for is a DITA Open Toolkit plugins 101.

How do I build a plugin? How do I use best practices? How do I make this all happen? So we have this, I don’t know, DITA inception thing happening because we’re training people on how to do DITA using DITA inside DITA, building out the stuff.

AB: It’s all very meta.

SO: It's extremely meta. Hypothetically, what would it look like to localize this? What we've delivered right now is in English, and in the past we have had people put together, let's see, German, Chinese, and I think French versions of the Learning DITA content. But what does it look like in this new architecture to localize?

AB: Yeah. Much like the toolchain for this new architecture, there are a couple of different components. If you would like to localize the Learning DITA content, what you'll want to look at is the content itself, translating and localizing the source content, but you'll also need to localize Moodle somewhat. So what you would do is basically clone the Moodle site, and, not to go too far into the Moodle weeds, you'll need to reconfigure the initializing PHP file a little bit. And then you would take your translated, localized content and load that up into your new Moodle site for whichever language you're localizing into.

SO: So it looks as though, you mentioned maintenance and this idea that Moodle by design wants you to make updates inside Moodle, and we pulled the content out of there. We’re basically saying Moodle is for learners and learning management and course records and sequencing and those kinds of things, and grading, I suppose, but the DITA back end is for content. So we’re putting all the content in DITA and then we push it over to SCORM, which then goes into learningDITA.com into the Moodle site. It sounds like more work, right? We had to build a SCORM transform. We had to put all this stuff in… We didn’t just go into Moodle and start authoring, which would be a lot faster on day one. So what’s the rationale for that? What does it look like in the long term to maintain something in Moodle versus to maintain something in the system that we’re describing?

AB: Yeah. It may seem easier on day one to manually put the content in, but when you need to make an update or change something, particularly if you want to change something about a piece of content that is reused and repeated throughout the courses, you have to manually trawl through every single course page and make those updates. With the SCORM package, once you have the SCORM transform set up and running to your liking, you can run your DITA content through it and then replace the SCORM package in Moodle instead of trawling through page by page. And maybe there is some content that is duplicated, and you mess it up because you were manually trawling through page by page. So having DITA as the single source of truth helps you with maintenance, even if it seems scary at first.

SO: And I expect one of the things we’re looking at is CCMS courses, and the concept of what is a CCMS is going to be the same for all of them. The process of how do I check out files is going to be a little different for each of them. So if you think about that from a course material point of view, you would have that conceptual overview of, what is the component content management system and why do I care? And then there’s, how do I do the thing in specific component content management system? That would probably be unique, but the conceptual overview would be probably the same. So we might have two or five or 15 different courses, one for each CCMS, but you could see where the conceptual stuff would overlap.

AB: Exactly.

SO: Okay. Everyone, I hope this glimpse into content operations for structured learning content was useful. Of course, the learningDITA.com site is much smaller than what we typically do with our customers at scale, but we are getting more and more requests for learning content and structured content options for learning content. If you're interested in learning more, I would suggest you go to learningDITA.com and check it out. Check out the DITA training, which has eight or nine courses on DITA, from "What is structured authoring?" all the way to the learning and training specialization. Allison, thank you so much for all your input.

AB: Thank you.

SO: And we’ll see you on the next one.

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post LearningDITA: DITA-based structured learning content in action appeared first on Scriptorium.

Scriptorium - The Content Strategy Experts full false 14:15
The benefits of structured content for learning & development content https://www.scriptorium.com/2025/04/the-benefits-of-structured-content-for-learning-development-content/ Mon, 07 Apr 2025 11:46:46 +0000 https://www.scriptorium.com/?p=22991 https://www.scriptorium.com/2025/04/the-benefits-of-structured-content-for-learning-development-content/#respond https://www.scriptorium.com/2025/04/the-benefits-of-structured-content-for-learning-development-content/feed/ 0 In this episode, Alan Pringle, Bill Swallow, and Christine Cuellar explore how structured learning content supports the learning experience. They also discuss the similarities and differences between structured content for learning content and technical (techcomm) content.

Even if you are significantly reusing your learning content, you’re not just putting the same text everywhere. You can add personalization layers to the content and tailor certain parts of the content that are specific to your audience’s needs. If you were in a copy-and-paste scenario, you’d have to manually update it every single time you want to make a change. That scenario also makes it a lot more difficult to update content as you modify it for specific audiences over time, because you may not find everywhere a piece of information has been used and modified when you need to update it.

Bill Swallow

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Christine Cuellar: Hey, everybody, and welcome to today’s show. I’m Christine Cuellar, and with me today I have Alan Pringle and Bill Swallow. Alan and Bill, thanks for being here.

Alan Pringle: Sure. Hello, everybody.

Bill Swallow: Hey, there.

CC: Today, Alan, Bill, and I are going to be talking about structured content for learning content. Before we get too far into the weeds, let's kick it off with an intro question.

Alan, what is structured content?

AP: Structured content is a content workflow that lets you define and enforce consistent organization of your information. Let's give a quick example in the learning space. You could say that all learning overviews contain information about the audience for that content, the duration, the prerequisites, and the learning objectives for that lesson or learning module. And by the way, that structure I just mentioned actually comes from a structured content standard called the Darwin Information Typing Architecture, DITA for short. That is an open-source standard that has a set of elements expressly for learning content, including lessons and assessments. I think it's also worth noting that another big part of the whole idea of structured content is that you are creating content in a format-agnostic way. You are not formatting your content specifically for, let's say, a study guide, a lesson in a learning management system, or even a slide deck. Instead, what a content creator or instructional designer does is develop content that follows the predefined structure, and then an automated publishing process applies the correct kind of formatting depending on how you're delivering the content. That way, as a content creator or instructional designer, you're not having to copy and paste your learning content into a bunch of different tools. And I know for a fact a lot of instructional designers are doing that right now. Instead of doing all that copying and pasting, you write it one time, and then you say, "I want to deliver it to these different delivery targets," whether that's for online purposes, for in-person training, or a combination of both. You set up publishing processes to apply the formatting for whatever your delivery targets are so you, as a human being, don't have to mess with that.
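As a rough illustration of the enforced structure Alan describes, the DITA Learning and Training specialization provides elements for exactly those pieces. This is a simplified, hypothetical overview topic; the content is invented and some detail of the content models is elided:

```xml
<!-- Simplified DITA Learning & Training overview topic; content is invented. -->
<learningOverview id="lesson1-overview">
  <title>Lesson 1: What is structured authoring?</title>
  <learningOverviewbody>
    <lcAudience>Technical writers who are new to structured content</lcAudience>
    <lcDuration><lcTime name="duration" value="30 minutes"/></lcDuration>
    <lcPrereqs><p>No prior DITA experience required.</p></lcPrereqs>
    <lcObjectives>
      <lcObjectivesGroup>
        <lcObjective>Define structured authoring</lcObjective>
        <lcObjective>Explain how format-agnostic content enables multichannel
          publishing</lcObjective>
      </lcObjectivesGroup>
    </lcObjectives>
  </learningOverviewbody>
</learningOverview>
```

Because every overview follows this shape, a publishing process can reliably pull, say, all the objectives into a study guide or a slide deck without anyone reformatting by hand.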

CC: Which is awesome. Part of the reason that we're talking about this today is that structured content has been a part of the techcomm world for over 30 years, a really long time, and now we're starting to see it make inroads in the learning and development space. We've been doing a lot of work with structured content in the learning space, but how is it different from the techcomm space? Bill, I'm going to kick this over to you.

BS: I think I’m going to take a higher-level view on this because there is a lot of overlap between techcomm and learning content. Where they really start to diverge is in delivery. Techcomm is pretty uniform in how it delivers content to people. There’s personalization involved and so forth, but essentially everyone’s getting the same thing. The experience is going to be the same. Everyone’s going to get a manual. Everyone’s going to get online help. Everyone’s going to get a web resource, what have you. It might be tailored to their specific needs, but it’s a pretty standardized delivery experience. For training, the focus is on the learning experience itself, and it’s usually tailored to a very specific need, whether it’s a very specific type of audience that needs information, or it’s very specific information that needs to be delivered in a very specific way for those people. Beyond that, we start looking at the content itself under the hood, and the information starts to, I would say, broaden with learning content because it can consume all the different types of information you have with technical content. And generally in a structured world, we think of that as conceptual information, how-to information, and reference information, for the most part. With learning content, now you have a completely new set of content in addition to that where you have learning objectives. You have assessments. You have overviews, reviews, all sorts of different content that essentially expands on the wealth of information you have from your technical resources.

CC: That’s great. Typically, the arguments for structured content, and the reasons it’s really valuable for organizations, are that it introduces consistency in your content, consistency for your brand across wherever you’re delivering content, and it helps you build scalable content processes, that kind of thing. What are some of the arguments for structured content for the learning environment specifically, if there are any new ones?

AP: Some of the reasons that you want to do structured content for learning content are really similar to other types of content. We’ve already talked about one of them. I touched on this earlier in regard to automated formatting. You are not having to do all of the work as a human being, applying formatting to however many delivery formats you have. That is a huge win. And especially in the training space, I have seen so many organizations copying content from one platform to another because the platforms don’t play well together, so you’ve got multiple versions of what should be the same exact content to maintain. That is another huge reason to consider structure. You want a single source of truth for your content regardless of where that information is being delivered, because if you’re looking at the overall quality of the learning experience, and you’re telling learners slightly different things in different places in your content, you are not providing an optimal learning experience. Therefore, having that single source of truth for a particular bit of information gives your learners a consistent piece of information regardless of what channel they consume it from. That’s a really important win for a solid, dependable learning experience.

CC: Gotcha. No, that definitely makes sense. It sounds like it would take some of the effort off of the subject-matter experts who are creating these trainings so that they can … They, I’m assuming, would rather focus on the work of helping train people. Getting some of the manual formatting and copy and pasting off of their workload sounds pretty nice. What are the complications that it might introduce or the change management issues that might need to be tackled when you’re bringing structured content into a learning environment?

AP: That’s true anytime you bring in structure. When people are used to working in an environment where you are doing manual formatting, and you’re seeing what things look like as you develop the content, the idea of developing content in a format-agnostic way, where you’re not thinking about what a slide looks like or how an assessment is going to work in the learning management system, is a big shift. It’s very easy to get focused on the delivery angle because you want it to be good, and you want it to be done in a way that makes that learning experience useful for the people who are trying to learn whatever it is they’re trying to learn. You don’t want those impediments of bad formatting or assessments that don’t behave well in your learning management system, but you kind of get to offload all of those concerns, which are very valid. I’m not saying they’re not valid. They are, but you want an automated process. Basically, you want computers to do that work for you. You want programming to apply that formatting so you can really focus on getting that information as solid as it can be, and you let technology handle the rest. You do set up the standards for how you deliver that content, whether it’s in print, online, in person, whatever. However you’re delivering your learning and training content, you set the standards: “This is how I need this to behave. This is how I need it to look. This is how I need it to interact.” Once you set those standards, then you turn around and have someone who has this programmatic skill set, like we do at Scriptorium, come in and develop the transformations that take your content and deliver it in the ways you need it delivered so you, as, like you were saying, the subject-matter expert, the instructional designer, or whatever content creator we’re talking about here … You are not doing that for every single delivery type that you are putting out for your learners.

BS: And it’s not to say that the experience isn’t tailored because it still can be tailored. Even if you are significantly reusing your content, you’re not just taking the same text everywhere. You can add personalization layers to that content and tailor certain parts of the content specific to what that specific audience needs rather than having to retype it all every single time you want to make a change if you were in a copy-paste scenario. And that also would make it a lot more difficult to update all that content as you modify it for specific audiences over time because you may not find everywhere where a piece of information has been used and modified if you need to update it. It does take a little bit of … Well, it takes a lot of the work off of those developing the content because they don’t have to worry about exactly what it looks like for every single target that they’re producing. It does require a little bit of, I would say, faith in the system that it will work. It really comes down to how you’re architecting this in the first place to make sure you understand who your varied audiences are, what the look and feel needs to be, what the delivery points are, and making sure that you are authoring within the scope of those things. And once you get that down, as Alan mentioned, it becomes a push-button operation to produce all of your various outputs.

AP: I think, too, from a change management point of view, one thing that I have heard from lots of content creators in the learning space is the burden they have when, for example, a program or the company changes names, changes logos, changes branding. If you have that built into the formatting in a way where you’re having to go into, say, a bunch of Microsoft Word or PowerPoint files and manually change those out, and I am sure I am talking to people out there in the ether who know exactly what I’m talking about, it is extremely painful. And when you have automated the application of formatting, what you can do is change those processes to include the latest corporate colors, the latest taglines, the latest fonts, the latest logos, whatever has changed, so you, as a human being, again, do not have to go in there and touch all of those files yourselves. That is a burden you don’t need when you’re trying to, quote, do your real work, which is helping people learn, not applying formatting to a zillion Microsoft Word documents. Nobody wants to do that, at least nobody I know anyway.

CC: No. That’s a very good example of how the structure can just take that part of the workload off of you so you can focus on what you want to do. But I like, Bill, how you put it that you have to trust the process, because it is an adjustment to go from authoring your content in a specific PowerPoint or in a specific Word doc to authoring it in a way that it can be reused. But ultimately what I’m hearing both of you say is that, even though it’s a valid concern that you might worry about your ability to personalize and your ability to control the user experience, once structured content is implemented correctly, and everyone is adjusted to the system, your opportunities for personalizing at scale are actually going to be bigger than when everyone’s doing it individually, and at least it introduces consistency across those personalized experiences. Do you think that’s fair to say, either of you? Do you think that’s a fair statement, or is that too optimistic?

AP: That is an incredibly loaded question, the only answer to which is … no, you are correct. Structure does enable all the things that you just asked about in that very leading, but good, question.

CC: It is very leading.

BS: It removes the visual context of where the content is going, but it doesn’t remove … In fact, it enhances the context of what the content is about.

AP: Right.

CC: That’s a good way to say it. I like that. Looking at structured content within the learning space itself, how does it … I know, Bill, you had mentioned that, within the techcomm space, it’s fairly uniform in how content is delivered and who it’s delivered to. Not that it’s always the same. How about in the learning space? How does that vary? And how does the structure approach vary?

BS: Well, this might contradict what I said before, but it’s a slightly different look on it in that, really, the learning clients that we’ve had … They kind of mirror a lot of the techcomm clients we’ve had in that everyone is producing roughly … If you look at it from a high enough altitude, it all looks the same. They’re all producing manuals. They’re all producing e-learning. They’re all producing whatever. When you get down into the nuts and bolts, that’s when you start finding that every single implementation is going to look a little bit different. In techcomm, you might have completely different types of content that you need to be able to handle. The same thing is true in the learning space. Every single group is going to have different needs, and they’re going to have very specialized needs based on the content that they’re producing and who they’re producing it for. The learning space, unlike techcomm, where they’ve basically been going down the structured path for 20, 30 years … The learning space has really been a sea of black boxes where every single system has its own way of doing things. It does about 90%, 95% of the same stuff that every other system out there does, but there is something special, something canned, something within the system that allows it to do the one thing that no other system does. And all of these technologies historically have really been locked down tight, where your content goes in, and it lives and thrives in that box that you’re developing it in. But if you need to take that content out and change systems and put it somewhere else, there’s a lot of rework that potentially needs to be done, depending on how customized that system you were using was. And let’s face it: you can structure content. You can centralize it. You can componentize it all you want. It’s not going to change the fact that learning content is going to have these many varied endpoints for how it’s being delivered.
Even though you are consolidating and structuring in a central repository to maximize your reuse, to not worry about the formatting, you may still have three or four different learning management systems that you are pushing that content into. Each one of those systems has different requirements: the type of content that gets consumed, what it does, how it reacts, what it expects, the order it needs that information in per lesson, per page. It gets a little more complicated in the delivery of the learning content because we need to be able to tailor to not only the needs of the particular client and the content that they’re producing but the needs of the systems that need to ingest it.

AP: One other thing I would mention here is the level of interactivity, I think, is higher with learning and training content than in the techcomm world. Now, I realize there are documentation portals and things like that that do provide some levels of interactivity. However, I think you are going to see much more of that kind of thing on the learning and training side, especially in regard to assessments, when you are trying to have people do little, basically, mini exercises to prove that they have learned what they need to learn. Those exercises are graded, and then those scores are recorded. That is the kind of thing you don’t see in techcomm. That is a whole, very specific thing to the learning and training world. Therefore, the structure that you choose needs to accommodate that, and your delivery targets in particular need to accommodate that very high level of interactivity with, for example, like Bill was saying, a learning management system.

BS: You have quite a variety of needs out there from basic, true/false, multiple choice, or matching all the way down to simulations, doing interactive exercises, and so forth all within a learning management system. And you need to be able to account for that. And as I mentioned, not all of those systems function in the exact same way, so it needs to be tailored.
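The question types Bill lists have dedicated markup in DITA’s Learning & Training specialization. As a hedged sketch, a true/false interaction might look like this (element names come from the specialization; the question text, IDs, and feedback are invented):

```xml
<lcInteraction id="check-01">
  <lcTrueFalse id="tf-01">
    <lcQuestion>Structured content separates content from formatting.</lcQuestion>
    <lcAnswerOptionGroup>
      <lcAnswerOption>
        <lcAnswerContent>True</lcAnswerContent>
        <!-- Marks this option as the right answer for automated grading -->
        <lcCorrectResponse/>
        <lcFeedback>Correct. Formatting is applied at publish time.</lcFeedback>
      </lcAnswerOption>
      <lcAnswerOption>
        <lcAnswerContent>False</lcAnswerContent>
        <lcFeedback>Not quite. Authors write structure; the pipeline applies formatting.</lcFeedback>
      </lcAnswerOption>
    </lcAnswerOptionGroup>
  </lcTrueFalse>
</lcInteraction>
```

Because the correct answer and feedback are tagged rather than drawn in an authoring tool, each delivery target can render and score the same source interaction in its own way.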

CC: For any listeners that are listening to this episode right now, and they are in the learning content space, and they’re interested in getting started with structured content, Alan, where would you recommend they start?

AP: Well, our website, scriptorium.com, has lots of resources. Very self-serving, I know, but we will put them in the show notes so you can get to them. We also are the creator and maintainer of a site called learningdita.com that teaches people about one way to do structured content, which is DITA, which I mentioned earlier in the show. And there is a free Introduction to DITA course that you can take. Between the links we’ll include in the show notes about what structured content is and how it applies to the learning and training space, and LearningDITA itself, those are all good starting points for people who are considering going on the structured content journey for their learning content.

CC: That’s great. And the only thing I’ll add to that is that, if you’re interested in learning more about learning content and structured content, this is something that we talk about a lot. I would also recommend subscribing to our Illuminations newsletter, which, like Alan said, will be linked in the show notes. Every month, we send out a recap of the topics we talked about, and learning content is very often in there because we talk about it a lot.

This final question is for both of you. Is there anything else that you want to leave our listeners with about structured content in the learning content space before we wrap up today?

BS: I’d say, if you’re looking at structured content, it’s not, on its face, going to be a savior solution. But with enough thought, it can really make a difference in your content development workflow, and it can save you a lot of time in producing content that is targeted to very specific people and delivery points.

AP: For me, my final suggestion here is think about your pain points. What are the things that are keeping you up at night as you develop your learning and training content? What are the continual issues you are battling, especially your content creators? What are they battling? Is it that they’re having to format for umpteen different platforms? Is it that they’re needing to personalize things for different locations? For different levels of service that you’re training people on? What are the things that are causing you problems? Basically, compile a list of those. And then from there, figure out: could structured content solve any of these problems? Don’t put the cart before the horse, is the best way to put it, really. Think about the pain points in your processes and then see if structure might be the thing to solve them.

CC: That’s great. And on that, Alan, Bill, thank you very much for being here and recording this with me today.

BS: Thank you.

AP: Absolutely. We like to talk about this stuff probably too much.

CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

Get monthly insights on structured learning content, content operations, and more with our Illuminations newsletter.

The post The benefits of structured content for learning & development content appeared first on Scriptorium.

LearningDITA: What’s new and how it enhances your learning experience https://www.scriptorium.com/2025/03/learningdita-whats-new-and-how-it-enhances-your-learning-experience/ Mon, 10 Mar 2025 11:42:33 +0000 https://www.scriptorium.com/?p=22957 https://www.scriptorium.com/2025/03/learningdita-whats-new-and-how-it-enhances-your-learning-experience/#respond https://www.scriptorium.com/2025/03/learningdita-whats-new-and-how-it-enhances-your-learning-experience/feed/ 0 In this episode, Alan Pringle, Gretyl Kinsey, and Allison Beatty discuss LearningDITA, a hub for training on the Darwin Information Typing Architecture (DITA). They dive into the story behind LearningDITA, explore our course topics, and more.

Gretyl Kinsey: Over time that user base grew and grew. And now it boggles my mind that it got all the way up to 16,000 users. I never expected it to grow to that size.

Alan Pringle: Well, we didn’t really either, nor did our infrastructure. Because as of late 2024, things started to go a little sideways, and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn’t performing well. So we made a decision that we needed to take the site offline, and we did, to basically redo it on a new platform.


Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Alan Pringle: Hey, everyone, I am Alan Pringle, and today I am here with Gretyl Kinsey and Allison Beatty. Say hello, you two.

Gretyl Kinsey: Hello.

Allison Beatty: Hello.

AP: We are together here today to talk about LearningDITA, our e-learning site for the DITA specification, because we have just moved it to a new platform. So we want to give you a little background on what went on with that decision. So first of all, Gretyl, you and I were at Scriptorium when we kicked off this site, and I just went back and looked at blog posts. We announced it via a blog post I wrote in July of 2015. So we have had this site up and running for 10 years, which absolutely blows my mind.

GK: It blows my mind too. It’s hard to believe that it’s been that long because it does seem like it got launched pretty recently in my memory, but it has been through a lot of changes and so has the entire landscape of content creation as well. So yeah, it’s really cool that now we can look back and say it has been 10 years of LearningDITA being on the web.

AP: For those who may not be familiar with the site, give us a little summary of what it is.

GK: Sure. So LearningDITA is a training resource on DITA XML and it’s developed by Scriptorium, and it covers a lot of the main fundamentals of DITA. So we have some courses on basic authoring and publishing. We also have a couple of courses on reuse and one course on the DITA learning and training specialization. So you get a good overview of a lot of different areas of DITA XML. And all of the courses are self-guided e-learning. So you can go through and take them at your own pace. You can go back and take the courses again if you want a memory refresher. And they all come with a lot of examples and exercises. So you get a download of sample files that you can work your way through. There’s some of that practice that’s guided, and then there’s others that you do on your own. And then there are also assessments throughout each course that help you test your knowledge. So you get a really nice hands-on approach to LearningDITA. So that’s why we called the site that in the first place. And it really helps to get those basics, those fundamentals in place if you are coming at it as a beginner who is unfamiliar with DITA or maybe you have some familiarity, but you want to just reinforce what you know.

AP: So we went along with this site and kept adding courses over the years. I think we got to nine, is that right? I think it’s nine.

GK: That’s right. So we really started this out, like I was mentioning earlier, because we needed something that was beginner-friendly, something for people who were unfamiliar with DITA, because we saw a gap in the information that was available at the time 10 years ago. A lot of the DITA resources, documentation, guides, and things like that out there assumed some prior knowledge or prior expertise, and there wasn’t really anything that filled that gap. So we came up with these courses. And of the nine courses that we have, the first one is just an introduction to DITA. So that was the first one that launched back in July of 2015. And then shortly after that, we added a few courses on topic authoring. So that covers the main topic types: concept, task, reference, and glossary entry. And then we just added more courses over time. So we’ve got one that covers the use of maps and bookmaps. We’ve got one that covers publishing basics. We have, like I mentioned, the two courses on reuse. So there’s a more introductory basic reuse course and then a more advanced reuse course, and then learning and training. So those are the nine courses that we have, and they’ve been up there pretty much the entire time. The earliest ones were the introduction and the authoring courses, and then we added the others as the demand increased over time.

AP: And that demand, I’m glad you mentioned that, really did increase because as of late 2024, we had over 16,000 students in the database for LearningDITA, which also completely blows my mind.

GK: Yeah, it does for me too, because I think in the early days we saw a lot more individuals using it, and then over time we would see more large groups of users sign up. So an entire class whose professor might’ve recommended taking the LearningDITA courses or sometimes an organization, whether it was one of our clients or just another organization, would have a lot of employees sign up all at once. And so yeah, over time that user base grew and grew. And now it does boggle my mind as well that it got all the way up to 16,000 users. I never expected it to grow to that size.

AP: Well, we didn’t really either, nor did our infrastructure. Because as of late last year, things started to go a little sideways, and it became clear our tech stack was not going to be able to sustain more students. It was very creaky. The site wasn’t performing well. So we made a decision that we needed to take the site offline, and we did, to basically redo it on a new platform. And Allison, this is where I want you to come in because you are one of the, shall we say, victims on the Scriptorium side who got to dive into what our requirements were and what we needed to do. Essentially, I mean, we really became consultants for ourselves and turned our consultant eye on our own problem to figure it out. And Allison, if you don’t mind, tell us a little bit about that process and where we landed.

AB: Yeah, so the platform was the first big choice that we knew we had to make, and things started out pretty fuzzy because we didn’t really know what we were doing and just had to figure out what was going to work to solve these pain points. And so as a starting place, we knew we needed a new LMS, a learning management system. And so we did some research on what learning management systems were out there and thought about what we could use that would fit our needs. And we ended up choosing Moodle, which is an open source LMS that is very widely used within colleges, universities, and higher education settings. And we knew it could be very powerful and probably suit our needs with some custom work. But the thing about Moodle is it’s known for having a high barrier to entry in terms of the installation, and that made us a little nervous. But the more we kept looking at LMS options, both open source and commercial, the more we realized that Moodle is so popular, and almost an industry standard, for a reason, and that it was worth taking on that challenge.

AP: And I even had someone in the learning space tell me, when I asked her advice on what LMS to use, to run away from Moodle, for a lot of the reasons that you just mentioned. But I think it’s worth noting, there are a lot of people using it, especially in educational settings, schools, universities. The open source angle was also appealing because that way it didn’t look like we were picking “favorites” by choosing a particular proprietary LMS.

AB: Yeah, definitely. And then the other piece of the puzzle there, as far as how we’re going to display and host the learning content, was the DITA transform for the content itself and how we were going to get the LearningDITA content into our LMS. And so we knew that Moodle is compatible with both SCORM and xAPI, and we ended up deciding that we wanted to develop a DITA to SCORM transform because SCORM is something that we have discussed and worked on with other clients as we’ve been seeing this trend in learning and training content pick up.

I don’t know if Gretyl wants to talk a little bit about how she’s seen SCORM throughout various projects and why we decided it was something we wanted to pursue and learn more about ourselves.

AP: And what is it while you’re at it? That too.

AB: That’s a good question. I’ll just go ahead and talk a little about what it is without getting too deep technically. Basically, it’s a standard for e-learning content, and it provides communication with the LMS that can do things like track grades. In LearningDITA, both the previous site and the current site, you had to pass assessments to get to the next lesson. And so SCORM can handle things like tracking assessment completion and scores. It’s pretty flexible and widely used. It’s more or less just a standard, but it requires a pretty specific data structure for it to function, because it’s expecting certain data structures that are defined in the standard for it to work in different environments. And Gretyl, would you like to talk a little bit about how we’ve seen the SCORM standard pop up through various client projects?
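The specific data structure Allison mentions starts with the package manifest. For a rough idea, a stripped-down SCORM 1.2-style imsmanifest.xml might look like the fragment below; the identifiers, titles, and file names are invented for illustration, and a real manifest carries additional required metadata:

```xml
<manifest identifier="com.example.learningdita.intro" version="1.0"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="org1">
    <organization identifier="org1">
      <title>Introduction to DITA</title>
      <item identifier="lesson1" identifierref="res1">
        <title>Lesson 1: What is DITA?</title>
        <!-- Passing score the LMS uses when recording results -->
        <adlcp:masteryscore>80</adlcp:masteryscore>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- scormtype="sco" marks content that communicates with the LMS runtime API -->
    <resource identifier="res1" type="webcontent" adlcp:scormtype="sco"
              href="lesson1/index.html">
      <file href="lesson1/index.html"/>
    </resource>
  </resources>
</manifest>
```

Because every conformant LMS expects this same package structure, the same zip can be dropped into Moodle or any other SCORM-capable system.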

GK: Sure. So we have seen, I think, over especially these last 10 years since LearningDITA launched, a bit of an uptick in clients who come to us with e-learning content specifically. Some of them, that’s the only content they have. For others, they are trying to get some sort of a process for developing both e-learning content and then other kinds, like technical documentation and marketing content. But a lot of them end up going down this path where they realize DITA XML is going to be helpful for content creation, especially if they do have that cross-department collaboration or reuse that needs to happen. And SCORM has been something that we’ve seen crop up with a lot of these projects. Because like you mentioned, Allison, it offers all that flexibility around things like scoring the assessments and keeping that student data that’s needed. And we’ve also seen how it’s really good when you’ve got an organization that has to deliver e-learning content to multiple different LMSs. So let’s say they’ve got students in a lot of different geographical areas or different industries, and they all use different LMSs. That SCORM package can be delivered into all of them and used. And so they get that flexibility. So we’ve seen this crop up in a lot of different client projects. And the more we saw it pop up in these different projects, the more we said this might be beneficial for us too. We’ve seen all the different ways that these organizations have made use of SCORM packages, so why not give it a try for our LearningDITA content? Which, by the way, I just wanted to mention, I don’t think we explicitly said this, but all of the LearningDITA courses themselves are authored in DITA XML. So there’s kind of a meta layer there to think about. But because of that, we have to think about how we are going to publish this information and get these e-learning courses out onto the web. And so a DITA to SCORM transform, as Allison said, is the approach that we decided on.

AP: And those source files, by the way, are part of this open source project that’s out in GitHub. And we’ll put some links in the show notes about it. But you can look at the source files that we used and download them for free. They’re open source. You can look at them and even use them for your own purposes if you like.

GK: And one question I had there, so you mentioned that all of those files are free and LearningDITA itself, the website, the platform has always been free, but now we are introducing a new pricing model. And so Alan, I wanted to ask you about that, how that change came about, why we made that decision to go from an entirely free resource to something with a new pricing model?

AP: Yeah, that’s a hard one, and it was not a fun discussion. It wasn’t. But basically, considering we’ve got 10 years of work invested in this, we have hundreds of hours invested in developing and maintaining the site and all the courses. We also have hosting costs involved. So it got to the point, especially with those 16,000 students, where things were just not sustainable. And the tech stack was not working anymore. So we knew we had to do something and invest more time into the platform or frankly abandon it. And when you look at the choices, they were to completely shut down the site and get rid of that resource, or to charge very small amounts. The intro course will always be free. That was the decision that we made. And there will be coupon codes. There will be discounts for courses and other things. So we realize we are changing from the free model. We wish we didn’t have to do it. But looking at the reality of the time that we’ve invested in it, and what it takes to keep it running in the future, that was the decision that we made to keep this running for the long haul.

GK: And I think, like we've said, we've seen so many changes in the content space and the industry itself over these years. Evolving, and making sure the pricing reflects the value this resource adds, makes the move to that model the right call.

AP: And I want to talk a little more about the Moodle part of this equation, because the way it works is different from what we had before, and I think it's worth noting that the user experience is a little different. When you open up a course, it essentially opens up in a SCORM package viewer. Allison, could you talk a little bit about how that experience is different?

AB: Yeah. Something that we noticed about Moodle is that it's a very low-code, no-code type of platform. And part of that SCORM decision was that we wanted to single source the content that lives in that repository; we didn't want to manually insert all of it. So the way SCORM interacts with the Moodle site is that instead of having the content baked into webpages, it launches something equivalent to an iframe: a second window where you take the course. When you close out that window, it ends your session. So don't freak out if a second window pops up when you go to take your course. That's the way it's designed to work with the SCORM transform.

AP: And then Moodle records your activity, how well you’ve done with the quizzes, and all of that kind of information.

AB: And on the technical back end, all of that grade recording and assessment tracking is something that is handled because of the SCORM transform and how we built the Moodle site.
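As a rough illustration of the grade recording Allison mentions: SCORM packages report scores to the LMS through a small run-time data model. The toy Python class below only mimics the SCORM 1.2 data-model keys involved; in a real course this exchange happens in JavaScript against the LMS-provided API object (LMSInitialize, LMSSetValue, LMSFinish).

```python
# Conceptual toy model of SCORM 1.2 run-time tracking, NOT a real client.
# It only shows which data-model keys carry the score and completion status
# that Moodle records when the course window closes.
class ToyScormSession:
    def __init__(self):
        self.data = {"cmi.core.lesson_status": "not attempted",
                     "cmi.core.score.raw": None}
        self.committed = {}

    def set_value(self, key: str, value):
        # Equivalent in spirit to LMSSetValue(key, value).
        self.data[key] = value

    def commit(self):
        # The LMS persists whatever has been set when the session ends.
        self.committed = dict(self.data)

session = ToyScormSession()
session.set_value("cmi.core.score.raw", 90)
session.set_value("cmi.core.lesson_status", "passed")
session.commit()
```

This is why closing the popup window matters: ending the session is what triggers the final commit of the learner's status back to the LMS gradebook.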

AP: And I think it is time for us to mention the people who really helped build that transform and the Moodle site. Let's call them out by name: thank you to Jake Campbell, Simon Bate, and Melissa Kershes for getting in there and helping us get that done.

GK: And I can just say, after doing a lot of end-user testing to make sure this works, I actually think it's easier to keep track of where you are than it was in our previous platform. I like that it pops things out into a new window; it really helps guide you along as you go through each part of the course. It pops up with notifications about saving your progress if you need to stop and restart a course at any point, and it makes it very clear where you are in the course and whether you have passed the assessments. So the entire package works really well, and I think it's really intuitive as an end user. Hopefully all of you who go take the courses on the new platform will see the same thing.

AP: I think it's worth mentioning, too, that moving to this new platform is going to give us opportunities to do more things in the future. We will be adding new content, especially as the DITA 2.0 standard comes out, so when that is released by the committee that controls the standard, we will update our courses. And we may do some microlearning, perhaps some live e-learning. We've got lots of choices here, so stay tuned for that.

And with that, Allison and Gretyl, I want to thank you very much for your work on the site and for talking with us today.

GK: Absolutely. Thank you.

AB: Thank you.

The post LearningDITA: What’s new and how it enhances your learning experience appeared first on Scriptorium.

Building your futureproof taxonomy for learning content (podcast, part 2) https://www.scriptorium.com/2025/02/building-your-futureproof-taxonomy-for-learning-content/ Mon, 10 Feb 2025 12:29:05 +0000 In our last episode, you learned how a taxonomy helps you simplify search, create consistency, and deliver personalized learning experiences at scale. In part two of this two-part series, Gretyl Kinsey and Allison Beatty discuss how to start developing your futureproof taxonomy, from assessing your content needs to lessons learned from past projects.

Gretyl Kinsey: The ultimate end goal of a taxonomy is to make information easier to find, particularly for your user base because that’s who you’re creating this content for. With learning material, the learner is who you’re creating your courses for. Make sure to keep that end goal in mind when you’re building your taxonomy.

Related links:

LinkedIn:

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Allison Beatty: I am Allison Beatty.

Gretyl Kinsey: I’m Gretyl Kinsey.

AB: And in this episode, Gretyl and I continue our discussion about taxonomy.

GK: This is part two of a two-part podcast.

AB: So if you don’t have a taxonomy for your learning content, but you know need one, what are some things to keep in mind about developing one?

GK: Yeah, so there are all kinds of interesting lessons we've learned along the way from working with organizations who don't have a taxonomy and need one. I want to talk about some of the high-level things to keep in mind, and then we can dive in and think about some examples. One thing I want to say upfront is that it is very common for learning content in particular to be developed in unstructured environments and tools like Microsoft Word or Excel. It's also really common, if you are working within a learning management system or LMS, for there to be a lack of overall consistency, because the trade-off there is flexibility, right? You want to be able to design your courses in whatever way is best suited for that specific subject or set of material. But that's where you have the trade-off between how consistent the information and its organization are versus how much flexibility your instructional designers have for maximum creativity. When you've got those kinds of considerations, the information can become harder for your students to find or use, and even for your content creators. We've seen organizations where they've said, "We've got all of our learning materials stuck in hundreds of different Word files or spreadsheets, or in different LMSs, or in different areas of the same LMS." And when they have all of those contributors, like we talked about with multiple authors contributing, or sometimes lots and lots of subject matter experts contributing part-time, that really creates siloed environments where you've got different little pieces of learning material all over the place and no overarching organizational system. And that's typically the driving point where an organization will say, "We don't have a taxonomy. We know that we need one."
But I think that is the first consideration: if you don't have one and you know you need one, the first question to ask is why? Because so often it is those pain points I mentioned, the lack of one cohesive system, one cohesive organization for your content, and sometimes also one cohesive repository or storage mechanism. So that's typically where you'll have an organization saying, "We don't have a good way to connect all of our content and have that interoperability you were talking about earlier, and we need some kind of a taxonomy so that even if the content is still created in a whole bunch of different ways by a bunch of different people, when it gets served to the students who are going to be taking these courses, it's consistent, well-organized, and easy for people to find what they need." So the first consideration is that if you've got that demand for taxonomy developing, think about where it's coming from, and then use that as the starting point to actually create your taxonomy. Another thing that can help is to think about how your content is created. If you do have those disparate environments or a lot of unstructured material, take that into account and think about building a taxonomy in a way that's going to benefit rather than hinder your creation process. That is especially important the more people you have contributing to your learning material. It's really helpful to try to gather information and metrics from all of your authors and contributors, as well as from your learners. If you've got some kind of e-learning or training website with a feedback form where your learners can tell you what was good or bad about the experience, what was difficult, or what would make their lives easier, that's really great information for you to have.
But also from your contributors, your authors, your subject matter experts, your instructional designers, if they have a way to collect feedback or information on a regular basis that will help enhance the next round of course design, then all of that can contribute to taxonomy creation as well. When you start building a taxonomy from the ground up, you can look at all the metrics that you’ve been collecting and say, “Here’s what people are searching for. We should make sure that we have some categories that reflect that. Here are difficulties that our authors are encountering with being able to find certain information and keep it up to date or with being able to associate things with learning objectives. So let’s build out categories for that.” So really making sure that you use those metrics. And if you’re not collecting them already, it’s never too late to start. I think the biggest thing to keep in mind also is to plan ahead very carefully and to make sure that you’re thinking about the future, that you’re doing futureproofing before you actually build and implement your taxonomy. And I know we both can probably speak to examples of how that’s been done well versus not so well.

AB: Yeah, maintenance is so important.

GK: Yeah, and I think the more you think about it upfront, before you ever build or put a taxonomy in place, the easier that maintenance is going to be, right? Because we've seen a lot of situations where an organization will just start with a taxonomy, but maybe it's not broad enough. Maybe it only starts in one department: they have it for just the technical docs, but not for the learning material. Then down the road it's a lot more difficult to go in and rework that taxonomy for new information that came out of the learning department, when, if they had had it upfront, it could have served both training and technical docs at the same time. So thinking about that and doing that planning is one of the best ways to avoid having to rework a taxonomy.

AB: And I’m glad you brought up the gathering of feedback and insight from users before diving into building out a taxonomy. Because at the end of the day, you want it to be usable to the people who need that classification system. That is the most important part.

GK: Yeah, that’s absolutely the end goal.

AB: Usability.

GK: Yeah, and I think a big part of that, like I’ve mentioned, planning ahead carefully and futureproofing, is looking at metrics that you’ve gathered over time because that can help you to see whether something in those metrics or in that feedback is a one-off fluke or whether it’s an ongoing persistent trend or something that you need to always take into consideration from your end users. If you’ve got a lot of people saying the same things, a lot of people using the same search terms over time, that can really help you with your planning. And yeah, like you said, I think the ultimate end goal of a taxonomy is to make information easier to find, and in particular for your user base because that’s who you’re creating this content for. And with learning material, that’s who you’re creating your courses for. So you want to make sure that when you’re building that taxonomy, that that end goal is something you always keep in mind. How can we make this content easier for people to find and to use?
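As a sketch of using search metrics the way Gretyl describes, the snippet below (with invented data) separates one-off flukes from persistent trends by requiring a term to appear often and across multiple months before it's treated as a candidate taxonomy category.

```python
# Hypothetical sketch: aggregate logged search terms over time to separate
# one-off flukes from persistent trends worth encoding as taxonomy categories.
from collections import Counter

search_log = [  # (month, term) pairs; data invented for illustration
    ("2025-01", "scorm"), ("2025-01", "scorm"), ("2025-02", "scorm"),
    ("2025-02", "reuse"), ("2025-01", "xsl bug"),
]

# Track which months each term appeared in.
term_months = {}
for month, term in search_log:
    term_months.setdefault(term, set()).add(month)

counts = Counter(term for _, term in search_log)

# A "trend" here means searched often AND across multiple months;
# thresholds are arbitrary and would be tuned to your own traffic.
trends = [t for t, c in counts.items()
          if c >= 3 and len(term_months[t]) >= 2]
```

With this data, only "scorm" qualifies as a trend; "xsl bug" shows up once and stays a fluke rather than becoming a taxonomy category.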

AB: Definitely. Something else that I am curious to get your take on is in this planning stage. So in my experience, I feel like there’s never nothing to start with. Even if there’s not any formalized standards or anything around classification of content, there’s like a colloquial system, right?

GK: Yes, very much so.

AB: Of how content creators or users think about and organize content, even if they're not necessarily using a taxonomy.

GK: Yeah. A lot of times it's very similar to what we just talked about with content structure itself. If you're in something like Microsoft Word or unstructured FrameMaker, even if there's not an underlying structure, a set of tags under that content, there is still an implied structure. You can look at something like a Word document and say, "Okay, it's got headings at these various levels, it's got paragraphs, it's got notes," and you can glean a structure from that even though it doesn't exist in a designated form, right? Taxonomy is the same way. You've got people using and categorizing information even if they don't have formal categories or a written-down or tagged taxonomy structure. There's always still some way that people are organizing that material so that they can find it as authors, or so that their end users can find it as the audience. And that's a really good place to draw from. If you don't have a formal taxonomy in place, you still have an implied taxonomy somewhere. Going back to what you said about gathering the metrics, that's a lot of times how you can find it and start to root it out if you're looking for a starting point. So step one, after you've figured out why you need that formal taxonomy and what the driving factor behind it is, is to go hunt down information about your existing implied taxonomy and how people currently find and categorize information, because that will help you at least start drafting something. Then you can further plan and refine it as you take into account the various metrics from your user base, and gather information across all the content-producing departments in your organization, until you finally settle on what that taxonomy structure should look like.
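One hypothetical way to root out an implied taxonomy from heading levels, as described here for Word-style documents, is to fold the headings into a tree. The input format below is invented purely for illustration.

```python
# Sketch: recover an implied hierarchy from heading levels extracted from an
# unstructured document (e.g. Word styles). (level, title) pairs are invented.
headings = [(1, "Installation"), (2, "Windows"), (2, "macOS"),
            (1, "Configuration"), (2, "Network")]

def build_tree(headings):
    root = {"title": "root", "children": []}
    stack = [(0, root)]  # (level, node) path from root to current branch
    for level, title in headings:
        node = {"title": title, "children": []}
        while stack[-1][0] >= level:  # pop back up to this heading's parent
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((level, node))
    return root

tree = build_tree(headings)
```

The resulting tree (Installation with Windows/macOS under it, Configuration with Network under it) is a first draft of the implied taxonomy that authors were already using informally.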

AB: I know that the word taxonomy can sound complicated and scary and all that, but you're never really starting with the fear of a blank page. Taxonomies are everywhere and in everything, even if they're not formalized. Think about when you go to the grocery store: you know you need ketchup, so you go to the condiment aisle to find it. There's so much organization and hierarchy that exists already in our day-to-day lives. So there's never a fear of a blank page with taxonomies; it's just a matter of thinking about the future and being mindful that things may change and maintenance will happen.

GK: Exactly. I think that point that you made about even when you go to the grocery store, humans think in taxonomy, right? Humans naturally categorize things.

AB: And group things. Yeah.

GK: And so I think the main goal of having a taxonomy formalized is to take that out of people’s heads and actually get it into a real form that multiple people can all use together, and then that serves that ultimate end goal we talked about of making things easier for your users to find.

AB: Access. Definitely. I want to talk about some lessons learned from taxonomies that you and I have worked on with clients, and I'm thinking of how you're never starting with a blank page. One project in particular comes to mind where we developed a learning content model and used Bloom's Taxonomy as a jumping-off point. That's another way to go about it: use the implied structure in combination with a structure that already exists, and integrate that into your content model. And then on the other hand, I know we've also done taxonomies for learning where we've specialized a lot.

GK: And specialization is always interesting because we see it develop out of very specific information needs. For example, if you are putting out learning material or courses around, to go back to the example from earlier, how to use a specific kind of software, or a class you can take to get certified in a certain activity in that software, then it makes sense to think about specialized structures that are specific to that software. And it can be the same in whatever kind of material you're presenting. If you're saying, "Oh, we're in the healthcare industry, we're in the finance industry, we're in the technology industry," whatever your industry is, there's going to be information specific to that industry that you probably want to capture as part of your taxonomy. Those categories are going to be specific to that industry and to the product or material you're producing, or to the learning courses you're creating. So that's a really good thing to think about in taxonomy development: if you're in a very specific industry where you need industry-specific information in the taxonomy, it's going to be really important to specialize. If you're working in DITA XML, specialization means creating custom elements derived from the standard, out-of-the-box ones. So whenever you think about a taxonomy that's driven by metadata in DITA XML, that's where you might start creating custom metadata elements and attributes to drive your taxonomy. The custom names for those elements and attributes would be something you specialize to match the requirements of your industry.
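To illustrate the metadata-driven taxonomy idea: `<othermeta>` is a standard DITA prolog element, but the names and values below are invented, and in practice a specialization would define custom elements rather than reusing generic name/content pairs. This sketch just checks tagged values against a controlled vocabulary, which is one way a taxonomy keeps metadata consistent across many contributors.

```python
# Sketch: validate taxonomy metadata in a DITA topic prolog against a
# controlled vocabulary. <othermeta> is standard DITA; the facet names,
# values, and vocabulary here are invented for illustration.
import xml.etree.ElementTree as ET

VOCAB = {"industry": {"healthcare", "finance"},
         "course-level": {"beginner", "intermediate", "advanced"}}

prolog = """<prolog><metadata>
  <othermeta name="industry" content="healthcare"/>
  <othermeta name="course-level" content="expert"/>
</metadata></prolog>"""

def invalid_meta(prolog_xml: str) -> list[tuple[str, str]]:
    """Return (facet, value) pairs that fall outside the vocabulary."""
    root = ET.fromstring(prolog_xml)
    bad = []
    for meta in root.iter("othermeta"):
        name, value = meta.get("name"), meta.get("content")
        if name in VOCAB and value not in VOCAB[name]:
            bad.append((name, value))
    return bad

errors = invalid_meta(prolog)
```

Here "expert" is flagged because the controlled vocabulary only allows beginner/intermediate/advanced, exactly the kind of consistency check a governed taxonomy enables.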

AB: Yeah, that’s spot on with the example I was talking about a while ago about how the Library of Congress uses Library of Congress subject headings, but the National Library of Medicine has their own classification system for cataloging. But under the hood, they’re both Dublin Core. They’re both specialized Dublin Core. You know what I mean?

GK: Yes.

AB: There’s different context and then… Yeah, totally. Oh, this was the question I was going to ask you. Is there a trade-off with heavy specialization in your taxonomy?

GK: I think the biggest trade-off is maintenance. So we were talking earlier about how when you’re doing that initial planning that you want to think about futureproofing and you want to think about how you can make it as easy to maintain as possible within reason, of course, because nothing is ever easy when it comes to content development.

AB: That’s true.

GK: But yeah, when it comes to heavy specialization, that's the biggest thing to consider: for any kind of specialized tagging, you have to have specialized knowledge, people who understand the categories and know how to build and maintain that specialization. So you have to have those resources available, and you also have to think about how much more difficult it's going to be when you inevitably need to add or change material if you have specialized tags. Maybe it's actually going to enhance things; instead of making things more difficult, it might be a little easier, because you've already created custom categories, and if you need to add one down the road, you've got a roadmap for that. But it really depends on your organization and the resources you have available. Thinking specifically about learning content, I think one of the biggest areas where heavy specialization can be challenging is that it's typical to have so many part-time contributors and subject matter experts who are not going to be experts in the tagging system; they're just experts in the material they're contributing. If they have to learn how to use those tags to a certain extent, then sometimes the more customization or specialization you do, the more difficult that can be for those contributors, and it can make it hard to get them on board with having that taxonomy in the first place.

AB: Yeah, change management.

GK: So I think that’s the big trade-off. Yes, change management, maintenance, and thinking about the best balance for making sure that things are useful for your organization. That you’ve got the taxonomy in place that you need, but it’s also not going to be so difficult to maintain that it essentially fails and that your authors and contributors don’t want to keep it going.

AB: This is a big question, but who's responsible for maintaining a taxonomy within an organization that develops learning content?

GK: So I think there’s a difference here between who is responsible and who should be responsible.

AB: Oh, that’s so true.

GK: If we think about best practice, it really should be a small, designated team with an administrative role, so that they can be in charge of governance over that taxonomy. Because if you don't have that optimal situation, then what can happen is that either no one's managing the taxonomy, which is obviously bad, because it can just spiral out of control, or it becomes a too-many-cooks-in-the-kitchen situation, where if anyone can update it or make changes, it loses all of its meaning and consistency. I do think it's important that it's a small team and not one single person, because if that person is sick or unavailable, you're left high and dry. So you want a team that's small enough to avoid the too-many-cooks problem, but bigger than just one person.

AB: Another reason that it’s not ideal to have just one person is diversity prevents bias in your taxonomy, right?

GK: Absolutely.

AB: If one person has a confirmation bias about a specific facet and they document it or build something that way, but no one in the organization… You know what I mean?

GK: Yeah. So that’s where that small team can provide checks and balances too.

AB: Totally.

GK: You can have things set up where maybe every person on that team has to approve changes that are made to the taxonomy, or when they’re initially designing it, they all are giving the final review and final approval on it, so that way you’re not having it just through one person and whatever biases that person might carry.

AB: And bias doesn't necessarily have a negative connotation; it's just that people see the world differently from person to person. And by world, I do mean learning content sometimes. Is there anything else that you wanted to cover?

GK: I think I just want to wrap things up with the main points we talked about for developing a taxonomy, whether it's for learning content or something broader. Plan ahead, think ahead, and do all the planning upfront that you can rather than just building things, so that you can avoid rework. Use the metrics and information you've gathered from both inside your organization and from your user base. And finally, keep that end goal in mind: this is all about making things easier for people to use and for people to find content, so develop your taxonomy with that end goal in mind.

AB: Yeah, I agree with all of that. Well, thanks so much for talking with me, Gretyl.

GK: Of course. Thank you, Allison, for talking with me.

Outro with ambient background music

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

Behind every successful taxonomy stands an enterprise content strategy

Building an effective content strategy is no small task. The latest edition of our book, Content Transformation, is your guidebook for getting started.

The post Building your futureproof taxonomy for learning content (podcast, part 2) appeared first on Scriptorium.

Taxonomy: Simplify search, create consistency, and more (podcast, part 1) https://www.scriptorium.com/2025/02/simplify-search-create-consistency-and-more-with-a-learning-content-taxonomy/ Mon, 03 Feb 2025 12:30:49 +0000 Can your learners find critical content when they need it? How do you deliver personalized learning experiences at scale? A learning content taxonomy might be your solution! In part one of this two-part series, Gretyl Kinsey and Allison Beatty share what a taxonomy is, the nuances of taxonomies for learning content, and how a taxonomy supports improved learner experiences in self-paced e-learning environments, instructor-led training, and more.

Allison Beatty: I know we’ve made taxonomies through all sorts of different frames, whether it’s structuring learning content, or we’ve made product taxonomies. It’s really a very flexible and useful thing to be able to implement in your organization.

Gretyl Kinsey: And it not only helps with that user experience for things like learning objectives, but it can also help your learners find the right courses to take. If you have some information in your taxonomy that’s designed to narrow it down to a learner saying, “I need to learn about this specific subject.” And that could have several layers of hierarchy to it. It could also help your learners understand what to go back and review based on the learning objectives. It can help them make some decisions around how they need to take a course.

Related links:

LinkedIn:

Transcript:

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

Gretyl Kinsey: Hello and welcome. I’m Gretyl Kinsey.

Allison Beatty: And I’m Allison Beatty.

GK: And in this episode, we’re going to be talking about taxonomy, particularly for learning content. This is part one of a two-part podcast.

AB: So first things first, Gretyl, what is a taxonomy?

GK: Sure. A taxonomy is essentially just a system for putting things into categories, whether that's something concrete like physical objects or just information. A taxonomy is going to help you collect all of that into specific categories that help people find what they're looking for. And if you've ever been shopping before, you have encountered a taxonomy. I like to think about online shopping in particular to explain this, because you've got categories for the type of item you're buying at a broad level: clothing, household goods, electronics, maybe food. And then within that you also have more specific categories. So if we start with clothing, you typically have categories for the type of garment, whether you're looking for shirts, pants, skirts, coats, shoes, whatever. You also might have categories for the size, the color, the material. There are typically categories for the intended audience, whether it's for adults or kids, and then within that, maybe for gender. So there are all these different ways you can sort and filter through the massive number of clothing results you'd get if you just went to a store and looked at clothing. You've got all of these different pieces of information, these categories that come from a taxonomy, where you can narrow it down. On a website, that typically looks like search boxes, checkboxes, and drop-down menus, and those contain the pieces of information from the taxonomy that are used to categorize the clothing. Then you can go in and check off exactly what you're looking for and narrow down those results to the specific garment you were trying to find. The ability to go on a website and do all of that is supported by an underlying taxonomy.
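The shopping example boils down to faceted filtering, which can be sketched in a few lines of Python. The catalog and facet names below are invented; real storefronts index these facets in a search engine rather than scanning a list.

```python
# Toy faceted search over a product catalog, mirroring the clothing example.
# Facets and data are invented for illustration.
catalog = [
    {"type": "shirt", "size": "M", "color": "blue", "audience": "adult"},
    {"type": "shirt", "size": "S", "color": "red",  "audience": "kids"},
    {"type": "coat",  "size": "M", "color": "blue", "audience": "adult"},
]

def filter_by_facets(items, **facets):
    # Keep only items matching every requested facet value.
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

# Checking "shirt" and "blue" in the UI narrows three results down to one.
hits = filter_by_facets(catalog, type="shirt", color="blue")
```

Each checkbox or drop-down in the storefront corresponds to one facet key here, and the taxonomy is what defines the allowed keys and values.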

AB: So that’s an example of online shopping. I’m sure a lot of people are familiar with taxonomies in the sense of biology, but how can taxonomies be applied to content?

GK: Sure. So we talk about taxonomy in terms of content for how it can be used to find the information that you need. When you think about that online shopping example, instead of looking for a physical product like clothing, with content you're looking for specific information. It's kind of like the content itself is the product. So if you are an organization that produces any kind of content, you can put a taxonomy in place so that your users can search through that content. They can sort and filter the results they get according to the categories in your taxonomy, and that way they can narrow it down to the exact piece of information they're looking for instead of having to skim through a long website with a lot of pages, or, especially if you're delivering manuals, books, or other publications, read through all of that rather than searching for exactly what they need. Some of the ways taxonomies can help you categorize your content would be things like what type of information it is: whether it's technical documentation, something like a user manual, a quick start guide, or a data sheet, or whether it's marketing material or training material. You could put that as one of the categories in your taxonomy. You could also capture a lot of information about your intended audience: things like their experience level, the regions they live in, or the languages they speak, anything about that audience that's going to help you serve up the content those particular people need. It can also be things like what platform your audience uses, what platform is relevant for the material you're producing, or the product or product line your content is documenting. There are all kinds of different ways that you can categorize that information.
And I know that both of us have a lot of experience with putting these kinds of things together. So I don’t know if you’ve got any examples that you can think of for how you’ve seen information get categorized.
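The sort-and-filter experience described above can be sketched in a few lines of code. This is a minimal illustration, not from any particular product; the facet names (type, audience, product) and the content records are invented for the example.

```python
# A minimal sketch of faceted filtering over taxonomy metadata.
# The facet names and content records are hypothetical examples.

docs = [
    {"title": "Quick start guide", "type": "techdoc", "audience": "beginner", "product": "WidgetPro"},
    {"title": "API data sheet", "type": "techdoc", "audience": "expert", "product": "WidgetPro"},
    {"title": "Intro course", "type": "training", "audience": "beginner", "product": "WidgetLite"},
]

def filter_content(docs, **facets):
    """Keep only the docs whose metadata matches every requested facet."""
    return [d for d in docs if all(d.get(k) == v for k, v in facets.items())]

# Narrow a large content set down with two facet selections.
print([d["title"] for d in filter_content(docs, type="techdoc", audience="beginner")])
```

Each additional facet the user checks simply adds another key/value pair to the query, which is why taxonomy-backed search scales well even for very large content sets.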

AB: A lot of the way I think about taxonomies is through library classification systems or MARC records. In the same way that if you wanted to find a particular information resource, you could go to your library’s online catalog and filter down to something that fits your needs, you can think of treating your organization’s body of content like a corpus of information that you can further refine and assign metadata values to. Or, in the case of a taxonomy hierarchy, in the clothing example, choosing that you want a shirt would be a step above choosing that you want a tank top or a long-sleeve shirt or a blouse. So a lot of my mindset around taxonomies for content is framed like libraries. The Library of Congress subject headings are generally a good starting point for a library, but sometimes a library has specific information needs. The National Library of Medicine, for example, has its own subject scheme that is further specialized than the broader categories you get in the Library of Congress subject headings, because they know that everything in that corpus is going to be health- or medicine-related information. And in the same way, you and I have developed taxonomies for clients that are particular to their needs. You’re never going to start off knowing nothing when you build a taxonomy, right?

GK: Exactly. And with the example you were talking about, looking at information in a library catalog, we see that with a lot of documentation. So if you’re thinking about technical content and things like product documentation, user guides, user manuals, we see that similar kind of functionality. If you have that content available through a website or an app or some other kind of digital online experience, back to the online shopping example, your user base can in all of those different cases go to those facets and filters, those check boxes, drop-down menus, search boxes, and start narrowing down the information to exactly what they’re looking for. So it really helps to enhance the user experience to have that taxonomy in place underlying the information and making it easier to narrow down. I’ve also seen it be really helpful on the authoring side. If you have a large body of content, maybe you have it in something like a content management system. And the more content you have, the harder it becomes to find the specific information that you’re looking for. In particular, we deal with a lot of DITA XML, and so there will be a component content management system that that content is typically housed in. Those systems typically have some kind of underlying taxonomy in place as well that can capture all kinds of information about how and when the content was created, so that can help you find it. And then of course, you could have your own taxonomy for the kinds of things I named earlier, what type of information it is, what the intended audience is, in case that can help you as the author find and narrow down something in your system. And it can also help you as an author to put together collections of content for personalized delivery.
So maybe you have a general version of your user guide, but then you’ve also got audience specific versions that you can kind of filter and narrow it down to based on the metadata in your content. And that’s all going to be informed by those categories in your taxonomy. So really leveraging any of the information that you have about your audience, about how they use your content or how they need to use your content is really going to help you deliver it in a more flexible way and in a more efficient way as well.

AB: I know for me personally, sometimes the amount of information out in the world can get very overwhelming.

GK: Absolutely.

AB: So I’m thinking about our LearningDITA e-learning project, and how much content we’ve collected between different versions of it and over the amount of time it’s been up, and it makes it so much easier to navigate knowing where pieces of content are when I’m looking for something as an author on that project.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

GK: And that actually brings up a really good point because we were talking about the taxonomies used in content. We were primarily talking about technical content, so things like product documentation, user guides, legal, regulatory, but it can also be used for other types of content. And learning content is a really big one, and we are seeing that more and more.

AB: Absolutely.

GK: There’s a lot of overlap at organizations between technical documentation and learning or training material, especially if you make a product where there are certifications. So we see a lot of times, for example, with people who make software. That organization will usually have the product documentation, here’s how you use this software. But then there’s also training material so that if there are certifications around the use of that software, then there’s that material where their user base can go take a class and essentially be students or learners in that context rather than just consumers of the product. And so there’s a lot of need to share information across the technical documentation and the learning material.

And we see more and more organizations where the learning material is kind of their main product looking for ways to better categorize that information and have a taxonomy underneath it. And so when you mentioned LearningDITA, that got me thinking about how useful that is not only for us as the creators of LearningDITA, but for all the other organizations that also produce learning material, and how much a taxonomy helps that experience, not only for them as the authors, but also for their end users.

AB: It’s a win-win for users and creators. Something I would like to discuss is self-guided e-learning, and how a taxonomy can make it easier to tie assessments to learning objectives in that sort of asynchronous setting as opposed to a more traditional classroom.

GK: And e-learning is really interesting because there’s a lot of flexibility out there in terms of how you can present that information and how you can gather information from the students or the learners taking your e-learning courses. And we’ve seen different categories or taxonomies around gathering information or putting information on your learning material about things like the intended reading level or grade level if you’re dealing with students who are still in school. You could also put information about things like the industry, if your learner base is professionals. You can put information about the subject that you’re covering, the type of certification associated with that material. And then, like you mentioned, learning objectives. So typically with any kind of a course that’s put out there for students to take, whether it’s e-learning or whether it’s in a classroom, there are specific learning objectives that that material is intended to cover. So whenever you as a student get to the end, it’s basically: you should be able to understand this concept or perform this activity as a result of taking this course. And we have seen a lot of demand in various different industries for tying those learning objectives to the assessment questions. So if you’re in an e-learning course, you’ve got your self-guided material where you’re walking through, you’re reading, maybe you’re doing some exercises, maybe you’re watching some videos or looking at some examples. And then at the end there’s some kind of a quiz or an assessment to test your knowledge. And with e-learning, that’s typically something where you’re entering answers, maybe you’re checking boxes for multiple choice questions, or you’re typing a response in, or you’re picking true/false, things like that. So you take that quiz, and the questions in that quiz are tied back to those learning objectives from the beginning of the lesson.
So that way, if you get a question wrong, it can tell you the specific learning objective you missed on that question, and that you should go back and review more material associated with that learning objective. And having all of that tied together so that your e-learning environment can actually serve up that information is where it can really help to have a taxonomy underneath. When you think about it, learning objectives themselves kind of naturally fall into categories. And there are even standards when you think about things like Bloom’s taxonomy; that’s a typical standard that’s applied to learning material. And of course you could also come up with whatever categories you want for your learning information, but those objectives are often tied directly to the categories. And then having the structure in place to tie those objectives, and the taxonomy categories associated with them, to your assessment questions and to the rest of your material just makes the whole experience a lot more seamless and streamlined for your learners.
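As a rough illustration of the objective-to-question link described here, consider the following sketch. The question IDs, objective IDs, and answers are all hypothetical; a real e-learning platform would pull these from the metadata on your content.

```python
# Sketch: each assessment question carries the ID of the learning
# objective it tests, so a grader can point the learner back to the
# right material. All IDs and answers are invented for illustration.

questions = {
    "q1": {"objective": "lo-install", "answer": "b"},
    "q2": {"objective": "lo-configure", "answer": "true"},
    "q3": {"objective": "lo-install", "answer": "a"},
}

def objectives_to_review(questions, responses):
    """Return the learning objectives behind any missed question."""
    return {
        q["objective"]
        for qid, q in questions.items()
        if responses.get(qid) != q["answer"]
    }

# The learner missed q2, so only its objective comes back for review.
print(objectives_to_review(questions, {"q1": "b", "q2": "false", "q3": "a"}))
```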

AB: It’s so valuable, particularly learning objectives. I’m glad you brought up Bloom’s taxonomy because I think that’s a pretty familiar entry point to taxonomies for a lot of people who work in the learning space. And I’m also thinking, whether it’s learning content or technical documentation, any implementation of a taxonomy for a body of digital content sort of turtles all the way down. If you think about information theory, the basis of what a node in a taxonomy is, it’s a discrete thing. And I know it drives people crazy; “thing” is more or less the technical term in that situation. It sounds so vague, but a thing is a discrete object that has a purpose for why it exists, whether it’s a learning objective that’s tied as an attribute in your DITA or a piece of metadata somewhere, or whether it’s technical documentation telling you which product a piece of content applies to. I know we’ve made taxonomies through all sorts of different frames, whether it’s structuring learning content or building product taxonomies. It’s really a very flexible and useful thing to be able to implement in your organization.

GK: And it not only helps with that user experience for things like learning objectives, but it can also help your learners just find the right courses to take. So if you have some information in your taxonomy that’s designed to narrow it down to a learner saying, “I need to learn about this specific subject.” And that could, of course, have several layers of hierarchy to it. It could also help your learners to understand what to go back and review based on the learning objectives. It can help them to maybe make some decisions around how they want to take a course. So when you think about e-learning, you can have it be self-guided and asynchronous, or sometimes it could be instructor-led. And so if you’ve got something like that baked into your taxonomy, something about the method of delivery, that could help your learners decide which mechanism is going to be better for them. So all of that can be really helpful. And I also want to talk about it again from the creator side, just like we did with technical content. Because if you are designing learning material, you’re an instructional designer, you’re putting together a course, then you might want some information about things like the learner’s progress, their understanding of the material. You’re going to want to obviously capture all the information around the scoring and grading from the assessments that they take. And having that tied back to a taxonomy, whether it’s to learning objectives or to any other information, can help you to understand how you might need to adjust the material. So if you notice, for example, that you’ve got one learning objective that everyone seems to struggle to understand, you’ve got a large percentage of your students missing the assessment questions associated with that learning objective, then maybe that tells you we need to go back and rewrite this or rework how it’s presented.
So the taxonomy can not only help your learners find the information, navigate the courses, and take the courses that they need, but it can also help you to adjust the design of those courses in a way that further enhances their learning experience.

AB: Absolutely. Something else that you just made me think of is, say you have an environment of creating learning content with multiple authors. Another advantage of the taxonomy is that it can standardize metadata values. So say you and I, Gretyl, are working within the same learning organization; then when content that’s written by either one of us goes to publish, the metadata values will be standard if we use the same taxonomy.

GK: And that’s also a really important point because that standardization is good not only across just a subset of your content, like your learning material, but we’ve seen some organizations go more broad and say, “Our learning content and our technical docs and our marketing material.” And whatever other content they have, all needs to have a consistent set of terminology. It needs to have a consistent set of categories that people use to search it. And so you can think about taxonomy at a broader level too, for all the information across the entire company or the entire organization, and make sure that it’s all going to fit into those categories consistently because it is, like you said, very typical to have lots of different people contributing to content creation. And then in particular, with learning content, we see a lot of subject matter experts and part-time contributors who do something else, but then they might write some assessment questions or they might write a lesson here and there. And having the ability to have that consistent categorization of information, consistent terminology, consistent application of metadata is really, really helpful when you’ve got so many different people contributing to the content because that helps to make sure that they’re not going to be introducing inconsistencies that confuse your end users.
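One way to picture the standardization Gretyl and Allison describe is a small validation step at publish time. The field names and allowed values below are hypothetical; the point is that every author’s metadata gets checked against the same shared taxonomy before it goes out.

```python
# Sketch: check authored metadata against a shared controlled vocabulary
# so every contributor publishes with the same standard values.
# The fields and allowed values are invented for illustration.

TAXONOMY = {
    "type": {"techdoc", "training", "marketing"},
    "audience": {"beginner", "intermediate", "expert"},
}

def metadata_errors(meta):
    """List (field, value) pairs that fall outside the taxonomy."""
    return [
        (field, value)
        for field, value in meta.items()
        if field in TAXONOMY and value not in TAXONOMY[field]
    ]

# "newbie" is not a sanctioned audience value, so it gets flagged.
print(metadata_errors({"type": "training", "audience": "newbie"}))
```

A check like this is what keeps part-time contributors and subject matter experts from quietly introducing near-duplicate category values that confuse downstream search.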

AB: That’s really a strength of most classification systems, whether it’s a controlled vocabulary or something more sophisticated like a taxonomy. And I’m thinking about something that you and I see a lot when working with clients, with DITA XML in particular: blending technical and marketing content once DITA is implemented. Having interoperability with your taxonomy is definitely a boon to that.

GK: Absolutely. I think that’s a good place to wrap up for now. We’ll be continuing this discussion in the next podcast episode. So Allison, thank you.

AB: Thank you.

Outro with ambient background music

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

Behind every successful taxonomy stands an enterprise content strategy

Building an effective content strategy is no small task. The latest edition of our book, Content Transformation is your guidebook for getting started.

The post Taxonomy: Simplify search, create consistency, and more (podcast, part 1) appeared first on Scriptorium.

Transform L&D experiences at scale with structured learning content https://www.scriptorium.com/2025/01/transform-ld-experiences-at-scale-with-structured-learning-content/ Mon, 13 Jan 2025

Ready to deliver consistent and personalized learning content at scale for your learners? In this episode of the Content Operations podcast, Alan Pringle and Bill Swallow share how structured content can transform your L&D content processes. They also address challenges and opportunities for creating structured learning content.

There are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning development space have it the worst in that regard—the worst.

— Alan Pringle

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Introduction with ambient background music

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky, you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and process that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

End of introduction

AP: Hey, everybody, I’m Alan Pringle.

BS: I’m Bill Swallow.

AP: And today, Bill and I want to talk about structured content in the learning and development space. I would say, over the past two years or so, we have seen significantly increased demand from organizations that want to apply structured content to their learning and development processes, and we want to share some of the things those organizations have been through and what we’ve learned over the past few months, because I suspect there are other people out there who could benefit from this information.

BS: Oh, absolutely.

AP: So let’s talk about, really, the drivers. What’s driving content creators in the learning and development space toward structured content? One of them right off the bat is so much content, so, so very much content, on so many different delivery platforms. That’s one that I know of immediately. What are some of the other ones?

BS: Oh, yeah, you have just the core amount of content, the number of deliverables, and the duplication of content across all of them.

AP: That is really the huge one, and I know there are other people in the content creation world who have had problems with content duplication, having to copy from one platform or tool to another. But I will tell you, from what I have seen, the people in the learning development space have it the worst in that regard—the worst.

BS: Didn’t they applaud you when you showed up at a conference with a banner that said “End copy, paste”?

AP: Pretty much, it’s true. That very succinct message raised a lot of eyebrows, because they are in the position, unfortunately, in learning and development, having to do a lot of copying and pasting, and part of the reason for that copying and pasting is, a lot of times, the different platforms that we’ve mentioned, also, different audiences. I need to create this version for this region, or this particular type of student at this location, so they’re copying and pasting over and over again to create all these variants for different audiences, which becomes unmanageable very quickly.

BS: Yeah, copy, pasting, and then, reworking. And then, of course, when they update it, they have to copy, paste, and rework again to all the other places it belongs, and then, they have to handle it in however many languages they’re delivering the training in.

AP: So now, everything is just blown up. I mean, how many layers of crap, and I’m just going to say it, do these people have to put up with? And there are many, many, many.

BS: Worst parfait ever.

AP: Yeah, no, that is not a parfait I want to share, I agree with you on that. So let’s talk about the differences between, say, the techcomm world and the learning and development world and their expectations for content. Let’s talk about that, too, because it is a different focus, and we have to address that.

BS: So techcomm really is about efficiency and production, so being able to take quite a wide body of content and put it out there as quickly and as efficiently as possible. Learning content kind of flips that on its head: it wants to take quality content and build a quality experience around it, because it’s focused on enabling people to learn something directly.

AP: And techcomm people, we’re not saying you’re putting out stuff that is wrong or half-assed. That is not what we mean, I want to be real clear here. What we mean is, there is a tendency to focus on efficiency gains, and getting that help set, getting that PDF, getting that wiki, whatever thing it is that you’re producing, getting that stood up as quickly as possible, whereas on the learning side, speed is not usually the thing that you’re trying to use to sell the idea of structured content. I don’t think that’s going to win a lot of converts in the learning space. I do think, however, you can make the argument that if you create this single source of truth so you can reuse content for different audiences, different locations, different delivery platforms, and you’re using the same consistent information across all of that, you are going to provide better learning outcomes, because everybody’s getting the same information. Regardless of what audience they’re in or what platform they’re learning on, whether it’s live instructor-led training, something online, whatever else, they’re still getting the same correct information. Whereas if you were copying and pasting all of that, you might’ve forgotten to update it in one place as a content creator, and then someone, a student, a learner, ends up getting the wrong information, and that’s when you’re not providing the optimal learning experience.

BS: Right, and it’s not to say that every single deliverable gets the exact same content, but they get a slice from the same shared centralized repository of content so that they’re not rewriting things over and over and over again. And they’re still able to do a lot of high-quality animations, build their interactives, put together their slide presentations, everything like that, but use the content that’s stored centrally rather than having to copy and paste it again and again and again.

AP: Yeah, and let’s talk about, really, the primary goals for moving to structured content for learning and development folks. We’ve already talked about reuse quite a bit, that’s a big one: write it one time, use it everywhere. And that also leads to profiling, creating content for different audiences.

BS: Right, I mean, these goals really are no different than what you see in techcomm, and what techcomm has been using for the past 15, 20, 25 years. It is that reuse, that smart reuse, so write it once, use it everywhere, no copy paste, having those profiling attributes and capabilities built in so that you can produce those variants for beginner learners versus expert learners versus people in different regional areas where the procedure might be a little bit different, producing instructor guides as well as learner guides. All of these different ways of mixing and matching, but using the same content set to do that.
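The profiling Bill describes works roughly like DITA’s audience attribute: each chunk of content declares who it applies to, and a variant is assembled by filtering the shared set. The sketch below is illustrative only; the chunk text and audience values are made up.

```python
# Sketch of audience profiling in the spirit of DITA's audience
# attribute: each chunk lists the audiences it applies to, and an
# empty list means it applies to everyone. Values are invented.

chunks = [
    {"text": "Plug in the device.", "audience": []},
    {"text": "Review the basics first.", "audience": ["beginner"]},
    {"text": "Skip ahead to advanced setup.", "audience": ["expert"]},
]

def build_variant(chunks, audience):
    """Assemble one audience's variant from the shared content set."""
    return [
        c["text"]
        for c in chunks
        if not c["audience"] or audience in c["audience"]
    ]

# The beginner variant drops the expert-only chunk, no copy and paste needed.
print(build_variant(chunks, "beginner"))
```

Because every variant is filtered from one source at build time, an update to a shared chunk flows into all of them automatically.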

AP: Yeah, it’s like one of our clients said, and I have to thank them forever for bringing this up, they were bogged down in a world of continuous copying and pasting over and over and over again, and maintaining multiple versions of what should’ve been the same content, and they said, quote, “We want to get off the hamster wheel.” And that is so true and so fitting, and we probably owe them royalties for saying this over and over again, because such a good phrase. But it really did capture, I think, a big frustration that a lot of people in the learning and development space have creating content, because they do have to maintain so many versions of content.

BS: And those versions likely are stored in a decentralized manner, so they could be on multiple different servers, they could be on multiple different laptops or PCs, they could be on thumb drives in some random drawer that are updated maybe once every two, three years. So being able to pull everything together into a central repository and structure it so that it can be intelligently reused and remixed, there’s so many benefits to that.

AP: Yeah, and in regard to the remixing, the bottom line is, you want the ability to publish to all your different platforms. I believe the term people like to use is omnichannel publishing: you can do push-button publishing to basically any delivery need that you have, whether it’s an instructor guide versus a student guide for live training, e-learning, even scripts for video. Even when you’re dealing with a lot of multimedia content, there is still text involved. Audio and video have text underpinnings, and bits and pieces of those can come from your single source of content, because at the core it’s text-based, even if the delivery is video or audio.
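The omnichannel idea can be sketched as one topic list rendered once per channel. The channels and topic fields here are hypothetical stand-ins for real deliverables like a student guide and an instructor guide.

```python
# Sketch: publish the same single-sourced topics to multiple channels.
# A student handout omits teaching notes; an instructor guide keeps them.
# Topic content and channel names are invented for illustration.

topics = [
    {"title": "Safety basics", "body": "Wear gloves.", "notes": "Demo the gloves."},
    {"title": "Startup", "body": "Press the green button.", "notes": "Ask for questions."},
]

def publish(topics, channel):
    """Render one deliverable from the shared topic source."""
    if channel == "student":
        return "\n".join(f"{t['title']}: {t['body']}" for t in topics)
    if channel == "instructor":
        return "\n".join(
            f"{t['title']}: {t['body']} [Note: {t['notes']}]" for t in topics
        )
    raise ValueError(f"unknown channel: {channel}")

print(publish(topics, "student"))
```

Adding a new delivery target means adding a renderer, not another copy of the content.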

BS: Now, we’ve had structured content for a good couple decades, at least-

AP: At least, yeah.

BS: … but there really is a reason why the learning world hasn’t latched onto it completely, and it really comes down to the different types of content that they need to produce versus what traditionally a techcomm group would do. So right off the bat, there are all the different tests, quizzes, and so forth, all the assessments that are built into a learning curriculum. There was never really anything built to handle those in traditional structured authoring platforms and schemas.

AP: And there are solutions now that will let you handle assessments and different types of questions, and things like that.

BS: But the whole approach to producing learning content, it’s quite similar to techcomm and to other classic content development, but it’s also quite unique in its own right, and we do have to make sure that all of those different needs, whether it be the assessments, any interactives that need to be developed, making sure that you tie in a complete learning plan, and perhaps even business logic to your content, making sure all that can be baked in intelligently so that we’re able to produce the things that we need to produce for trainers.

AP: Yeah, and now, especially, you have to be able to create content that integrates easily with the learning management system, which has its own workflows: it tracks progress, it scores quizzes, it keeps track of what classes you’ve taken and their prerequisites, all of that stuff. That is a whole delivery ecosystem, and structured content can help you communicate with an LMS and create content that is LMS-friendly by baking in a lot of the things that you just talked about.

BS: And the content really does boil down to a more granular and targeted presentation to the audience, rather than techcomm, which is more of a soup-to-nuts, everything-and-the-kitchen-sink approach.

AP: Yeah, and then, there’s also the whole live delivery aspect, that is not something that’s really part of techcomm at all.

BS: I wouldn’t want someone there reading a manual to me.

AP: No, nor would I. Well, it might be a good way to treat insomnia, but that’s not what we’re here for. But you do have to consider, the assessments are a big difference from a lot of other content that is a good fit for the structure world, and then, the possibility of live instruction, that’s also another big difference, which, still, there are structured content solutions that can help you with both of those very distinct learning and development content situations. So I think it’s fair to say, based on talking to a lot of people at conferences focused on learning, and a lot of our clients, that the traditional way of creating learning and development content, it is not scalable. The copy and paste angle in particular is just not sustainable in any way, shape, or form.

BS: No, you have so many hours in a day, so if you need to start producing more, you really need to start adding more people. And you add more people, then you have the likelihood that more things could go wrong with the content, or the content could get-

AP: Will go wrong.

BS: … could get out of sync with itself.

AP: Yeah. Well, let’s talk also a little more about some of the challenges. We’ve talked about the interactivity, how that and the assessments, that’s something that’s kind of particular that you have to solve for in the learning space. Let’s talk about the P word, PowerPoint.

BS: PowerPoint. Yeah, being able to pull focus slides together, which really would likely have a very small subset of a course’s content built within them, unless you’re producing a wall of text per PowerPoint. Those are quite unique to the space, so you don’t see much in techcomm where things are delivered via PowerPoint, or you hopefully don’t.

AP: No, PowerPoint is great because it’s wide open and you can do a lot of things with it, PowerPoint is bad because it’s wide open and you could do a lot of things with it. That’s the problem with PowerPoint.

BS: And a template’s only as useful as those who follow it.

AP: Exactly. And now, you mentioned templates. Structured content is a way to templatize and standardize your content, and I’m sure that can rub people the wrong way: my slides need to be special, this, that, and the other. There’s a continuum here of, I want to do whatever I want to the point of sloppy, or I can do things within this particular set of confines so there is consistency. And again, I think it’s fair to say, providing consistency for different learners with slide decks is going to make for some better outcomes instead of a free-for-all, I-can-do-whatever-I-want scenario. And I’m sure there are people out there who are going to kick and scream and disagree with me, but that’s a fight we’re just going to have to have, folks.

BS: Well, no, it provides us a consistent experience throughout, rather than having some jarring differences from lesson to lesson or course to course.

AP: Yeah, yeah, and I think there’s one thing, too, in addition to the PowerPoint angle. In the learning and development space, there is this focus on, we need to create this one-off, and that one-off, and this other one-off. There’s still standardization you can do among your different delivery targets that will streamline things, create consistency, and therefore a better learning experience. I do believe that’s true, even though some people, at first in particular, can find it very confining.

BS: Oh, right, I mean, it just takes the development of the end user experience, I don’t want to say completely out of the learning content developer’s hands, but it kind of frees them up to better frame the content for the lesson rather than worrying about the fit and finish of the product.

AP: Yeah, and let’s focus now on some of the options out there in the structure content world for learning and development content. There’s several out there, let’s talk about what’s on the table.

BS: It comes down to two different types of systems. One would be a learning content management system, or LCMS, a system that’s built for learning content specifically.

AP: Yeah, I would say it’s purpose built, I agree, yeah.

BS: Yeah, and it functions the same way as a lot of, I guess what we would call the traditional techcomm component content management systems do, where you’re able to develop in fragmented pieces, in a structured way, in a centralized manner, and intelligently reuse and remix all of these different components to produce some type of deliverable.

AP: Right, so you can therefore, within this system, set up things for different locations, different audiences, whatever else. And if you’re moving into an LCMS or one of the other solutions we’re talking about, you’re also going to make localization and translation much more efficient, and you’ll get content turned around in other languages for other locales much more quickly. So we’ve got the LCMSs, which are more proprietary, and then, on the flip side of that, let’s talk about DITA.

BS: So DITA does provide you with a decent starting point for developing your content, and we’ve helped several clients do this already. But on the flip side, where the LCMS is targeted at developing learning content, a lot of the tools for DITA aren’t, so it requires a lot of customization in the tool chain, as well as in the content model, to get things up and running. However, DITA does give you an easier point of integration with any work that is being produced by your techcomm peers.

AP: Yeah, I do think it’s fair to say it’s a little more extensible, but the mere fact that it is an open, extensible standard means that it may take some configuring to make it exactly what you need it to be. And like Bill was saying, DITA has some specialized structures that are a very good fit because they are specifically for learning and training, and you can further customize those specializations to match what you need. I will say, I think some of the assessment structures are not as robust as they should be, and we’ve had to customize those for some clients. So that’s another thing you have to think about when you’re trying to make this decision: do I go with an LCMS, or do I go with DITA and a component content management system and understand that I’m going to have to make some adjustments to make it more learning and development friendly?
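To make the DITA option a bit more concrete, here is a minimal sketch of a single-select assessment question using element names from the DITA Learning and Training specialization that Alan mentions. The topic ID, question, and answers are invented for illustration, and a customized content model of the kind discussed above may rename or constrain these elements:

```xml
<!-- Hypothetical single-select question in a DITA learningAssessment topic -->
<learningAssessment id="quiz-structured-content">
  <title>Check your knowledge</title>
  <learningAssessmentbody>
    <lcInteraction>
      <lcSingleSelect id="q1">
        <lcQuestion>What is one benefit of structured learning content?</lcQuestion>
        <lcAnswerOptionGroup>
          <lcAnswerOption>
            <lcAnswerContent>Every author styles slides however they like</lcAnswerContent>
          </lcAnswerOption>
          <lcAnswerOption>
            <lcAnswerContent>Components can be reused across lessons and courses</lcAnswerContent>
            <lcCorrectResponse/>
          </lcAnswerOption>
        </lcAnswerOptionGroup>
      </lcSingleSelect>
    </lcInteraction>
  </learningAssessmentbody>
</learningAssessment>
```

Stored as a component like this, the same question could be referenced from multiple lessons or output targets rather than copied and pasted, which is the reuse benefit Bill describes above.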

BS: No matter which way you slice it, though, moving to any kind of a structured repository in a structured system really starts to open things up from a back end production point of view, while not necessarily forgoing a lot of the experience-driven design that goes into producing those different learning deliverables. It is a way to kind of become more efficient, and as Alan mentioned, avoid the copy and paste, which can be a nightmare to maintain over time.

AP: And at the same time, you do not have to throw out your standards for the quality of the content and the quality of the learning experience. You want structure to support, bolster, and maintain those things. Don’t look at it as something that is going to degrade them, because when used correctly, it can really help you maintain the level of quality and consistency that you need for an outstanding learning experience. And with that, Bill, I think we can wrap up. Thank you very much.

BS: Thank you.

Outro with ambient background music

Christine Cuellar: Thank you for listening to Content Operations by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Transform L&D experiences at scale with structured learning content appeared first on Scriptorium.

Creating content ops RFPs: Strategies for success
https://www.scriptorium.com/2024/12/creating-content-ops-rfps-strategies-for-success/ (Mon, 09 Dec 2024)

In episode 179 of the Content Strategy Experts podcast, Sarah O’Keefe and Alan Pringle share the inside scoop on how to write an effective request for proposal (RFP) for content operations. They’ll discuss how RFPs are constructed and evaluated, strategies for aligning your proposal with organizational goals, how to get buy-in from procurement and legal teams, and more.

When it comes time to write the RFP, rely on your procurement team, your legal team, and so on. They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with their processes to get the better result.

— Alan Pringle

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about writing effective RFPs. A request for proposal, or RFP, approach is common for enterprise software purchases, such as a component content management system, which can be expensive and perhaps risky. Hey everybody, I am Alan Pringle.

Sarah O’Keefe: And I’m Sarah O’Keefe, hi.

AP: So Sarah, we don’t sell software at Scriptorium, so why are we talking about buying software?

SO: Well, we’re talking about you, the client, buying software, which is not always, but in many cases, the prerequisite before we get involved on the services side to configure and integrate and stand up the system that you have just purchased to get you up and running. And so, because many of our customers, most, nearly all of our customers, are very, very large, many of those organizations do have processes in place for enterprise software purchases that typically either strongly recommend or require an RFP, a request for proposal.

AP: Which, let’s be very candid here, nobody likes. Nobody.

SO: No, they’re horrible.

AP: Vendors don’t like them. People who have to put them together don’t like them. But they’re a necessary evil, and there are things you can do to make that necessary evil work for you. And that’s what we want to talk about today.

AP: So the first thing you need to do is do some homework. And part of that homework, I think, is talking with a bunch of stakeholders for this project or this purchase and teasing out requirements. So let’s start with that. And this is even before you get to the RFP itself. There’s some stuff you need to do in the background. And let’s talk about that a little bit right now.

SO: Right, so I think, you know, what you’re looking to get to before you go to RFP is a short list of viable candidates, probably in the two to three range. I would prefer two; your procurement people probably prefer three to four. So, okay, two to three. And in order to get to that list of these-look-like-viable-candidates, as Alan’s saying, you have to do some homework. Step one: what are the hard requirements that IT, or your sort of IT structure, is going to impose? Does the software have to be on premises, or does it have to be software as a service? Nearly always these days, organizations are hell-bent on one or the other, and it is not negotiable. Maybe you have a particular type of single sign-on and you have some requirements around that. Maybe you have a particular regulatory environment that requires a particular kind of software support. You can use those kinds of constraints to easily, relatively easily, rule out some of the systems that simply are not a fit for what your operating environment needs to look like.

AP: And by doing that, you are going to reduce the amount of work in the RFP itself. So you’re going to streamline things because you’ve already figured out this candidate is not a good fit. So why bother them, and why make work for ourselves having to correspond with a vendor that ends up not being a good fit?

SO: Right, and if we’re involved in a process like this, which we typically do on the client side, so we engage with our customers to help them figure out how to organize an RFP process, right, we’re going to be strongly encouraging you to narrow down the candidate list to something manageable because the process of evaluating the candidates is actually quite time consuming on the client side. And additionally, it’s quite time consuming for the candidates, the candidate software companies to write RFP responses. So if you know for a fact that they’re not a viable candidate, you know, just do everybody a favor and leave them out. It’s not fair to make them do the work.

AP: No, it’s not. And we’ve seen this happen before, where an organization will keep a vendor in the process kind of as a straw man to strike down fairly quickly. It would be kinder, and maybe more efficient, to do that before you even get to the RFP response process.

SO: Yeah, and of course, again, the level of control that you have over this process may vary depending on where you work and what the procurement RFP process looks like. There are also some differences between public and private sector and some other things like that. But broadly, before you go to RFP, you want to get down to a couple of viable candidates, and that’s who should get your request for proposal.

AP: Yeah, and when it does come time to write that RFP, do rely on your procurement team, your legal team. They have that expertise. They know that process. It’s a matter of pairing what you know about your requirements and what you need with that process to get the better result. And I think one of the key parts of this communication between you and your procurement team is about use case scenarios. So let’s talk about those a little bit because they’re fundamental here.

SO: Yeah, so your legal team, your procurement team is going to write a document that gives you all the guardrails around what the requirements are and you have to be this kind of company and our contract needs to look a certain way and various things like that. We’re going to set all of that aside because A, we don’t have that expertise and B, you almost certainly as a content person don’t have any control over that. You’re just going to go along with what they are going to give you as the rules of the road in doing RFPs. However, somewhere inside that RFP it says, these are the criteria upon which we will evaluate the software that we are talking about here. And I think a lot of our examples here are focused on component content management systems, but this could apply to other systems whether it’s translation management, terminology, metadata, you know, all these things, all these content-related systems that we’re focused on. So, somewhere inside the RFP, it says, we need this translation management system to manage all of these languages, or we need this component content management system to work in these certain ways. And your goal as the content professional is to write scenarios that reflect your real world requirements that are unique to your organization. So if you are in heavy industry, then almost certainly you have some concerns around parts, about referencing parts and part IDs and maybe there’s a parts database somewhere and maybe there are 3D images and you have some concerns around how to put all of that into your content. That is a use case that is unique to you versus a software vendor who is going to have some sort of, we have 80 different variants of this one piece of software depending on which pieces and parts you license, and then that’s gonna change the screenshots and all sorts of things. So what you wanna do is write a small number of use cases. We’re talking about maybe a dozen. 
And those dozen use cases should explain, you know, as a user inside the system, I need to do these kinds of things. You might give them some sample content and say, here is a typical procedure and we have some weird requirements in our procedures and this is what they are. Show us how that will work in your system. Show us how authoring works. Show us how I would inject a part number and link it over to the parts database. Show us, you know, those kinds of things. So, the use case scenarios typically should not be, “I need the ability to author in XML,” right?

AP: Or, “I need the ability to have file versioning,” things that every CCMS on the planet does, basically.

SO: Right, somewhere there’s a really annoying and really long spreadsheet that has all those things in it. Fine. But ultimately, that’s table stakes, right? They should not get to the short list unless you’ve already had this conversation about file versioning and the right class of system. The question now becomes, how do you provide a template for authors, and what does it look like for authors to start from a template and do the authoring that they need to do? Is that a good match for how your authors need to or want to or like to work? So the key here, from my point of view, is don’t worry too much about the legalese and the process around the RFP, but worry a whole bunch about these use case scenarios and how you are going to evaluate all the different tools that you’re assessing against them.

AP: Be sure you communicate those use case scenarios to your procurement team in a way they understand so they have a better handle on what you need, because the more everybody is on the same page as far as those use cases go, the clearer it’s going to be to communicate those things to the candidate vendors when they do get their hands on the RFP.

SO: And I think as we’re going in or talking about going into a piece of software, there probably should already be some consideration around exit strategy, which Alan, you’ve talked about that a whole bunch. What does it mean to have an exit strategy and to evaluate that in the inbound RFP process?

AP: It is profoundly disturbing to have to think about leaving a tool before you’ve even bought it, but it does behoove you to do that, because you need a clear understanding of how you are going to transition out of a tool before you buy it. So when that happens, when you come to a point where you have to do it, you have an understanding of how you can technically exit that tool. For example, how can you export your source files for your content? What happens when you do that? In what formats? These are part of the use cases you’re talking about here, too. So it really is weird to have to think about something that’s probably years down the road, but it is to your advantage to do that at this point in the game.

SO: Yeah, I mean, what’s the risk if something goes sideways or if your requirements change? This doesn’t have to be sideways. So you are in company A and you buy tool A, which is a perfect fit for what you’re doing. Company A merges with company B. Company B has a different tool and B is bigger than A. So B wins and you exit tool A as company A and you need to move all your content into tool B. Well, that’s a case where you made all the right decisions in terms of buying the software. You just didn’t account for a change in circumstances, as in B swooped in and bought you. So what does it look like to exit out of tool A?

AP: Yeah, it doesn’t necessarily have to be the tool no longer works for us. It could be what you describe. There can be external factors that drive the need to exit that have nothing to do with bad fit or failure on anybody’s part.

SO: So we have these use case scenarios and we’ve thought about exit, though this is entrance. 

AP: Or even before entrance, you haven’t even entered yet.

SO: And so now you’re going to have a demo, right? The software vendor is going to come in and they’re going to show you all your use case scenarios. Well, we hope they’re going to show you your use case scenarios. Sometimes they wander in and they show you a canned demo and they don’t address your use cases. That tells you that they are not paying attention. And that is something you should probably take into account as you do your evaluation.

AP: Yeah, and on a similar note, don’t get sucked in by flashy things, because that flash may blind you and very nicely disguise the fact that they can’t quite match one of your use cases. Look at this sparkly thing over here! Don’t fall for that. Don’t do it. Yeah.

SO: Sparkles. So, okay, so we have our use cases, and they, the software vendor, are going to bring some sort of a demo person, and they are going to demo your use cases, and hopefully they’re going to do it well. So you sort of check those boxes and you say, okay, great, it works. I think the next step after that is not to buy the tool. The next step after that is to ask for a sandbox so that your users can trial it themselves. There is a big, big difference between a sales engineer or a sales support person who has done hundreds, if not thousands, of demos going click, click, click, click, click, look how awesome this is, and your brand-new user, who has never used a system like this, trying to do it themselves. So user acceptance testing: get them into a not-for-production sandbox, let them try out some stuff, let them try out all of the use cases that you’ve specified, right?

AP: It’s try before you buy is what we’re talking about here. Yep.

SO: Mm-hmm. Yeah, I’ve just made a whole bunch of not friends among the software vendors because of course setting up sandboxes is kind of a pain. 

AP: It’s not trivial. 

SO: Yeah, but you’re talking to just one or two candidates, right? So it is not unreasonable. It is completely unreasonable if you just did a, you know, spray-this-thing-far-and-wide and asked a dozen software vendors for input. That is not okay from my perspective. And when we’re involved in these things, we try very, very hard to get the candidate list down to, again, two or three at most, because almost certainly you have requirements in there somewhere that will make one or another of the software tools a better fit for you. So we should be able to get it down to the reasonable prospect list.

AP: And I think, too, this goes back to efficiency. Having fewer people or fewer companies in this means you’re gonna have to spend less time per candidate system, because you’ve already narrowed it down to organizations that are gonna be a better fit for you. It’s gonna be more efficient for them, because they’re probably not having to do as much show and tell, since you’ve narrowed things down very specifically in your use cases. Also, for you as the tool buyer and your procurement team, you’re going to have less to do, because you’re not having to talk to four or six candidates, which you should not be doing for an RFP, in my opinion. I know some people in procurement will probably disagree with that, though.

SO: Well, we’re just going to make everybody mad today. And while I’m on the topic of not making friends and not influencing people, I wanted to mention something that probably many of you as listeners are familiar with: the Enterprise Architecture Board. If you work in a company of a certain size, you probably have an EAB. And the EAB is kind of like the homeowners association of your company, right? They are responsible for standards and making sure that you occasionally mow the lawn and whatever other ridiculous rules the homeowners association sets. But EABs, Enterprise Architecture Boards, in a company context are responsible for software purchases and software architecture, and for looking at what kinds of systems we are bringing into this organization and, usually, how we can minimize that. How can we maintain a reasonable level of consistency instead of bringing in specialty solutions all over the place? Now, a CCMS, a component content management system, is pretty much the definition of a specialty system.

AP: It’s niche. Yeah.

SO: Yep, and EABs in general will take one look at it and say something very much like, “CCMS? No, we have a CMS. We have a content management system. We have SharePoint, just use that. We have Sitecore, just use that. We have fill-in-the-blank, just use that.” And your job, if you have the misfortune to have to address an EAB, is to explain why the approved existing solutions within the company architecture do not meet the requirements of the thing you are trying to do, and that it is worth bringing in another solution beyond the baseline CMS they’ve already approved to solve the issues you’ve identified for your content. The first part is not hard. The “and” part is the hard part, and they’re going to talk about TCO, total cost of ownership: you have to show that it is worth the effort and the risk and the complexity. This is difficult. I’ve spent a lot of quality time with EABs, and their job is literally to say no. I mean, that is just flat out their job. Their job is to streamline and minimize and have as few solutions as possible. So if you have to deal with this kind of situation, you’re going to have some real challenges internally getting this thing sold.

AP: Yeah, and while we’re making friends and influencing people with our various comments on this process today, one final thing I want to say before we wrap up is that common courtesy goes a really long way in this process. When you have wrapped things up and you have made your selection, be sure you also communicate that to the vendors you did not choose.

SO: Yeah.

AP: Too many times in RFP processes, there’s not that level of communication with the people who did not win. And it’s just common courtesy: let them know, no, we chose someone else. And if you’re feeling super polite, you might even tell them why, this use case you didn’t quite hit, this is why we went with this other organization, if you choose to. So be nice and be courteous. I realize this is more of a professional business situation, but it still doesn’t hurt to tell someone exactly why you did what you did.

SO: Yeah, and I know those of you more on the government side of things, nonprofit, typically do have a requirement to notify on RFPs and even give reasons and all the rest of it. But on the corporate side, there’s typically not any sort of requirement to let people know, as Alan said. You know, people put a lot of work into these RFPs, and a lot of pain.

AP: Yeah.

SO: And one last, last thing beyond notifying people: I want to talk about RFP timing. So we’re rolling into the end of 2024 here. I fully expect that there will be RFPs that come out on roughly December 15th and are due on something like January 1st. So in other words, “Hi vendors, please feel free to spend your holiday time filling out our RFP so that we can, you know, go into the new year with shiny RFP submissions.”

AP: RUDE!

SO: That is not polite. Don’t do that. It is extremely rude. And it signals a level of disrespect that, from the vendor side of the process, makes them perhaps less inclined to bend on some other things. So allow a reasonable amount of time for the scope of work that you’re asking for. And holidays don’t count.

AP: Yeah, exactly. To go back and kind of wrap this up: all of that legwork that you do up front for this RFP process, your vendors, believe it or not, will generally appreciate it, because it shows you’ve done the homework, you have thought about this, and you’re not just wildly flinging out asks with no stakes behind those asks. And they will probably be much more willing to work with you and go that extra mile when you have done that homework. Is there anybody else that we need to tick off before we wrap up?

SO: I think we covered our list. So I’ll be interested to see what people think of this one. So let us know, maybe politely, but let us know.

AP: And I’ll wrap up before there’s violence that occurs. So thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Creating content ops RFPs: Strategies for success appeared first on Scriptorium.

Pulse check on AI: December 2024 (podcast)
https://www.scriptorium.com/2024/12/ai-pulse-check-december-2024/ (Mon, 02 Dec 2024)

In episode 178 of the Content Strategy Experts podcast, Sarah O’Keefe and Christine Cuellar perform a pulse check on the state of AI as of December 2024. They discuss unresolved complex content problems and share key considerations for entering 2025 and beyond.

The truth that we’re finding our way towards appears to be that you can use AI as a tool and it is very, very good at patterns and synthesis and condensing content. And it is very, very bad at creating useful, accurate, net new content. That appears to be the bottom line as we exit 2024.

— Sarah O’Keefe

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, it’s time for another pulse check on AI. So our last check-in was in May, which in AI terms is ancient history, so today, Sarah O’Keefe and I are gonna be talking about what’s changed and how it can affect your content operations. Sarah, welcome to the show.

Sarah O’Keefe: Hey Christine, thanks.

CC: Yeah. So, as we’re currently recording this, 2024 is winding down and people are preparing for 2025. Throughout this year, we went to a lot of different conferences and events, and of course, everybody’s talking about AI. So Sarah, based on the events that you just recently got back from, and you finally get to be in your own house, what are your thoughts about what’s going on with AI in the industry right now?

SO: It’s still a huge topic of conversation. Lots of people are talking about AI, and a huge percentage of presentations, you know, had AI in the title or referenced it or talked about it. With that said, it seems like we’re seeing a little more sort of real world, hey, here are some things we tried, here’s what’s working, here’s what’s not working.

CC: Mm-hmm.

SO: And I’ll also say that we’re starting to see a really big split between the AI in regulatory environments, which would include the entire EU plus certain kinds of industries and the sort of wild, wild west of we can do anything.

CC: Yeah. So it sounds like, you know, when AI first came onto the scene, there was mostly, let’s just all adopt this right now, let’s go for it full steam ahead, especially marketers. As a marketer, I can say that, because we’re definitely gung-ho about stuff like that. It sounds like the perspective has shifted to being more balanced overall. Is that what you would say?

SO: Yeah, I mean, that’s the typical technology adoption curve, right? You have your peak of inflated expectations, and then you have, I think it’s the valley, it’s not the valley of despair, but it’s something like that. But you know, you sort of go from this can do anything, this thing is so cool, go, go, go, go, go, to a more realistic, okay, what can it actually do? And, and this is true for AI or anything else, what can it do? What can’t it do? What does it do well?

CC: Mm.

SO: Where do we need to put some guardrails around it? What are some surprises in terms of things that are and are not working?

CC: Yeah. And at some of the conferences we were at this year, our team had some things to say about AI as well, so we will link some of the recap blog posts we have in the show notes. Sarah, what are some of the things AI can’t do right now? What are some of the big concerns about AI that are still unanswered, unresolved?

SO: So in the big picture, as we’re starting to see people roll out AI-based things in the real world, whether it’s tool sets or content ops or anything else, we’re starting to see some really interesting developments and some really interesting assessments. Number one is that when you look at those little AI snippets that you get now when you do a search and it returns a bunch of search, well, actually it returns a page of ads.

CC: Yes.

SO: And then some real results under the ads. And then above that, it returns an AI overview snippet. So those are surprisingly bad. You do a search on something that you know a little bit of something about and see what you get. And you will see content in there that is just flat wrong. I’m not saying it’s not the best summary. I’m saying it is factually incorrect, right?

CC: Yeah, I hate them right now.

SO: So those are surprisingly bad. And talking about search for a minute, which ties into your question about marketing, there are some real problems now with SEO, with search engine optimization, because I’m optimizing my content to be included in an AI overview that is, A, wrong, and, B, doesn’t actually give me credit. Pre-AI, those snippets that showed up would say, I sourced it from over here.

CC: Mm-hmm.

SO: And in many cases now, the AI overview is just the sort of summary paragraph with no particular, there’s no citation. It doesn’t say where it came from. So what’s in it for me as a content creator? Why am I creating content that’s going to get taken over by the AI overview and then not lead to people going to my webpage, right? How does that help me?

CC: Yeah. Yeah.

SO: So there are some real issues there, and there’s a move in the direction of thinking about levels of information. So thinking about very superficial information: how much does a cup of flour weigh? That type of thing. That’s just a fact, and you can get it pretty much anywhere, we hope. And then there’s deeper information: why is it better to weigh flour than to measure it by volume, if you’re a baker?

CC: Yeah.

SO: And what does it look like to use weights? And are there differences among different kinds of flours? And what are some of the things I should consider when I’m going in that direction? So, you know, a cup of flour weighs 120, sorry, a cup of all-purpose flour weighs 120 grams, is a useful fact. And I don’t know if I really care whether people pursue that further or come to my website for more about flour. The deeper information, the more detailed discussion of, you know, whole wheat versus all-purpose versus European flours versus American flours and all these other kinds of things, requires more in-depth information, and that is not so subject to being condensed into an AI summary. So there’s that distinction between, you know, quick and dirty information versus deeper information, information that goes into a topic.

CC: Mm-hmm.

SO: We have a huge problem with disinformation and misinformation, with information that is just flat out not correct. And because of the way AI tools work, it is trivially easy to generate content at scale. Tons and tons and tons and tons and tons of content. And because it’s trivially easy,

CC: Mm-hmm.

SO: That means it’s also trivially easy for me to generate, for example, a couple thousand fake reviews for my new product, or a couple thousand websites for my fake products. We can fractionalize down the generation of content.

CC: Yeah.

SO: And the interesting part of this is that it implies that, you know, we talk about doing A/B testing in marketing. You could do A/B/C/D/E/F/G testing pretty easily, because you can generate lots and lots of variants and kind of throw a bunch of stuff against the wall and see what works. But the bad side of this is that you can generate fake news, fake information, fake content that is going to be highly, highly problematic from a content consumer trust point of view. And so that, I think, is the third piece that we’re looking at now that is going to be critical going forward. And that is information trust, content reputation or the reputation of content creators, and credibility.

CC: Mm-hmm.

SO: So for those of you listening to this podcast, how do you know it’s really us? Do you know these are live humans actually recording this podcast? There’s now the ability to generate synthetic audio, and you can create a perfectly plausible podcast, which is really hard to say, unless you’re an AI, in which case you can probably say it perfectly. But with our perfectly plausible podcasts, how do you know that what you’re receiving in terms of content, digital content in particular, is actually trustworthy? And so I think ultimately there’s going to need to be some tooling around verification, around authenticity, around “this was not edited.” In the same way that you want to be able to verify that a photo, for example, is an accurate record of what happened when that photo was taken.
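One small piece of the verification tooling Sarah imagines can be sketched in a few lines. This is an illustrative example, not any real publishing tool: a creator publishes a cryptographic fingerprint of a file, and a consumer checks that the copy they received matches it. (This verifies integrity only, i.e. that the bytes weren’t altered; proving who made the file would take digital signatures on top of this.)

```python
# Minimal sketch of content-integrity verification. All file contents here
# are stand-ins; the point is the workflow, not the specific bytes.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the content."""
    return hashlib.sha256(data).hexdigest()

# The creator releases a file and publishes its digest alongside it.
published = b"podcast-audio-bytes"
expected = fingerprint(published)

# A consumer re-hashes what they downloaded and compares.
received_ok = b"podcast-audio-bytes"        # intact copy
received_edited = b"podcast-audio-bytez"    # one byte changed

assert fingerprint(received_ok) == expected      # matches: not altered
assert fingerprint(received_edited) != expected  # any edit breaks the match
```

Because any single-byte change produces a completely different digest, even a tiny undisclosed edit is detectable, which is why this kind of hashing underlies most content-authenticity schemes.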

CC: Yeah.

SO: And if I went in and photoshopped it and cleaned it up, then that’s something that should be acknowledged. By the way, for the record, we do record these things and we do edit them. We try to stay on the right side of just editing out dumb mistakes and not editing it in a misleading way. 

CC: Yeah, ums and ahs and yeah.

SO: So it’s not like we record the whole thing from soup to nuts and never break in and never edit things out, because believe me, I’ve said some stuff that needed to be taken away. If you ever get the raw files, they are full of “I didn’t mean to say that” and “you might want to take that out.”

CC: Me too, so many times. “Let me start over,” that’s me, a lot, all the time.

SO: Yeah, sorry. Starting over. Okay, but the point is that when we put out a podcast, we are saying this is our opinion, this is our content, and we’re going to stand behind it. Whereas if it’s synthetic, AI-generated by these non-humans, you can do these weird things, like let’s make a podcast out of a blog post. Well, okay, but what’s the value of that, and why would I trust that content?

CC: Yeah.

SO: So that, I think, is going to be the big question for the next couple of years: what does it look like to be a content creator in an AI universe, and, as the content consumer, to have the ability to validate what you’re listening to or reading or seeing?

CC: Yeah. And a point that you had brought up in, I believe it was the white paper that you authored back in 2023. One of the points in there was that, because of this trust and credibility issue, people are going to have to start relying on companies and brands that they’re already familiar with for the information that they’re looking for, rather than a search from scratch, because, you know, search is so messed up right now. And that is something I’ve seen personally. I do it a lot more myself, and I’ve seen it with friends and other contacts. That’s really what people are doing: they’re going straight to the source, even for recipes. Recently I was looking for a recipe, and instead of just Googling it like I used to, because I’m so sick of the summarized AI search, I went to Allrecipes, a place where I knew I liked the recipes, or I think it was Sally’s Baking Addiction or something like that. There are a lot of different places like that where now I’ll just go directly instead of doing a search from scratch. I don’t know how we’re going to fix that problem. Yeah, trust and credibility, that’s going to be a huge one.

SO: It’s a really good example though because if you search for a particular recipe, even say two years ago, you would get a certain set of results and then you would say, I’ve heard of that website and I’ll go there. Now you search on a recipe, I’m getting 20, 30, or 40 websites that I’ve never heard of that all seem to have posted exactly the same recipe.

CC: Mm-hmm.

SO: I mean, do I trust them? Do I trust them not to be AI-generated? Do I trust them not to, you know, recommend that I put gravel in my recipes? Maybe not. And so I’m doing the same thing you are, which is reverting to trusted sources, trusted brands that I know have a reputation for producing good recipes. Now, the flip side of this is that content is disappearing.

CC: Hmm.

SO: So, I have an infamous triple chocolate cookie recipe, which, really, if you’re looking for a chocolate bar in the form factor of a cookie, that is what it is. It’s just stupid amounts of chocolate.

CC: Mm-hmm. yes, that sounds amazing.

SO: They’re delicious. And I think we’re putting them in our holiday post, which may or may not have gone live already, so keep an eye out for that. But here’s the thing. I have the recipe because I got it out of Food & Wine about 20 years ago, and I have a paper cutout of it that I hand-wrote “Food & Wine 12/01” on. So it was December of 2001, and so I went to Food & Wine. I went searching for this recipe, knowing that it was originally published by them. I can’t find it. It is not there.

CC: Hmm. wow.

SO: It is not in their database, or at least it didn’t come up when I searched on the exact name of the recipe. I then searched that exact recipe name just generally on the Internet, and I found three or four or five different places that had it, but none of them credited where I got it from 20 years ago, which I’m pretty sure is the original, right? Because these are all much more recent sites. So there are digital copies out there floating around, but they are not the original recipe, and they don’t credit the original publisher. Now, I don’t know exactly where Food & Wine got it, because all I did was cut out the recipe. I didn’t cut out the article, which probably provided the context around it. But what I’m now reduced to is a paper copy stashed in my paper recipe book, right? And I took a photo of the paper copy and put it on my phone. So I have a sort of digital version, but it is literally a photograph of a printout. It is 2024 and we are doing photographs of printouts, because I can’t find the original online.

CC: Yes. Yeah. That’s interesting. Why do you think that content has disappeared? Do you think it’s because of the breakdown of the content model, where the AI engine is just eating what it’s already regurgitated a bunch of times? Or did the org pull it for some reason? What do you think is the cause?

SO: Well, my best guess is that their recipe database only goes back so far, and they just said anything more than X years old doesn’t need to be in here. They had some similar recipes, so maybe they decided, well, this one’s been updated, it’s a little more modern, whatever. But it was really troubling that, even knowing what the source was, I couldn’t find it.

CC: Yeah, that is troubling. So how can companies prepare knowing that this is our context, this is our landscape? What should we do to prepare for 2025 and beyond? Because it’s not just like next year.

SO: And beyond, yeah, okay. So first of all, you have to understand your regulatory environment, because that is very different by country or by region. The issues that people in the EU are looking at also apply to American companies that sell in the EU, right?

CC: Mm-hmm. Yeah.

SO: There’s the EU AI Act, and there’s a whole bunch of guidance that goes along with that. So there are some concerns there. Whereas here in the US specifically, we don’t have a lot of regulation around AI, if any. Mostly we lean on, well, if you put out something that’s incorrect, there’s potentially product liability. If you put out instructions that are wrong and people follow them and they get hurt, or worse, then the product owner is probably liable for putting out wrong instructions. That’s kind of where our stuff lands. But as a content consumer, I think you have to do what you’re describing, Christine, and become very, very skeptical about your sources and methods, right? Where’d you get this stuff? And do you trust the source that it came from?

CC: Yes.

SO: If you are a content creator, then the questions become: how can I employ AI inside my content workflows in a responsible way that achieves the goals that I have and doesn’t get me in big trouble in whatever way? And there’s also the question of, if I’m a content creator and I know that my consumers, my customers, are going to be using AI to consume my content, then how do I optimize my content for that? How do I prepare for that? So it looks very different if you’re the person creating new content, versus the person deploying a chatbot on your corporate website that’s going to read through your content corpus, versus the person actually using the chatbot, versus you name it.

CC: Yeah.

SO: And then, you know, we’re talking about AI generally, but of course we have AI tooling and we also have generative AI and we have all sorts of different things going on. So it’s a very, very broad topic, but overall, you know, what’s the problem I’m trying to solve? Can I apply this tool in a useful way? And what are some of the guardrails that I need to employ to keep myself out of trouble?

CC: Yeah, in one of our webinars from this year, from 2024, depending on when you’re listening to this podcast, Carrie Hane mentioned something along the lines of like, you know, when you’re dealing with AI, it’s such a huge topic. You need to break it down by what’s the purpose of what you’re trying to do and then tackle the problem that way. Okay. So to wrap up, Sarah, what are your final thoughts, wishes and or recommendations for the world as we enter this new era? Or I guess we’re in it, but as we try to recover.

SO: So, the very short version, and we’ll try to keep it short: I think when all this AI stuff hit us a year or two ago, business leaders generally were hoping that they could just use AI as a general-purpose solution. Fire all the people, use AI for all the things, cool.

CC: Mm-hmm.

SO: The truth that we’re grasping towards or finding our way towards appears to be that you can use AI as a tool and it is very, very good at patterns and synthesis and condensing content. And it is very, very bad at creating useful, accurate, net new content. That appears to be the bottom line as we exit 2024.

CC: Yeah. Well, thank you very much for unpacking this with us because I know that, you know, things are changing so fast. It’s helpful to have people like you that have been in the industry, the content industry specifically for a really long time that can help, you know, figure out a way through all this and give some practical ideas.

SO: Well, you know, in six months, we’ll just feed this podcast into the AI and tell it to fix it so that it remains accurate. And off we go.

CC: Yeah, there we go. And then we’re done. 

SO: And we’re done.

CC: Yeah. Thanks so much for being here today and for talking about this.

SO: Yeah, anytime.

CC: And thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Pulse check on AI: December, 2024 (podcast) appeared first on Scriptorium.

Do enterprise content operations exist? https://www.scriptorium.com/2024/10/do-enterprise-content-operations-really-exist/ Mon, 21 Oct 2024 11:32:20 +0000 https://www.scriptorium.com/?p=22758 https://www.scriptorium.com/2024/10/do-enterprise-content-operations-really-exist/#comments https://www.scriptorium.com/2024/10/do-enterprise-content-operations-really-exist/feed/ 1 Is it really possible to configure enterprise content—technical, support, learning & training, marketing, and more—to create a seamless experience for your end users? In episode 177 of the Content Strategy Experts podcast, Sarah O’Keefe and Bill Swallow discuss the reality of enterprise content operations: do they truly exist in the current content landscape? What obstacles hold the industry back? How can organizations move forward?

Sarah: You’ve got to get your terminology and your taxonomy in alignment. Most of the industry I am confident in saying have gone with option D, which is give up. “We have silos. Our silos are great. We’re going to be in our silos, and I don’t like those people over in learning content anyway. I don’t like those people in techcomm anyway. They’re weird. They’re focused on the wrong things,” says everybody, and so they’re just not doing it. I think that does a great disservice to the end users, but that’s the reality of where most people are right now.

Bill: Right, because the end user is left holding the bag trying to find information using terminology from one set of content and not finding it in another and just having a completely different experience.

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about enterprise content operations. Does it actually exist? And if so, what does it look like? And if not, how can we get there? Hi, everyone. I’m Bill Swallow.

Sarah O’Keefe: And I’m Sarah O’Keefe.

BS: And Sarah, they let us do another podcast together.

SO: Mistakes were made.

BS: So today we’re talking a little bit about enterprise content operations. If it exists, what it looks like. If it doesn’t, why doesn’t it exist? What can people do to get there?

SO: So enterprise content ops. I guess first we have to define our terms a little bit. Content operations, content ops, is the system that you use to manage your content. And by manage, I don’t mean the software, but: how do you develop it, how do you author it, how do you control it, how do you deliver it, how do you retire it, all that stuff. So content ops is the overarching system that manages your content lifecycle. And when we look at content ops from that perspective, of course we’re generally focused on technical content, but when we talk enterprise content ops, it’s customer-facing content, which includes techcomm, but also learning content, support content, potentially product data, and some other things like that. And ultimately, when I look at this, going back to the 10,000-foot view, we have some enterprise solutions, but only on the delivery side. The authoring side of this is basically a wasteland. So I have the capability of creating technical content, learning content, and support content, and putting them all into what appears to be some sort of a unified delivery system. But what I don’t really have is the ability to manage them on the back end in a unified way, and that’s what I want to talk about today.

BS: So those who are delivering in that fashion, so being able to provide customer-facing information in a unified way, as far as their system for content ops goes, it’s more, I would say, human-based. So it’s a lot of workflow. It’s a lot of actual management of content and management of content processes outside of a unified system.

SO: So almost certainly they don’t have a unified system for all the content, and we’ll talk about why that is in a minute. It’s not necessarily human-based; it’s more that it’s fragmented. So the techcomm group has their system, and the learning group has their system, and the support team has their system, et cetera. And then what we’re doing is we’re saying, okay, well, once you’ve authored all this stuff in your snowflake system, then we’ll bring it over to the delivery side, where we have some sort of a website portal or content delivery platform (CDP) that puts it all together and makes it appear to the end user that those things are all in some sort of unified presentation. But they’re not coming from the same place, and that causes some problems on the back end.

BS: Right, and ultimately the user of that content doesn’t really care if it’s a unified presentation. They just want their stuff. They don’t want to have a disjointed experience, and they want to be able to find what they’re looking for regardless of what type of content it is.

SO: Right, and the cliché is “don’t ship your org chart,” which is 100% what we’re doing. So let’s talk a little bit about what that means and what the prerequisites are. In order to have something that appears to me as the content consumer to be unified, well, for starters, you mentioned search. I have to have search that performs across all the different content types and returns the relevant information. And what that usually means is that I have to have unified terminology: I’m using the same words for the same things in all the different systems. And I need a unified taxonomy, the classification system metadata, so that when I do a search, and maybe I’m classifying things down and filtering, that filtering works the same way across all the content that I’ve put into the magic portal. So taxonomy and terminology are the things that’ll make your search, relatively speaking, perform better. So we have this on the delivery side, and that’s okay-ish, or it can be. But then let’s look at what we’re doing on the authoring side of things, because that’s where these problems start.
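To make that concrete, here is a minimal sketch (all field names, IDs, and product names are invented for illustration) of why shared taxonomy metadata matters on the delivery side: one facet filter can only span content from separate authoring systems if every system classified its topics against the same taxonomy.

```python
# Hypothetical topics from three separate authoring systems, landed in one
# delivery portal. They only filter together because they share the same
# "product" and "type" taxonomy fields.
docs = [
    {"id": "tc-101", "source": "techcomm", "product": "WidgetPro",  "type": "how-to"},
    {"id": "lc-204", "source": "learning", "product": "WidgetPro",  "type": "course"},
    {"id": "sp-309", "source": "support",  "product": "WidgetLite", "type": "kb"},
]

def filter_by(facet: str, value: str, corpus):
    """Return the IDs of topics whose metadata matches one facet value."""
    return [d["id"] for d in corpus if d.get(facet) == value]

# One filter spans techcomm and learning content, because both systems
# classified their topics against the same product taxonomy.
hits = filter_by("product", "WidgetPro", docs)  # → ["tc-101", "lc-204"]
```

If the learning system had tagged its course `"product": "Widget Pro"` with a space, the filter would silently drop it, which is exactly the alignment problem Sarah is describing.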

BS: So what do they start looking like?

SO: Well, maybe let’s focus in on techcomm and learning content specifically. We’ll just take those two, because if I try to talk about all of them, we’re going to be here for days, and nobody wants that. All right, so I have technical content: user guides, online help, quick snippets, how-tos. And I have learning and training content, e-learning, which is enabling content: I’m going to try to teach you how to do the thing in the system so that you can get your job done. Now, let’s go all the way back to the world where we have an instructional designer or a learning content developer and a technical content developer. So for starters, almost always those are two different people, just right off the bat. And instructional designers tend to be more concerned with the learning experience: how am I going to deliver learning and performance support to the learner? And the technical writers, the technical content people, tend to be more interested in: how do I cover the universe of what’s in this toolset or this product, and cover all the possible reasonable tasks that you might need to perform, the reference information you need, the concepts that you need? It’s a lot of the same information; there’s a slightly different lens on it. And in the big picture, we should be able to take a procedure out of the technical content, step one, step two, step three, step four, and pretty much use that in a learning context. In a learning context, it’s going to be, hey, when you arrive for your job at the bank every morning, you need to do things with cash that I don’t understand. And here’s a procedure, and this is what you’re going to do, steps 1, 2, 3, 4, 5, and you need to do them this way and you need to write them down. It tends to be a little more policy- and governance-focused, but broadly it’s the same procedure. So there should be the opportunity to reuse that content. And the big-picture, high-level estimate is probably something like 50% content overlap.
So 50% of the learning content can or should be sourced from the technical content. The technical content is probably a superset in the sense that the technical content covers, or should cover, all the things you can do, and training covers the most common things or the most important things that you need to do. It probably doesn’t cover a hundred percent of your use cases. Okay, so now let’s talk about tools.

BS: Right because I was going to say these two people, the technical writer and the training developer, they are using, at least historically, two very different sets of tools to get their job done.

SO: Right. So unified content solutions, without getting into too many of the specifics, which will get me in big trouble, basically the vendors are working on it, but they’re not there yet. There’s a lot of point solutions. There’s a lot of, oh yes, we have a solution for techcomm and we have a solution for learning and we have a delivery solution, but there’s not a unified back end where you can do all this work.

And some of the vendors have some of these tools in their stable, and some of them don’t. But from my point of view, it doesn’t really make a whole lot of difference whether you buy two point solutions from separate vendors or from the same vendor, because right now they’re disconnected.

BS: They’re two point solutions.

SO: Yeah, they’re all point solutions. So it’s not good. And then that brings us to how can we unify this today? What can we do and what kind of solutions are our customers building or are we building with our customers? So a couple of things here. Option A is you take your structured content solution and you say, “Okay, learning content people, we’re going to put you in structured content. We’re going to move you into the component content management system. We’re going to topicify all your content, and we’re basically going to align you with the techcomm toolset and make that work.” We have a few customers doing that. It works well for learning content developers that are willing to prioritize the document structure and process over the flexibility in the downstream learning experience.

BS: Right.

SO: That’s a small set of people. Most learning content developers are not willing to prioritize efficiency and structure over delivery, which I think is actually the root cause.

BS: Right. Now, those who are doing this, they are seeing some benefit in being able to produce a wide variety of their training deliverables from that unified source. But again, it comes back to how willing people are to give up the flexibility that they have in developing course content.

SO: We can talk about big picture and we can talk about all the things, but this decision, this approach 100% of the time comes down to how badly do you want to be able to flail around in PowerPoint. And if having the ability to put random things in random places on random slides is critical, then this solution will not work.

BS: So on the flip side, you would then look to maybe somehow connect your technical communication system to your learning repository.

SO: Right. So you take your techcomm content and you treat it as a data source essentially for your learning content, and you just flow it into the learning authoring environment. It turns out that’s hard.

BS: It’s very hard.

SO: Super difficult. It’s difficult to get your structured content out into a format that the learning content system can accept in a reasonable manner.

BS: And if your content is highly structured, you’re likely losing a lot of semantic data along the way to get it there.

SO: Yeah, you lose a lot, and it’s just bad. We talk about flowing it in there, but ultimately this almost always means that you’re going to be copying and pasting and reformatting and re-reformatting, and it’s just terrible.
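The semantic loss Bill and Sarah describe can be sketched with a toy example. The XML vocabulary below is invented for illustration (it only loosely resembles real structured-authoring markup such as DITA): exporting the topic as plain text keeps every word but discards the markup that said what each piece *was*.

```python
# Flatten a structured topic to plain text, the way a lossy export into a
# learning-authoring tool effectively does.
import xml.etree.ElementTree as ET

topic = """<task id="replace-filter">
  <title>Replace the filter</title>
  <steps>
    <step><cmd>Power off the unit.</cmd></step>
    <step><cmd>Remove the old filter.</cmd></step>
  </steps>
</task>"""

root = ET.fromstring(topic)
# itertext() walks every text node; the tags themselves are discarded.
flat = " ".join(t.strip() for t in root.itertext() if t.strip())

# The prose survives ("Replace the filter Power off the unit. ..."), but the
# <task>, <step>, and <cmd> semantics are gone: the downstream system can no
# longer tell a command from a title, and round-tripping the content back
# into structure means re-tagging it by hand.
```

That re-tagging work is why "just flow it in" so often degrades into the copy-paste-and-reformat cycle Sarah mentions.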

BS: So more often than not, we’re not seeing this level of unification then.

SO: Yeah. I mean, are you connecting your techcomm and your learning content in a structured environment? A few people are, yes, and for the right use case, it’s great. Or you flow the techcomm content down into the learning environment, but ultimately it’s not worth it; we’ll just copy and paste. So in terms of unification, basically none of the above, right?

BS: Mm-hmm. So how would people get there?

SO: So there are a couple of options. Probably the most common one is some sort of DIY solution: we’re going to find a way to glue these systems together. We’re going to find a workflow that involves converting the techcomm content, which is usually created first, and moving it into the learning content. Again, for the right group, for the right environment, unifying everything in a structured authoring environment makes a lot of sense. I think ultimately that’s where it’s going to land, but the structured content systems need to do some work to make themselves into what amounts to a reasonable, viable authoring solution for the learning content people. Basically, the learning content people are not willing to put up with the shenanigans that ensue in order to use a structured content system. And I’m not even sure they’re wrong, right?

BS: Yeah.

SO: They’re just saying, “No, this is terrible and we’re not doing it.” Okay, well, that’s fair. So either you tinker and put it all together in some way, or, option B, you wait for the vendors to address this requirement and deliver some systems that have a solution here. And it’ll be a year or two or five or 20, and eventually they’ll get to it. Or you can go with a delivery-only solution: we’re only going to solve this on the delivery side. If you do that, you really, really, really need an enterprise-level taxonomy and terminology project.

BS: Absolutely.

SO: You’ve got to get that aligned. You cannot go around having half your text say entryway and half your text say hallway, half your text say study and half your text say den. And I’m halfway down a Clue reference: was it the wrench or the outlet? No, no, okay. You have to get your terminology in alignment. You must, because otherwise people search on oven and it doesn’t return range, because those are in fact… well, okay, they’re not exactly the same thing, but close enough. So those types of things. You’ve got to get your terminology and your taxonomy in alignment. Most of the industry, like most of the people out there doing techcomm and learning content, I am confident in saying, have gone with option D, which is give up. Just don’t do it. Just don’t bother. “We have silos. Our silos are great. We’re going to be in our silos, and I don’t like those people over in learning content anyway. I don’t like those people in techcomm anyway. They’re weird. They’re focused on the wrong things,” says everybody, and so they’re just not doing it. I think that does a great disservice to the end users, but that’s the reality of where most people are right now.
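Sarah’s oven/range example is exactly the kind of gap a terminology layer closes. Here is a minimal sketch, with a made-up synonym map and topic data, of search that folds variant terms into one canonical term; without the map, a query for "oven" would miss the topic written against "range".

```python
# Hypothetical terminology alignment: variant term -> canonical term.
SYNONYMS = {"range": "oven", "hallway": "entryway", "den": "study"}

def canonical(word: str) -> str:
    """Map a variant term to its canonical form (identity if unmapped)."""
    w = word.lower()
    return SYNONYMS.get(w, w)

def search(query: str, topics):
    """Return IDs of topics containing the query term, after normalization."""
    q = canonical(query)
    return [t["id"] for t in topics
            if q in {canonical(w) for w in t["text"].split()}]

topics = [
    {"id": "tc-1", "text": "clean the oven weekly"},      # techcomm wording
    {"id": "lc-1", "text": "operating your range safely"}, # learning wording
]

# With the synonym map, a search for "oven" finds both topics; drop the map
# and the learning topic written against "range" disappears from the results.
hits = search("oven", topics)  # → ["tc-1", "lc-1"]
```

Whether this normalization happens at authoring time (everyone writes "oven") or at delivery time (the portal treats them as synonyms) is the back-end-versus-delivery-side choice Sarah raises later.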

BS: Right, because the end user is left holding the bag, trying to find information using terminology from one set of content, not finding it in another, and just having a completely different experience.

SO: They make it a you problem.

BS: Yeah. So if you’re seeing opportunities to unify content operations in your organization, what are some key ways of communicating that up so that you can begin to get some funding, some support, some executive level buy-in to do these things?

SO: The technology problem is hard. Putting everybody in an actual unified authoring environment is a really hard problem. So I think what you want to do is go for the easier solutions where you can get some wins. And the easier solutions where you can get some wins are consistent terminology across the enterprise. So we’re going to have some conversations about terminology and what we need to do, and everybody’s going to agree on the words we’re going to use. Taxonomy: what does our classification system look like? What are the names for our products, and how do we label things so that when we deliver all these different content chunks coming from all these different systems, we can bring them into alignment? You can do the work on the back end to align taxonomy, or you can do it on the delivery side and say these things are synonyms. So there are some ways of addressing this even when you get down into the delivery end of things. Oh, and translation management, which ties into both terminology and taxonomy. I think you want to start with those things and then slowly work your way upstream, like a salmon, avoiding the bears… okay, you’re going to try to work your way upstream towards the authoring. Because ultimately, if you look at this from an efficiency point of view, it would be much, much more efficient to have unified authoring and put it all together. It’s just that right now, today, that’s a heavy lift, and it only makes sense in certain environments. So what can we do to prepare, so that when we do get to that point and those tools start to unify a little bit better, we’ve done the legwork that’ll make the transition easier?

BS: Right. So it’s spending the effort to unify the content and the language and the organization as much as you can, as well as trying to keep pace with where all of these different industry tools are going and making sure that you are making improvements in the right direction. So if you’re thinking about structured content, keep an open mind as to where and how these other groups can start leveraging what you’re using, and vice versa. And talk with the other groups in your organization. If you’re in techcomm, talk to the training group: see what they’re doing, what their plan is, what their five-year roadmap looks like. Are they looking at certain technologies? How might that play into your development? And vice versa, being able to share that information.

SO: And I know, Bill, you’re doing a session on re-platforming at tcworld this November 2024. And when you’re thinking about re-platforming, what are some of the factors that you should be looking at there that tie into this?

BS: Well, it directly plays into that next step. Say we have a platform on the techcomm side; we bought it 12 years ago and it has served our needs. But the training group has been talking, and they have this other system that they’re not too happy with, and they want to see if they can start sharing our content.

Well, then you have an open conversation to say, “Okay, how can we get to a shared solution? What do these requirements look like?” and go ahead and pick a system that meets both sets of requirements. But then you have that heavy lift of saying, “Okay, now we have these two different old systems, and we need to dump our content,” and I use that term very generally, “into the new system, so that everyone from those two groups can now author in the same place.”

SO: And I’m thinking as you’re evaluating these systems, all other things being equal, which they are not, but all other things being equal, you would look for the one that’s more open, that is more flexible knowing that things are going to change because they always do. What’s available to us that’ll give us maximum flexibility in a year or two or five when these new requirements come in that we have not anticipated at this point?

BS: Right, because you’re exiting your old systems because they are potentially inflexible. We cannot accommodate anything new. We can sustain what we’re doing indefinitely, but we can’t accommodate this new thing that we need to do.

SO: Yeah, it’s interesting because looking at the techcomm landscape, we have a lot of customers and a lot of just generalized ecosystem that has moved into structured content, and starting as early as the late nineties or maybe even the early nineties in Germany, people were moving into structured content at scale. And now we’re looking at it and saying, “Okay, well there’s all this other content out there and we need to look at that and we need to look at whether we can bring that into the structured content offerings.” But not unreasonably, those other groups are looking at it and pushing back and saying, “This isn’t optimized for the kind of work that I do. It’s optimized for the kind of work that you people do. So how can we improve this and bring it into alignment with what the new and additional stakeholders need?” And it’s a hard problem, I really feel for the software vendors. It’s easy for us sitting here on the services side to say, “Hey, do better,” because we’re not doing the work.

BS: Very, very true. And at that point, you have a winner and a loser, and I hate to say it that way, but you have a winner and a loser on the system side at that point. Where you’re pulling one other group in because you have an established structural approach and they could benefit from it, but basically they have to absorb the brunt of the change that’s going to happen, and it’s not necessarily fair.

SO: Well, yeah. I mean, life isn’t fair. But also I’ll say that that pain that you’re talking about, the people that are now in structured content, they had that pain. It was just 10 years ago-

BS: Very true.

SO: …and they’ve forgotten. For those of you that were around and in this industry 10 years ago, or 20 years ago, or 25, I mean, remember what it was like trying to get people to move from you will pry unstructured FrameMaker from my cold, dead hands. You’ll pry Microsoft Word from my cold, dead hands. You will pry PageMaker, Interleaf, Ventura Publisher from my cold, dead hands.

BS: WordStar.

SO: Okay. So tools come and go, and the tool that is the state-of-the-art, BookMaster, for today is not necessarily the tool that’s going to be state-of-the-art tomorrow. I mean, basically this stuff evolves and we have to evolve with it, and we have to understand what are the best and most reasonable solutions that we can offer to a customer or to a content operations group in order to deliver on the things that they need to deliver on.

BS: Very true. So there are no unicorns.

SO: No unicorns, or maybe more accurately you can construct your own unicorn and it might be awesome, but it’s going to be a lot of work.

BS: So I think we could probably talk about this for hours because there are so many different facets that we can touch upon, but I think we’ll call it done for now, and maybe we’ll see you soon in a new episode?

SO: Yeah, if this speaks to you, call us because we’ve barely scratched the surface.

BS: All right. Thanks, Sarah.

SO: Thanks.

BS: And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

The post Do enterprise content operations exist? appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 24:34
Survive the descent: planning your content ops exit strategy https://www.scriptorium.com/2024/10/survive-the-descent-planning-your-content-ops-exit-strategy/ Mon, 07 Oct 2024 11:34:59 +0000 https://www.scriptorium.com/?p=22712 https://www.scriptorium.com/2024/10/survive-the-descent-planning-your-content-ops-exit-strategy/#respond https://www.scriptorium.com/2024/10/survive-the-descent-planning-your-content-ops-exit-strategy/feed/ 0 Whether you’re surviving a content operations project or a journey through treacherous caverns, it’s crucial to plan your way out before you begin. In episode 176 of the Content Strategy Experts podcast, Alan Pringle and Christine Cuellar unpack the parallels between navigating horror-filled caves and building a content ops exit strategy.

Alan Pringle: When you’re choosing tools, if you end up with something that is super proprietary, has its own file formats, and so on, that means it’s probably gonna be harder to extract your content from that system. A good example of this is those of you with Samsung Android phones. You have got this proprietary layer where it may even insert things into your source code that is very particular to that product line. So look at how proprietary your tool or toolchain is and how hard it’s going to be to export. That should be an early question you ask during even the RFP process. How do people get out of your system? I realize that sounds absolutely bat-you-know-what to be telling people to be thinking about something like that when you’re just getting rolling–

Christine Cuellar: Appropriate for a cave analogy, right?

Alan Pringle: Yes, true. But you should be, you absolutely should be.

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the content strategy experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we’re talking about setting your ContentOps project up for success by starting with the end in mind, or in other words, planning your exit strategy at the beginning of your project. So I’m Christine Cuellar, with me today is Alan Pringle. Hey, Alan.

Alan Pringle: Hey there.

CC: And I know it can probably sound a bit defeatist to start a project by thinking about the end of the project and getting out of a new process that maybe you’re building from the beginning. So let’s talk a little bit more about that. Why are we talking about exit strategy today?

AP: Because everything comes to an end. Every technology, every tool, and we as human beings, we all come to an end. And at some point, you are going to have tools, you’re gonna have technology and process that no longer supports your needs. So if you think about that ahead of time, and you’re ready for that inevitable thing, which will happen, you’re gonna be much better off.

CC: Yeah. So this conversation started around the news of the DocBook Technical Committee closing, and that’s kind of a big deal for a lot of people, and it kind of sparked this internal conversation about like, you know, what if that happened to you? How can people avoid getting caught by surprise? And of course, as Alan just mentioned, the answer to that is really to begin with the end in mind, to have an exit strategy because everything does end at some point. So it made me kind of think of that because, you know, I don’t know, Alan, you’ve seen the horror movie The Descent, right? You’ve seen that movie? Yes, because it’s amazing and it’s a horror movie and it’s awesome. So this group, and I’m not going to spoil it, no spoilers for people who haven’t seen it yet, but, if you haven’t, go watch it. The first one’s my favorite. I haven’t seen the second one, so I’m biased. Anyways, that’s not the point. This group plans to go along one path, you know, down these caves which are definitely in North Carolina, right Alan? That’s definitely where they take place.

AP: Well, they say it is in North Carolina, but it is quite clearly not filmed in North Carolina. As someone who is familiar with Western North Carolina, I had to laugh at this movie trying to pass off somewhere in the UK as like the Appalachian Mountains, but that’s just a quibble. So go ahead with your story.

CC: Anyways, yeah, they got a mountain in there, right? And then there’s a path into the mountain. Of course, they’re going to explore this deep, dark cave. So they’re descending as the name implies. And so they’re planning to go along one path. I think someone maybe tricked someone else along the way. I can’t remember. But they’re planning on going down one path. And there’s a lot of things that begin to happen that they didn’t plan on. And one scene in particular, there’s a cave that collapses and of course that means they have to pivot, right.

AP: Yeah.

CC: So when you’re thinking about building an exit strategy and trying to plan for things that you can’t anticipate, how do you anticipate things you can’t anticipate?

AP: Well, first of all, let’s be clear. All the things that happened in that movie happened in a period of like two hours or an hour and a half. And part of the issue with any kind of process and operations is things can slowly start to go badly and you just kind of keep on trucking and really don’t pay attention to it. But…

CC: Yes.

AP: It’s not just about fine tuning your operations. That’s a whole other conversation. Your process is going to require updating every once in a while. There are going to be new requirements and you need to address them in your content ops by changing your process, updating your tools, maybe adding something new. What we’re talking about here is when those tools and that process, they’re coming to an end, for example, because a particular piece of software is being deprecated. It is end of life. What are you going to do?

CC: Mm-hmm.

AP:  What if there is a merger? You have a merger and there are two systems doing the same thing. One of those systems is going to lose and go away. Why are you going to maintain two of the same systems? So you’re going to have to figure out how to pivot to get to that.

CC: Mm-hmm.

AP: So there are all of these things that can happen that mean you have got to exit whatever you were doing and move into something new, something different. And the reasons are many, like I just mentioned, but the end result is, are you ready for when that happens? In a lot of cases, frankly, people aren’t.

CC: Yeah. So if you could give listeners three pieces of advice on how to be less dependent on a particular system, if you had to narrow it down to three, what would you suggest to help them not be just dependent on one particular system or maybe a set of systems?

AP: One thing is when you’re choosing tools, if you end up with something that is super proprietary, has its own file formats, et cetera, that means it’s probably gonna be harder to extract your content from that system because it is proprietary. Even if your content is in a standard, and in a lot of cases, of course, I’m talking about DITA, the Darwin Information Typing Architecture, an XML standard. Even with DITA, even though it’s open source and a standard, some of the systems that can manage DITA content put their own proprietary layer on top. A good example of this is, for example, those of you with Samsung Android phones. I’ve had one in the past.

CC: Yeah, that’s me.

AP: Samsung puts their own proprietary layer on top of the Android operating system and a lot of that stuff frankly I hate, but that’s not the point of this conversation, but it’s the same issue. You have got this proprietary layer where it may even insert things into your source code that is very particular to that product line. So look at how proprietary your tool is or your toolchain is and how hard is it going to be to export? That should be an early question you ask during even the RFP process. How do people get out of your system? And I realize that sounds absolutely bat, you know what, to be telling people to be thinking about something like that when you’re just getting rolling–

CC: Appropriate for a cave analogy, right?

AP: Yes, true. But you should be, you absolutely should be.

CC: And we’re going to get onto the other two things to think about in just a second, but a question there: what are some maybe green flags for how that question should be received, or how you want that question to be received, if it’s going to be the right fit?

AP: I would hope some variation of the answer would be you can export to this standard, although that often is probably not the answer that you’re going to get.

CC: Okay, a standard. What are some other things people need to keep in mind in order to not be system-dependent?

AP: I don’t know if it’s so much system-dependent, but you need to think culturally about what this means. People become very attached to their tools because they become very adept. They become experts in how to manipulate and do whatever with a certain tool set. And they feel like, you know, I am in total control here. I know what I’m doing. Things are running well. 

CC: Yeah.

AP: And when it turns out that tool is going to have to go away, their entire process and their focus on being an expert, it’s blown. It’s just blown away. And that can be very hard to deal with from a person level, a people level, having to tell people, yeah, this is a shock to your system. You’ve been using this tool forever. You’re really good at it. Unfortunately, that tool is being discontinued. We’re gonna have to move to something else. That can be very hard for people to swallow and it’s understandable.

CC: Mm-hmm.

AP: It’s completely understandable. One other thing that I will mention is if you can get your source content, not the actual delivery outputs, I’m talking about wherever you’re storing your source, into some kind of format-neutral file format, and again, I’m talking mostly about XML content, extensible markup language, because when you create that content, you are not building in the formatting. You’re creating it as a markup language. And the minute your content is in a markup language, it becomes easier. I shouldn’t say easy, because nothing here is easy. There is a better path to moving that content, possibly to another standard, for example, because you can set up a transformation process that’s very programmatic.

CC: Mm. Yeah.

AP: This particular element in this model becomes this. And when you hit this particular element in this model, you start a new file. If you see this particular attribute, it needs to be moved over here to this attribute.

CC: Hmm.

AP: So it’s a matching process that you have to do, so it can be programmatic. So anytime you get into something that’s XML, and what does that X stand for? It stands for extensible. That gives you a little more control because it gives you more flexibility. And that’s weird to think more flexibility gives you more control. That almost seems kind of diametrically opposed, but it’s true.

CC: Yeah.

AP: Because you can move something out more easily because it is something that can be sliced, diced, transformed. So there’s that angle.
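The element-by-element matching Alan describes can be sketched in code. The following is a minimal illustration using only Python’s standard library, not any particular migration tool, and the element and attribute names are hypothetical examples of a source model mapping to a target model:

```python
# Minimal sketch of a programmatic XML-to-XML mapping: each source element
# and attribute is matched to its equivalent in a target content model.
# The names in the maps below are hypothetical, for illustration only.
import xml.etree.ElementTree as ET

# Source-to-target element names
ELEMENT_MAP = {"section": "topic", "heading": "title", "para": "p"}
# Source-to-target attribute names
ATTRIBUTE_MAP = {"audience": "otherprops"}

def transform(src: ET.Element) -> ET.Element:
    """Recursively rename elements and attributes per the mapping tables."""
    tgt = ET.Element(ELEMENT_MAP.get(src.tag, src.tag))
    tgt.text, tgt.tail = src.text, src.tail
    for name, value in src.attrib.items():
        tgt.set(ATTRIBUTE_MAP.get(name, name), value)
    for child in src:
        tgt.append(transform(child))
    return tgt

source = ET.fromstring(
    '<section audience="admin"><heading>Backup</heading><para>Daily.</para></section>'
)
result = transform(source)
print(ET.tostring(result, encoding="unicode"))
# -> <topic otherprops="admin"><title>Backup</title><p>Daily.</p></topic>
```

A real migration would layer on the other rules Alan mentions, such as starting a new file when a particular element is reached, but the core is the same deterministic matching process.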

CC: Yeah. Yeah. So, okay. So as a non-technical person myself, I’m gonna see if I can summarize this and you tell me whether or not this is accurate. So from a very high-level view, rather than keeping all of your content in one particular content management system or something like that, it’s all stored in a separate box, a separate repository. And then whatever system you’re going to use is your delivery output. Is that accurate to say? Okay.

AP: Because if you’re in a file format that does not have the formatting of your content built in, that means you can deliver to a bunch of different presentation layers. You can apply the formatting automatically.

CC: Okay.

AP: And that’s really, I was kind of headed that way. You can even see your new system as almost a delivery target: I need to figure out how to transform my source content in a way that a new tool, a new system, can understand. And so basically you’re saying, okay, let’s export it, let’s clean it up, maybe do some automated transformations and programming on it to make it more ingestible by the other system.

CC: Mm-hmm.

AP: So you could even look at this process of moving from one system to another as being really your final destination, another horror movie, your final delivery target, moving that source content into another system that you’re about to use.

CC: Yeah. Thank you also for unpacking that because that was much more clear than my example, but that was really helpful. So since people are planning with the end in mind, how far out are we thinking this exit strategy would typically be implemented? How far down the road is this?

AP: And that’s the thing, I can’t answer that question because you never know what is going to happen. Right, I mean, it’s like the cave collapse analogy you mentioned, sometimes you have to take a detour, not of your own choice or of your own making. And again, mergers, tools being discontinued, companies that go under, all of these things can happen. And you need to have a contingency.

CC: Mm. Never know. So it’s a contingency plan, really. Yeah.

AP: And you need to have a contingency plan in place to get ready to exit. It’s just like during natural disaster season, you hear people say, do you have your emergency preparedness kit ready? It’s a very similar thing, but it’s in the corporate world. This is as much about risk reduction as it is about smooth content operations, at least from my point of view.

CC: Yeah. Yeah. And you mentioned several big things that can happen that trigger the need to exit and move on. Are there any scenarios where there isn’t a big thing that happens, like a merger or a business closing or different things like that? Are there quieter ways, where you may not realize that it’s time to exit because the need to exit is more subtle?

AP: If your content process, your content operations cannot support new business requirements, for example, you need to connect to a new system, you need to deliver your content in another format. If your current system and tools can’t do that, that is a sign you’re probably going to have to find the exit door and find something that will support whatever it is that you cannot do.

CC: Mm-hmm.

AP: It’s usually you just hit this wall where you realize we have taken this tool and this process as far as it can go. It is time to move on. And here I am going to toot the consultant horn again. But that is when you start getting that uneasy feeling, that’s when you can talk to a consultant who can help you unpack it to see if it’s really a sign that the tool is no longer going to fit you or if there’s something you can do within your current system to make things work. That’s when a third-party point of view can be very valuable.

CC: Question for you on that third party perspective, since you’ve seen companies make these transitions many times and exit something and go into a new one, what’s one thing or pitfall that companies need to be aware of that maybe isn’t included in their exit strategy that should be? 

AP: Something that’s very common is to frame everything you want from your new system from the perspective of what your current system is doing. Even though your current system is not going to do something that you need it to do, you still are so fixated on how it is doing things and you can’t get beyond that. That can be a huge problem. Being able to step back and objectively look. This system can’t do this.

CC: Mmm.

AP: We need it to do that. And this is how we need to get there. People can get so mired in the, this is how we’re doing things. And we’re going to move over to this new system and do the same exact thing, just in new tools. That’s not a reason to move. There’s some compelling thing that’s forcing you out of that other tool. So now is the time to change things, update things, make some nips and tucks. Maybe undo some things. Don’t just wholesale move over into a new system and keep things status quo. Otherwise, why bother?

CC: Yeah, yeah. Is there anything else you can think of for when it’s time to start the exiting process? Anything else that companies need to have at the forefront of their mind?

AP: It’s the communication. And that includes the vendors, and it includes the people inside the company who are using the tools. And I would also mention it includes procurement. They need to understand the whens, the whys, why you’re having problems, all that, because there can be contractual obligations about when a license ends and another one begins. So you’ve got to keep that information flowing to all kinds of parties to make this exit, this transition, work well.

CC: Yeah, you want it to end like the American version of The Descent where the hero actually gets out and drives away in the car, not like the UK version where the person is still stuck in the cave, which is the better ending for a horror movie, I will say, but not for your content ops project. Definitely.

AP: Yeah, but at least in a content ops project, you’re not going to get eaten by some humanoid blind thing living in a cave.

CC: Hopefully, right? That’s ideal. That’s the best case scenario. 

AP: Hopefully not. Yeah.

CC: Well, Alan, is there any other parting advice you can think of before we wrap up today’s topic?

AP: Don’t go into a cave unprepared. Okay? Just don’t. How’s that?

CC: Yeah, don’t, yeah, that is actually good advice. Don’t go unprepared. That’s really helpful. And like Alan mentioned earlier, a third-party perspective, I know it’s very biased to be saying it, but a third-party perspective when it’s time to either make the exit transition or plan for the exit transition, content strategists can really help with that because we’ve seen a lot of things. A lot of caves. Yes. Yeah.

AP: A lot. Maybe not cave dwellers, but a lot.

CC: Hopefully, hopefully no one has actually seen those. Yeah, well, thank you so much for being here, Alan. I really appreciate you talking about this with me today. And thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Survive the descent: planning your content ops exit strategy appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:06
Enterprise content operations in action at NetApp (podcast) https://www.scriptorium.com/2024/09/enterprise-content-operations-in-action/ Mon, 23 Sep 2024 11:30:27 +0000 https://www.scriptorium.com/?p=22667 https://www.scriptorium.com/2024/09/enterprise-content-operations-in-action/#respond https://www.scriptorium.com/2024/09/enterprise-content-operations-in-action/feed/ 0 Are you looking for real-world examples of enterprise content operations in action? Join Sarah O’Keefe and special guest Adam Newton, Senior Director of Globalization, Product Documentation, & Business Process Automation at NetApp for episode 175 of The Content Strategy Experts podcast. Hear insights from NetApp’s journey to enterprise-level publishing, lessons learned from leading-edge GenAI tool development, and more.

We have writers in our authoring environment who are not writers by nature or bias. They’re subject matter experts. And they’re in our system and generating content. That was about joining us in our environment, reap the benefits of multi-language output, reap the benefits of fast updates, reap the benefits of being able to deliver a web-like experience as opposed to a PDF. But what I think we’ve found now is that this is a data project. This generative AI assistant has changed my thinking about what my team does. Yes, on one level, we have a team of writers devoted to producing the docs. But in another way, you can look at it and say, well, we’re a data engine.

— Adam Newton

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the content strategy experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about content operations with Adam Newton. Adam is the senior director of global content experience services at NetApp. Hi everyone, I’m Sarah O’Keefe. Adam, welcome.

Adam Newton: Hey there, how are you doing, Sarah?

SO: It’s good to see and/or hear you. 

AN: Good to hear your voice.

SO: Yeah, Adam and I go way back, which you may discover as we go through this podcast. And as those of you that listen to the podcast know, we talk a lot about content ops. So what I wanted to do was bring somebody in that is doing content ops in the real world, as opposed to as a consultant, and ask you, Adam, about your perspective as the director of a pretty good-sized group that’s doing content and content operations and content strategy and all the rest of it. So tell us a little bit about NetApp and your role there.

AN: Sure. So NetApp is a Fortune 500 company. We have probably close to 11,000 or more global employees. Our business is primarily data infrastructure and storage management, both on-prem and in the cloud. We sell a storage operating system called ONTAP, we sell hardware storage devices, and we are, most importantly I think at this day and age, integrating with Azure, Google Cloud Platform, and AWS in first-party hyperscaler partnerships. My team at NetApp, I actually have three teams under me. The largest of those three teams is the technical publications team. The other two teams are globalization, responsible for localization and translation of both collateral and product, and then finally, and most new to my team, our digital content science team, which is our data science wing. I have about 50 to 53, I think, employees at this point in my organization, and all told probably about a hundred with our vendor partners.

SO: And so I think we all have a decent idea of what the technical publications team and the globalization teams do. Can you talk a little bit about the data science side? What is that team up to?

AN: Yeah, thank you for asking that question. So about two years ago, I was faced with an opportunity to hire. And maybe some of your listeners who are managers are familiar with that situation, right? I hope they are, rather than not being able to hire. I took a moment and thought a little bit more about what I needed in the future. And I thought a little bit differently about roles and responsibilities, opportunities inside NetApp and the broader content world, and decided to bring in a data scientist. And then I thought a little bit more about, well, there are other data scientists at NetApp. Why would I need one? And I thought a little bit about the typical profile of the data scientists at that time at NetApp, mostly in IT and other product teams. Those data scientists were primarily quantitative data scientists coming from computer science backgrounds. And I thought, well, you know, we’re in the content business. I want to find a data scientist who is a content specialist and who has a background in the humanities and who also has core data science skills, emphasizing, for example, NLP. And so that was my quest. And I was very, very fortunate to find a PhD candidate in English who wanted to get out of the academy and who had these skills. And it’s been an incredible boon to our organization. We’ve even hired a second PhD in English recently. And Sarah, since you and I are friends, I’ll say one was from UNC and one was from Duke. Okay. So we don’t have to have that discussion here. I’m an equal opportunity person. Although I did hire the UNC one first, Sarah.

SO: I see, I see. So for those of you that don’t live in North Carolina, this is… I’m not sure there is a comparison, but it is important to have both on your team. And I appreciate your inclusion of everybody. It is kind of like… I’ve got nothing.

AN: Yes.

SO: Okay, so you hired some data scientists from a couple of good universities. And do they get along? Do they talk to each other?

AN: Fabulously, yes. No petty grievances.

SO: Okay, just checking. All right. So how do you, in this context then, what does your environment look like? What kinds of things are you doing with the docs team? And what’s the news from NetApp docs?

AN: So maybe a little bit of background actually, and you and I have talked about this previously, but we used to be a DITA shop. And then as things sped up inside our business with the adoption and development of cloud services at NetApp, we found that some of the apparatus of our DITA infrastructure, our past practices, weren’t able to keep up with the speed of the cloud services that were being developed. I think this is actually, I’ve talked to other people in our business, a very common situation. We handled it in one way. There are many ways to handle it, but the way we chose to handle it was to exit DITA and to move, in our source format anyway, to a format called AsciiDoc, which I frequently describe as a dialect of Markdown. And we went from being a closed system of technical writers working inside a closed CMS to adopting open source. We now work in GitHub. Our pipeline is all open source, and we now have contributors to our content that are not technical writers. In some cases, they’re technical marketing engineers, solution architects, and so forth, as well as a pipeline of docs that we build through automations where we, for example, transform API specifications or reference docs that are maintained by developers and output those into our own website, docs.netapp.com. In addition to just the docs part, my globalization team has been using machine translation for many years. So speaking to one particular opportunity of being in one organization, when we output our docs, and whenever we update our docs in English, they’re automagically updated in eight other languages and published to docs.netapp.com. So we roughly maintain 150,000 English files, and you can times those by eight. Is that right? Did I do the math right? Yeah.

SO: Or nine, depending. 

AN: Nine. Yeah. Is English the language? Yeah, sure. Let’s count it.

SO: Depends on how we use it. Okay, so you have an AsciiDoc, you know, Markdown-ish. Is it fair to call it a Docs as Code environment?

AN: So we often describe it as a content ops environment. I’m not sure if that is different from Docs as Code, but I think maybe I will accept that as a reasonable description in the sense that we have asked our team members to think about the content that they’re writing as highly structured, semantically meaningful units of information, in the same way I think a developer can be asked to think of their code being that way. And we write in VS Code, which many engineers are writing in as well.

SO: Mm-hmm.

AN: And of course our source files, as I mentioned, and all our automation and our pipelines, are all based on being in GitHub.
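The automated pipeline Adam describes, transforming developer-maintained API specifications into published docs, could look something like this minimal sketch. NetApp’s actual tooling is not public, so the spec shape, field names, and output layout here are illustrative assumptions in the spirit of an OpenAPI-style spec rendered to an AsciiDoc page:

```python
# Illustrative sketch only: render a tiny OpenAPI-style spec (parsed from
# JSON) into an AsciiDoc reference page, echoing the kind of automated
# docs pipeline described in the episode. Real pipelines do far more.
import json

spec = json.loads("""
{
  "info": {"title": "Storage API", "version": "1.0"},
  "paths": {
    "/volumes": {
      "get": {"summary": "List volumes"},
      "post": {"summary": "Create a volume"}
    }
  }
}
""")

def to_asciidoc(spec: dict) -> str:
    """Emit an AsciiDoc page: document title, one section per API path."""
    lines = [f"= {spec['info']['title']} {spec['info']['version']}", ""]
    for path, ops in spec["paths"].items():
        lines.append(f"== {path}")
        for method, op in ops.items():
            lines.append(f"* `{method.upper()}` -- {op['summary']}")
        lines.append("")
    return "\n".join(lines)

print(to_asciidoc(spec))
```

Because the source is structured data and the output is plain-text AsciiDoc, a step like this slots naturally into a GitHub-based pipeline: the generated files are committed and built alongside the writer-authored content.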

SO: And so then you’ve got docs.netapp.com as a portal or a platform where a lot of this content goes. And what’s happening over there? Do you have any news on new things you’ve done there?

AN: Yeah. I mean, very recently, you know, the timing of this is really interesting. We have been working on a generative AI solution for a year, Sarah. You’ll recall the hype, right? When ChatGPT exploded into the public consciousness, right? Through the media. And shortly thereafter, we began imagining what it might look like to leverage that technology, those types of technologies, to deliver a different customer experience. And we identified a chatbot as being something we thought could add to the browse and search experiences on docs.netapp.com. And we just released that on the 20th of August and announced it here internally inside of NetApp on the 27th. So we are literally like 48, 72 hours into a public adventure here.

SO: I take full credit for planning it, even though I knew nothing about any of this.

AN: Yeah. And that was a long time, I think it’s worth noting. And I think it’s beyond the dimensions of this discussion to talk about why it took so long. But I will say we were early adopters, and we felt the pain and the benefit of being that. It was like changing the tires on a race car, right? That was speeding around the track. So we had to learn and be responsive and also humble, in the sense that there were some missteps that we had to recover from, and some magical thinking, I think, at the beginning of the project that was qualified more over the course of the project.

SO: And so what does that GenAI solution sitting in or over the top of the docs content set, what does that do in terms of your authoring process? Do you have any, are there any changes on the backend as you’re creating this content that is then consumed by the AI?

AN: I would say we’re in the process of understanding the full implications of having this new output surface, this generative AI assistant, and fully grappling with what the implications are for the writers. We find ourselves frequently in discussions about audience. And audience is all those humans that we have been writing for, plus a whole bunch of machines that we now need to think more consciously about. We find ourselves often talking about standards and style, not just from the perspective of writing the docs in a consistently patterned way for humans to be able to consume well, but also because patterns and machines are a marriage made in heaven. And we actually see opportunities to begin to think of the content we’re writing as a data set that needs to be more highly patterned and predictable, so that a machine can consume it and algorithmically and probabilistically decide how to generate content from the content we’re creating.
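One way to read “highly patterned and predictable” in practice is that each topic can be split into heading-scoped chunks that a machine can index and retrieve. A minimal sketch, assuming lightweight markup such as AsciiDoc or Markdown (the heading regex covers both; this is not NetApp’s actual tooling):

```python
import re

def split_into_chunks(doc: str) -> list[dict]:
    """Split a lightweight-markup doc into heading-scoped chunks,
    the kind of predictable unit a retrieval system can index."""
    chunks, title, lines = [], "Untitled", []
    for line in doc.splitlines():
        m = re.match(r"^(=+|#+)\s+(.*)", line)  # AsciiDoc '=' or Markdown '#'
        if m:
            if lines:
                chunks.append({"title": title, "body": "\n".join(lines).strip()})
            title, lines = m.group(2), []
        else:
            lines.append(line)
    if lines:
        chunks.append({"title": title, "body": "\n".join(lines).strip()})
    return chunks

doc = "== Add a cable\nConnect the cable.\n\n== Remove a cable\nUnplug it."
print([c["title"] for c in split_into_chunks(doc)])  # ['Add a cable', 'Remove a cable']
```

The more consistently writers pattern their headings and sections, the cleaner these machine-consumable units come out.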

SO: And where is this going in terms of what’s next as you’re looking at this? I think you mentioned that there are other opportunities, potentially, to add more data slash content.

AN: Yeah, actually, if I back up to a detail I shared, but maybe quickly: we do have writers in our authoring environment who are not writers by trade. They are, by nature and by bias, subject matter experts, right? And they’re in our system and they’re generating content. So that was about: join us in our environment, reap the benefits of multi-language output, reap the benefits of fast updates, reap the benefits of being able to deliver a web-like experience as opposed to a PDF. But what I think we’ve found now is that this is a data project. This generative AI assistant has changed my thinking about what my team does. And yes, on one level, true: we have a team of writers and there’s a big factory devoted to producing the docs. But in another way, you can look at it and say, well, we’re a data engine. We own and maintain a large data set, and the GenAI is one consumer of that data set. But we’re also thinking about our data set as being joinable to other data sets inside of NetApp. In particular, I work inside the chief design office at NetApp, along with UX researchers and designers, and we’re also more broadly part of our shared platform team at NetApp. So we’re thinking about how we might join our data with other teams’ data to create in-product experiences that are data-led or data-driven, in combination with curated experience. If your viewers were able to see me, I am waving my hand a little bit, not because I’m dissembling, but more because I’m aspiring. And I think there’s a really cool future ahead in a way, Sarah, that is super energizing for the writers, right? To see that their work is being reframed, not replaced. The fear of writers with GenAI, right, is of being replaced.
Well, I would offer this as an example that maybe it’s not such a dismal view, and maybe in fact there’s a very interesting future if you reframe your thinking about what you do and the opportunities to join what you do to create different experiences.

SO: And I think it’s an interesting perspective to look at GenAI as being a consumer of the content slash data that you’re putting out. A lot of the initial stuff was, this is great. GenAI will just replace all the tech writers. You’re talking about something entirely different.

AN: I guess I wanted to expand on that, because I think we’re actually now hovering on a really important point. You know, what is your mindset? How are you thinking about this moment in time? The broader we, those of us who are in this industry. And, you know, I think we don’t see a great indication that GenAI can create net new content and do it well, honestly. It can summarize, it can handle your day-to-day, your meeting notes and so forth, Microsoft Copilot, right? There are some great uses, but I have not seen convincing, compelling indicators that docs can be written by it, at least at the enterprise level. Our products are complex. We often talk about our writers as sense makers, right? And I think that we can take advantage of GenAI in the right ways. And I think this is one of the ways we’re taking advantage of it, which is to give customers another experience. And frankly, also for us to learn a lot about what people are asking and assuming, so we can learn a lot and continuously improve.

SO: So what’s happening on the delivery side? Somebody asks for some sort of information, and either it says it doesn’t exist or it gives an incorrect response. Are you seeing any patterns there? What are you doing with that?

AN: Yeah, many of your listeners might have delivered products themselves and remember what happens in the first day or two of releasing a product, right? So the timing of this chat is really good. In the last couple of days, I was just talking to a data scientist on my team, and I was saying, you know, what I think I see emerging as a possible pattern is that people don’t actually know how to use these things effectively. They ask it questions that it really could never answer, or they don’t fully understand the constraints of the system, meaning that it’s only based on a certain data set. They don’t know that the data set doesn’t include the data they’re looking for, right? Because it sits somewhere else. You know, we’re modifying our processes to intake feedback. I think there’s a really interesting nexus: is it the AI or is it the content? That’s the really interesting one, right? Was the content ambiguous, deficient, duplicitous, whatever, you know, is that a word?

SO: It is now.

AN: At UNC we use that word, not at Duke. But it is an interesting discussion inside our organization: when we receive a piece of feedback, what’s causing it? Is it the interpretive engine or is it our source? And we’re seeing that it’s exposing a lot of gaps in our content, or other suboptimal implementations.
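The scope problem Adam describes, where users ask about things the underlying data set simply doesn’t contain, is easy to see in a toy retrieval step. The keyword-overlap scoring below is deliberately naive and purely illustrative of the principle, not of NetApp’s implementation; the point is that a grounded assistant should decline rather than guess when nothing in its corpus matches:

```python
def retrieve(question: str, corpus: dict[str, str], min_overlap: int = 2):
    """Return the best-matching doc by keyword overlap, or None when the
    corpus doesn't cover the question (the data 'sits somewhere else')."""
    q = set(question.lower().split())
    best_doc, best_score = None, 0
    for name, text in corpus.items():
        score = len(q & set(text.lower().split()))
        if score > best_score:
            best_doc, best_score = name, score
    return best_doc if best_score >= min_overlap else None

corpus = {
    "cabling": "how to cable the storage shelf to the controller",
    "upgrade": "how to upgrade the cluster software",
}
print(retrieve("how do I cable the shelf", corpus))    # cabling
print(retrieve("what is my account balance", corpus))  # None: out of scope
```

Production systems use embeddings rather than word overlap, but the "answer or admit the gap" decision is the same.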

SO: I mean, we’ve said that in a sort of glib manner, because of course you’re living this day to day and hour by hour, but we’ve said that, you know, GenAI sitting over the top of a content set is going to uncover all your inconsistencies, all your missing pieces, all your, you know, over here you said update and over here you said upgrade. That was an example I heard from someone else. And so it basically uncovers your technical debt.

AN: Yeah, beautiful. Yeah, bingo. You’re so right there. Terminology, right? Oh my God. Can you believe how many ways we’ve talked about X, right?

SO: Right, and the GenAI thinks they’re different because, well, it doesn’t think anything, right, but the pattern isn’t there, and so it doesn’t necessarily associate those things.

AN: Yeah, your listeners may commiserate with this: the use of words as both verbs and nouns, like cable. We often talk in our documentation about cabling devices. How would a GenAI know whether the writer of the question is using cable as a verb or a noun?

SO: Mm-hmm. So as you’re working through this, with, it sounds like, two days of go-live plus a year or two or three of suffering. A year and two days.

AN: Well, a year and two days, a year and two days.

SO: You know, I think you’re further along than a lot of other organizations. Do you have any advice for those that are just beginning this journey and just looking at these kinds of issues? What are the things you did best, or maybe worst, or would do the same way, or not? What’s out there that you can tell people that will maybe help them as they move forward?

AN: Yeah, maybe think of it in the old people, process, systems dimensions. Actually, taking that latter one, systems: I would say beware the fascination of the system without thinking more about the processes and people that are going to be involved in the creation of some kind of generative AI solution. This is as much an adaptive people problem as it is a technical problem. Probably more on the adaptive side, frankly. And from a process perspective, I’d say, be curious about what you learn. Be attentive to the specifics, but look for the broad patterns in the feedback, or in what you’re seeing as you develop these solutions. For me, I think I hinted at this before, and it has frankly been the epiphany of the project. There have been many, but I would really highlight this one, which is: what does my team do? What is the value of what they generate? And for me, yes, we are primarily a team that creates documentation, but, you know, holy smokes, the idea that we are data owners, and we govern a massive, semantically rich, non-deterministic, fast-changing data set, that is super, super interesting. Even here inside NetApp, Sarah, we have teams reaching out to us who frankly probably never thought about the docs before. And all of a sudden, because we have this huge data set, they’re like, wow, we can stress test our systems or our new technologies using what they have. That’s a super cool moment for our team.

SO: Yeah, I think you’re the first person I’ve heard describe this sort of context shift from “this is content” to “this is data,” or “this content is also data,” or however you want to phrase that. But I think that’s a really interesting point, and it opens up a lot of fascinating possibilities, not least for the English PhDs of the world. That’s super helpful.

AN: Is this where I confess that at one time I thought I was going to be one of those, and I got out because I realized I was terrible at it?

SO: No, no, no, that goes in the non-recorded part of the podcast. Yeah, I’m going to wrap it up there before Adam spills all of the dirt. 

AN: Yeah, what am I compensating for, right?

SO: But thank you, because this is really, really interesting. And I think it will be helpful to the people listening to this podcast, because it’s so rare to get that inside view of what it really looks like and what’s really going on inside some of these bigger organizations as you move towards AI, GenAI strategies and figure out how best to leverage that. So thank you, Adam. And it’s great to see you.

AN: No, Sarah, thank you. And actually, I would like to thank my team. I mean, it has been an incredible adventure, and I think the team is really amazing.

SO: Yeah, and I know a few of them and they are great. So with that, thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Enterprise content operations in action at NetApp (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 23:10
Position enterprise content operations for success (podcast) https://www.scriptorium.com/2024/09/position-enterprise-content-operations-for-success/ Mon, 16 Sep 2024 11:30:52 +0000 https://www.scriptorium.com/?p=22662 https://www.scriptorium.com/2024/09/position-enterprise-content-operations-for-success/#respond https://www.scriptorium.com/2024/09/position-enterprise-content-operations-for-success/feed/ 0 In episode 174 of The Content Strategy Experts podcast, Sarah O’Keefe and Alan Pringle explore the mindset shifts that are needed to elevate your organization’s content operations to the enterprise level.

If you’re in a desktop tool and everything’s working and you’re happy and you’re delivering what you’re supposed to deliver and basically it ain’t broken, then don’t fix it. You are done. What we’re talking about here is, okay, for those of you that are not in a good place, you need to level up. You need to move into structured content. You need to have a content ops organization that’s going to support that. What’s your next step to deliver at the enterprise level?

— Sarah O’Keefe

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about setting up your content operations for success. Hey everyone, I am Alan Pringle and I am back here with Sarah O’Keefe in yet another podcast episode today. Hello, Sarah.

Sarah O’Keefe: Hey there.

AP: Sarah and I have been chatting about this issue. It’s kind of been this nebulous thing floating around, and we’re gonna try to nail it down a little bit more in this conversation today: this idea of setting up your organization and its content operations for success. And to start the conversation, let’s just put it out there. Let’s define content ops. What are content operations, Sarah?

SO: Content strategy is the plan: what are we going to do, how do we want to approach it? Content ops is the system that puts all of that in place. And the reason that content ops these days is a big topic of conversation is because content ops in sort of a desktop world is, well, we’re going to buy this tool, and then we’re going to build some templates, and then we’re going to use them consistently. The end, right? That’s pretty straightforward. But content operations in a modern content production environment means that we’re talking about a lot of different kinds of automation and integration. So the tools are getting bigger, they’re scarier, they’re more enterprise-level as opposed to a little desktop thing. And configuring a component content management system, connecting it to your web CMS, and feeding the content that you’re generating in your CCMS, your component content management system, into other systems via some sort of an API is a whole different kettle of fish than dealing with, you know, your basic old-school unstructured authoring tool. So yeah.

AP: Right. But in their defense, for the people who are using desktop publishing, that is still content operations.

SO: Sure, it is.

AP: It’s just a different flavor of content operations. And frankly, a lot of people, a lot of companies and organizations outgrow it, which is why they’re going to this next level that you’re talking about.

SO: Right. So if you’re in a desktop tool and everything’s working and you’re happy and you’re delivering what you’re supposed to deliver and basically it ain’t broken, then don’t fix it. You are done. You should shut off this podcast and go do something more fun with your time. Right? What we’re talking about here is, okay, for those of you that are not in a good place, you need to level up. You need to move into structured content. You need to have a content ops organization that’s going to support that. What do you do? What’s your, you know, what’s your next step and what does it look like to organize this project in such a way that you move into, you know, that next level up and you can deliver all the things that you’re required to deliver in the bigger enterprise, whatever you want to call that level of things. So desktop people, I’m slightly jealous of you because it’s all working and you’re in great shape and good for you. I’m happy for you.

AP: So making this shift from content operations in desktop publishing to something more enterprise-level like you’re talking about, that is a huge mind shift. It is also, technically, something that can be quite the shock to the system. How do you go about making that leap?

SO: Well, I’m reminded of a safety announcement I heard on a plane one time, where they were talking about how, you know, when you open the overhead bins after landing, you want to be careful. And the flight attendant said, shift happens. And we all just looked at her like, did you actually just say that? And she sort of smirked. So making this shift can be difficult, right? And what we’re usually looking at is, okay, you’ve been using Word for the past 10, 15, 20, 57 years, and now we need to move out of that into something structured, XML, maybe it’s DITA, and then get that all up and running. And so what’s going to happen is that you have to think pretty carefully about what it looks like to build the system and what it looks like to sustain it. Now here I’m talking particularly to large companies, because what we find is that the outcome in the end, right, when this is all said and done and everything’s up and running and working, is that you’re probably going to have some sort of an organization that’s responsible for sustainment of your content ops. So you’re going to have a content ops group of some sort, and they’re going to do things like run the CCMS and build new publishing pipelines and keep the integrations moving and help train the authors. And in some cases, they’re kind of a services organization, in the sense that you have an extended group of maybe hundreds of authors who are never going to move into structured content. So you’re taking on the, again, Word content that they are producing, but you’re moving it into the structured content system as a service, like an ingestion or migration service to your larger staff or employee population. Okay, so in the future world, you have this group that knows all the things and knows how to keep everything running and knows how to kind of manage that and maintain it and do that work.
And probably in there, you have an information architect who’s thinking about how to organize content, how to classify and label things, how to make sure the semantics, you know, the actual element tags, are good, and all that stuff. But right now, you’re sitting in desktop authoring land with a bunch of people that are really good at using whatever your desktop authoring tool may be. And you have to sort of cross that chasm over to: now we’re this content ops organization with structured content, probably a component content management system. So what I would probably look at here is, you know, what is the outcome? Thinking about when the system has stood up, we’ve made our tool selection, everything’s working, everything’s configured, everything’s great: what does it look like to have an organization that’s responsible for sustaining that? And that could be two or three or ten people, depending on the size and scope of your organization and the content that you’re supporting. But in order to get there, you first have to get it all set up. You have to do the work to get it all up and running. Our job typically is that we get brought in to make that transition, right? So for a large organization, we’re not going to be your permanent content ops organization. We might provide some support on the side, but you’re going to have people in-house that are going to do that. They’re going to be, presumably, full-time permanent staff members. They know your content and your domain, and they have expertise in whatever your industry may be.

AP: Right.

SO: Our job is to get you there as fast as possible. So we get brought in to do that setting-up piece, right? What are the best systems? What are the things you need to be evaluating? What are the weird requirements that you have that other organizations don’t have that are going to affect your decisions around systems and, for that matter, people, right? Are you regulated? What is the risk level of this content? How many languages are you translating into? What kind of deliverables do you have? What kind of integration requirements do you have? And when I say integration, to be more specific: maybe you’re an industrial company, and so you have tasks, service, maintenance kinds of things, and you need those tasks, like how to replace a battery or how to swap out brakes, to be in your service management system, so that a field service tech can look at their assignments for the day, which are, you know, go here and do this repair and go here and do this maintenance. And then it gets connected to: here’s the task you need, here’s the list of tools you need, and here are all the pieces and parts you need in order to do that job correctly. Diagnostic troubleshooting systems. You might have a chatbot, and you want to feed all your content into the chatbot so that it can interact with customers. You may have a tech support organization that needs all this content, and they want it in their system and not in whatever system you’re delivering. So we get into all these questions around where does this content go? Where does it have tentacles into your organization, what other things do we need to connect it to, and how are we going to do that? So I think it’s very helpful to look at the upfront effort of making decisions, designing your system, and setting it up, versus sustaining, enabling, and supporting the system.
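The service-integration scenario Sarah sketches, pushing a structured task into a field service system along with its tools and parts, boils down to a mapping between two schemas. Every field name below is invented for illustration; no real CCMS or service-management API is implied:

```python
def task_to_work_order(topic: dict) -> dict:
    """Map a structured task topic (as a CCMS might deliver over a
    content-as-a-service API) to the shape a field service system expects."""
    return {
        "summary": topic["title"],
        "steps": [s["cmd"] for s in topic["steps"]],
        "tools_required": topic.get("tools", []),
        "parts_required": topic.get("parts", []),
    }

topic = {
    "title": "Replace the battery",
    "steps": [{"cmd": "Power down the unit."}, {"cmd": "Swap the battery."}],
    "tools": ["torx driver"],
}
order = task_to_work_order(topic)
print(order["summary"], len(order["steps"]))  # Replace the battery 2
```

The mapping is only trivial because the source side is structured; with unstructured pages there is nothing reliable to map from, which is the whole argument for the CCMS.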

AP: There are lots of layers in what you just talked about, and lots of steps. It is very unusual, at least in my experience, to find some kind of personnel resource, either within the company or through hiring, who is going to have all of the things you just mentioned, because it is a lot to expect one person to have all of that knowledge, especially if you are moving to a new system. And you’ve got a situation where the current people are well versed in what is happening right now in that infrastructure, that ecosystem. To expect them to magically shift their brains and figure out new things, that’s a lot to ask for. And I think that’s where having this third-party consultant voice is very helpful, because we can help you narrow in on the things that are better fits for what you’ve got going on now and what you anticipate coming in the future.

SO: Yeah, I mean, the thing is that what you want from your internal organization is the sustainability. But in order to get there, you have to actually build the system, right? And nearly always, when people reach out to us and say, we’re making this transition, we’re interested, we’re thinking about it, et cetera, they’re doing it because they have a serious problem of some sort. We are going into Europe and we have no localization capabilities, or we have them, but we’ve been doing a little bit of French for Canada and a tiny bit of Spanish for Mexico, and now we’re being told about all these languages that we have to support for the European Union, and we can’t possibly scale our 2.5 languages up to 28. It just can’t be done. We’ll drown. Or people say, we have all these new requirements and we can’t get there. We’ve been told to take our content that’s locked into page-based PDF, whatever, and we’re being required to deliver it, not just onto the website and not just into HTML, but as content as a service, as an API deliverable, as micro content, all this stuff. And you just can’t get there from here. And so you have people on the inside who understand, as you said, the current system really well, and understand the needs of the organization in the sense of these things that they’re being asked to do, and they understand the domain. They understand their particular product set internally. But it’s just completely unreasonable to ask them to stand up, support, and sustain a new system with new technology while still delivering the existing content, because, you know, that doesn’t go away. You can’t just push the pause button for five months.

AP: No, the real world does not stop when you are going on some kind of huge digital transformation project like one of these content ops projects. So basically what we’re talking about here, especially on the front end, the planning and discovery side, is that we can help augment, help you focus. And then once you’ve picked your tools and you start setting things up, there are some choices, sometimes having to do with the size of an organization, about how to proceed with implementation and then maintenance beyond that. Let’s focus on that a little bit.

SO: Most of the organizations we deal with are quite large. Actually, all of the organizations we deal with are quite large compared to us, right? It’s just a matter of: are they a lot bigger, or are they a lot, lot, lot bigger?

AP: Correct.

SO: Within that, the question becomes how much help do you want from us, and how much help do your people need in order to level up and get to the point where they can be self-sufficient? We have a lot of projects where we come in and we help with that sort of big hump of work, that big implementation push, and help get it done. And then once you go into sustainment or maintenance mode, it’s 10% of the effort, or something like that. And so either you staff that internally as you’re building out your organization, or we stick around in sort of a fractional, smaller role to help with that. The pendulum has kind of shifted on this over time. Way back when, it was get in, do the work, and get out. We rarely had ongoing maintenance support. Then for a bit, we were doing a lot of maintenance relative to the prior efforts. And now it feels as though we’re seeing a bit of a shift back to doing this internally. Organizations that are big enough to staff a content ops group or a content ops person are bringing it back in-house instead of offloading it onto somebody like us. We’re happy to do whatever makes the most sense for the organization. At a certain size, my advice is always to bring this in-house, because ultimately your long-term staff member, who has domain expertise on your products and your world and your corporate culture and has social capital within your organization, will be more effective than an external organization, no matter how great we are.

AP: To wrap up, I think I want to touch on one last thing here, and that’s change management. And yes, we beat that drum all the time in these conversations on this podcast, but I don’t think we can overstate how important it is to keep those communication channels open and be sure everyone understands what’s going on and why you’re doing what you’re doing. What we’ve talked about so far is very much, okay, we’ve come up with a technical plan, we’ve done a technical implementation, and now we’re going to set it up for success and maintain it for the long haul and adjust it as we need to as things change. But there is still a group of people who have to use those tools: your content creators, your reviewers, your subject matter experts, I mean, I can go on and on here. They are still part of this equation, and we can’t forget about them while we’re so focused on the technical aspects of things.

SO: I would say this directly to the people that are doing the work, you know, the authors, the subject matter experts, the people operating within the system: I would look at this as an opportunity. It is an opportunity for you to pick up a whole bunch of new skills, new tools, new technologies, new ways of working. And I know it’s going to be uncomfortable and difficult and occasionally very annoying, as you discover that the new tools do some things really well, but the things that were easy in the old tools are now difficult, right? There’s just going to be that thing where the expertise you had in old tool A is no longer relevant, and you have to sort of learn everything all over again, which is super, super annoying. But it’s fodder for your resume, right? I mean, if it comes to it, you’re going to have better skills and another set of tools, and you’re going to be able to say, yes, I do know how to do that. So I think that just from a self-preservation point of view, it makes a whole lot of sense to get involved in some of these projects and move them forward, because it’s going to help you in the long run, whether you stay at that organization or whether you move on to somewhere else at some point in the future. That’s one of the ways I would look at this. It is certainly true that the change falls on the authors, right?

AP: Correct.

SO: They all have to change how they work and learn new ways of working, and there’s a lot there, and I don’t want to, you know, sort of sweep that aside, because it can be very painful. We try to advocate for making sure that authors have time to learn the new thing, that people acknowledge that they’re not going to be as productive on day one in the new system as they were in the old system that they know inside out and upside down, that they get training and knowledge transfer and just, you know, a little bit of space to take on this new thing and understand it and get to a point where they use it well. So I think there’s a combination of things there. For those of you that are leading these projects, it is not reasonable, again, to stand the thing up and say, go-live is Monday, so I expect deliverables on Tuesday. That is not okay.

AP: Yeah. And you’ve just wasted a ton of money and effort, because you’ve thrown a tool at people who don’t know how to use it, so all of your beautiful setup kind of goes to waste. So there are a lot of options here as far as making sure that your content ops do succeed. And it’s like pretty much everything else in consulting land: it is not one size fits all.

SO: It depends, as always. We should just generate one podcast and put different titles on it and just say it depends over and over again.

AP: Pretty much, we’d probably just get an MP3 of us saying that phrase over and over again and just loop it and that will be a podcast episode. And on that not-great suggestion for our next episode, I’m gonna wrap this up. So thank you, Sarah.

SO: Thank you.

AP: I think she just choked on her tea, everyone.

SO: I did.

AP: Thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Position enterprise content operations for success (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 19:46
Conquering content localization: strategies for success (podcast) https://www.scriptorium.com/2024/09/conquering-content-localization-strategies-for-success/ Mon, 09 Sep 2024 11:30:01 +0000 https://www.scriptorium.com/?p=22658 https://www.scriptorium.com/2024/09/conquering-content-localization-strategies-for-success/#respond https://www.scriptorium.com/2024/09/conquering-content-localization-strategies-for-success/feed/ 0 Translation troubles? This podcast is for you! In episode 173 of The Content Strategy Experts podcast, Bill Swallow and special guest Mike McDermott, Director of Language Services at MadTranslations, share strategies for overcoming common content localization challenges and unlocking new market opportunities.

Mike McDermott: It gets very cumbersome to continually do these manual steps to get to a translation update. Once the authoring is done, ideally you just send it right through translation and the process starts.

Bill Swallow: So from an agile point of view, I am assuming that you’re talking about not necessarily translating an entire publication from page one to page 300, but you’re saying as soon as a particular chunk of content is done and “blessed,” let’s say, by reviewers in the native language, then it can immediately go off to translation even if other portions are still in progress.

Mike McDermott: Exactly. That’s what working in this semantic content and these types of environments will do for a content creator. You don’t need to wait for the final piece of content to be finalized to get things into translation.


Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we explore strategies for conquering localization challenges and unlocking new market opportunities. Hi everybody. I’m Bill Swallow, and with me today is Mike McDermott from MadCap Software. Hey, Mike.

Mike McDermott: Hi Bill.

BS: So before we jump in, Mike, would you like to provide a little background information about you, who you are, what you do at MadCap?

MM: Sure. My name is Mike McDermott. I am the director of language services at MadCap Software, working with our MadTranslations group. We support companies that work in single-source authoring and multichannel publishing tools like those offered by MadCap Software, such as IXIA CCMS and MadCap Flare, as well as Xyleme and other tools.

BS: So Mike, what are some of the challenges you’ve seen and what works for overcoming some of these localization challenges?

MM: One of the main challenges I see with companies that come to us (and they typically come to us because they’re looking at working in an XML-based authoring tool and they’re curious about the advantages it has for translation) is just figuring out what content needs to go into translation when you’re working in different types of tools. One of the ways I see to solve that problem is working in a tool where you have the ability to tag certain content and identify content for different audiences or different purposes. It just makes it simpler to identify that content and get it straight into translation, and it removes a lot of the human error around packaging up content and trying to figure out yourself what files house text that might be translatable for whatever the output is that you’re looking to build. So just working in those tools inherently helps with translation, because it helps you identify exactly what needs to be translated, and it gets it into translation much quicker.

BS: So I think we’re talking about semantic content there, and making sure that you have all the right metadata in place so that you can identify the correct audience, the correct versions of the product, let’s say, whether to translate or not, and any other relevant information about the content. So you’re able to isolate the very specific bits of content that need to be translated and omit a lot of the content that isn’t needed for that deliverable.

MM: Exactly, Bill. It lets the technology tell you what needs to be translated and what houses text, versus you trying to go through a file list and determine what to send out to a translator. The flip side of that is to just send everything for translation, but it’s very rare that everything in any given project for any type of system is going to need to be translated. So by tagging it in that way, you can quickly get into translation and get things moving. What I see happening at the end of these projects, oftentimes when you’re not working in those types of systems, is you end up finding bits and pieces of content or different files that needed to be translated but missed that initial pass. Now they have to go back through translation, and you’re delayed. So getting everything right the first time and relying on the tools to tell you exactly what needs to be translated, by looking at metadata or different tags, just simplifies the process and speeds everything up. It helps translation get done quicker and improves time to market for the end user to get their content out.
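As a rough illustration of the metadata-driven selection Mike describes, here is a minimal sketch. The `translate` and `audience` tags and the file names are hypothetical, not any particular tool’s schema; the point is that the metadata, not a human, decides what goes to the translator.

```python
# Toy model: each topic carries metadata that drives translation packaging.
# The tag names ("translate", "audience") are illustrative assumptions.
topics = [
    {"file": "install.xml", "translate": True, "audience": "user"},
    {"file": "internal-notes.xml", "translate": False, "audience": "internal"},
    {"file": "api-ref.xml", "translate": True, "audience": "developer"},
]

def translation_package(topics, audiences):
    """Return the files tagged as translatable for the target audiences."""
    return [
        t["file"]
        for t in topics
        if t["translate"] and t["audience"] in audiences
    ]

print(translation_package(topics, {"user", "developer"}))
# -> ['install.xml', 'api-ref.xml']
# internal-notes.xml never reaches the translator, and nothing
# translatable is missed by a manual file hunt.
```

In a real CCMS the same selection would be done by the system or a connector querying the stored metadata, but the logic is this simple filter.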

BS: So it sounds like it reduces a good amount of friction, especially with regard to finding missing bits and pieces that should have been translated and weren’t, and then needing to go back and make sure that’s done in time. What are some other ways that people can reduce friction in their translation workflow?

MM: Well, a big emphasis for us over the past few years around removing friction is working with connectors and different technologies that can orchestrate the translation process. We can automate a lot of this and remove the bottlenecks around someone having to, like I said before, manually go into a set of files and package things up for a translator: zip up files, upload them to different locations. Files just get passed around, and things can go wrong when working that way, even beyond just missing files. So working with connectors and these technologies that can connect directly into these systems and get the text right into translation removes all those friction points. It eliminates a lot of room for error, project delays, and bottlenecks for tasks that can be easily handled by modern technology.

BS: And I assume that there’s probably some technology there as well that kind of governs other parts of the workflow, like review, content validation, that type of thing?

MM: Exactly, exactly. So we’re trying to automate the flow of data into the different points in translation and then get the content ready. For example, you mentioned reviewers. Once content gets into translation, we can pull it right into the translation system from the authoring environment that the customer’s working in. And as soon as the translation is done, a human reviewer on the client side, on our side, or wherever can be notified that this content is ready for review, and that just helps keep things moving. Now it’s on them to complete their review. Once that’s done, the process can continue on: the automated QA checks and the human QA checks can be done at that point, and then the project can be pushed back to wherever it needs to go and put into publication. By automating the steps and plugging in the humans where they provide the most value, it just removes the time costs and the error-prone steps that don’t need to be there.

BS: So it sounds like a lot of it does come down to saving a good deal of time. I would also imagine that these types of workflows help streamline a lot of the publishing needs that come after the translation as well.

MM: Correct. And that’s kind of why we started MadTranslation when we did: to provide our customers a place to go to work with a translation agency that understood these tools and understood how these bits and pieces come together to build an output. We put it together to provide our customers a turnkey solution where they can get a working project back and quickly get into publication. By removing the friction points and using modern technology to automate a lot of these processes, we’re able to get things into translation and get a translation into the final deliverable much faster. Once that happens, we can build the outputs, and if they require a human check, things can get to that point much quicker. We’re not waiting for somebody to manually pull down files and put them into another location so the next step can actually take place. We want to automate that part of it so we can get that final output into a project file that a customer can plug into their publishing environment and get out as quickly as possible. A lot of the wasted time is around those manual steps. And when it comes to validation and review, it’s often the reviewers and validators not being ready for the validation, or not being educated on how it will work. So it’s important to make sure that everyone in that process knows how it’s going to be done and when things are going to be ready for the review or the QA checks. From there, the idea is to just feed the content in via connectors, removing the friction points, and send it through. This is necessary especially when you’re doing very frequent updates in more of an agile translation workflow. It gets very cumbersome to continually do these manual steps to get to a translation update. Once the authoring is done, ideally you just send it right through translation and the process starts.

BS: So from an agile point of view, I am assuming that you’re talking about not necessarily translating an entire publication from page one to page 300. You’re saying that as soon as a particular chunk of content is done and it’s “blessed,” let’s say, by reviewers in the native language, then it can immediately go off to translation even if other portions are still in progress.

MM: Exactly. Exactly. That’s what working in this semantic content and these types of environments will do for a content creator: you don’t need to wait for the final piece of content to be finalized to get things into translation. As you said, it becomes even more important when you’re doing updates, because you don’t want to have to send over the entire file set every time you do an update. When you’re working in a more linear format like Word, you end up having to send that full file every time, and the translation agency is likely reprocessing it using translation memory. All of that still takes time. Working in these types of tools, you can very quickly identify those new parts, or those bits that you know are ready for translation, tag them or mark them in some way, and send them through the translation process.
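The incremental handoff Mike describes can be sketched with simple content hashing. This is a toy model, not any vendor’s connector; the topic IDs and text are made up, and a real system would track revisions in the CCMS rather than in a dict.

```python
import hashlib

def digest(text):
    """Fingerprint a chunk of content so edits are detectable."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# State from the last translation handoff: topic id -> content hash.
last_sent = {
    "install": digest("Insert the battery."),
    "safety": digest("Wear gloves."),
}

# Current approved source content.
current = {
    "install": "Insert the battery pack.",  # edited since last handoff
    "safety": "Wear gloves.",               # unchanged
    "setup": "Connect the cable.",          # new topic
}

# Only changed or new topics go to translation; unchanged topics
# ride on the existing translations.
to_translate = sorted(
    tid for tid, text in current.items()
    if digest(text) != last_sent.get(tid)
)
print(to_translate)  # -> ['install', 'setup']
```

The full file set never leaves the building; the translator only ever sees the delta.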

BS: Very cool. So a lot of the work that we’re seeing now on the Scriptorium side of things is in replatforming. People have content in an old system, or they have, say, a directory full of decaying Word files, and they want to bring it into some other new system. They want to modernize, they want to centralize everything, and basically get to a situation where they’re working in DITA or some other structured content, bringing it into semantic content. What are some of the benefits that doing that gives you as far as translation goes, when you’re looking at content portability, being able to jump ship from one system to another?

MM: I think working in those systems where the text or the content is stored away from the output that you’re building has a lot of benefits, and not only for translation, being able to just get the text that needs to be translated exported out of the system and then put back where it needs to go. It really future-proofs you and gives you the portability that you talk about to make changes, because the text is stored in a standard format that can be ported. Versus, you see some organizations getting locked into a closed environment where, when it comes time to make a change, it requires certain types of exports to other file types that other tools can then import. By storing content in a standard way, in XML for example, it gives you that flexibility and future-proofs you from being locked into any one scenario.

BS: Excellent. So I have to ask, since I’ve come from a localization background as well: what’s one of the hairier projects that you’ve seen, or one of the hairier problems that people can run into in a localization workflow?

MM: One of the challenges we run into sometimes is around client review, when you start incorporating validators into the translation system and include them as part of the process, and when you get multiple reviewers. Sometimes a company will assign a reviewer for every language, but you might have different people reviewing the same set of content. The biggest delay that we see with projects is when the translation is delivered and then dumped on the desk of a native speaker within the company, and they’re asked to review it, but they’re not ready to do the review. It’s not scheduled, and it can delay the project. That’s one of the biggest delays we see. So that’s why we try, at the front end of a project, to figure out on the client side what’s going to happen after we deliver this project, after we send the files. Is the content going to be reviewed or validated? If so, let’s figure out a way to incorporate those reviewers into our translation system, where they can review the translations before we build the outputs and do all the QA checks. So that’s one of the hairier situations in terms of time delays. Expectations around time in general have always been a thing in localization. As you know, people can be surprised at how long it can take for a translator to get through content. The technology is certainly there to speed it up. Since we started MadTranslations a little over 10 years ago, we’ve seen translation speed increase quite a bit, but it still takes time for a good translator to get through that content and know when to stop and do the research that’s needed to get a technical term right. So that’s one of the surprise moments, I think, for new buyers of localization: the time that it can take. And there are solutions in place, like I said, to make it go faster.
But if you want that human review, that expertise, and the cognitive ability to know when to stop and figure out what a term is, or what the client wants or doesn’t want around certain terminology, and then to database it and include it as part of the translation assets so it stays consistent every time, that takes time, versus just sending something through machine translation, doing a quick spot check, and sending it back to the customer.

BS: So it sounds like having that workflow defined and setting those expectations that certain things need to happen at each point of that workflow. Some of it might be automated, some of it does require a person, and that person I guess should probably be identified ahead of time and given a heads-up that, “Hey, something’s going to be coming at you in three weeks. Be ready for it.”

MM: Be ready for it. And also, what are you ready for? So it’s kind of training a reviewer, what are you looking for here? Are we looking for key terms? Are we looking for style preferences? Everyone kind of understanding what it is that a reviewer is going to be looking for, and they might be looking for different things when it comes to technical documentation versus a website, for example. So just having everyone communicate and understand what the intended purpose of the final output is and where everyone fits in the process and defining a schedule around that process definitely helps.

BS: Definitely. I know myself, I’ve seen cases where working for a translation agency, having a client come to me and basically say, “I need this done as soon as possible. What can you do?” And it was a highly technical manual, and we said, “Well, we have an expert in these different languages. This person is available now. This one won’t be available until next month. And this person really only works nights and weekends because they are a professional engineer in their day job.” So turnaround is going to be a little slow, and the client persisted that we just need it as soon as possible. We need to get it out the door in a couple of weeks, and I’m thinking to myself in the back of my head, why are you coming to us now when you need this in a couple of weeks? You shouldn’t just be throwing it over the fence at the last possible minute and expecting it to come back tomorrow. So there was that education. Unfortunately, they decided that they didn’t care. They wanted us to use as many translators as possible and get it done as quick as possible. And we had them sign documents that basically said that we are not liable for the quality of the translation since the client is basically looking to get this done as quickly and cheaply and dirty as possible. It was a nightmare, and I think it took one round of review on the client side for them to basically circle back and say, “Okay, I get what you were saying now.” None of these translations work at all together, because we were literally sending out a chapter to a different translator and there was no style guide because the client hadn’t provided anything. There was no terminology set because the client didn’t provide anything and everything came back different. And they said, “Okay, we get it. We get it. We’ll revise our schedules, get it done the right way. I don’t care how long it takes.”

MM: I’ve run into something very, very similar to what you described, and it was put disclaimers in the documents to where this is going to be poor quality. We’re admitting it right now. This is the only way we’re going to get it back within a week, and we do not recommend publishing. And as soon as the files come back and so on, looks at it and says, “Okay, let’s back up and do it the right way.”

BS: Yes. I guess the biggest takeaway there is plan ahead and plan for quality, and not just try to get it done as fast as possible.

MM: And that’s one of the benefits of where we sit at MadTranslations within MadCap Software: companies coming into these types of environments are typically at the front end, in the planning stages, trying to figure out how all of this is going to work. So we have an ability to help them understand what the process looks like, and then define it, in combination with our tooling and their needs, and come up with a workflow that’s going to keep things moving fast but gives you that human-level quality that everyone needs at the end.

BS: Being able to size up exactly what the process needs to look like before you’re in the thick of it definitely helps. And having that opportunity to coach someone through setting up the process for the first time, I’d say that’s definitely priceless, because so many mistakes can happen out of the gate, between how people are authoring content and what their workflow looks like.

MM: And it’s even more important for companies that have to maintain the content. It’s one thing to just take a PDF and say, “Hey, I need to translate this file and I’m never going to have to update it again. I just need a quick translation.” It’s another to have a team of authors dispersed around the globe working on the same set of content that then needs to be translated continuously.

So different needs, but like you said, planning, defining the steps, and knowing what the requirements of the content are, from authoring through to publication in each language, and how to fit in the steps to meet those requirements as well as possible, is best done upfront, versus when it needs to be published in a week.

BS: Planning, planning, planning. I think that sounds like a good place to leave it. Mike, thank you very much.

MM: Thank you, Bill. Thanks for having me on.

BS: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Conquering content localization: strategies for success (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts Conquering content localization: strategies for success (podcast) full false 19:23
Cutting technical debt with replatforming (podcast) https://www.scriptorium.com/2024/08/cutting-technical-debt-with-replatforming-podcast/ Mon, 19 Aug 2024 11:18:03 +0000 https://www.scriptorium.com/?p=22617 https://www.scriptorium.com/2024/08/cutting-technical-debt-with-replatforming-podcast/#respond https://www.scriptorium.com/2024/08/cutting-technical-debt-with-replatforming-podcast/feed/ 0 When organizations replatform from one content management system to another, unchecked technical debt can weigh down the new system. In contrast, strategic replatforming can be a tool for reducing technical debt. In episode 172 of The Content Strategy Experts podcast, Sarah O’Keefe and Bill Swallow share how to set your replatforming project up for success.

Here’s the real question I think you have to ask before replatforming—is the platform actually the problem? Is it legitimately broken? As Bill said, has it evolved away from the business requirements to a point where it no longer meets your needs? Or there are some other questions to ask, such as, what are your processes around that platform? Do you have weird, annoying, and inefficient processes?

— Sarah O’Keefe


Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about replatforming and its relationship to technical debt. Hi, everyone. I’m Sarah O’Keefe. And the two of us rarely do podcasts together, for reasons that will become apparent as we get into this.

Bill Swallow: And I’m Bill Swallow. 

SO: What we wanted to talk about today was some more discussion of technical debt, but this time with a focus on a question of whether you can use replatforming and new software systems to get rid of technical debt. I think we start there with the understanding that no platform is actually perfect. 

BS: Mm-hmm.

SO: Sorry, vendors. It’s about finding the best fit for your organization’s requirements and then those requirements change over time. Now Bill, a lot of times when we talk about replatforming, you hear people referring to the burning platform problem. So what’s that?

BS: Yeah, well, it may actually be on fire, but likely not. What we’re really talking about is a platform that was chosen many years ago. Perhaps it’s approaching end-of-life. Perhaps your business needs have taken a sharp left or right turn and the platform no longer supports them. Or it could really just be a matter of cost: the platform you bought 10 years ago was built upon a very specific cost structure and model, and the world is different now. There are different pricing schemes and whatnot, and you may just want to replatform to recoup some of that cost.

SO: So does that work? I mean, if you exit platform A and move on to platform B, are you necessarily gonna save money? I’m guessing no.

BS: In a perfect world, yes, but we don’t live in a perfect world. I hate to be the bearer of bad news, but if you’re looking to switch from one platform to another to save costs, there is a cost in making that switch. At that point, you need to weigh the benefits and drawbacks: is the cost to move to a new system going to be worth the cheaper solution in the long run? That’s a very basic model to look at, and there are a lot of other costs, benefits, and drawbacks to making a platform switch. But it’s one thing to consider.

SO: Yeah. Additionally, it’s really common to have people come to us and say, you know, our platform is burning. We’re unhappy with platform X and we want to replatform onto platform Y. What’s funny is that usually we have some other customer that’s saying, I’m unhappy with platform Y and I need to go to platform X, right? So it’s like a conveyor belt of sorts.

BS: You can’t please everybody.

SO: But the real question I think you have to ask before replatforming is: is the platform actually the problem here? Is it legitimately broken? And, as you said, has it evolved away from the business requirements to a point where it no longer meets your needs? Or there are some other questions to ask, like: what do your processes around that platform look like? Do you have weird, annoying, and inefficient processes?

BS: Mm-hmm.

SO: Do you have constraints that are going to force you in a direction that isn’t maybe, from a technology point of view, the best one? Have you made some old decisions that are now non-negotiable? So you’ll see people saying, well, we have this particular kind of construct in our content and we’re not giving it up ever.

BS: Mh-hmm.

SO: And you look at it and you think, well, it’s very unusual, and is it really adding value? But it’s hard to get rid of because it’s so established within that particular organization. So the worst scenario here is to move from A to B and repeat all the same mistakes that were made in the previous platform.

BS: Yeah, you don’t want to carry that debt over, certainly. Anything that you have established that worked well but doesn’t meet your current or future needs, you absolutely do not want to move forward. That being said, you have a wealth of content and technology that you have built over the years, and you want to make sure that you can use as much of that as possible to at least give yourself a leg up in the new system, so that you don’t have to rewrite everything from scratch or completely rebuild your publishing pipelines. You might be able to move them over and change them, and you might be able to move and refactor your content so that it better meets your needs. But I guess it’s a long way of saying that not only are you looking at a burning platform problem, you’re also looking at a futureproofing opportunity. You want to make sure that if you are going to do that lift and shift to another platform, you take a few steps back, look at what your current and future requirements are or will be, and make the necessary changes during the replatforming effort, before you get into the new system and then start having to deal with the same problems all over again.

SO: Yeah, to give a slightly more concrete example of what we’re talking about: relative to 10 years ago, PDF output is relatively less important. Ten years ago, we were getting a lot of “we need PDF, we have to output it, and it has to meet these very, very high standards.” People are still doing PDF, and clients are still doing PDF, but relatively, it is less of a showstopper, primary requirement. It’s more “yes, we still have to do PDF, but we’re willing to negotiate on what that PDF is going to look like.” Instead of saying it has to be this pristine and very complex output, they’re willing to drop that down a few notches. Conversely, the importance of HTML website alignment has gotten much, much higher, and we have a lot of requirements around Content as a Service and API connectors and those kinds of things. So if you just look at all your different publishing output connection pipelines: 10 years ago, PDF was really still unquestionably the most important thing, and that’s not necessarily the case anymore.

BS: And on the HTML side, it could be HTML, it could be JSON, but you do have a wealth of apps (whether it’s a phone app, an app in your car, or an app on your fridge) that need to be supported as well, where your PDF certainly isn’t going to cut it. And a PDF approach to content design in general is not going to fly.

SO: So when we talk about replatforming, in many cases I look at this through the lens of: okay, we have DITA content in a CCMS and we’re gonna move it to another DITA CCMS. But in fact, it goes way, way beyond that, right? What are some of the input, or I’ll say legacy, formats that we’re seeing on the inbound side of a replatforming?

BS: Let’s see, on the inbound side, we certainly have maybe old models of DITA, something that was developed in DITA 1.1, 1.2, pre-1.0, something that’s heavily specialized. We have things like unstructured content: Word files, InDesign, unstructured FrameMaker, and what have you. We’re also seeing an opportunity to move a lot of developer content into something that is more centrally managed; in that case, we’ve got Markdown and other lightweight formats that need to be considered and migrated appropriately. And then, of course, all of your structured content. We mentioned DITA; there’s DocBook out there; there are other XML formats and whatnot. And potentially you have other things that you’ve been maintaining over the years, and now is a good opportunity to migrate that over into a system, centralize it, and get it aligned with all your other content.

SO: Yeah, and looking at this, I think it’s safe to say that we see people entering and exiting Markdown, like people saying we’re going to go from DITA to Markdown, but also Markdown to DITA. We’re seeing a lot of going into structured content in various flavors. Unstructured content, we largely are seeing as an exit format, right? We don’t see a lot of people saying, “Put us in Word, please.”

BS: No, no one’s going from something like DITA into Word.

SO: So they might go from DITA to Markdown, which is an interesting one. Okay, so I guess then that’s the entry format. That’s where you’re starting. What’s the outcome format? Where are people going for the most part?

BS: For the most part, there are essentially two winners: the XML-based formats and the Markdown-based formats. And I’m lumping DITA, DocBook, and other proprietary XML models all into XML. But generally, people are migrating more toward that direction than toward Markdown. And there’s really a division there: it’s whether you want the semantics ingrained in an XML format and the ability to apply, or heavily apply, metadata, or whether you want something lightweight that’s easy to author and is relatively, I don’t want to say single-purpose, but not as easily multi-channel as you can get with XML.
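To make that semantics-versus-lightweight tradeoff concrete, here is a toy comparison. The element and attribute names in the XML snippet are illustrative, not a real DITA shell; the point is that in XML the machine-readable intent travels with the content, while Markdown needs a bolt-on convention to carry it.

```python
import xml.etree.ElementTree as ET

# Hypothetical semantic markup: audience and product metadata are
# attributes that any tool in the pipeline can query.
xml_topic = """<step audience="admin" product="pro">
  <cmd>Restart the indexing service.</cmd>
</step>"""

step = ET.fromstring(xml_topic)
print(step.get("audience"), "->", step.find("cmd").text)
# -> admin -> Restart the indexing service.

# The Markdown equivalent is easier to author, but the same intent is
# invisible to tooling unless you add a convention on top (front
# matter, comments, directory names, ...).
md_topic = "1. Restart the indexing service."
```

That queryable metadata is what drives the audience filtering and translation selection discussed in the localization episode above.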

SO: Yeah, the big advantage to Markdown is that it aligns you with developer workflows, right? You get into Git, and you’re aligned with all the source control and everything else that’s being done for the actual software code. If that is a need that you have, then that’s the direction to go in. There are, as Bill said, some really big scalability issues with that, and that can be a problem down the line, but Markdown is generally workable. Okay, so we pick a fundamental content model of some sort, and then we have to think about software. So what does that look like? What are the buckets that we’re looking at there?

BS: For software, we’ve got a lot of things. First and foremost, there’s the platform that you’re moving to. What does that look like? What does it support? You certainly have authoring tools. You also have all of your publishing pipelines. All of that’s going to require software to some degree; some of it’s third party, some of it’s embedded in the platform itself. And then you have all of the extended platforms that you connect to. Those might change, or those might stay the same. You might not change your knowledge base, for example, but you still need to publish content to it from the new system, and the new system doesn’t quite work the way the old system did, so your connector needs to change. Things like that. I would also say that, with regard to software, there’s a hit, a temporary but costly blip, in the localization space, because when you are replatforming, especially if you are migrating content to a new format, you’re going to take a hit on your 100% matches in your translation memory. Anything that you’ve translated previously, you’ll still have those translations, but how they are segmented will look very different in your localization software.
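A toy example of why translation memory matches drop when segmentation changes after a migration. The TM here is just a dict keyed by exact source segment, standing in for a real TM database, and the French strings are illustrative.

```python
# Translation memory built from the old system, where a whole
# paragraph was stored as one segment.
tm = {
    "Remove the cover. Disconnect the cable.":
        "Retirez le couvercle. Débranchez le câble.",
}

# The new system segments at sentence boundaries, so the identical
# source text now arrives as two separate segments.
new_segments = ["Remove the cover.", "Disconnect the cable."]

# Exact (100%) match lookup finds nothing, even though every word
# has been translated before.
matches = [seg for seg in new_segments if seg in tm]
print(matches)  # -> []
```

Real TM tools soften this with fuzzy matching and re-alignment, which is the kind of under-the-covers mitigation Sarah alludes to next, but the leverage of 100% matches is temporarily lost either way.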

SO: Yeah, and there are some weird technical things you can do under the covers to potentially mitigate that, but it’s definitely an issue. 

BS: And it’s still costly.

SO: OK, so we’ve decided that we need to replatform and we’ve done the business requirements and we picked a tool and we’re ready to go from A to B, which we are carefully not identifying because some of you are going from A to B and some of you are going from B to A. And it’s not wrong, right? There’s not a single, you know, one CCMS to rule them all. 

BS: Mh-hmm.

SO: They’re all different and they all have different pros and cons. So depending on your organization and your requirements, what looks good for you could be bad for this other company. But within that context, what are some of the things to consider as you’re going through this? So you need to exit platform A and migrate to platform B.

BS: Mm-hmm. I think the number one thing you should not do is expect to be able to pick up your content from platform A and just drop it in platform B. It’s never going to be that easy, and it shouldn’t be something that you’re really considering, because not only are you replatforming, but you’re aligning with a new way of working with your content. So just picking it up and dropping it in a new system is not going to help you at all in that regard. And given that you need to get the content out of the system, that’s the best time to look at your content and say: how do we clean this up? What mistakes do we try to erase with a migration project before we put this content in the new system?

SO: Yeah, I think it’s the decisions that were made that tend to take on a life of their own, like “this is how we do things.” And much, much later you find out that it was done that way because of a limitation of the old software. This is like that dumb old story about cutting the end off the pot roast, and it turned out that Grandma did that because the roasting pan wasn’t big enough to hold the entire pot roast. It’s exactly that, but software, right? So for bad decisions or constraints, you need to test your constraints to see whether your new CCMS is, in fact, a bigger roasting pan that does not require you to cut the end off the pot roast. What about customization?

BS: Customization is a good one. And what we’re finding is that a lot of the people who are exiting an older system for a newer one have a lot of heavy customization, because in many regards there wasn’t a robust content model available at the time. So they had to heavily specialize their content model and tailor it to the type of content they were developing. Now, if you look at something that was built 10 or 15 years ago using highly specialized structured content, a lot of those specializations have since been built into the standard in some way. So it’s a great opportunity to unwind a lot of that and use the standard rather than your customization. That helps you move forward: as the specifications for the content model change, you will be aligned with that change a lot better than if you had carried a customization along the way. Specialization, or any kind of customization for that matter, is expensive. It’s expensive to build, expensive to maintain, and expensive to train people on. It affects every aspect of your content production, from authoring to publishing. There’s always something that needs to be specifically tailored, whether it’s training for the writers, designing your publishing pipelines to understand and render those customized models, or making sure that the translators’ systems can understand your custom tags, so that they can show and hide them from the translators and you don’t get translations back that contain translated tags, which we’ve seen. There’s a lot going on there. So if you have heavily customized in the past, the more you can unwind, the better off you will be.

SO: Yeah, and here we’re talking specifically about some of the DITA stuff. So if your older legacy content is in DITA 1.0 or 1.1, the standard added a lot of tags in DITA 1.3 and is adding more in DITA 2.0 that might address some of the gaps. If you added a specialization because there was a gap or a deficiency in the DITA standard, you could probably take that away and just use the standard tag that got added later. Now, I want to be clear that we’re not anti-specialization. I think specialization is great, and it’s a powerful tool to align the content that you have and your content model with your business requirements. But when you specialize, you have to make sure that all the things Bill’s talking about, all those costs that you incur, are matched by the value that you get out of having the specialization.

BS: Mm-hmm.

SO: So, you’re going to specialize because it makes your content better, and you have to make sure that it makes it enough better to be worth doing all these things. Very broadly, metadata customization nearly always makes sense, because that is a straight-up “we have these kinds of business divisions or variants that we need because of the way our products operate.” Element specialization tends to be a bigger lift, because now you’re looking at getting better semantics into your content, and you have to ask the question: do I really need custom things, or is the out-of-the-box DITA, DocBook, or other XML content model good enough for my purposes? That’s kind of where you land on that. And then I did want to touch on reuse briefly, because we can do a lot of things with reuse, from reusing entire chunks, topics, paragraph sequences, lists of steps, that kind of thing, all the way down to individual words or phrases. And the more creative you get with your reuse and the more complex it is, the more difficult it’s going to be to move it from system A to system B.
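
As a rough illustration of why metadata customization is usually the easy win: conditional metadata is just attributes on content plus a build profile that includes or excludes elements. The attribute names below are invented for this sketch; in DITA this role is played by profiling attributes and DITAVAL files.

```python
# Minimal sketch of metadata-driven filtering: each element carries
# optional metadata, and a build profile selects which elements survive.

def filter_content(elements, profile):
    """Keep elements whose metadata matches the build profile.
    An element with no value for a profiled key applies to all builds."""
    kept = []
    for text, meta in elements:
        if all(meta.get(key) in (None, value) for key, value in profile.items()):
            kept.append(text)
    return kept

elements = [
    ("Insert the battery.", {}),                       # applies everywhere
    ("Connect the EU power cord.", {"region": "eu"}),
    ("Connect the US power cord.", {"region": "us"}),
]

print(filter_content(elements, {"region": "eu"}))
# ['Insert the battery.', 'Connect the EU power cord.']
```

Element specialization, by contrast, changes the vocabulary of the content model itself, which is why it ripples through authoring, publishing, and translation tooling in a way that a metadata attribute does not.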

BS: Absolutely. It’ll be a lot more difficult to train people on as well. And we’ve seen it more times than not that even with the best reuse plan in mind, we still see what we call spaghetti reuse in the wild, where someone has a topic or a phrase in one publication and they just reference it into another publication rather than managing it deliberately. Some systems will allow that, I’ll just put that out there. Other systems will say, absolutely not, you cannot do this; you have to make sure that whatever you’re referencing exists in the same publication that you’re publishing. So we’ve had to do a lot of unwinding there with regard to this spaghetti reuse. We had a podcast in the past with Gretyl Kinsey on our side, where I believe she talked extensively about spaghetti reuse: what it is, what it isn’t, and why you should avoid it. But yes, as you’re replatforming, if you know you have cases like this, it’s best to get your arms around it before you put your content in the new system.

SO: Yeah, and we’ll see if we can dig it out and get it into the show notes. What about connectors?

BS: Connectors are interesting. And by that, we’re talking about either webhooks or API calls from one system to another to enable automation of publishing or sharing of content and what have you. For the most part, if you’re not changing one of the two systems, managing that connector can be a little easier, especially if the target, the receiving end of the content, is reaching out and looking for something in a shared folder, using a webhook, or using an FTP server, what have you. But generally, those connectors can get a little sketchy. It might be that your new platform doesn’t have canned connectors for the other systems that you have always connected to and need to connect to. So then you need to start looking at: do we need to build something new? Can we find some kind of creative midpoint for this? They can get a little dicey. So I think it’s important, before you replatform, before you even choose your new content management system, that you look at where your content needs to go, and whether you have support from that system to get you there.
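
The contrast at stake here can be sketched in a few lines: an automated connector is just code handing files to another system’s API, while the manual alternative is bundling files for a person to ship. Everything below is invented for illustration; real connectors are vendor-specific webhooks or REST APIs.

```python
# Hypothetical sketch of the two handoff models: automated connector
# push versus bundling files by hand for manual delivery.

import io
import zipfile

def bundle_by_hand(files):
    """Manual model: zip the source files for a person to upload/email."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()  # someone still has to ship this somewhere

def connector_push(files, send):
    """Automated model: hand each file to a transport callable.
    `send` stands in for the vendor's API call."""
    return [send(name, content) for name, content in files.items()]

files = {"install.dita": "<topic>...</topic>"}

# Simulated transport for the sketch; a real one would call the
# receiving system's API and return its response.
log = []
receipts = connector_push(files, lambda n, c: log.append(n) or f"queued:{n}")
print(receipts)  # ['queued:install.dita']
```

The point of the audit Bill describes is to find out, before committing to a platform, whether the `send` half of this picture exists as a canned connector or has to be built.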

SO: So a simple example of this is localization. If you have a component content management system of some sort, you’ve stashed all your content in, and then you have a translation management system. And the old legacy system, the platform you’re trying to get off of, has or maybe doesn’t have, but you need a connector from the component content management system over to the TMS, the translation management system, and back so that you can feed it your content and have the content returned to you. 

BS: Mm-hmm.

SO: Well, if that connector exists in the legacy platform, but not in the new platform, you’re gonna have to either lean on the vendors to produce a new connector or go back to the old zip and ship model, which nobody wants, or conversely, you were doing a zip and ship in the old version, but the new version has a connector, which is gonna give you a huge amount of efficiency. 

BS: Mm-hmm.

SO: The connectors tend to be expensive, and they also add a lot of value, right? Because if you can automate those transfer systems, that’s going to eliminate a lot of manual overhead, which is of course why we’re here.

BS: Mm hmm. Human error as well.

SO: So they’re worth looking at pretty carefully: as you said, Bill, what’s out there, what already exists? Does the new platform have the connectors I need? And if not, who do I lean on to make that happen so that I don’t go backwards, essentially, in my processes? Okay, anything else, or should we leave it there?

BS: I think this might be a good place to leave it. We could talk for hours on this.

SO: It’d be a good place to leave it. Let’s not and say we did. OK, so with that, thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Cutting technical debt with replatforming (podcast) appeared first on Scriptorium.

Renovation revelations: Managing technical debt (podcast) https://www.scriptorium.com/2024/08/renovation-revelations-managing-technical-debt-podcast/ Mon, 12 Aug 2024 11:20:04 +0000 https://www.scriptorium.com/?p=22612 https://www.scriptorium.com/2024/08/renovation-revelations-managing-technical-debt-podcast/#respond https://www.scriptorium.com/2024/08/renovation-revelations-managing-technical-debt-podcast/feed/ 0 Just like discovering faulty wiring during a home renovation, technical debt in content operations leads to unexpected complications and costs. In episode 171 of The Content Strategy Experts podcast, Sarah O’Keefe and Alan Pringle explore the concept of technical debt, strategies for navigating it, and more.

In many cases, you can get away with the easy button, the quick-and-dirty approach when you have a relatively smaller volume of content. Then as you expand, bad, bad things happen, right? It just balloons to a point where you can’t keep up.

— Sarah O’Keefe

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about technical debt and content operations. What is technical debt and can you avoid it? Hey everybody, I am Alan Pringle and I’ve got Sarah O’Keefe here today.

Sarah O’Keefe: Hey everybody.

AP: And we want to talk about technical debt, especially in the context of content operations. And to start off, we should probably have you define what technical debt is, Sarah. I think this is something most people run into during their careers, but they may not have had a label to apply to what they were dealing with. So what is technical debt?

SO: We usually hear about technical debt in the context of software projects. And it is something along the lines of taking the quick-and-dirty solution, which then causes long-term effects, long-term costs. So Wikipedia says it’s the implied cost of future reworking because a solution prioritizes expedience over long-term design. And that’s really it. You know, I have this thing, I need to deliver it this week. I’m going to get it done as fast as possible. But then later, I’m going to run into all these problems because I took the easy road instead of the sustainable one.

AP: So it’s basically when the easy button bites you in the backside weeks, months, years later.

SO: Yeah, and with any luck you are aware that you’re incurring technical debt. The one that’s really painful is when you don’t realize you’re doing it.

AP: Right, or you didn’t know because you weren’t part of the process when it happened. And I think this is kind of moving into where I want to go next. Let’s talk about some examples, especially in the context of content, of where you can incur or stumble upon technical debt.

SO: So right now, the example that we hear most often is that inconsistencies and problems in the quality of your content, the organization of your content, and the structure of your content lead to a large language model misinterpreting information, and therefore your generative AI strategy fails. So essentially, because the content isn’t good enough, genAI tries to see patterns where there are none and then produces some stuff that’s just complete and utter junk. Now, the interesting thing about this is that you were probably aware, at least at a high level, that your content wasn’t perfect. But the LLM highlights it; it’s like a technical debt detector. It will show that you took a shortcut and it didn’t work, or you didn’t fix this, and so here we are. Another good example of this is any sort of manual formatting that you’re doing. So you’re producing a bunch of content, a bunch of docs, a bunch of HTML pages, PDF, whatever. And in the context of that, you’ve got some step in there that involves cleaning it up by hand. So for 90 to 95% of it, I just apply the template and it all just works. But then I’ve got this last step where I’m doing a couple of little finicky cleanup things, and that’s okay because it’s just an hour or two and all I’m delivering is English. Okay, well, along comes localization, and suddenly you’re delivering in not just one language but two or three or a dozen or 27, and what looked like one hour in English is now 28 hours: one time for English and 27 times again where you’re having to do this cleanup. And so all of a sudden your technical debt balloons into something that’s basically unsustainable, because that choice that you made to not automate that last 5% suddenly becomes a problem.
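
The arithmetic behind that ballooning cost is worth making explicit. This is a back-of-the-envelope sketch with illustrative numbers only: a fixed manual cleanup cost per deliverable, multiplied across languages and releases.

```python
# Illustrative only: how a "small" manual step scales with languages.

def cleanup_hours(hours_per_language, languages, releases=1):
    """Total manual cleanup hours across all language deliverables."""
    return hours_per_language * languages * releases

print(cleanup_hours(1, 1))            # English only: 1 hour, feels harmless
print(cleanup_hours(1, 28))           # English + 27 locales: 28 hours per release
print(cleanup_hours(1, 28, releases=4))  # quarterly releases: 112 hours a year
```

The cost of automating the last 5% is paid once; the manual workaround is paid on every deliverable, in every language, forever.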

AP: It’s a scalability issue, really, at the core.

SO: Yeah, in many cases, you can get away with the sort of, as you said, the easy button, the quick-and-dirty approach when you have a relatively smaller volume of content. And then as you expand, bad, bad things happen, right? It just balloons to a point where you can’t keep up.

AP: Yeah, and I have recently run into some technical debt, not in the content world, but in the homeownership world. And I’m sure this painful story will resonate with many people and not in a good way. But how many times have you gone to update a kitchen, update a bathroom, only to discover that there was some weird stuff done with the wiring? The plumbing is not like it really should have been. And basically you want to jump into a time machine, go back to when your house was built to have either a gently corrective conversation with the people who are building your house or just murder them outright because you are now having to pay to untangle the mess that was made 30, 40, 50 years ago. I am there right now and it is not a happy place.

SO: And whatever it was they did was presumably cheaper than doing it right. But what they actually paid to do it the cheap way, plus what it would have cost to do it right, would have been an extra 5% or whatever at the time. But now it’s compounded, because in the case of plumbing, you’re having to tear out walls and go back and replace all these pipes instead. So you have to essentially start over instead of just doing it. Another great example of this is accessibility. So when you start thinking about a house that has grab bars or wide doorways that wheelchairs will fit through: if the house was built with it, it costs a little bit more, not a lot, but a little. But when you go back to retrofit a house with that stuff, it is stupidly expensive.

AP: Exactly. And really, these things that we’re talking about in the physical world very much apply when you’re talking about software and tool infrastructure.

SO: Yeah, I mean, there’s a perception of it’s just software, right? We’re not doing a physical build. We’re not using two by fours. So how bad could it be? It can be real bad. But that is the perception, right? That we’re not building a physical object so we can always go back and fix it. And I mean, you can always go back and fix everything. It’s just how much is it going to cost?

AP: Right, how much time and money and effort is it going to suck up to get you to where you need to be so you can then do the next thing that you intended to actually do in the first place? So yeah, I think this is something where this technical debt, sometimes there is no way around it. You inherit a project, you’ve got some older processes in place and you’re gonna have to deal with it. Are there some strategies that people can rely on to kind of mitigate and make it less painful? 

SO: Well, first I’ll say that not all technical debt is bad or destructive. And the canonical example of this is when you’re trying to figure out, is this thing gonna work? I wanna do a proof of concept; I wanna see if the strategy that I’m considering is even feasible. So you go in, you take a small amount of content, and you build out a proof of concept or a prototype: look, we were able to generate this PDF over here and this HTML over here, and we fed it into the chatbot and everything kind of worked. And you look at it and you say, okay, that was good enough. And because it was a proof of concept, you maybe didn’t harden it from a design point of view. You just did what was expedient and you got it done. That’s fine, provided that you go into this with your eyes open, knowing where you cut the corners, recognizing that later we’re going to have to do this really well and we probably can’t use the proof of concept as a starting point, or it’s good enough and we can use it as a starting point, but here’s where we cut all the corners. You have this list: we didn’t put in translation and localization support, we didn’t put in all the different output formats we’re going to need, we just put in two to prove that it would more or less work. But I think you made a really good point earlier. So often you inherit these things. You walk into an organization, you’re brand new to that organization, and you get handed a content ops environment. This is how we do things. Great. And then the next thing that happens is that genAI comes along, or a new output format comes along, or we’ve decided we want to connect to this other software system over here that we’ve never thought about before, or, hey, we’re bringing in a new enterprise resource planning system and we need to connect to it, which was never in the requirements on day one.
And now you realize, looking at your environment, that you can’t get from what you have to where you need to be, because the requirements shifted underneath you. Or you came in and you just didn’t have a good understanding of how and when these decisions were made, because it was five or ten years ago with your predecessor. So how do we deal with this? It sounds awful, but you have to manage your debt just like actual debt.

AP: All right, sure.

SO: Right, so understand what you have and haven’t done. We have not accounted for localization; we’re pretty concerned about that if and when we get to a point where we’re doing localization. Scalability: we are only going to be able to scale to maybe 10 authors, and if we end up with 20, we’re going to have a big problem, so let’s just be aware of that when we get to eight or nine. But the thing is, beyond the technical debt that you identify, that you know about, and hopefully unlike personal finance, you always have more debt than you think you have, right? Because in the content world, things change. Or in your housing example, the building code changes. So they built the thing umpteen years ago, and it was okay in the sense that it conformed with the requirements of the building code at the time, I assume.

AP: Of course.

SO: And now you’re going in and you’re making updates and suddenly the new building code is in play and you’re faced with the technical debt that accrued as the building code changed, but your house, your physical infrastructure did not change. And so there’s a gap between where you need to end up and where you are, part of which is just time has elapsed and things have changed.

AP: Right, and that is very true of some of the requirements you mentioned in regard to content operations. Generative AI, that’s what, the past two years, if that? That wasn’t on the horizon five years ago when some decisions were made, so it absolutely parallels. And when it comes to personal finance, sometimes things get so bad you have to declare bankruptcy. And I think that can also apply to technical debt.

SO: Yeah, it’s an unhappy day when you look at a two-story house and you’ve been told to build a 50-story skyscraper. It just can’t be done, right? You cannot take a sort of stick-built house made of wood and put 50 stories on top of it. At least I don’t think so; we’ve now hit the edges of what I know about construction, so sorry to all the construction people. You build differently if you know that it’s going to be required to be 50 stories. Even if you only build the initial two, either you build two knowing that eventually you’ll scrap it and start over with a new foundation, or you build what amounts to a two-story skyscraper that you can then expand on as you go up. So you overbuild, I mean completely overbuild, for two stories, knowing where you’re going.

AP: Scalability.

SO: But yeah, we have a lot of clients who come in and say, we’re in unstructured content, you know, Word, unstructured FrameMaker, InDesign, basically a PDF-only workflow. And now we need a website, or we need all of our content in a content-as-a-service API kind of scenario. And they just can’t get there. From a document-based, page-based, print-based, PDF-targeted workflow, you can’t get to “and also I wanna load it into an app in nifty ways.” I mean, you could load the PDF in, but let’s not. So you end up having to say, this isn’t gonna work. This is the “I have a two-story suburban house and I’ve been told to build a 50-story skyscraper.” Languages and localization are really, really common causes of this. So separately from the “I need a website in addition to PDF,” the “we were only going to one or two languages, but now we’re going to 30 because we’re going into the European Union” is a really, really common scenario where suddenly your technical debt is just daunting.

AP: So basically you’re in a burn it all down situation. Just stop and start all over again.

SO: Yeah, I mean, it’s not that you did it wrong. It’s that your requirements changed and evolved, and your current tools can’t do it. So it’s a burning platform problem, right? The platform I’m on isn’t going to work anymore, and so I have to get to that other place. It’s really unpleasant. Nobody likes landing there, because now you have to make big changes. So I think ideally, what you want to do is evolve over time, evolve slowly, keep adding, keep improving, keep refactoring as you go so that you’re not faced with this just crushing task one day. But with that said, most of the time, at least the people we hear from have gotten to the crushing horror part of the world, because it’s good enough. It’s good enough. It’s not great. We have some workarounds. We do our thing, until one day it’s not good enough.

AP: And it’s very easy to get used to those workarounds. That is just part of my job; I will deal with it. You kind of get a thick skin and just accept that’s the way that it is. While you’re doing that, however, that technical debt in the background is accruing interest. It’s creeping up on you, but you may not really be aware of it.

SO: Right. Yeah, I’ve heard this called the missing stair problem. It’s a metaphor for the scenario where, again, in your house or in your life, there’s a staircase with a stair missing and you just get used to it, right? You just climb the steps, hop over the missing stair, and keep going. But you bring a guest to your house and they keep tripping on the stairs because they’re not used to it, at which point they say, what is the deal with the step? And you’re like, yeah, well, you just have to jump over stair three because it’s not there. So the missing stair is this idea that you can get used to nearly anything, and the workaround just becomes “get used to jumping.”

AP: And it ties into, again, there’s technical debt there, but you’ve almost put a bandaid on it. You’re ignoring it; you’ve just gotten used to it. So really, there’s no way to prevent this? Is it preventable?

SO: I mean, if you staffed up your content ops organization to something like 130% of what you need for day-to-day ops and dedicated the extra 30%, or maybe 10%, but some extra percentage, to keeping things up to date, constantly cleaning up, updating, refactoring, and looking at what’s new, then maybe. But no, there’s no way to do it, and everybody is running so lean.

AP: I’m gonna translate that to a no. That is a long no. So yeah.

SO: And as a result, you make decisions and you make trade-offs, and that’s just kind of how it is. I think it’s important to understand the debt that you’re incurring, to understand what you’re getting yourself into. And I don’t want to beat this financial metaphor to death, but did you take out a reasonable loan, or are you with the loan sharks? How bad is this, and how bad is the interest going to be?

AP: Yeah, so there’s a lot to ponder here, and I’m sure a lot of people are listening to this and thinking, I have technical debt and I’ve never even thought about it that way. It is an unpleasant topic, but it is something that needs to be discussed, especially if you’re a person coming into an organization and inheriting something. You may not have had any say in the decisions that were made five or ten years ago, and things have changed so much that it might be why they’ve brought you in. So it is something that you’re gonna have to untangle.

SO: Yeah, sounds about right. So good luck with that. Call us if you need help, but sorry.

AP: Yeah, so if you do need help digging out of the pit of technical debt, you know where to find us. And with that, I’m going to wrap up. Thank you, Sarah. And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

SO: Thank you.

The post Renovation revelations: Managing technical debt (podcast) appeared first on Scriptorium.

Accelerate global growth with a content localization strategy https://www.scriptorium.com/2024/07/accelerate-global-growth-with-a-content-localization-strategy/ Mon, 15 Jul 2024 11:35:16 +0000 https://www.scriptorium.com/?p=22577 https://www.scriptorium.com/2024/07/accelerate-global-growth-with-a-content-localization-strategy/#respond https://www.scriptorium.com/2024/07/accelerate-global-growth-with-a-content-localization-strategy/feed/ 0 In episode 170 of The Content Strategy Experts podcast, Bill Swallow and Christine Cuellar dive into the world of content localization strategy. Learn about the obstacles organizations face from initial planning to implementation, when and how organizations should consider localization, localization trends, and more.

Localization is generally a key business driver. Are you positioning your products, services, what have you for one market, one language, and that’s all? Are you looking at diversifying that? Are you looking to expand into foreign markets? Are you looking to hit multilingual people in the same market? All of those factors. Ideally as a company, you’re looking at this from the beginning as part of your business strategy.

— Bill Swallow

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we are talking about content localization strategy. So maybe you’re starting to think about introducing a localization strategy. Maybe you’re hitting some pain points in your localization processes, all that good stuff we’re going to be talking about today. Hi, I’m Christine Cuellar.

Bill Swallow: And I’m Bill Swallow.

CC: Bill, thanks for being here today to talk about localization. Bill is our go-to localization expert, and localization has been coming up a lot. I’ve noticed, on the marketing side of things, a lot of SEO activity around localization. People seem to be searching for it and asking questions at a more introductory level, just beginning to think about the whole localization process. So that’s what we wanted to talk about today: give you the chance to have some upfront knowledge about what you could be getting into when introducing localization into your content strategy. And yeah, let’s talk about it with an expert. So thanks, Bill.

BS: Thank you.

CC: First things first, the most basic question, what is content localization strategy? So what do we mean by that?

BS: Okay, so I can kind of frame this in, I guess the same point of view as a content strategy, but basically you’re taking a look at your entire localization process from start to finish. Plus you’re looking at what are the systems that are involved? How are authors prepping the content for localization? Are they writing well upfront? What does the publishing preparation look like? How are you choosing your translators? Are you going to pure machine translation? Are you using live people to do the translation? Are you using people who are content experts? Are you using people who are market experts? So there are a lot of different factors there that all kind of get balled up into this grander strategy of how are you going to approach getting your content authored and translated appropriately in other regional markets.

CC: Yeah, okay. That makes sense. And taking a step back even further, can you walk me through the difference between localization and localization strategy?

BS: Sure. Localization itself is more of an action, whereas strategy is the planning around that action; I think that’s the best way to put it. So localization involves a bunch of different things. It involves the act of internationalization, which is prepping your content, your code, your product, whatever it is, to be delivered for multiple regional and language markets. And then you have the translation component of localization, which is actually getting things written, spoken, however, in other languages. And the strategy piece is bridging both of those and adding additional components so that you have a solid plan for every step in that process.

CC: Okay, yeah, that makes sense. And where do we step in? We here at Scriptorium, where do we sit?

BS: Generally, we at Scriptorium sit on the source content authoring side. We look at the overall content strategy, and we do look at a localization strategy as a component of that. They’re not separate; they’re very intertwined, and we need to take a look at both of them. So a lot of our clients do come to us because they have localization requirements.

And we have to account for those in the content strategy that we build for them. So we’re looking not only at the source content authoring process and what needs to happen there to get the job done, but we also have to look at where they are going with their content: how are they going to localize it, what do they need to localize, what processes do they have in place now? Are they working or not? Looking at systems, are they adequate or not? And look at the markets. Are they already reaching those markets? Do they need to do something different? How do we need to position the content as it moves through that funnel of production so that when it comes out the other side, it is ready for those markets? So they’re kind of intertwined there.

CC: Okay. Yeah. So when are organizations typically thinking about a content localization strategy?

BS: Well, localization is generally a key business driver. Are you positioning your content for one market, one language, and that’s all? Or are you positioning… I shouldn’t say just product, because it could be products, services, what have you. Are you looking at diversifying that? Are you looking to expand into foreign markets? Are you looking to reach multilingual people in the same market? All of those factors. So ideally as a company, you’re looking at this from the beginning as part of your business strategy. And what are you doing to… What are you producing? Who are you producing it for? How do they need to consume it? So as soon as you catch a whiff of those multilingual requirements, bells should be going off saying, “Hey, we need a plan for this.” More commonly, an organization might be producing for one market or for several markets, kind of doing things ad hoc: producing content, then sending it out to a translator. They get something back, they may polish it up, or it’s a finished product, and then they send it out. It’s a very time-consuming process. It’s a very costly process, and it’s very difficult to juggle when things will be done. Because if you don’t have a set process around things, and you don’t have an idea of how long things will take or what efficiencies you’re able to build up front, you’re throwing caution to the wind, just putting stuff out there and hoping that it comes back in time so that you can go to market with it. We’ve worked with clients who have said that generally it takes about nine months or so to get their localized product out the door and into the market after the English is done. And for a lot of those, we’ve brought that number down to three months, or even one month, depending on exactly what they’re producing and how they need to produce it, so-

CC: Yeah, it’s a huge difference.

BS: Looking at that… Oh, huge difference. And looking at that time to market, that’s perhaps more valuable than the cost that you’re dumping into putting a localization strategy or a content strategy together because you’re able to sell quicker into those markets. You’re not waiting for the opportunity to start seeing revenue come back from the initiatives that you’re taking to get stuff out there.

CC: Yeah. Yeah, that makes sense. And I feel like… So correct me if I’m wrong here, but in the global world that we live in, it feels like localizing products and getting them ready for new regions is a very… I think that would be something that executives think about from the get-go like, yes, of course we want our product ready for new regions and locations. But why is the… It sounds like maybe the content piece of that is not thought about or maybe left behind until it’s an absolute emergency. Would you say that that’s… First of all, is that accurate?

BS: Sadly, I’d say yes.

CC: Okay.

BS: Content is often an afterthought in general, whether we’re talking about producing stuff just in your native language for a native market. Localization is usually even more of an afterthought because it’s like, oh, well, we wrote it in English, we’ll just have someone translate it. And by then you’re waiting until that product is done and then sending it to somebody else who’s looking at it going, “I can’t make sense of this. It’s not written well. And I’m going to take my best guess at how to translate this.” It could take months to get that back.

CC: So maybe organizations see the value in having their products and services available in other markets, but they don’t necessarily think of all of the content localization pieces that are involved in getting that out the door.

BS: No, and it’s similar for pretty much anyone trying to get something done. For example, I really want to put a new patio in the back of my house. I even have an idea of exactly how that should go in. But I don’t have the time, I don’t have the materials needed to do it, and I’d much rather rely on somebody else who knows what they’re doing to put it in the correct way, so it’s not graded improperly, so that there aren’t uneven portions that people will trip over, and so forth. It’s the same thing with localization. People who are running a company or starting a company may have an idea that yes, they need to get from point A to point B to point C to point D. They don’t know the steps along that path, and they need some help figuring out that it’s not just that you write your English content, throw it over to somebody else, and they send it back. It’s a more intricate process. You have some systems in place that will manage that handoff, that will allow people to gate the content and proof it and make sure it’s correct before it goes anywhere. And you may have some other efficiencies built in that allow you to automatically format things when the time comes to actually produce. So there are a lot of bits and pieces that people just generally don’t think about, because it’s not in their wheelhouse.

CC: Yeah, they can’t know what they don’t know.

BS: Exactly.

CC: Okay. So it sounds like most organizations realize that this is a problem once they’re actually trying to get their product out the door and into a new market, into a new region. What are some obstacles to getting a content localization strategy set up? I’m sure that one issue is probably like, oh, you’re in emergency mode and we just need to get this product out the door. That might present a challenge in and of itself.

BS: Absolutely.

CC: Yeah. Are there other obstacles as well to getting a more future-focused strategy in place?

BS: Oh, that one is a good one. That is the first hurdle to get over.

CC: Is the emergency mode.

BS: So being able to recognize or realize that you’re in emergency mode, getting out of that mindset, and saying, okay, this doesn’t have to be a forever problem of just waiting and hoping for good quality coming out in the end. Once you’re able to break that mindset and start looking forward, then we start hitting other obstacles. One of them is going to be funding, because there will be systems involved, there will be personnel required, there will be processes that need to change, and so forth. And that will certainly cost a lot upfront. But you’re going to see that return on investment in a pretty quick amount of time. We’ve seen one company make their investment back within a year, but they were producing an insane number of languages already, and they just needed to tidy up their process. And again, by bringing that window in from nine months to about a month and a half or so to get their localized stuff out, they were able to quickly realize that return on investment. But another obstacle is buy-in, because you have a lot of people who are busy doing their job, and you’re suddenly telling them that they need to change how they do their job. It might be abandoning the tools that they like to use, writing in a different way, looking at publishing in a different way, and interacting with people who they normally don’t interact with on a day-to-day basis. So your source authors are interacting with a localization manager internally who needs to send stuff out to translators, or your writers are interacting with translators to explain what they had written, so that the translator has a definitive idea of what it is and how to translate it for the market that they’re translating for. And then of course, you have the obstacle of governance, and change management comes along with that.
You need to make sure that for any of the changes you introduce, people are following the new way of doing things and aren’t falling back to old bad habits, or even habits that were good at the time. And you need to make sure that you have these gating processes: once something is written in English, you have a formal review to make sure it’s correct and written appropriately. That goes out to translation. The translators have their own gating process of making sure they receive all the files, that they understand the content, and that they have all the supporting information they need to help them translate and localize this information for that market. Then of course, they do their own quality checks. It comes back, and you make sure that there’s a final review on the company side to confirm the translation seems good. And then you’re able to publish and deliver. So it still sounds like a lot of gating factors, but once you get things going and figure out where you can expedite and make things a lot easier, you start to bring in that entire timeline.
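The gating sequence described above (source review, translation handoff, translation QA, final company review, publish) can be sketched as a simple linear pipeline. This is purely an illustrative model of the workflow, not any actual tool; all the names are made up:

```python
# Illustrative sketch of a localization gating workflow.
# Each gate is a named checkpoint; content only advances when
# every earlier gate has already passed.

GATES = [
    "source_review",         # formal review of the English source
    "handoff_check",         # translator confirms files and supporting info
    "translation_qa",        # translator's own quality checks
    "final_company_review",  # company-side review of the returned translation
    "publish",
]

def advance(status: dict, gate: str) -> dict:
    """Mark a gate as passed, but only if all earlier gates have passed."""
    idx = GATES.index(gate)
    if not all(status.get(g, False) for g in GATES[:idx]):
        raise ValueError(f"Cannot pass {gate!r}: earlier gates incomplete")
    return {**status, gate: True}

status = {}
for gate in GATES:
    status = advance(status, gate)

print(all(status[g] for g in GATES))  # True once every checkpoint has passed
```

The point of the model is the ordering constraint: skipping straight to publishing without the upstream reviews raises an error, which is exactly the discipline the gating process enforces on a real content pipeline.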

CC: Yeah, that makes sense. You mentioned buy-in, and so I could see how if people feel like their workload’s being increased by suddenly needing to talk to more people, coordinate between more departments or even just have more things on their radar, I could see how that could create a lot of, oh, I don’t know if I want to go in this direction. What are some ways… And that’s probably one of the… As you mentioned, that’s just one of a few buy-in challenges. What are some of the ways that you maybe win people over or show people how this can benefit their work life versus just make it harder?

BS: That’s a good question. I think that authors in general want to understand where their content is going and who is consuming it. We’re talking about corporate content: everything from website content to product manuals to troubleshooting tips and training materials. Even though it belongs to the company, a lot of authors tend to have a kind of, I guess, personal pride built around what they write.

CC: Yeah, okay. Yeah, that makes sense.

BS: So knowing who is consuming it down the road, and the reason why you have these additional checkpoints and processes in place, will kind of help, I think, get a lot of them around to the idea of, yeah, this is a good thing and I’m looking forward to helping any way I can. Because the last thing they want is to have something written completely correctly in English and have it go out to, let’s say, a market in Denmark, and the content was translated incorrectly because the translator maybe didn’t understand what something meant and gave it a different term, which had a different meaning in that market.

CC: Yeah. And I could also see from a safety standpoint that it could be really dangerous, too, if you’re not properly translating instructions for high-stakes content, medical devices, stuff like that. Just like you do in English, you want that content to be accurate and understandable. Because if it’s not accurate, of course it’s wrong and people could get hurt. But people could also get hurt if they don’t understand it, even if it’s totally accurate but just hard to understand. That presents, I’m sure, a lot of dangerous situations where people could get hurt and your company is liable. So yeah, it makes sense that you would really want to have a good process in place.

BS: Oh, absolutely. And even more along those lines, the regulations that we have to adhere to here in the US are somewhat different to… Very different to anywhere else in the world. There are different directives in place depending on where you are regionally, things that have to be included, that have to be said a very specific way. So I guess the easiest way to look at it is that there are more legal ramifications in the US. You could get sued if something is wrong. Whereas if you go over to the UK, it’s generally more that there’s a directive you have to follow, and you simply cannot release in that market if, for example, your machinery content does not meet that specific directive’s requirements. So there’s a slightly different approach. There’s still a legal ramification if things go wrong, but there’s also another set of requirements that needs to be met before you even start worrying about the legal stuff.

CC: And are most organizations aware of those kind of requirements when they start trying to get into a new market?

BS: Some of them might be, but again, if you’re in one particular region, chances are that’s the region you’ve grown up with and that’s the region you understand. And there’s been very little attention paid to what the requirements are in other geographic regions, other countries, and so forth. So I can’t say whether it’s common or not. But in general, you know what you know, and when you’re looking to move to a foreign market, it’s a foreign context. You’re going to have very little insight into what that foreign market demands, by its very nature. As a company moves into a new language market, a new geographic market, they’re going to learn things as they go, and they’re going to bring that knowledge back and refine how things are being done so that it also satisfies that new requirement. And it’s going to be an iterative process until they really get their arms around it. And again, going back to a localization strategy for your content, you can start putting those feelers out. Because if one market has one set of requirements, it’s like, wait a minute, now we want to go to three. What are the requirements for the other two before we even start thinking in that direction? So you’re able to build upon the strategy that you’re developing. I mean, we’re not experts in all the requirements for every single market on the face of the earth. I can say that outright. But we can help companies start to identify what they need to look into before they start running.

CC: So since we mentioned one of the reasons this topic came about was seeing some SEO search trends, people trying to get more information on localization. What other trends are you seeing in localization right now?

BS: I think the big one is still going to be machine translation. It’s continually evolving and getting smarter; still not, I would say, better than a human. It’s certainly quicker, but we’re getting there. We talk about AI a lot, and here’s the obligatory nod to AI for this podcast. But when we talk about AI, and I think I mentioned this on another podcast already, when you look at machine translation, that was really like AI Alpha or AI Beta: it was already using an algorithm to start putting together translations for written text. So with AI in the mix now, we’re getting a lot more, I guess, interesting results, a lot more targeted results with machine translation. I still don’t think it’s a perfect solution, and it will certainly need some proofreading, but it’s come a long way. And I think that trend is certainly not going to fall off the radar anytime soon. In fact, recently Sarah O’Keefe had a podcast with Sebastian Göttel about strategies for AI and technical documentation, and they actually recorded that podcast in German. And they used AI to translate and voice augment into English. So not only were things machine translated from German into English, but the German speech was then synthetically reproduced in English, which is just really cool.

CC: Yeah, it’s super cool to listen to, and we’ll link those in the show notes as well. There are two versions, the German version and the English version. But yeah, you’re right, it was a super cool process. You had mentioned earlier there was a human piece to it that was still needed, because it was originally recorded in German, then we got the German transcript and translated that into English. At first it was Google Translate, just to get it all done, but then Sarah needed to go and check it, because she speaks both English and German. We needed that human element to make sure that the translation was correct. Because like you were saying, you can’t just put it into a machine and say cool, yay, it’s done. We need the human to make sure that it was actually translated properly and that things make sense. And we did notice, once Sebastian’s synthetic audio was created in English, a lot of the prompts or questions were just different lengths. The English version was sometimes shorter or sometimes longer for the exact same question; the languages are just different. So it was a really cool experiment, and it does open up some interesting possibilities for localization. And we’ve never been able to have a German and English podcast before, so that’s kind of cool.

BS: Yeah, no, it was very cool. I sat in the back of the room just watching the entire process, but it was definitely something I was quite interested in seeing. There was a lot of editing of the English translation because, again, it was pure machine translation and it needed some help. But once that was done, the synthetic audio really came right together, and I was impressed by how that happened.

CC: Yeah. And it’s so interesting because it’s definitely… It sounds like Sebastian, but then also it sounds not quite human, but it’s really close. It’s really interesting. But it did-

BS: Very uncanny valley.

CC: Yeah, it was, and I only speak English. I don’t speak German, so it made that podcast accessible to me. I was able to listen to it, and it does present some interesting opportunities, but as always with AI, the human element was definitely needed. It was very important to make sure that the humans at the other end of the screen could eventually consume it.

BS: Oh, yeah.

CC: Awesome. Well, Bill, thank you so much. We covered a lot of ground today, and we really appreciate it. This was really helpful, and yeah, thanks for being on the show.

BS: Yeah, thanks.

CC: And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Accelerate global growth with a content localization strategy appeared first on Scriptorium.

Strategies for AI in technical documentation (podcast, English version), published Mon, 24 Jun 2024: https://www.scriptorium.com/2024/06/strategies-for-ai-in-technical-documentation-english-version/ In episode 169 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Sebastian Göttel of Quanos engage in a captivating conversation on generative AI and its impact on technical documentation. To bring these concepts to life, this English version of the podcast was created with the support of AI transcription and translation tools!

Sarah O’Keefe: So what does AI have to do with poems?

Sebastian Göttel: You often have the impression that AI creates knowledge; that is, creates information out of nothing. And the question is, is that really the case? I think it is quite normal for German scholars to not only look at the text at hand, but also to read between the lines and allow the cultural subtext to flow. From the perspective of scholars of German literature, generative AI actually only interprets or reconstructs information that already exists. Maybe it’s hidden, only implicitly hinted at. But this then becomes visible through the AI.

How this podcast was produced:

This podcast was originally recorded in German by Sarah and Sebastian, and then Sarah edited the audio. Sebastian used Whisper, OpenAI’s speech-to-text tool, to transcribe the German recording, followed by necessary revisions. The revised German transcript was machine translated into English via Google Translate, and then we cleaned up the English transcription.

Sebastian used ElevenLabs to generate a synthetic audio track from the English transcript. Sarah re-recorded her responses in English and then we combined the two recordings to produce the composite English podcast.
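The production chain above alternates a machine step with a human check at every stage: transcription then revision, machine translation then cleanup, synthesis then mixing. As a rough sketch, the overall shape of the process looks like this; every function here is a hypothetical stand-in for the real tools named (Whisper, Google Translate, ElevenLabs), not their actual APIs:

```python
# Hypothetical sketch of the episode's production pipeline.
# Each stage pairs a machine step with a human check, mirroring the
# human-in-the-loop pattern discussed later in the episode.

def run_pipeline(source: str, stages) -> str:
    """Thread an artifact through (machine_step, human_check) pairs."""
    artifact = source
    for machine_step, human_check in stages:
        artifact = human_check(machine_step(artifact))
    return artifact

# Stand-in steps: real work would call Whisper, Google Translate,
# and ElevenLabs; here each step just records what happened.
stages = [
    (lambda a: a + " -> transcribed(de)", lambda t: t + " -> revised"),
    (lambda t: t + " -> translated(en)",  lambda t: t + " -> cleaned"),
    (lambda t: t + " -> synthetic-audio", lambda a: a + " -> mixed"),
]

print(run_pipeline("episode.de.wav", stages))
```

The takeaway is structural: no machine step in the chain ships its output directly; a human check sits between every automated stage and the next.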

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Today’s episode is available in English and German. Since our guest works with AI in German-speaking countries, we had the idea to create this podcast in German. The English version was then put together with AI support, particularly synthetic audio. So welcome to the Content Strategy Experts Podcast, today offered for the first time in German and English. Our topic today is Information compression instead of knowledge creation: Strategies for AI in technical documentation. In the German version, we tried to put it all together in one nice long word, but it didn’t quite work. Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about best practices for AI and tech comm with our guest Sebastian Göttel of Quanos. Hello everyone, my name is Sarah O’Keefe. I am the CEO here at Scriptorium. My guest is Sebastian Göttel. Sebastian Göttel has been working in the area of XML and editorial CCMS systems in technical documentation for over 25 years. He originally studied computer science with a focus on AI. Currently, he is Product Manager for Schema ST4 at Quanos, one of the most widely used editorial systems in machinery and industrial engineering in the German-speaking regions. He is also active in Tekom and, among other things, contributed to version 1 of the iiRDS standard. Sebastian lives with his wife and daughter, three cats, and two mice just outside Nuremberg. Sebastian, welcome. I look forward to our discussion. In English, we say create once, publish everywhere. This is about recording once and outputting multiple times. So, off we go. Sebastian, our topic today is, as I said, information compression instead of knowledge creation, and how this strategy could be used for AI in technical documentation. So please, explain.

Sebastian Göttel: Yes, first of all thank you for inviting me to the podcast. It’s not that easy to impress a 14-year-old daughter. And I thought, with this podcast I have a chance. So I told her that I would be talking about AI on an American podcast soon. And the reaction was a little different than I expected. Youuuuu will you speak English? You can put quite a lot of meaning into a single “uuuu” like that. And that’s why I’m glad that I can speak German here. But, and this is now the transition to the topic, what will the AI make of the “You will speak English”? How does it want to pronounce that correctly in text-to-speech or translate it into another language? And that’s what I think our conversation will be about today. If we want to understand how AI understands us, but also how we can use it in technical documentation, then we have to talk about information compression, but also invisible information. “You will speak English?” Can the AI conceptualize that my daughter doesn’t trust me to do this, or simply finds my German accent in English gross? Well, if the AI can understand that, then it is new information, or actually information that was already there and that both father and daughter were actually aware of during the conversation. I find it quite exciting that German scholars have often dealt with this. Namely, what is in such a text, and what is meant in the text? What’s between the lines? And when you think back to your school days, these interpretations of poems immediately come to mind.

SO: So poems. And what does AI have to do with poems?

SG: Yes, well, you often have the impression that AI creates knowledge; that is, creates information out of nothing. And the question is, is that really the case? I think it is quite normal for German scholars to not only look at the text at hand, but also to read between the lines and allow the cultural subtext to flow. And from the perspective of scholars of German literature, generative AI actually only interprets or reconstructs information that already exists. Maybe it’s hidden, only implicitly hinted at. But this then becomes visible through the AI. Wow, I never thought I would refer to German literature scholarship in a technical podcast.

SO: Yes, and me neither. But the question remains, how does AI work and why does it work? And then why do these problems exist? What is our understanding of the situation today?

SG: Well, I think we’re still pretty impressed by generative AI, and we’re still trying to understand what we’re actually perceiving and what’s happening there. There are things that just make our jaws drop. And then there are those epic fails again, like this recent representation of World War II German soldiers by Gemini, Google’s generative AI. According to our current understanding, the soldiers were politically correct. And there were, among other things, Asian-looking women with steel helmets. I always like to compare this with the beginnings of navigation systems. There were always these anecdotes in the newspaper about someone driving into the river because their navigation system mistook the ferry line for a bridge. It was relatively easy to fix such an error in the navigation system. It was clear why the navigation system made the mistake. Unfortunately, with generative AI it’s not that easy. We don’t know, actually, we haven’t even really understood how these partially intelligent achievements come about. But the epic fails make us aware that it’s not an algorithm, but a phenomenon that seems to emerge if you pack many billions of text fragments into a matrix.

SO: And what do you mean here by “emerge”?

SG: That is a term from natural science. I once compared it to water molecules. A single water molecule isn’t particularly spectacular, but if, for example, you’re sailing in a storm on the Atlantic or hitting an iceberg, you get a different perspective. Because if you put many water molecules together, completely new behavior emerges. And it took physics and chemistry many centuries to partially unravel this. And I think we will, maybe not for quite as long, but we will have to do a lot more research into generative AI in order to understand a little more about what exactly is happening. And I think the epic fails should make us aware that we would currently do well not to blindly place our fate in the hands of a Large Language Model. I think the human-in-the-loop approach, where the AI makes a suggestion and then a human looks at it again, remains the best mode for the time being. The translation industry, which feels like it is a few years ahead of the world when it comes to generative AI or neural networks, has recognized this quite cleverly and implemented it profitably.

SO: And if translation is the model, what does this mean for generative AI and technical documentation?

SG: That’s a good question. Let’s take a step back. So at the beginning of my working life, there was a revolution in technical documentation, these were structured documents; SGML and XML. This has been known for several decades now, and it is still not used in every editorial team. And that means we now have these structured documents and the other thing, which are the nasty unstructured documents. I always thought that was a bit of a misnomer because unstructured documents are actually structured. Well, at least most of the time. There’s a macro level where I have a table of contents, a title page, and an index. There are chapters. Then there are paragraphs, lists, and tables and that goes down to the sentence level. I have lists, prompts, and so on. It’s not for nothing that some linguists call this text structure. And if I now approach XML, the beauty of XML is that I can now suddenly make this implicit structure explicit. And the computer can then calculate with our texts. Because if we’re being honest, in the end, XML is not for us, but for the machine.

SO: Is it possible then that AI ​​can discover structures that, for us humans, have so far only been expressed through XML?

SG: Yes. Well, I recently looked into Invisible XML. There you can overlay patterns onto unstructured text and they become visible as XML. Very clever. I think generative AI is a kind of Invisible XML on steroids. The rules aren’t as strict as in Invisible XML, but genAI also understands linguistic nuances. I found it very exciting: a customer of ours fed unstructured PDF content into ChatGPT, that is, unstructured content from the PDF, in order to then convert it to XML. The AI was surprisingly good at discovering the invisible structure that was hidden in the content and converted it to XML really well. So that was impressive. When AI now appears to create information out of nothing, I think it is more likely that it makes existing but hidden information visible.
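The idea of surfacing structure that is already implicit in a text can be illustrated without an LLM at all. The toy converter below, which assumes numbered headings followed by plain paragraphs, mirrors in miniature what Invisible XML or a generative model does at scale; the input sample and element names are invented for the example:

```python
import re
import xml.etree.ElementTree as ET

def to_xml(text: str) -> ET.Element:
    """Make implicit structure explicit: lines like '1. Title' become
    <section> elements; other non-blank lines become <para> children."""
    root = ET.Element("doc")
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r"^\d+\.\s+(.*)", line)
        if m:
            section = ET.SubElement(root, "section", title=m.group(1))
        elif section is not None:
            ET.SubElement(section, "para").text = line
    return root

sample = """1. Safety
Read all instructions before use.

2. Installation
Mount the unit on a flat surface."""

doc = to_xml(sample)
print([s.get("title") for s in doc.findall("section")])  # ['Safety', 'Installation']
```

The structure was always in the sample text; the converter only writes it down as tags. A generative model does the same with far looser rules, tolerating the linguistic variation this regex-based sketch cannot.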

SO: Yes, I think the problem is that this hidden structure is there in some documents, but in others, there’s what we call “crap on a page” in English. There’s no structure. And from one document to another, there is no consistency; they are completely different. Writer 1 and Writer 2 write and never talk. And so if the AI now creates an entire chapter and an outline from a few keywords, how does that work? How does that fit together?

SG: Yes, you’re right. So far we’ve been talking about taking a PDF and then adding XML to it. But if I’m put on the spot and just throw in a few keywords, ChatGPT suddenly writes something. I think the idea still applies that this is actually hidden information. It might sound a bit daring at first, but there’s nothing new, nothing completely surprising. If I just ask, let’s say, ChatGPT to give me an outline for documentation for a piece of machinery, then something comes out, and I think most of our listeners would say the same thing: this is nothing new. This is hidden information contained in the training data, which is easily made visible through the query. Because ultimately, generative AI creates this information from my query and this huge amount of training data. And the answer is chosen so that it fits my query and the training data well. It creates a synthetic layer over the top. And in the end, the result is not net new information, but hopefully the necessary information, delivered in a way that’s easier to process further. Either like the example with PDF, enriched with XML, or maybe I now have an outline. I imagine it’s a bit like a juicer. The juicer doesn’t invent juice, it just extracts it from the oranges.

SO: Making information easier to process sounds almost like a job description for technical writers. And what about other methods? So if we now have metadata or knowledge graphs, what does that look like?

SG: That’s right, in addition to XML, these are also really important. So metadata, knowledge graphs. I find that metadata condenses information into a few data points and the knowledge graphs then create the relationships among these data points. And this is precisely why knowledge graphs, but also metadata, make invisible information visible. Because the connections that were previously implicit can now be understood through the knowledge graphs. And that can be easily combined with generative AI. At the beginning, the knowledge graph experts were a bit nervous, as you could tell at conferences, but now they’re actually pretty happy that they’ve discovered that generative AI plus knowledge graphs is much better than generative AI without knowledge graphs. And of course, that’s great. By the way, this isn’t the only trick where we have something in the technical documentation that helps generative AI get going. If you want to make large knowledge bases searchable with Large Language Models, you can do that today with RAG, or Retrieval Augmented Generation. And this means you can combine your own documents with a pre-trained model like ChatGPT very cost-effectively. If you now combine RAG with a faceted search, as we usually have in the content delivery portals in technical documentation, then the results are much better than with the usual vector search, because in the end it is just a better full-text search. That’s another possibility where structured information that we have can help jump-start AI.
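[Editor’s note: the retrieval step of RAG with faceted narrowing can be sketched as follows. This is a toy stand-in: plain term overlap replaces the vector search, and the facet names, document texts, and `retrieve` function are invented for illustration; a real system would embed the query, rank chunks by vector similarity, and pass the top hits to an LLM.]

```python
# Facet filters from the content delivery portal narrow the candidate
# set before ranking, so a near-duplicate document for the wrong
# product can never win.

def score(query: str, text: str) -> int:
    # Crude relevance: number of shared words (stand-in for vector search).
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(docs, query, facets=None, top_k=1):
    candidates = [d for d in docs
                  if not facets or all(d["meta"].get(k) == v
                                       for k, v in facets.items())]
    ranked = sorted(candidates, key=lambda d: score(query, d["text"]),
                    reverse=True)
    return ranked[:top_k]

docs = [
    {"meta": {"product": "PumpA", "type": "maintenance"},
     "text": "Replace the seal on the pump every 500 hours"},
    {"meta": {"product": "PumpB", "type": "maintenance"},
     "text": "Replace the seal on the pump every 2000 hours"},
]

# Both documents match the query almost equally; the facet
# disambiguates by product before ranking.
hit = retrieve(docs, "replace the seal", facets={"product": "PumpB"})
print(hit[0]["text"])
```

This is the point made above: the structured metadata does work that pure full-text or vector similarity cannot.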

SO: Is it your opinion that structured information will not become obsolete through AI, but will actually become more important?

SG: My impression is that the belief has taken hold that structured information is better for AI. I think we’re all a bit biased, naturally. We have to believe that; these are the fruits of our labor. It’s a bit like apples. The apple from an organic farmer is obviously healthier than the conventional apple from the supermarket. I think this is scientific fact. But in the end, any apple is better than a pack of gummy bears. And that’s what can be so disruptive about AI for us. Because at the end of the day, we are providing information. And if users get information that is sufficient, that is good enough, why should they go the extra mile to get even better information? I don’t know.

SO: Okay, so I’m really intrigued by this gummy bear analogy and I want to hear a little bit more about that. But why is your view of the tech comm team’s role so, let’s say, pessimistic?

SG: I think my focus has gotten a little wider recently; I’m not really looking just at technical documentation. When it comes to technical documentation, we are lost without structured data. It will not work. But in the bigger picture, at Quanos we not only have a CCMS, we also create a digital twin for information. I’m in all these working groups as the guy from the tech doc area. And I always have to accept that our particularly well-structured information from tech doc, the one with the extra vitamins and secondary nutrients, is actually the exception out there when we look at the data silos that we want to combine in the info twin. When I was young, I believed that we had to convince others to work the way we do in tech docs. That would have been really fantastic. But if we’re honest with ourselves, it just doesn’t work. The advantages that XML provides for technical documentation are too small in the other areas, and for the individuals there, to justify a switch. The exceptions prove the rule. As a result, tons of information is out there locked up in these unstructured formats. And it can only be made accessible with AI. That will be the key.

SO: And how do we do that? If XML isn’t the right strategy, what does that look like?

SG: Well, let’s take an example. Many of our customers build machinery, so let’s take a look at the supplier documentation that comes in. There are several dozen PDFs for each order. And of course the editor has a checklist and knows what to look for in this pile of PDFs: the test certificate, the maintenance table, parts lists, and so on. And even though the PDFs are completely “unstructured” as compared to XML files, we humans are able to extract the necessary information. And the exciting thing about it is that anyone can actually do it. You don’t have to be a specialist in bottling systems or industrial pumps or sorting machines. If you have an idea of what a test certificate, a maintenance table, or a parts list is, then you can find it. And here’s the kicker: the AI can do that too.

SO: Ahh. And so in this case are you more concerned with metadata…or something else?

SG: No, you’re right. This is in fact about metadata and links. I find it fascinating what this does to our language usage. We have gotten used to saying that we enrich the content with metadata. But in many cases we have simply made the invisible structure explicit. No information was added. Nothing became richer, just clearer. But now imagine that your supplier didn’t provide a maintenance table. Then you need to start reading, understand the maintenance instructions, and extract the necessary information. And that’s tedious. Even here, AI can still provide support. But how well depends on how clearly the maintenance procedures are described. The more specific background knowledge is necessary, the more difficult it becomes for the AI to provide assistance.
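[Editor’s note: the checklist idea, tagging each supplier document with a document-type label, can be sketched as a keyword matcher. The cue phrases and type names below are invented for illustration; the point is only that the label makes structure explicit without adding information, exactly as described above. An LLM would do the same job more robustly across varied wording.]

```python
# Assign a doc-type label to each supplier document by matching cue
# phrases, the way an editor scans a pile of PDFs against a checklist.

CUES = {
    "test_certificate": ["test certificate", "certificate of conformity"],
    "maintenance_table": ["maintenance table", "maintenance interval"],
    "parts_list": ["parts list", "spare parts"],
}

def classify(text: str) -> str:
    lowered = text.lower()
    for doc_type, phrases in CUES.items():
        if any(p in lowered for p in phrases):
            return doc_type
    return "unknown"

print(classify("Spare parts for sorting machine SM-200"))   # parts_list
print(classify("Maintenance interval: every 500 hours"))    # maintenance_table
```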

SO: What does that look like? Do you have an example or use case where AI doesn’t help at all?

SG: It depends on contextual knowledge. I once received parts of a risk analysis from a customer. And her question was, “Can you use AI to create safety messages from this?” And I said, “Sure, look at the risk analysis and then look at what the technical writers made of it.” And they were exemplary safety messages. But there was so little content in the risk analysis that, with the best will in the world, you couldn’t do anything with artificial intelligence. That end result was only possible because the technical writers had an incredibly good understanding of the product and also had the relevant industry standards in the back of their minds. The information was not hidden in the input, but in the contextual knowledge. And that’s so specialized that it’s of course not available in a Large Language Model.

SO: In this use case, you don’t see any possibility for AI at all?

SG: Well, at least not for a generic Large Language Model. So something like ChatGPT or Claude, they have no chance. There is an opportunity in AI to specialize these models again. You can fine-tune this with context-specific content. But we don’t yet know at the moment whether we normally have enough content. There are some initial experiments. But let’s think back to the water molecules. We need quite a few of them to make an iceberg or even a snowman. Ultimately, you have to ask which supporting materials are needed from which point of view, and fine-tuning is really expensive. So there are costs. It takes a long time. Performance is also an issue. And how practical is this approach? Do we have training data? So, given all these aspects, it is still unclear what the gold standard is for making a generic large language model usable for content work in very specific contexts. We just don’t know today.

SO: Can you already see or predict how generative AI will change or must change technical documentation?

SG: I really think it’s more like looking into my crystal ball. So it’s not that easy to estimate which use cases are promising for the use of AI in technical documentation. As a rule, you have a task where a textual input needs to be transformed into a textual output according to a certain standard. And it used to be garbage in, garbage out. In my opinion, the Large Language Models change this equation permanently. Input that we were previously unable to process automatically due to a lack of information density, we can now enrich it with universal contextual knowledge in such a way that it becomes processable. Missing information cannot be added. We’ve discussed that now. But these unspoken assumptions, in fact, we can pack them in. And that helps us in many places in technical documentation, because one of the ways good technical documentation differs from bad documentation is that fewer assumptions are necessary in order to understand the text or if you want to process it automatically. And that’s why I find condensing information instead of creating knowledge to be a kind of Occam’s Razor. I look at the assignment. If it’s simply a matter of making hidden information visible or putting it into a different form, then this is a good candidate for generative AI. What if it’s more about refining the information by using other sources of information? Then it becomes more difficult. If I now have this information, this other information in a knowledge graph, if it is already broken down there, then I can explicitly enrich the information before handing it over to the Large Language Model. And then it works again. But if the information, for example, the inherent product knowledge, is in the editor’s head, as was the case with my client’s risk analysis, then the Large Language Model simply has no chance. It won’t generate any added value. Then you may have to rethink your approach. Can you divide the task somehow? 
Maybe there is a part where this knowledge is not necessary, and an upstream or downstream process where I can optimize something with AI. And I think that’s where the mother lode of opportunities lies. This art of distinguishing what is possible from what is impossible will be more of an engineering art, and it will be the factor in the coming years that decides whether generative AI is of use to me or not.
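[Editor’s note: the triage described in this answer can be written down as a small decision function. The categories and rules are a simplified reading of the conversation, not a formal method, and the function name and return strings are invented for illustration.]

```python
# Occam's razor for genAI tasks, as sketched in the conversation:
# hidden-in-the-input -> good fit; external info in a knowledge
# graph -> enrich first; knowledge only in experts' heads -> poor fit.

def triage(needs_external_info: bool, external_info_in_graph: bool) -> str:
    if not needs_external_info:
        # Only hidden information must be made visible or reshaped.
        return "good candidate: let the LLM transform the input"
    if external_info_in_graph:
        # Break the external knowledge out explicitly before generation.
        return "enrich input from knowledge graph, then use the LLM"
    # The missing knowledge exists only in people's heads.
    return "poor candidate: split the task or keep a human in the loop"

print(triage(False, False))
print(triage(True, True))
print(triage(True, False))
```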

SO: And what do you think? Of use, or not of use?

SG: I think we’ll figure it out. But it will take much longer than we think.

SO: Yes, I think that’s true. And so thank you very much, Sebastian. These are really very interesting perspectives and I’m looking forward to our next discussion, when in two weeks or three months there will be something completely new in AI and we’ll have to talk about it again, yes, what can we do today or what new things are available? So thank you very much and see you soon! 

SG: … soon somewhere on this planet.

SO: Somewhere.

SG: Thank you for the invitation. Take care, Sarah.

SO: Yes, thank you, and many thanks to those listening, especially for the first time in the German-speaking areas. Further information about how we produced this podcast is available at scriptorium.com. Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. 

The post Strategies for AI in technical documentation (podcast, English version) appeared first on Scriptorium.

Strategien für KI in der technischen Dokumentation (podcast, Deutsche version) https://www.scriptorium.com/2024/06/strategien-fur-ki-in-der-technischen-dokumentation-deutsche-version/ Mon, 24 Jun 2024 06:00:49 +0000 https://www.scriptorium.com/?p=22544 https://www.scriptorium.com/2024/06/strategien-fur-ki-in-der-technischen-dokumentation-deutsche-version/#respond https://www.scriptorium.com/2024/06/strategien-fur-ki-in-der-technischen-dokumentation-deutsche-version/feed/ 0 Episode 169 is available in English and German. Since our guest Sebastian Göttel works on AI in the German-speaking world, we had the idea of recording this podcast in German. The English version was then cobbled together with AI support.

Sarah O’Keefe: What does generative AI have to do with poem interpretation?

Sebastian Göttel: Yes, well, you often get the impression that AI creates knowledge, that it produces information out of nothing. And the question is: is that really true? For scholars of German literature, I think it’s quite normal not only to look at the text at hand, but also to read between the lines and bring in the cultural subtext. And from that perspective, generative AI actually only interprets or reconstructs information that already exists. It may be hidden, only implied. But the AI then makes it visible.

Related links:

LinkedIn:

Transcript:

Sarah O’Keefe: Today’s episode is available in English and German. Since our guest works with AI in the German-speaking world, we had the idea of producing this podcast in German. The English version was then cobbled together with AI support. So, welcome to the Content Strategy Experts Podcast, today for the first time in German. Our topic today is information condensation instead of knowledge creation: strategies for AI in technical documentation. We tried to bring all of that together into a single word, but that didn’t quite work out. Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about best practices for AI and TechCom with our guest Sebastian Göttel of Quanos. Hello, my name is Sarah O’Keefe. I’m the CEO here at Scriptorium. My guest is Sebastian Göttel. Sebastian has worked for over 25 years with XML and component content management systems in technical documentation. He originally studied computer science with a focus on AI. He is currently the product manager at Quanos for Schema ST4, one of the most widely used CCMSs in machinery and plant engineering in the DACH region. He is also active in tekom and contributed to version 1 of the iiRDS standard, among other things. Sebastian lives with his wife and daughter, three cats, and two mice just outside Nuremberg. Sebastian, welcome. I’m looking forward to this conversation. In English we say create once, publish everywhere; here it’s record once, deliver twice. So, let’s go. Sebastian, our topic today is, as I said, information condensation instead of knowledge creation, and how this strategy could be used for AI in technical documentation. So, please explain.

Sebastian Göttel: Yes, first of all, thank you for the invitation to the podcast. It’s not so easy to impress a 14-year-old daughter. And I thought, with this podcast I have a chance. So I told her that I would soon be speaking about AI on an American podcast. And the reaction was a bit different than I expected: “Youuu are going to speak English?” You can pack quite a lot of meaning into a single “youuu.” So on the one hand, I’m glad I get to speak German here. But, and this is the transition to our topic, what will the AI make of “Youuu are going to speak English?” How will it pronounce that correctly in text-to-speech, or carry it over into another language? And that, I believe, is what our conversation today will be about. If we want to understand how AI understands us, but also how we can use it in technical documentation, then we have to talk about information condensation, but also about invisible information. “Youuu are going to speak English.” Can the AI reconstruct that my daughter doesn’t think I can pull it off, or that she simply finds my German accent in English dreadful? Well, if the AI can reconstruct that, is it new information, or actually information that was already there, that both father and daughter were aware of in the conversation? I find it quite fascinating that scholars of German literature have been dealing with this for a long time: namely, what is written in a text and what is meant in the text? What’s between the lines? And when you think back to your school days, those poem interpretation assignments immediately come to mind.

SO: So, poems. What does generative AI have to do with poem interpretation?

SG: Yes, well, you often get the impression that AI creates knowledge, that it produces information out of nothing. And the question is: is that really true?

For scholars of German literature, I think it’s quite normal not only to look at the text at hand, but also to read between the lines and bring in the cultural subtext. And from that perspective, generative AI actually only interprets or reconstructs information that already exists. It may be hidden, only implied. But the AI then makes it visible. Wow, I never thought I’d be citing literary scholars on a technical podcast.

SO: Yes, neither did I. But the question remains: how does this work? How does the AI work, and why does it work? And why do these problems occur? What is our current understanding of the situation?

SG: I think we are still pretty impressed by generative AI, and we’re still trying to grasp what we’re even perceiving, what’s happening there. There are things that simply make our jaws drop. And then there are these epic fails, like recently the depiction of Wehrmacht soldiers by Gemini, Google’s generative AI. The soldiers were politically correct by today’s standards; among other things, there were Asian-looking women in steel helmets. I always like to compare this to the early days of navigation systems. There were always those anecdotes in the newspaper about someone driving into a river again because their navigation system mistook the ferry line for a bridge. A mistake like that was relatively easy to fix in a navigation system. It was clear why the system had made the error. With generative AI, unfortunately, it’s not that simple. We don’t know, we haven’t really even understood, how these partially intelligent feats come about. But the epic fails make us aware that this is not an algorithm, but a phenomenon that apparently emerges when you pack many billions of texts into a matrix.

SO: And what do you mean by “emerges”? What is that?

SG: That’s a term from the natural sciences. I once compared it to water molecules. A single water molecule is not particularly spectacular, but if, for example, you’re out in a sailboat in a storm on the Atlantic, or you run into an iceberg, you get a different perspective. Many water molecules taken together show entirely new behavior. And that’s called emergence. Physics and chemistry took many centuries to more or less unravel it. And I think we will have to keep researching generative AI, maybe not quite that long, but a good while longer, to understand a bit better what exactly is happening there. And I think the epic fails should make us aware that, for now, we would do well not to place our fate blindly in the hands of a Large Language Model. I think the human-in-the-loop approach, where the AI makes a suggestion and then a human reviews it, remains the best mode for the time being. And the translation industry, which feels like it’s a few years ahead of the rest of the world when it comes to generative AI and neural networks, recognized this quite cleverly and put it to profitable use.

SO: And if translation is the model, what does that mean for generative AI and technical documentation?

SG: That’s a good question. Let’s take a step back. At the beginning of my working life, the revolution in technical documentation was structured documents: SGML and XML. We’ve known about them for several decades now, and they are still not in use in every documentation team. So we have these structured documents on one side, and on the other, the evil unstructured documents. And I’ve always found that label a bit misleading, because unstructured documents are in reality structured too. At least most of the time. There’s a macro level: a table of contents, a title page, an index. There are chapters. Then there are paragraphs, lists, and tables, and it goes down to the sentence level: enumerations, instructions, and so on. It’s not for nothing that some linguists call this text structure. And when I approach it with XML, the beauty of XML is that I suddenly make this implicit structure explicit. And then the computer can compute with our texts. Because if we’re honest, in the end XML is not for us, it’s for the machine.

SO: Could it be that AI can discover structures that, until now, we humans could only express through XML?

SG: Yes. Well, I recently looked into Invisible XML. There you can overlay patterns onto unstructured text, and they become visible as XML. Very clever. I think generative AI is a kind of Invisible XML on steroids. The rules aren’t as strict as in Invisible XML, but genAI also understands linguistic nuances. I found it very exciting: a customer of ours fed unstructured PDF content into ChatGPT in order to convert it to XML. The AI was surprisingly good at discovering the invisible structure that was hidden in the content and converted it to XML really well. So that was impressive. When AI now appears to create information out of nothing, I think it is more likely that it makes existing but hidden information visible.

SO: Yes, I think the problem is that this hidden structure is there in some documents, but in others there’s what we call “crap on a page” in English. That is, there’s no structure. And from one document to another there is no consistency; they are completely different. Writer 1 and Writer 2, they write and they never talk. So if the AI now creates an entire chapter and an outline from a few keywords, how does that work? How does that fit together?

SG: Yes, you’re right. So far we’ve been talking about taking a PDF and adding XML to it. But if I’m put on the spot, I’ll throw in a few keywords and ChatGPT suddenly writes something. I think the same idea applies here: this, too, is actually hidden information. It might sound a bit daring at first, but there’s nothing new, nothing completely surprising. If I just ask, let’s say, ChatGPT to give me an outline for documentation for a piece of machinery, then something comes out. I think most of our listeners would have written much the same thing. This is nothing new. This is hidden information contained in the training data, which is easily made visible through the query. Because ultimately, generative AI creates this information from my query and this huge amount of training data. And the answer is chosen so that it fits my query and the training data well. It creates a synthetic layer over the top. And in the end, the result is not net new information, but hopefully the necessary information delivered in a way that’s easier to process further: either enriched with XML, like the PDF example, or maybe now I have an outline. And I imagine it’s a bit like a juicer. The juicer doesn’t invent juice, it just extracts it from the oranges.

SO: Making information easier to process sounds almost like a job description for technical writers. And what about other methods? If we now have metadata or knowledge graphs, what does that look like?

SG: That’s right, in addition to XML, these are also really important: metadata, knowledge graphs. I find that metadata condenses information into a few data points, and the knowledge graphs then create the relationships among these data points. And this is precisely why knowledge graphs, but also metadata, make invisible information visible. Because the connections that were previously implicit can now be understood through the knowledge graphs. And that can be easily combined with generative AI. At the beginning, the knowledge graph experts were a bit nervous, as you could tell at conferences, but now they’re actually pretty happy that they’ve discovered that generative AI plus knowledge graphs is much better than generative AI without knowledge graphs. And of course, that’s great. By the way, this isn’t the only trick where we have something in the technical documentation that helps generative AI get going. If you want to make large knowledge bases searchable with Large Language Models, you can do that today with RAG, or Retrieval Augmented Generation. And this means you can combine your own documents with a pre-trained model like ChatGPT very cost-effectively. If you now combine RAG with a faceted search, as we usually have in the content delivery portals in technical documentation, then the results are much better than with the usual vector search, because in the end that is just a better full-text search. That’s another possibility where structured information that we have can help jump-start AI.

SO: So are you also of the opinion that structured information will not become obsolete through AI, but will actually become even more important?

SG: My impression is that the belief has taken hold that structured information is better for AI. I think we’re all a bit biased, naturally. We have to believe that; these are the fruits of our labor. It’s a bit like apples. The apple from an organic farmer is obviously healthier than the conventional apple from the supermarket. I think this is scientific fact. But in the end, any apple is better than a pack of gummy bears. And that’s what can be so disruptive about AI for us. Because at the end of the day, we are providing information. And if users get information that is sufficient, that is good enough, why should they go the extra mile to get even better information? I don’t know.

SO: Yes, so I’m really intrigued by this gummy bear analogy and I want to hear a little bit more about that. But why is your view of the documentation team’s role so, let’s say, pessimistic?

SG: I think my focus has gotten a little wider recently; I’m not really looking just at technical documentation. When it comes to technical documentation, we are lost without structured data. It will not work.

But in the bigger picture, at Quanos we not only have a CCMS, we also create a digital twin for information. I’m in all these working groups as the guy from the tech doc area. And I always have to accept that our particularly well-structured information from tech doc, the one with the extra vitamins and secondary nutrients, is actually the exception out there when we look at the data silos that we want to combine in the info twin. When I was young, I believed that we had to convince others to work the way we do in tech docs. That would have been really fantastic. But if we’re honest with ourselves, it just doesn’t work. The advantages that XML provides for technical documentation are too small in the other areas, and for the individuals there, to justify a switch. The exceptions prove the rule. As a result, tons of information is out there locked up in these unstructured formats. And it can only be made accessible with AI. That will be the key.

SO: And how do we do that? If XML isn’t the right path there, what does that look like?

SG: Well, let’s take an example. Many of our customers build machinery, so let’s take a look at the supplier documentation that comes in. There are several dozen PDFs for each order. And of course the editor has a checklist and knows what to look for in this pile of PDFs: the test certificate, the maintenance table, parts lists, and so on. And even though the PDFs are completely “unstructured” as we XML people like to call it, we humans are able to extract the necessary information. And the exciting thing about it is that anyone can actually do it. You don’t have to be a specialist in bottling systems or industrial pumps or sorting machines. If you have an idea of what a test certificate, a maintenance table, or a parts list is, then you can find it. And here’s the kicker: the AI can do that too.

SO: Aha. And so in this case are you more concerned with metadata, or with something else?

SG: No, you’re right. This is in fact about metadata and links.

I find it fascinating what this does to our language usage. We have gotten used to saying that we enrich the content with metadata. But in many cases we have simply made the invisible structure explicit. No information was added. Nothing became richer, just clearer. But now imagine that your supplier didn’t provide a maintenance table. Then you need to start reading, understand the maintenance instructions, and extract the necessary information. And that’s tedious. Even here, AI can still provide support. But how well depends on how clearly the maintenance procedures are described. The more specific background knowledge is necessary, the more difficult it becomes for the AI to provide assistance.

SO: And what does that look like? Do you have an example or use case where AI doesn’t help at all?

SG: As I said, it depends on contextual knowledge. I once received parts of a risk analysis from a customer. And her question was, “Can you use AI to create safety messages from this?” And I said, “Sure, look at the risk analysis and then look at what the technical writers made of it.” And they were exemplary safety messages. But there was so little content in the risk analysis that, with the best will in the world, you couldn’t do anything with artificial intelligence. That end result was only possible because the technical writers had an incredibly good understanding of the product and also had the relevant industry standards in the back of their minds. The information was not hidden in the input, but in the contextual knowledge. And that’s so specialized that it’s of course not available in a Large Language Model.

SO: So in an application, or a use case like that, you see no opportunity for AI at all?

SG: At least not for a generic large language model. Something like ChatGPT or Claude doesn't stand a chance there. In AI, there is the option of specializing these models further; you can fine-tune them with context-specific texts. But whether we normally have enough text for that is still unclear. The first experiments are underway. But think back to the water molecules: for an iceberg, or even just a snowman, you need an awful lot of them. So today it comes down to practical considerations. Fine-tuning is really expensive, so cost is a factor. It takes a long time, so performance is an issue too. How practical is it? Do we have training data? Under all these aspects, what the golden path is for making a generic large language model usable for text work in very specific contexts is simply still unclear. We just don't know today.

SO: Can you already see, or foresee, how generative AI will, or must, change technical documentation?

SG: I still find that very much a look into the crystal ball. It's not at all easy to assess which use cases for AI in technical documentation are promising. As a rule, you have a task in which textual input is to be transformed into textual output according to certain requirements. And the old rule was garbage in, garbage out. In my opinion, large language models change that equation for good. Input that we previously couldn't process automatically because it lacked information density can now be enriched with universal contextual knowledge, enriched to the point where it becomes processable. Missing information can't be added, as we've just discussed. But those unspoken assumptions, those we can indeed pack in. And that helps us in many places in technical documentation, because one of the things that distinguishes good technical documentation from bad is that fewer assumptions are needed to understand the text, or to process it by machine. That's why I see information densification instead of knowledge creation as a kind of Occam's razor. I look at the task. If it's simply about making hidden information visible, or putting it into a different form, then it's a good candidate for generative AI. If instead it's about refining the information by drawing on other information sources, then it gets harder. If I have that other information in a knowledge graph, where it's already broken down, then I can explicitly enrich the input before handing it to the large language model, and then it works again.
But if the information, for example the inherent product knowledge, lives in the writer's head, as with my customer's risk analysis, then the large language model simply has no chance. It won't generate any added value there. Then you may need to think about whether the task can be split up. Perhaps there's a part where this knowledge isn't needed, an upstream or downstream process step where AI can optimize something. And I think that's where the action will be in the future. This art of distinguishing the feasible from the infeasible, and it will be something like an engineering discipline, will be the deciding factor in the coming years for whether generative AI delivers value or not.
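
A minimal sketch of the enrichment step Sebastian mentions: making implicit context explicit before handing text to a large language model. The knowledge graph, the terms, and the prompt format here are all hypothetical placeholders, not a real API.

```python
# Tiny stand-in for a knowledge graph: domain terms mapped to the
# background knowledge a writer would otherwise carry in their head.
KNOWLEDGE_GRAPH = {
    "E-stop": "emergency stop button that cuts power to all drives",
    "lockout": "procedure that keeps the machine de-energized during service",
}

def enrich_prompt(task: str, source_text: str) -> str:
    """Prepend definitions for every known term found in the input text."""
    found = [t for t in KNOWLEDGE_GRAPH if t.lower() in source_text.lower()]
    context = "\n".join(f"- {t}: {KNOWLEDGE_GRAPH[t]}" for t in found)
    return f"Background:\n{context}\n\nTask: {task}\n\nText: {source_text}"

prompt = enrich_prompt(
    "Write a safety warning.",
    "Press the E-stop and apply lockout before opening the guard.",
)
print("E-stop:" in prompt and "lockout:" in prompt)  # True
```

The design choice mirrors the argument in the interview: the deterministic lookup resolves the unspoken assumptions up front, so the probabilistic model only has to transform text, not invent facts.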

SO: And what do you think, will it or won't it?

SG: I believe we'll figure it out. But it will take much longer than we think.

SO: Yes, I believe that's true.

And thank you very much, Sebastian. These are really interesting perspectives, and I'm looking forward to our next discussion, when in two weeks or three months something completely new appears in AI and we have to talk again about what we can do today, or right now. So thank you very much, and we'll see each other …

SG: … soon, somewhere on this planet.

SO: Somewhere.

SG: Thank you for the invitation. Take care, Sarah.

SO: Yes, and many thanks to everyone listening, especially for this first episode in German. More information is available at scriptorium.com. Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Strategien für KI in der technischen Dokumentation (podcast, Deutsche version) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 25:17
Overcoming operational challenges for learning content, feat. Leslie Farinella (podcast) https://www.scriptorium.com/2024/06/overcoming-operational-challenges-for-learning-content/ Mon, 17 Jun 2024 11:00:50 +0000 https://www.scriptorium.com/?p=22531 https://www.scriptorium.com/2024/06/overcoming-operational-challenges-for-learning-content/#respond https://www.scriptorium.com/2024/06/overcoming-operational-challenges-for-learning-content/feed/ 0 In episode 168 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Leslie Farinella, Chief Strategy Officer at Xyleme, discuss the challenges facing content operations for learning content, insights for navigating information silos, and recommendations for successful enterprise-wide collaboration.

Why do we still have these silos of content? Back to what you said, Sarah, if we’re thinking about the learner experience, the learner doesn’t distinguish between classroom, e-learning, looking something up, or going to technical documentation. They just know, “I gotta get my job done. I need to perform. I need to know what I’m doing.”

— Leslie Farinella

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the challenges that organizations face with content operations for learning. Hey, everyone. I’m Sarah O’Keefe, and today I’m delighted to welcome Leslie Farinella of Xyleme to the podcast. Xyleme, as you may know, has recently been acquired by MadCap Software, which also owns Flare and IXIASOFT. So Leslie, welcome. Tell us about yourself and your role at Xyleme/MadCap.

Leslie Farinella: Hi, Sarah. I’m super excited to be here today. So I’ve been at Xyleme for over the last eight years. Actually, prior to that, I was in the learning content space, but on the business side, helping organizations to drive performance within their workforce. And I realized that, you know what, if we wanted to scale, we were going to have to bring technology to help solve this problem. So I got really excited. So I jumped over to the product side. And since I’ve been at Xyleme, I’ve pretty much covered almost all of the roles, ending up with my last role being the chief strategy officer.

SO: And so here we are. And I think you’re probably the perfect person to talk to about this topic where we’re getting a lot of interest all of a sudden. Well, from my point of view, maybe not from your point of view, but from my point of view, we’re getting a lot of interest in content operations for learning content. 

LF: Yeah.

SO: So people are asking questions like, if I have overlapping content between my tech comm content and my learning content, why, you know, why can’t I combine those in some efficient way as opposed to what I’m doing now, which is this terrible copy and paste or worse rewrite without, you know, people ever talking to each other. But also we’re hearing from learning organizations that don’t actually have what I would consider to be tech comm content who need a more mature content workflow. So they’re asking questions like, “How can I develop learning better, faster, cheaper?” So what does that look like on your side of the fence?

LF: We absolutely hear the exact same thing, and I think it’s only gonna get worse. If we think about the root cause, what’s really driving this conversation and making it escalate is the speed of change and the need for agility. Organizations have to adapt faster than they have before, which means people have to learn new skills, new mindsets, and new behaviors faster than before, which means inevitably they have to learn on the go. And that means performance support and tech comm is part of that learning. And as you and I know, because we’ve had past conversations about this, breaking down that silo between tech comm and learning is gonna be essential to driving the agility that organizations need to change. And that’s why they’re feeling the pressure.

SO: And so what does that look like? You know, Xyleme in particular is an enterprise learning content management system, which perhaps I should have said in the intro. What does it look like when people start considering something, you know, a solution like that? What’s the executive-level argument for that?

LF: Speed, agility, cohesiveness, learner experience. And I think what we all have to remember is that when you’re buying something like a CCMS or an LCMS, a component content management system or a learning content management system, they’re kind of flip sides of the same coin, but they also need to work together. And that is the change in mindset we need in the industry. If you think about learning, you have formal learning: I take a course. Usually I’m a novice, and I need some scaffolding. But the majority of the learning, once I get that initial scaffolding, happens by experience. It happens by solving problems. And inevitably that means looking stuff up. It means going back to the documentation, because no one’s going to go to the LMS and flip to halfway through the e-learning course to look something up. That’s just very painful. So what I hear at the top executive level is: how do we make that whole system work together? How do we consider it from a job performance perspective, moving people from novice to proficiency across that entire spectrum, which is learning and tech comm? And that’s where these separate systems and separate processes really start to get in our way. And I think that’s where the opportunity is: to see how we break down that silo and how these technologies can work better together, or maybe even collapse into a single tech stack.

SO: Yeah, and I think a big part of this is that if you go back 20, 25, 30 years, we had classroom training, basically, and we had paper: books, or maybe a cheat sheet or a job aid, some sort of printout. So there was the distinction between “I’m going to go to a class and learn the things, and they’re going to give me a student guide or a textbook or some sort of supporting material,” and then “there’s my reference library of books.” And today, we still have that distinction between class, e-learning, blended learning, online, and all the rest of it. There’s that bucket, and then there’s the bucket of this other book-adjacent or book-derived stuff. However, today it’s all sitting on the same website. So now as an end user, as a software user or learner, I show up on your product website like, hey, I’m blocked on this task that I need to do. I’ve got a job I need to get done. I don’t know how to do it. And frankly, I just don’t care. I just want you to give me the answer. I don’t care where it lives. Not my problem. Give me the answer, and give it to me better, faster, cheaper. And then, you know, infamously, we always say, “Don’t ship your org chart,” except we always do. So what does it look like to start to foster these connections and improve the integration or the interaction or the… I’m struggling for words, which is probably a symptom of this problem. What does it look like to start fostering those connections to improve the end-user experience?

LF: I think what you just said: end-user experience. We have to map that user experience. And one thing the learning side has done well is they’ve invested in the LMS and the learning experience platforms. Everybody still complains about them, but at least they were investing and trying, and those experiences are getting better and better because there’s more competition in the market. People are coming up with other tools, they’re bringing more algorithms into play, and AI will play into that as well. But what they haven’t done well is content management and structured authoring. Xyleme is an LCMS, so obviously there are people in the learning space who have bought into the idea that we need to bring structured authoring into learning. But it’s not the majority; a lot of organizations still haven’t done that. And I think once you start to bring in what tech docs already knew: you’ve got to standardize to personalize. You’ve got to think modular. You’ve got to standardize your terminology. And then you can start to scale. That’s something the learning side needs to learn, and that’s what the LCMS, the counterpart to component content management, brings in. We’ve tailored it to the audience of instructional designers and learners to help with that transition, but the base ideas underneath the technology are the same. One of the interesting things in the acquisition with MadCap and the IXIA team, when we started comparing products, was: we do that, oh, we do that too, yeah, we’ve always wanted to do that, you guys already have it. We realized very quickly we were solving the same problem and getting to the same result.
We may have made different design decisions along the way, but we were solving the same problem, and the fundamental premise underneath both technologies was the same. Which then begs the question: why aren’t they combined? Why do we still have these silos of content? Back to what you said, Sarah, if we’re thinking about the learner experience, the learner doesn’t distinguish between classroom, e-learning, looking something up, or going to technical documentation. They just know: I gotta get my job done. I need to perform. I need to know what I’m doing. And I wanna ready myself for my next role and promotion within the organization. And they have expectations on their performance. So how do we look at that and understand it needs to be more cohesive? How can we, as both the tech docs and the learning industry, break down that silo, in the content but also in the experience itself, to make it more cohesive?

SO: Yeah, I think one thing that’s sometimes overlooked here is that the default emotional state of a person who is looking for information is something like frustration and anger, right? Because they’re not reading for fun. They’re not going to class for fun. Well, probably not. They’re doing it because this class, or this learning piece, or this piece of information that I don’t have, is standing between me and getting the job done. I need to generate a pivot table and I don’t know how, so show me how to do it. I need to do a thing, and until I do the thing, I can’t progress in my tasks for the day, and so I’m annoyed. And we’ve set aside the knowledge base for the purposes of this conversation, but knowledge base content is usually even worse, because usually that’s something like “my system crashed, why?” So they’re not just annoyed, they’re incandescently angry, because something is not working. Okay. And I think you said something really interesting in there about the learning experience, the downstream user experience for learning: there’s been a lot of work put into that, and comparatively less on the tech comm side. I’m not saying all tech comm is bad or anything like that, but look at some of the work that’s been done in producing really sophisticated e-learning and really interesting learning experiences on a platform of some sort. Conversely, on the back end, tech comm has done a huge amount of work around reuse and efficiency and automated formatting and automated delivery and multi-channel and all these things. So there are some advantages there, and some things the learning world can probably leverage, and vice versa. So while you and I are ruling the world and fixing all of this, we can’t fix the integration next week, and I’ve been complaining about that for a while, but it is a legitimately hard problem.
But what are some of the steps that we can take as content creators, whether learning or tech comm, to start thinking about this sort of more unified approach to enabling content? What can we do there? And what are some of those first steps?

LF: I think the first step is we have to collaborate. Do you even know the people on your tech comm team or your learning team? Do you even know who they are? So there’s a conversation. I think the second step is to have a shared goal: we want to create a better user experience. An agreement that that is a goal worth pursuing, and I think your managers, your VPs, and definitely your leadership would agree that it is. And then I think it’s mapping that out. What would that learning experience look like? What’s your utopia? And then break it down, right? You can’t boil the ocean. You have to have a plan: what does the vision look like, and then what’s the first step? Start small. And as you and I talked about earlier, a procedure is a procedure. There’s no magic. There’s some obvious low-hanging fruit here as far as where you can share content in a way that drives efficiency and makes sense. And then, for the learner, you’re not coming up with different terminology. We all know the brain loves consistency because it helps with retrieval. When I see the same picture, the same example, the same terms, it unlocks memory in the brain. It helps with retrieval. So we can make it easier for people. But then also look at whether we can take some of that technical documentation and embed it in the learning content in the LXP so it’s easy to find. Where are people going? Maybe it is the tech doc portal. Maybe we put it in both places and figure out single sourcing, so we can update it and keep both places in sync. But really, it’s mapping the learner’s, the end user’s, experience for performance, working together, and understanding that we both have something to contribute to the conversation.
I think, to your point, tech comm can learn a little bit about experience and how people retrieve information, but the learning team can definitely learn a lot about structured authoring and content management from the tech comm team. So bring that expertise together. This is 80% business and 20% technology. The first part is you’ve just got to agree and set your goals, and then figure out what’s the best technical solution to drive those goals. And I would even argue: take it small. Experiment, see what works and what doesn’t, trial and error. Because I wish I had the whole answer; I don’t. I think it’s definitely a problem that we need to invest in solving, and the only way we’re going to solve it is through experimentation. But I also don’t think there’s a one-size-fits-all answer. Each organization has legacy tech stacks, and we all know we can’t just throw out the tech stack we have. We have different competing business priorities. We have different skills and capacity within our teams. So do what you can. I think sometimes people throw up their hands and do nothing because they think it’s too big. But you’ve got to start small, and you’ve got to start somewhere. And step one is: go have lunch with the people on the other side, maybe a virtual lunch these days. Talk to them. Share what you’re each doing. At the end of the day, you have a common goal of driving performance within your organization. You have a shared mission. That’s where I would start.

SO: Yeah, I like “figure out who your counterpart is.” That seems like a reasonable, achievable goal, and then you work from there. Can you reach consensus on shared terminology? Never mind unified content authoring, that would be lovely, but can we agree to call a car seat a car seat, and not sometimes a safety seat and sometimes a baby seat and sometimes something else? Because that would be a really good start.

LF: Yeah. And the more things you can agree on, the more things you’ll find to agree on. So start with that. How do I share the procedures? How do I keep stuff in sync? How do I reduce the time between a product release, the technical documentation, and any formal training that needs to happen? How can I generate FAQs? There are a lot of things you could brainstorm and do together, which then fosters that collaboration.

SO: Mm-hmm.

LF: And then figure out what technical barriers you’re hitting. And I’ll say this as a vendor: then talk to the vendor and say, hey, here’s the business problem we need to solve. We think it’s a market problem. We think there’s value in it for you. We need you to fix this; we need you to be able to integrate these systems. And from the vendor side, if you make a good business case and can show that it’s good for the market in general, you can probably push their roadmap. But if you don’t speak up, if you haven’t tried, how do you know what those barriers are? How do you know what to push? So just because it doesn’t do it today doesn’t mean you can’t get a solution.

SO: And the bigger you are, the more we would like you to kindly contact the vendors because…

LF: Yeah, the more money you have, the more clout you have. But I’ll be honest with you: as far as our roadmap on the vendor side, many times it’s anyone who’s willing to experiment, put some skin in the game with a real use case, and work together. I would rather build features and integrations based on real-world examples and real-world data than on a theoretical PowerPoint we put together for a nice product feature. And I know most product vendors are the same.

SO: You’ll have leverage.

LF: So partnering with your tech vendors, coming to them with “this is the business problem we want to solve, and this is why we think it’s worth solving,” and working with them to solve it is going to help break down some of those technical silos. And the good news on the MadCap side is that because we have IXIA, we have Flare, and we have Xyleme, our vision is: how do we bring it all together? It’s not gonna happen overnight because, like I said earlier, we all kind of made different design decisions, which aren’t necessarily all compatible right this second. But we’re figuring out how to make them more compatible: who’s got to give up what, and how can we make these work together? And because they’re all in our product stack, we have a vested interest in doing that. And honestly, if there are any MadCap customers out there listening, we’re looking for customers who want to partner with us on that journey and help us figure out the answer, because we know the problem pretty clearly. We know some of the answer, but the only way you truly find the answer is by partnering with customers to figure it out.

SO: So I have to ask you about AI because we’re not allowed to do podcasts without asking about AI anymore. Tell me a little bit about your take on AI in the content universe that you live in.

LF: Yeah, there’s so much buzz about AI generation and the large language models and ChatGPT, because it kind of wowed us all and it made the news. And it’s not that there aren’t some efficiencies to be found there, around summarization and descriptions. Because one thing we know is that the quality of the descriptions that go into the LMS and the LXP really drives retrieval, people being able to find something. And humans actually write really bad descriptions. AI does a better job of writing descriptions that search can find. So I think there’s something there. But what really excites me is AI retrieval: being able to match content to a person, to me specifically, based on my role and context. Where am I searching from? Am I searching from within Salesforce? Am I searching within my technical app? What gives some idea of what my problem might be? Maybe even the error messages I’m seeing. What’s my region? What are my current skills? What are my skill gaps? All of that would get me the information I need faster, and just the information I need, not the 20-page document where I now have to go find page five of 20. The great thing about AI retrieval is it can bring me just topic seven out of 70. So I think solving that retrieval problem, that time to an answer, is huge. The second one is AI and data. AI has been doing a lot with data; data classification and looking at patterns aren’t new news. But if our common mission is performance and people being able to do their jobs, it’s about understanding holistically somebody’s journey from novice to proficiency and expert, and what really drove it. And we might find out it’s all about the managers.
And I’d argue a lot of it is the manager and their coaching. Nothing against the audiences we’re talking to, but it may have had very little to do with the learning team and the tech comm team, and a lot to do with the managers. Understanding that, and how we contribute to that journey, will help us understand what’s really important. And the nice thing about AI is it can bring in a lot more data and look at patterns that are much more sophisticated than we can as humans; we can’t hold that many variables in our heads at one time. So I’m excited about AI for personalization of content, matching people to content, and helping us better understand the value of the content we write and what drives that value, so that we can drive best practices. Because I think we guess a lot, and we have our ideas, but we might be surprised at the answer. And yeah, AI generation definitely has a role, I don’t want to say it doesn’t have any, but honestly, it doesn’t excite me quite as much as the other two.

SO: Well, I sort of lost interest early on when I asked ChatGPT to generate a bio for me and it informed me that I had a PhD, which, I mean, cool, but no. And there were a couple of other things like that. You said this earlier: it is important in our context to get the information right. And the thing that ChatGPT and the other generators don’t necessarily do is accuracy. They generate plausible content. But if we care about getting it right, “cut the blue wire, then the red wire. Oh no, wait, wrong.” So it’s important for this stuff to be correct, and that’s the thing GenAI really struggles with, because it doesn’t really have a concept of correct.

LF: Yep. And I think that’s where we are sitting on a gold mine with our content. Because if you think of RAG, which is retrieval-augmented generation, the current leading answer for proprietary information that must be correct, it really is about retrieval. What it does is point to your vetted database of content. Well, where is that? Your LCMS, your CCMS. Those are gold mines of content. Now, AI can handle unstructured content, so it’s not that you can’t give it a PDF or a PowerPoint or whatever unstructured content you have. But if you give it structured content, it’s like rocket fuel. It’s just easier; it’s better. And if you’ve tagged that content, even if you used AI to help tag it, that retrieval accuracy goes up exponentially. So we are sitting on rocket fuel: if you’ve already invested in an LCMS or a CCMS and you’re doing structured authoring, you have rocket fuel to drive your AI solution. And it’s not that our databases have AI inherently in them; it’s that we have the content that’s going to feed those AI agents and help generate the right answers. And one of the key things when you think about proprietary content in these RAG systems is the attribution. It will provide a response. It may summarize, but it’s not writing from scratch or just making things up the way ChatGPT would. It is retrieving. It may summarize, but it gives an attribution: it tells me where it got that content from, so as the person looking at it, I can decide whether I trust that source, and I can verify it. So if it’s red wire versus blue wire, and cutting the wrong wire means something’s gonna blow up, I can go check the source and say, okay, yes, I trust that source, and I’m gonna cut the blue wire.

SO: And on that cheery and, I think, hopefully explosive note, that sounds like a good place to wrap it up. Leslie, thank you so much for coming on. I hope we’ll continue this conversation and drive some positive change and some cool new integration and cooperation possibilities. And with that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Overcoming operational challenges for learning content, feat. Leslie Farinella (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 25:36
The challenges of content operations across the enterprise (podcast) https://www.scriptorium.com/2024/06/the-challenges-of-content-operations-across-the-enterprise-podcast/ Mon, 03 Jun 2024 11:24:37 +0000 https://www.scriptorium.com/?p=22521 https://www.scriptorium.com/2024/06/the-challenges-of-content-operations-across-the-enterprise-podcast/#respond https://www.scriptorium.com/2024/06/the-challenges-of-content-operations-across-the-enterprise-podcast/feed/ 0 In episode 167 of The Content Strategy Experts Podcast, Sarah O’Keefe, Alan Pringle, and Bill Swallow discuss the difficulties organizations encounter when they try to create a unified content experience for their end users.

AP: Technical content, your tech content or product content, wants to convey knowledge so the user or reader can do whatever thing that they need to do. Learning content is about improving performance. And with your knowledge base content, it’s when, “I need to solve this very specific problem.” So those are the distinctions that I see among those three types.

SO: Okay, and from a customer point of view, what does this mean?

AP: Well, in reality, I don’t think the customers care. They want the information available, and they want it in the formats they want it in. And also, they want the right information so they can either get that thing done, improve their performance, or solve a specific problem.

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the challenges of content operations across the enterprise. Hi, everyone. I’m Sarah O’Keefe. I’m here today with two partners in crime, Alan Pringle and Bill Swallow.

Alan Pringle: Hello.

Bill Swallow: Howdy.

SO: That first one was Alan, and the second one was Bill. Good luck with that everybody. So I have a big topic today. I want to focus on the intersection of technical content, learning content, and knowledge base content. And Alan, what’s the difference between the three?

AP: Okay, let me see if I can break this down, because I’m sure people have very strong opinions about this, and we may hear about them, but this is how I’m gonna break them down. Technical content, your tech content or product content, wants to convey knowledge so the user or reader can do whatever thing that they need to do. Learning content is about improving performance. And with your knowledge base content, it’s when, “I need to solve this very specific problem.” So those are the distinctions that I see among those three types.

SO: Okay, and from a customer point of view, what does this mean?

AP: Well, in reality, I don’t think the customers care. They want the information available, and they want it in the formats they want it in. And also, they want the right information so they can either get that thing done, improve their performance, or solve a specific problem.

At the end of the day, they don’t care what department or what group wrote it. They just want it, and they want it then and there.

SO: So this enabling content is like, here’s how you can get your job done. Here’s how you can do the thing you need to do and move on with your day so that you can generate the report or write the thing or do the code or whatever it is. They need this content so that they can do the thing. So then, we have all these silos, right? We have technical content in its silo, and we have learning content, and we have knowledge base content, and then we have tools optimized for each of those use cases or for each of those sets of authors. So, now, is this a bad thing from a content perspective?

AP: That is possibly the worst leading question I’ve ever heard on this podcast. The worst. 

SO: Okay, I’ll rephrase.

AP: You don’t need to, but of course it’s bad. It is very, very bad. And the reason that it’s bad is because there is so much overlap in this content. Roughly half of technical content and learning content overlaps, because you’re both dealing with tasks.

SO: Procedures, yeah.

AP: Yeah, step-by-step instructions. So you don’t need two sets, one for each group. Why are we doing this? And when I say we, I mean the entire content world, because folks, we are. You’ve also got overlap between your technical/product content and your support content. Troubleshooting instructions, Q&As on avoiding very specific problems. Same exact stuff, and yet again, we’re often maintaining two different versions of that information. So there you go.

SO: So what we want is shared content, right? But we can’t do it because the tools aren’t there. Is that right? It is right. I know it’s right.

AP: Well, yeah, I mean, but it’s not just the tools. It’s the people who write this content, because they often have, shall we say, fairly strong opinions that they need a special flavor or a special twist on the content. So it’s tools, but it’s also these opinions that the content creators have that inform these problems as well, I think.

SO: Okay, and then Bill, turning to infrastructure, what does this look like from an infrastructure point of view? I mean, shared content is kind of an infrastructure problem, but I think there are additional ones. What does that look like?

BS: Goodie, it’s my turn. Yeah, so shared infrastructure is a big one, you know, getting everyone to kind of play in that, you know, that same sandbox. But there are other things that really need to be shared across the enterprise. 

AP: Hmm.

BS: So things like taxonomy, you know, making sure everyone is aligning with the same terms, the same way of categorizing things, the same way of organizing information; the localization workflow, and even the vendors that they’re using, so that they’re all going through the same process and getting a uniform result back. And then, you know, design systems, making sure that there’s a federated search in place, and making sure that anything that’s being produced for customer or reader consumption has the same unified experience. It might be a little bit different from content type to content type, from delivery platform to platform, but in the end, you have a unified experience, so that people aren’t relearning how to engage with your content in every context you produce it in.

SO: So from an infrastructure point of view, what does it look like today to set up shared infrastructure? Can you tell us a little bit about the software tools that are available that allow you to do all of this in a unified way?

BS: You know, it’s too big of a list. And that list is basically held together with things like duct tape, string, Bondo, you name it. There is nothing out there that will give you a unified experience across the enterprise for every content type out there. Right now, it does not exist.

SO: So on the authoring side, I think there are some unified delivery kinds of integrations. But I think we’re talking about the back end.

BS: We’re starting to see a lot with portals that are starting to collect a lot of information and present it all in one unified space, or at least provide one universal point of access for that content. And we are seeing some tools start to reach out and kind of embrace other traditional content silos. So things like, for example, being able to develop all of your content in one single place and push it to a same-branded, let’s say, knowledge base and documentation portal. But I don’t think that there’s anything out there that really grabs everything and says, okay, we’re going to do, you know, manuals, we’re going to do other tech content, we’re going to do web-based references, we’re going to do knowledge base articles and tech support guides and training materials, you name it, and produce it all from one source to all these different outputs. So we have a lot of duct tape and string in place at the moment.

SO: And point solutions like, hey, we’re optimized for learning. Hey, we’re optimized for KB. We’re optimized for tech comm. And I mean, it does seem to me that there’s a really big disconnect between what our clients are asking for and what the market has available, because our clients are asking for, slash demanding, unified authoring solutions. And like you said, we have duct tape and string to offer them.

BS: Mm-hmm.

SO: So, okay, let’s step back a little bit and say you don’t do this. You take the departmental approach: you push your tech comm content through your tech comm solution to the web, you push your KB content to a KB article database thing, and you have learning content, which goes to a learning management system and therefore some sort of a learning platform. What happens when those are not unified? And Alan, I’ll start with you. What happens, from a content point of view, if they’re not unified?

AP: Well, the terminology you’re using is not gonna be consistent, or often is not consistent, across your content types. For example, you go to your knowledge base and you find a support article that uses a certain term for some widget. And then later on, when you try to search for the name of that widget in some other content, like on the product side of the content, and that product side uses a slightly different term, you’re not gonna get a search result, because they’re using different terminology for what is really the same exact thing. So you have that lack of alignment. And the same thing is true, for example, with your product content and your training content. You may have slightly different how-tos or tasks to accomplish the same exact thing. So you’ve got those contradictions there, in how to do things, in terminology, and you’re not getting a consistent voice at all in what you are presenting to your customers, because of these departmental silos that we were talking about.
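The search failure described here, one widget under two names, is essentially a missing normalization step. A sketch of a terminology layer, where the term map and every term in it are invented for illustration, and a real deployment would use a dedicated terminology management tool over every authoring system rather than a single script:

```python
# Hypothetical preferred-term map: each silo's variant maps to one
# canonical term. All terms here are invented for the example.
PREFERRED = {
    "sign-on box": "sign-in window",
    "login dialog": "sign-in window",
}

def normalize(text):
    """Rewrite deprecated variants to the preferred term so that a
    search for the canonical name matches content from every silo."""
    for variant, preferred in PREFERRED.items():
        text = text.replace(variant, preferred)
    return text

kb_article = normalize("Open the login dialog to reset your password.")
product_doc = normalize("Click the sign-on box in the toolbar.")
# Both now say "sign-in window", so one search term finds both.
```

The point is that this is a writing and tooling practice, not a content management feature: the same pass can run over content from any silo before it is indexed.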

SO: And then Bill, on the infrastructure side, what do you see there in terms of problems that surface?

BS: A lot of it comes back to user experience, you know, because all these tools have, I guess, a targeted focus. They have a lot of custom feature sets that are built just for that type of content. And a lot of the more generalized features are built out in slightly different ways. And you may have a lot of ability to customize, but generally they’re not customized, for whatever reason. Either it’s too difficult, there’s no time, one group likes it one way, one group likes it another way. So you have these disjointed user experiences just going from one area of the website to another. You go from navigating manuals online to a knowledge base with a completely different interface that you don’t know how to navigate out of the box. So you’re now asking your customers to learn how to use your content in addition to having to use your content to find information in the first place.

SO: So we’re, I mean, we’re doing a lot of complaining, right?

BS: It’s fun to complain.

SO: It is fun to complain. But I guess as consultants, our job is, in fact, to take on the complaints and then come up with a solution. So in the absence of the magic system that does all the things, you know, one thing we’ve seen a lot of customers do is make that compromise where they say, okay, we’re gonna take the thing that’s optimized for A, but we’re gonna use it for A and B, even though it’s suboptimal for B. And of course, then the B people feel like B-class citizens, which isn’t great, but enterprise-wide, it’s very, very helpful. On the taxonomy side of things and some of these others, it does feel as though you can build that over the top and then just integrate it into all the other tools and push it down onto those. So I guess that part’s okay-ish. But what does this look like? And I guess my question to both of you is, what’s the solution here? What’s the path forward, and where do we want this to land? For our personal gratification, but mostly for our customers. What do our customers need the solution to look like? Infamously, the line is you don’t want to ship your org chart, right? You don’t want your website to be a reflection of your org chart at a level that is recognizable to the end customer because, again, they don’t care. So what are some of the solutions here? What are some of the options that people have?

AP: Well, I think one thing you’ve got to do is step back and realize this is not just a tech problem. Now, the tech problem is very real in regard to the silos, because you’re using different sets of tools, especially on the authoring side and the content creation side, to get things done. But I think all of those content creators need to step back and think a little more globally across the company, and not just, this is just for my people, this is just for me. They need to take a bigger step back and think, how can other departments potentially use this information? And then you start getting into tech, and how can they actually reuse it? And that’s where you slip away from culture to tech and how it can enable that sharing and that reuse.

BS: Mm-hmm.

SO: And terminology is a good example. If you standardize terminology, you can ask people to follow that across all their systems, right? Saying use this term and not that term does not require a unified, you know, content management solution. It’s just a writing practice. And you could layer the terminology management over the top of multiple systems. I mean, it’s more expensive, but you could. Bill, do you have any hope?

BS: Mm-hmm. There’s always hope. You know, we’re starting to get there, especially as systems are starting to be able to somewhat talk to each other via API. So there is a way to share information across them. It’s not what I would call anything remotely close to, you know, intelligent reuse, because you’re still duplicating content from one system to another. But at least if you’re consistent about writing in one place and pushing it out where it needs to go via those hooks, then it’s better than authoring everything separately.
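The write-once, push-everywhere pattern can be sketched like this. The endpoint names, URLs, and payload shape are all invented; real CCMS, knowledge base, and LMS APIs each differ, so treat this as the shape of the idea rather than a working integration:

```python
import json

# Hypothetical downstream systems; the URLs are placeholders.
ENDPOINTS = {
    "knowledge_base": "https://example.com/kb/api/articles",
    "learning_platform": "https://example.com/lms/api/modules",
}

def push_everywhere(topic, send):
    """Serialize one canonical topic and push it to every downstream
    system, so each copy traces back to a single source of truth.
    `send` abstracts the actual HTTP call (requests.post, etc.)."""
    payload = json.dumps({"title": topic["title"], "body": topic["body"]})
    return {name: send(url, payload) for name, url in ENDPOINTS.items()}

# Stub transport for illustration; a real one would POST over HTTP.
def fake_send(url, payload):
    return "accepted"

results = push_everywhere(
    {"title": "Change your password", "body": "1. Open settings..."},
    fake_send,
)
```

The design point is that the topic is authored once and never edited in the downstream systems, which is exactly the "don't ever modify it downstream" discipline discussed later in the conversation.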

AP: You still have a single source of truth in what you’re talking about and that’s the end goal or it should be the end goal for this problem.

BS: Exactly.

SO: And it might be helpful to look at single source of truth less as an abstraction and more as the process of doing a task, like how do I change my password in a database, right? There’s a four-step or a two-step or a one-step procedure, but there’s a procedure, and there’s only one way of doing it. And I think a lot of times the ultimate solution to this is to do some forensics, essentially, on where that information originates. And if it originates here and I am a downstream user of that information, that’s fine. Just don’t ever modify it. Always go back to the source of the information, modify it at the beginning, and then flow it back through. The problem that arises is that in a scenario where flowing it back through involves manual processes or copying and pasting, it’s always going to fail, because people fail, right? People don’t do the thing. And so you get those inconsistencies, and now there’s four different ways of changing your password. One in the tech docs, one in the learning content, and, like, two in the knowledge base. And now what do you do with it?

AP: And you’ve got frustration among your users because they’re getting inconsistent information. And then you’ve got frustration with your content creators because they constantly feel like they’re having to go hunt for something or it is not worth my time to go find it. 

BS: Mm-hmm.

AP: I’m just gonna copy and paste, and then they forget to update one of the umpteen versions they have. And then they’re stuck in this constant go-go-go process. So it’s bad on both sides of the content equation, for your content creators and for the people who are consuming that content as well.

SO: Okay, well, this is super encouraging. And with those helpful words from Bill and Alan and maybe me, but mostly them, I will leave you to it. And so with that, thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post The challenges of content operations across the enterprise (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:15
Pulse check on AI: May, 2024 https://www.scriptorium.com/2024/05/pulse-check-on-ai-may-2024/ Mon, 13 May 2024 11:28:34 +0000 https://www.scriptorium.com/?p=22486 https://www.scriptorium.com/2024/05/pulse-check-on-ai-may-2024/#respond https://www.scriptorium.com/2024/05/pulse-check-on-ai-may-2024/feed/ 0 In episode 166 of The Content Strategy Experts Podcast, Sarah O’Keefe and Alan Pringle check in on the current state of AI as of May 2024. The landscape is evolving rapidly, so in this episode, they share predictions, cautions, and insights for what to expect in the upcoming months.

We’ve seen this before, right? It’s the gold rush. There’s a new opportunity. There’s a new possibility. There’s a new frontier of business. And typically, the people who make money in the gold rush are the ones selling the picks and shovels and other ancillary services to the “gold rushees.”

— Sarah O’Keefe

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re checking in on the state of artificial intelligence. Things are moving really fast in the AI space, so we want to let you know we recorded this podcast in May of 2024. Hey everyone, I’m Alan Pringle.

Sarah O’Keefe: And I’m Sarah O’Keefe, hi.

AP: And we’re going to talk about AI yet again, but we need to circle back to it because it’s been a while and kind of assess the space right now. Last week I saw a really great meme. It was a still of Carrie Brownstein and Fred Armisen from the Put a Bird On It sketch from Portlandia. And it said, “Put an AI on it!” And that’s kind of where we are now.

SO: Yay.

AP: So many companies, so many services, so many products look at this AI thing that we’ve got now. And a lot of these AI birds, if you will, have landed on content creation, kind of our wheelhouse. So let’s pick that apart for a minute.

SO: So I guess we can start with generative AI, GenAI, which is ChatGPT and all of its general ilk, right? The chat interfaces. And generally speaking, at least for technical content, there does seem to be an emerging consensus that this is not where you go for content creation. You’re not going to start from scratch. Now, maybe you get it to throw out some ideas. Maybe you can do a first draft. But overall, the idea that, you know, ChatGPT or generative AI is just going to generate your docs for you is not the case. So there’s a big nope on content creation, but there’s also a big yes for productivity enhancement. I wrote a draft, but did I write it at the appropriate seventh- or eighth-grade level? Can I run it through the AI and let it clean it up? I need a summary. I need this cleaned up. I need my XML tag set corrected. I need a proposal for keywords, that metadata that I haven’t put in yet. Those kinds of things. So there does seem to be a rising level of capability in that space, in that productivity enhancement: how can I take this thing that I wrote or that I created and refine it further to get to where I need to be?

AP: Yeah, I was at a conference a few weeks ago, and in the expo hall, so many of the vendors were selling an AI service or some kind of AI add-on. And my thought was, how can the market possibly sustain all of these new products and new services? And I know there was an article in the New York Times last week talking about the business viability of AI. And it really doesn’t matter how cool or neat your AI tool is. If there’s no business viability behind it, you’re going to have a really hard time in the marketplace, because you’ve got so many established players, the likes of Google and Microsoft, who are really starting to dig into AI. Does that leave room for anyone else? So part of me wonders, is this going to help some vendors, hurt some vendors, or are some vendors just going to go away at some point because of this tussle over AI features?

SO: Well, I mean, we’ve seen this before, right? And it’s the gold rush. There’s a new opportunity. There’s a new possibility. There’s a new frontier of business. And typically, the people who make money in the gold rush are the ones selling the picks and shovels and other ancillary services to the “gold rushees.”

AP: Exactly.

SO: And so to me, I’m starting to think about this as something that’s going to fade into the background eventually, in the sense that it would not occur to me, at least, to write a document without a spell checker and/or, you know, some sort of built-in grammar checker. They’re super useful, but I don’t necessarily do exactly what they tell me at all times. I look at what they tell me and then I use my own judgment. So I think that’s where we’re going to land, where AI is going to be this useful tool that sort of fades into the background a little bit and has human review. And we’re starting to see people refer to human in the loop, just as we did with machine translation, which is another place where you can look for patterns. What is AI adoption gonna look like? Go look at machine translation. Sometimes it’s good enough, sometimes it needs a human in the loop. Sometimes, if you’re translating, let’s say, literary fiction, it’s maybe not that well suited, because it’s just not going to pick up on the kinds of things you need to pick up on as a literary translator.

AP: Yeah, yeah, I agree. Is this going to be a feature that you accept as part of whatever suite of tools you’re using? It’s just built in and there it is. So let’s talk now about something a little more complicated, and I think maybe a little more dangerous, with AI, and that’s intellectual property. IP has always been a problem, and there are all kinds of lawsuits flying about, with different content creators claiming that different AI engines are stealing their copyrighted content, that sort of thing. And I don’t think that haze, that cloud, has really been removed at this point. It’s still a problem that we need to address.

SO: Yeah, it’s a huge question mark. And, you know, it’s terrifying from the point of view of, if I use AI and the AI injects something into my content or into my code that belongs to somebody else, that’s copyrighted by somebody else, what’s that going to look like? What’s going to happen? And I have seen, you know, differing opinions on this from all sorts of people in our industry, in adjacent industries, from attorneys, non-attorneys. Everybody has an opinion on this. And the thing is that the responses just run the gamut from, do not use under any circumstances because we could get ourselves in trouble, to, eh, whatever, YOLO, it’ll be fine. I saw a comment just the other day along the lines of, well, I can’t believe that people would get sued for this, because everybody’s doing it, essentially. And I mean, they might be right. I’m not saying they’re wrong, but remember Napster? I mean, they got taken down.

AP: Yes, they did.

SO: We now have streaming and those kinds of things, but the original one, which was kind of the unlicensed, pirate version, really, did get taken out. And I haven’t the slightest idea whether the regime that we’re under right now is going to end up like, you know, a Napster or like a Spotify. Not a clue.

AP: Yeah, yeah. And this conversation on IP intellectual property kind of ties into something else I’m going to talk about too. And that’s the regulatory angle. Different governments are taking a look at this. And I think that’s absolutely worth discussing as well.

SO: Yeah, again, I think more questions than answers, but just in the last, say, two months, the European Union has passed an AI Act, which divides AI into risk categories based on what kinds of things it is doing. And so they’re banning certain kinds of AI, they are regulating certain kinds of AI, and then they’re allowing certain other kinds. But they’ve basically said, if it’s in the highest risk category, then you have to follow these kinds of rules, or maybe it’s not allowed at all. China has taken a different approach. The US has so far done nothing in terms of regulations.

AP: Nothing.

SO: We’ve talked about it, but we haven’t done anything. So it’s quite likely that at least in the short term, the regulatory schemes will be different in different locations, in different countries. And then just in the past week or so, I bumped into a pretty interesting article that was talking about GDPR, the European privacy regulation. And basically, under GDPR, you have certain kinds of rights. You have the right to be forgotten. You have the right to be taken out of a database. And somebody has anonymously sued OpenAI because when they go into OpenAI and they say, more or less, “What is my birthday?” it gives them the wrong answer. So this is apparently an anonymous but public figure, and we’ll put the article in the show notes. So this anonymous public figure is suing OpenAI on the grounds that it reports an incorrect birthdate for that person, and you have the right to have your data be correct under GDPR. Well, OpenAI’s response to this lawsuit is along the lines of, it is impossible for us to correct that, right? Because there’s not an underlying database that says John Smith, date of birth X. It’s just generative, which is sort of the crux of the whole issue here. But the legal argument appears to be that under GDPR you can’t say, oh, I’m sorry, it’s impossible for me to correct that fact, so you’re just gonna have to deal with it. And so we’re gonna have a really interesting collision between the content that generative AI creates, which may or may not be factual, because factual isn’t a thing for these systems, right? And GDPR, which is an established law. And I haven’t the slightest idea where that’s going.

AP: Yeah. And I think, too, in this vacuum, with this absence of regulations in some countries, you’re going to see companies make their own rules. And a lot of them have started telling employees what they can and cannot do with AI. Like I said, in the absence of any kind of rules to create a baseline, it makes sense, especially if you’re trying to be very careful about liability, about putting out incorrect information or using copyrighted information that you shouldn’t be. It would make sense for you to protect your bottom line by basically instituting your own guidelines for how you can and cannot use AI.

SO: And if you look at social media where the platform is basically not responsible for the content that people are putting on it, right? So if I’m on LinkedIn, let’s say, and I put something on LinkedIn and then somebody else reads it, if what I’ve said is problematic, they’re gonna sue me, not LinkedIn, right? LinkedIn is not responsible. And with AI right now and generative AI, who’s responsible? If I go and I generate something using generative AI, and then I publish it in some way, and then I guess I assert copyright on it, which is a whole other can of worms because I can’t right now under current law. But if I do that, then if what I post is wrong and legally problematic, so it’s, I don’t know, defamatory or something, then like who gets sued? Do you sue OpenAI for being incorrect? Do you sue me? Do you sue the platform I put it on? Like who is responsible when the AI gets the content wrong? Is it me because I didn’t validate it or correct it or clean it up? If we build out a chatbot that’s AI-driven, that’s generating information and you know, we’ve already seen this use case legally. You know, the company is going to be responsible for the information that the chatbot is putting out if the chatbot is sitting on the company website. But if it’s impossible to be sure that the chatbot’s gonna be right, what do you do with that?

AP: Yeah. And case in point: as of last night, before we recorded this, the big Met Gala happened. And apparently there were two quite realistic photos or images of Katy Perry in two different dresses on the steps at the Met Gala. How she pulled that off, I don’t know. And the problem is a lot of the social media platforms absolutely could not, they just didn’t, do anything. These photos just exploded.

SO: Right. Because they were fake, right?

AP: They were fake. 100% generative AI fake. And even her mother was fooled by it, apparently. And Katy Perry’s response was, “I was working, so no, I was not there.” But it just goes to show you that you’re right. Once these images got out there, they exploded on social media, and those platforms really are not equipped to handle flagging of that or even removing it at this point.

SO: “I didn’t know you were going to the Met Gala.”

AP: Exactly.

SO: Yeah, I’ve seen a decent amount of it, largely in AI news coverage, where, you know, a New York Times or Washington Post will put up an image and they’ll put a slug on it or a caption that says this is AI-generated. And usually they watermark it, so it’ll be actually in the image, not in a caption below, but in coverage of AI itself. And for example, talking about this deepfake, or, you know, the one where they Photoshopped the UK princess, that one. They carefully labeled the photo itself as altered, on the photo, so that people would know what they were dealing with when they were reading the news story. But of course, you know, that’s not going to happen on social media, where it’s just going to fly around the world faster than anything. And so, yeah, I think, I don’t know. I mean, I’m saying I don’t know a lot. That’s where we are. We don’t know.

AP: We don’t. Yeah.

SO: We don’t know what’s going to happen. Things are changing very quickly. The legal and regulatory and risk scenarios are completely unclear. I did want to touch on one other sort of more practical matter. We’ve seen a lot of complaints recently, and I think I’ve experienced this personally, and I think you have as well, that search, like Google search, Bing search, all the traditional search is actually getting worse. 

AP: Oh, 100%. Yeah.

SO: You search and you get bad, you know, just junky results, and you can’t find the thing you’re actually looking for. And the basic reason that that’s happening is that the internet, the worldwide web, has been flooded with AI-generated content at a scale that has completely overwhelmed the search algorithms, such that they are unable to sort through all this stuff and actually give you good information. I mean, we did at one point, a year ago, have a scenario where if you had a pretty good idea of what you were looking for and you typed in the right search phrase, you would get some pretty decent results and you could find what you were looking for. And now it’s just junk, which has to do with AI-generated content that is micro-targeting SEO phrases. And ultimately, I think this means, well, it’s going to be a war between the search engine algorithms and the AI-generated content. But I suspect that search and SEO as we know it today is done, because search won’t win this war. And then people are like, “Oh, I like it a lot better when I go to ChatGPT, and it gives me this nice conversational paragraph of response,” notwithstanding the fact that that paragraph of response probably isn’t super accurate.

AP: But it’s so chatty and friendly.

SO: Uh-huh. So I’m not terribly optimistic about that one either. And so what does this mean if you are a company that produces important and high-stakes content, like basically all of our clients? What does that mean to you? And I think it means that you’re going to be looking hard at a walled garden approach, right? To say, if you are on our site, and you are behind a login on our site, we have curated that information, we have vetted it, we have approved it, and you can rely on it. If you go out there, you know, in the big wide world, there’s no telling what you’re gonna find. And that implies that I have to know who I’m buying from so that I can go to the right place and get the right information. And I’ve already found myself doing this. Instead of going to a big general-purpose, e-commerce, buy-things site, such as the one I’m carefully not mentioning, I find myself saying, oh, I need a new stand mixer. I like KitchenAids. I’ll go to their site and buy it there. And so I’m buying direct from brands that I’m familiar with and that I know, because that feels safer than going to the great big site that has a little bit of everything, including a stunning array of what seems to be problematic counterfeit and/or knockoff kinds of things. But if I don’t know the brand, then what? Like, how do I find the right thing if I don’t know where to start already?

AP: The same is true of information. Right. Yeah, I don’t think that’s going to be fixed anytime soon, and it’s probably going to get worse after this podcast, in fact.

SO: So I’m concerned. Yeah, and you know, as a parting gift of, I guess, fear, we will put it in the show notes using a gift link. But there was an article that appeared in the Washington Post about a month ago, maybe two, having to do with apps for identifying wild mushrooms when you’re foraging. So this already seems kind of like a high-risk activity to me, just generally going out in the forest and looking for mushrooms that you’re going to forage and hope you get it right and you pick the really delicious one and not the one that’s gonna kill you. And Alan’s making faces at me because he hates mushrooms.

AP: I have the solution for this problem. Don’t eat them. But that’s not helpful. Yeah.

SO: Yes, you have a really simple solution. But for those of us who do like mushrooms and don’t want to die, there are a whole bunch of apps out there. And so there was some research done in apparently Australia on mushroom identification apps, which are apparently AI-driven, which seems like kind of not a good idea. However, what they found was that the best of the AI-driven apps was 44% accurate. And I wish for my mushroom identification app to be a whole lot more than 44% accurate, especially in Australia where everything kills you!

AP: So a 56% chance of poisoning yourself. That’s excellent. Great.

SO: Yeah, or at least of getting it wrong. But again, it’s Australia. And so if it’s wrong, it’s probably going to kill you because that’s Australia. So yeah, that’s not good. And that feels like a not acceptable outcome here. So I don’t know where this is going, but I am pretty concerned.

AP: Yeah. So as we wrap up, there are some good things to talk about. There are a few. Sarah was whispering, “Are there? Are there?” Or made a face. There are. I mean, on the content-creation side, I think there have been some tools that have added some useful features, much like the spell-checker analogy that you talked about. But there are still so many unanswered questions in regard to intellectual property and legal risk. All of those things are still way up in the air. A lot of countries are trying to adjust by taking a look at regulations, but you know, those aren’t in place yet. So we’re at a crossroads, I think, and we’ve still got to pay a lot of attention to what’s going on with AI right now.

SO: Yeah, you know, there’s some really, there’s some really nifty tools out there. It’s also worth pointing out that there have been tools that use machine learning and AI that are already out there. They just weren’t, it wasn’t AI front and center. Now everything, as you said, put an AI on it because you can get sales that way, and you can get attention. But there are a lot of companies that are doing some really interesting and really difficult work with this. And I want to, you know, I’m not against any of this stuff. I just want to make sure that we use these tools in a way that, you know, maximizes the good outcomes and minimizes the, “Oops, I ate the wrong mushroom.”

AP: Yeah. Fatal mistakes. Not a fan. Not a fan at all. Well, I think we’ll wrap it up on that cheery note about eating poisonous mushrooms on the Content Strategy Experts podcast. We go places, folks. We will talk about almost anything on this, not just content. So thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Pulse check on AI: May, 2024 appeared first on Scriptorium.

Self-service content in the age of AI with Patrick Bosek https://www.scriptorium.com/2024/04/self-service-content-in-the-age-of-ai-with-patrick-bosek/ Mon, 29 Apr 2024 11:00:42 +0000 In episode 165 of The Content Strategy Experts Podcast, Sarah O’Keefe and guest Patrick Bosek of Heretto discuss how the role of customer self service is evolving in the age of AI.

I think that this comes back to the same thing that it came back to at every technological shift, which is more about being ready with your content than it is about having your content in the perfect format, system, set of technologies, or whatever it may be. The first thing that I think either of us will say, and a lot of people in the industry will tell you, is that you need to structure your content.

— Patrick Bosek

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk with Patrick Bosek about the changing role of content in self service and whatever the opposite of self service is, maybe just service. Hi, everyone. I’m Sarah O’Keefe, and I’ve got Patrick Bosek, the CEO of Heretto, with me today. Hey, Patrick!

Patrick Bosek: Hey Sarah, it’s good to be here. I guess to be back, technically.

SO: Yeah, you’ve been here one or two times before, so I’m going to cut to the chase here. And our topic today is self-service content and how things are changing in self-service content. So talk a little bit about that. What’s going on?

PB: Well, I think to talk about self-service content, we have to talk about what’s changing in self service more generally, which you kind of alluded to in the idea of, you know, what is the opposite of self service, right? So the landscape, as I see it, is very interesting today, because historically we had what was very obviously self service, and then what we had was very obviously not self service. So I guess just people, or service, or something. People service, maybe. And for the most part, self service was content, right? So if you went someplace and you read something and you figured it out on your own, that was self service. Over the last decade or so, self service started to involve a little more action. You can go to a McDonald’s and self-service order a coffee today. We can talk in a minute about whether or not we think that’s a good idea or a bad idea. But now, as we’re getting into the age of true intelligent virtual assistants, if you want to call them that, AI, those types of things, we’re in a place where the things that were very traditionally handled by humans, helping you figure out what you really mean, helping you dig through something when you’re not exactly sure where to find it, or if it is even possible, those types of questions, and actually performing actions, some of those are going to continue to bleed over into systems. So now self service isn’t just content anymore that you go and look up and then you yourself go and figure it out and do something. There’s going to be this mixing between these two things, where the service that’s provided by automated systems is going to perform some of the things that humans were performing. They’re going to need a bunch of content in order to be able to do this properly, and probably also as instructions. You’ve got to teach these things what to do somehow, and as we all see with the way that we interact with them, you use words; you don’t use programming languages as much.
So content plays a role in its traditional form. It continues to play a role in training people. It also plays a role in establishing what this new generation of systems, the systems that are going to help us perform actions and learn things and answer questions, is going to do.

SO: So it feels like we’re adding another dimension to this, because when mobile apps first came out, the big development there was that they were contextually aware. So you can get your app to tell you what the weather is at your location, because it knows where you are, or it can know where you are. And it feels like this is a similar kind of thing, that some of this is more a matter of not just, “Hey, here’s a page with some instructions,” but rather, you know, let the system do some work around what your context is and what some of your knowledge is, and adapt accordingly.

PB: I think there’s absolutely an aspect of that. I would actually put that in the category of maybe even traditional personalization. You know, you feed metadata in, those are things about yourself, and then you perform some type of a matching or a computation, and then you feed content back. So that’s effectively personalization at its core. Like, here’s some things about me. Okay, then those match to some things about the content. Give me just the content that matters to me. I think where this starts to become new and really interesting is where you start to have systems, probably AI-based systems, that are not just filtering or personalizing content; they’re actually manipulating the content, or they’re manipulating your journey with the content. And one of the places where we’re seeing more of that, and that I think is an interesting place for this, is in learning. So you start to think about how learning systems and self service have worked for a long time. As much as there’s been micro-learning content, and then more organic things like that, by and large learning has been linear when it comes to self service. Here’s a guide, here’s a course, whatever it may be. And it didn’t matter if you didn’t need to know a good chunk of it, or if you already knew a good chunk of it, or whatever it may be. You went through that, right? That’s how that works. I mean, it’s how college courses work. It’s how learning works in general. Unless you’re sitting across from a tutor, learning is linear, by and large, when it’s being taught. Well, with AI, all of a sudden we can potentially deploy systems at scale where you can always be sitting across from a tutor. The entire paradigm around how it is that we, you know, air quotes here, “self service” the things that we need to learn can fundamentally change.
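
The traditional personalization Patrick describes here, feed in metadata about the user, match it against metadata on the content, and return only the matching content, can be sketched in a few lines of Python. This is a minimal illustration; all the names, tags, and content items are invented for the example.

```python
# Traditional personalization as metadata matching: traits about the
# user are matched against metadata on the content, and only the
# matching content is fed back.

def personalize(user_metadata, content_items):
    """Return the items whose tags all match the user's metadata."""
    return [
        item for item in content_items
        if all(user_metadata.get(key) == value
               for key, value in item["tags"].items())
    ]

content = [
    {"title": "Installing on Windows", "tags": {"os": "windows"}},
    {"title": "Installing on macOS", "tags": {"os": "macos"}},
    {"title": "Admin setup guide", "tags": {"os": "windows", "role": "admin"}},
]

user = {"os": "windows", "role": "admin"}
for item in personalize(user, content):
    print(item["title"])
```

The AI-based systems discussed next go a step further: instead of only filtering which items come back, they manipulate the content or the user's path through it.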

SO: So what does it look like for us sitting in the content universe when customer experience is moving in this direction towards this, I guess, more sophisticated self service? Not just here’s what we have, deal with it, but rather here is information or learning or a chunk of content or whatever that is adapting to that person’s requirements. I think it is, right?

PB: In certain circumstances, it certainly has the ability to adapt to people’s requirements. I think that’s the thing that we will absolutely be seeing. But more generally, what does content need to look like to give you the range of things you might want to do as an organization in this new paradigm? And I think that this comes back to the same thing that it came back to at every technological shift, which is more about being ready with your content than it is about having your content in the perfect format or the perfect system or the perfect set of technologies or whatever it may be. So the first thing that I think either of us will say, and a lot of people in the industry will tell you, is that you need to structure your content. And I do think that the story for this on the learning side, on the traditional self-service side, on the AI-agent side, and I think even on the people-service side of things, still does start there. But I don’t think it’s because self service is intrinsically something which is powered by structured content. What I think is that structured content really just gives you, the organization, you, the content creator, a lot more control over what goes into these systems, no matter the range of intelligence that they have. And that means that you have input control on the experiences. And as we all know, LLMs, even when they’re backed by something like RAG, retrieval-augmented generation, or other systems that are meant to keep these things fenced in, are black boxes. And the bigger the box, the blacker the box. So if you look out over the industry, there’s a lot of very sophisticated stuff, but some of the stuff that works the best is input control. And that’s where I think that structured content is really gonna be a key element of this, no matter how far in the future you look.

SO: Yeah, and I mean, my explanation of this to people, which I’m sure makes the actual AI experts cry, is that AI likes patterns. And so if you feed it content that consistently follows the same pattern, you greatly improve your odds of getting good output from what you’re putting into the system. When you have stuff that’s not well organized or structured or anything else, you know, garbage in, garbage out: you’re gonna get a mess.

PB: So I think that is true. There are caveats, but the thing that remains true at the center of that is that if you don’t have very precise control over what goes in, you lose an enormous amount of control over what comes out. There are even such things as overtraining, right? Where you can actually get AI that will produce less high-quality results with certain quantities or certain types of training. And what you end up with in those circumstances is, like, okay, well, if your stuff is just a bunch of stuff and you stuff it all into an AI system.

SO: That was excellent.

PB: It’s good, right? I gotta have some fun. So you don’t have any ability to say, okay, well, these pieces really, these are the ones that are impacting our outputs. Let’s pull this out, or even just iterate in an intelligent way. Which part of the corpus, which part of the things that we put into this system, are the ones that are having the negative impact? Retraining becomes a much more complicated process. And at the same time, we’re looking out over deploying across multiple experiences, right? So let’s take learning and reference, properly speaking documentation portals, whatever they may be. Some people call them knowledge bases in certain circumstances. Those are the two obvious things, because in the past they’ve been highly bifurcated, even though they use a lot of similar information underneath the hood. Well, if you’re trying to build AI-backed, much more personalized learning systems, you can’t have the content in those systems being different than the reference content. Because when you go and look at the stretch of things that go into that stuff, well, if you have the AI system telling your user something which is wholly inaccurate, and you can’t pull it back to the rest of the stuff that’s published on the internet, you can get highly divergent results, and you could end up in a circumstance where you have no ability to actually deploy these things properly. So you can’t have one set which is very cottage-based, like, you know, we go in and we craft these things, and one set which is highly structured, and then power all of the intelligent learning systems off the structured stuff because it’s going to be easier. You have to find a way to pull these things together, and then use the mechanisms underneath the content to put the right inputs into the right places.

SO: And we’ve been talking for years and years and years about problems with silos and how they’re an outgrowth of the organization itself, right? You’ve got a learning organization and a documentation organization and a tech support organization. They’re all producing content into their respective silos. And the question becomes, if organizationally that’s what the company looks like, then it is almost impossible to rip those silos apart, or put them together, or destroy them to collaborate across them, because the org chart doesn’t encourage it, or potentially even makes it impossible. And so then you’ve got different terminology being used by the same company but in different departments, which is really common. And then what? So we’re back to, you know, the fundamental truth that when you have a website as a company, even if you segment that website into, like, learning.xyz.com and docs.xyz.com and KB or support.xyz.com, your customers don’t care, right? I mean, they’re not interested in the fact that you have three separate organizations that all hate each other. That’s just not on their list of things they care about.

PB: So I mean, this goes back to the classic, like, don’t ship your org chart, which is, yeah, obviously, right. So obviously we’ve been doing this in content forever, basically. I do think that we’re gonna be forced to change because, you know, again, you go back to the idea of, like, if you contradict yourself in your learning content and your docs content, and it’s being read as written, or as built and presented, to a human being, human beings are incredibly flexible creatures. We can go in and be like, oh, well, okay, fine. So they didn’t update that piece, those dummies, they should have done this, but I understand what’s going on. But when you put an abstraction layer over that, now you have a system that just basically does what it’s told, by and large, or understands what it’s educated in and what it’s told. It’s not going to have that same intuition. That’s a very human thing, even at this point in time. I don’t really see the current generation of LLMs getting to a point of having that style of intuition. I mean, the thing that just happened with the ASCII art is a great example, right? And it’s a little bit divergent here, but bear with me. So people figured out how to hack these AI systems by asking them questions with ASCII art, which in one sense shows their brilliance, because they’re able to understand it. But in the other sense, it shows their lack of intuition, because they were like, oh, well, this doesn’t apply to my rules. Who cares? Right? Whereas a human being would have been like, oh, I’m still not allowed to talk about bombs. Right? It doesn’t matter whether “don’t yell bomb in a theater” is written in ASCII art or in a sans serif font; people understand that it’s still talking about the same general concept. So this is the same thing that you run into, where you can’t have these discrepancies and stretch a single system over the top of them.
And you can’t have a really strong customer experience that properly educates people, properly answers questions, and all those types of things, unless they’re joined together.

SO: Yeah, apparently in addition to ASCII art, if you use Morse code, that will also work around all the guardrails, which sounds fun. So, okay, so in our couple of minutes that are left here, how does Heretto and CCMSs in general, but Heretto specifically, how do they play into this?

PB: Yeah, sure. So that’s a great question, and one I appreciate you asking, for obvious reasons. I’m going to answer the CCMS part first, then I’m going to answer the Heretto part. So CCMSs as platforms, and I think this is probably true for pretty much every major CCMS in the industry or in the space, they’re going to give you the ability to manage more structured content at a higher velocity and a higher level of governance. Now, they’re all going to be able to do this to different efficacies, right? So some are going to do it better or worse for your particular circumstance. But broadly speaking, that’s why you buy a CCMS. You have a bunch of content, you want your output per author, or per information developer, to be higher than it is, and you wanna make sure that you have the proper amount of governance so that what you deploy is what should be deployed, right? It has to be good enough, it has to meet the criteria, especially today. So those are the things that you build into the process in the CCMS. And then, obviously, they help you track things like localizations as well, but I would broadly put that in the same bucket. So as we’re looking at how this relates to the future of the customer experience, be it directly with the content or be it derived from the content through some intermediary system like AI, it’s the governance piece, and it’s also the quantity piece. You have to have enough to be able to answer all the cases, to touch all the learning points, to educate and guide these systems in all the proper ways. And now, because you don’t have that human intuition as your last failsafe, the bar for governance has gone up. You have to have much better governance on your content to be able to control inputs. So I think that this is a new age of CCMS.
And it’s funny, because we have seen an acceleration in interest from people, and an acceleration in interest that’s educated, where people are coming in and saying, we have to get this in order, because we realize that if we don’t have our hands around this, we’re gonna have a huge mess at the end of the toolchain. So I do think that people are more aware today. You know, it’s probably still relatively niche in the grand scheme of things, but there’s a growing awareness that having the right systems in your content operations ecosystem to produce the right outcomes down the chain is gonna be critical. And a CCMS, for this style of content, is 90% of the time gonna be the best place to start. Not to say there’s not other ways to get there. On the Heretto question, Heretto does all that stuff like the other CCMSs. Obviously there are some aspects of collaboration and things like that that we think we do better. But I think the key thing, as it relates to the future of these technologies, that Heretto provides that you don’t really get in other CCMS technologies is our ability to efficiently and agilely deploy content into specific pods. So we have static publishing, we have the ability to generate HTML, PDF, all that kind of stuff, like all the other CCMSs. But we also have the ability to dynamically deploy content into an API layer where the content is in its own little pod. So you can deploy as many little content APIs as you want. Most organizations have one big API that they deploy that powers an entire doc site. It can be tens of thousands of topics or more. But you also have the ability to say, all right, I want to just deploy this here, and this API is only going to have this content in it. And that comes back to the critical aspect of the bigger the box, the blacker the box, as it relates to AI systems.
So you don’t go hook your AI system up to the totality of, you know, your large API that serves your general web experience. You hook your AI up to this specific pod that only has these specific things in it. And therefore, you know exactly what’s going into that AI system, be it for something that’s more RAG-based, which is really just search and summarize, I think that’s a much better name for it, or BSL, which is more training-based. So I think that’s the critical piece that Heretto offers right now that is not present in other systems, that relates to what we’re talking about today, anyways.
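
The pod approach Patrick describes is input control applied at the retrieval step: an assistant wired to one pod can only ever surface content that was deliberately deployed into that pod. A minimal Python sketch of "search and summarize" over a scoped pod follows, using toy keyword scoring in place of the embeddings and LLM a real RAG pipeline would use; all names and content strings are invented for the example.

```python
# "Search and summarize" restricted to one curated content pod:
# retrieval can only surface chunks that were deliberately deployed
# into that pod, which keeps the inputs to the AI system controlled.
import re

def tokens(text):
    """Lowercase a string and split it into word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query, pod, top_n=1):
    """Rank the pod's chunks by naive keyword overlap with the query."""
    q = tokens(query)
    ranked = sorted(pod, key=lambda chunk: len(q & tokens(chunk)), reverse=True)
    return ranked[:top_n]

# A small pod deployed for the learning experience only. An assistant
# hooked to this pod cannot surface anything from other pods.
learning_pod = [
    "To reset your password, open Settings and choose Reset Password.",
    "Course progress is saved automatically at the end of each module.",
]

print(retrieve("How do I reset my password?", learning_pod))
```

In a production system the summarization step would hand the retrieved chunks to an LLM, but the scoping idea is the same: the smaller and more curated the pod, the more you know about what the model is working from.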

SO: Alright, I mean, that seems like a good place to leave it, because basically you’re saying, hey, this stuff is coming. All these things are changing, and here’s a helpful roadmap for how to get there. Any closing words before I wrap this up?

PB: No, other than this is a really exciting time to be part of content. You know, we’ve both been here for a little while, and we’ve seen a lot of changes in this industry. But this is certainly unique. There is no doubt that the acceleration in understanding and the change in the landscape over the last, you know, year, 18 months, I would say, has been unprecedented. You know, I think you see that all the way from the types of experiences that we want to start deploying, that we believe are possible but we’re not totally sure, based on new technologies. And then also the change in approach to things like learning content, seeing organizations suddenly starting to say, okay, so PowerPoint is not our primary method of training people. Our primary method of training people is going to be dynamic digital experiences. And we need to be prepared for that, and then whatever comes after that. You know, I think that this is a shift that I’ve been waiting for, you’ve been waiting for, for a long time. I mean, for gosh sake, we implemented support for DITA Learning and Training, what is it, 1.1 or something like that, like 10 years ago. And we’ve had a couple of customers that have used it across that time, but it’s just been recently that we’ve had more and more people coming in and starting to use it. And there seems to be a bit of a renaissance in the understanding around these things. So, I don’t know, this just feels very new. It feels very fresh again, which I think is part of the structured content cycle. And this is the fun part of the cycle. So I’m enjoying it.

SO: Yeah, I would agree with that. So I’ll leave it there. Patrick, thanks for being here. Always good to see you. And with that, thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes.

The post Self-service content in the age of AI with Patrick Bosek appeared first on Scriptorium.

How reuse eliminates redundant learning content with Chris Hill (podcast) https://www.scriptorium.com/2024/04/how-reuse-eliminates-redundant-learning-content/ Mon, 08 Apr 2024 11:27:38 +0000 In episode 164 of The Content Strategy Experts Podcast, Alan Pringle and special guest Chris Hill of DCL talk about where you can find redundancy in your learning content, what causes it, and how a single source reuse strategy can eliminate duplication.

You really start to run into trouble when you need to make version two, and you discover a problem with version one. If I’m making some marketing materials, maybe I need to use some information from the engineering team or from the manuals for whatever product I’m marketing. I might just copy that information over and put it into my marketing materials. Then, when we go to produce our training for that particular product, we might say, “Okay, I need that stuff. I’m gonna copy that from wherever I can find it,” which might be from marketing or engineering depending on where I look and who I know better or which repository is easier for me to get to. The problem here is that if anybody has made any edits along the way, they have to ensure that those edits are propagated through all these departments. And that doesn’t always happen. 

— Chris Hill

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk with guest Chris Hill of DCL about learning content and where you can find redundant duplicated content, what causes it, and how a reuse strategy can eliminate that duplication. Hey everyone, I am Alan Pringle and we have a guest here today, Chris Hill of DCL. Hey Chris, how are you doing?

Chris Hill: Doing well, thank you, Alan. It’s nice talking to you.

AP: Great, yes as always. Chris, tell folks out there a little bit about yourself, DCL, and your role there if you would.

CH: Sure. DCL stands for Data Conversion Laboratory. And so we got our start doing data conversion, which is moving content between formats. And over the last, let’s see, that started in the 80s, if you can imagine a tech company starting in the 80s. 

AP: Yes, I can. I am of an age, yes.

CH: So since then, we’ve expanded out into lots of areas, but basically any kind of content transformation, workflows, content enrichment, all sorts of activities around content. So that’s our key theme. I joined DCL about four years ago, and I’ve actually been in the content management space for a good 20-plus years now. I have a lot of experience with both migrating from, you know, tools like Word and such, and then moving into a content management system. I actually product-managed a content management system, and then got into conversion. And as part of my job here, I oversee a product called Harmonizer, which is our tool for doing content analysis, and specifically reuse analysis, to find places where content is redundant or duplicated and help users figure out what they need to do to improve that situation.

AP: Well, in this conversation today, I think we’re going to tap into all the wisdom that you bring to the table with your background and content and your experience at DCL identifying reuse. And let’s start with just the concept of redundant content. And there are lots of ways to describe this. And I’ve heard it referred to several different ways. Redundant content, duplicated content, overlapping content. If you would kind of give people a bird’s eye view of what we’re talking about here.

CH: So we’re really talking about any place where you’ve got similar or exactly the same content reproduced. And you usually know you’re doing this, because anytime you hit that Control-C and Control-V, or choose the copy and paste menu, if you’re a menu person, anytime you’re doing that, you’re creating redundant content. And, you know, it’s usually the easiest way to get it done if you’re working in a tool like Microsoft Word or a word processor or a desktop publishing environment or something like that. Generally, you copy stuff from one document to another. And that can be fine for version one of those documents. Where you start to really run into trouble is when you need to make version two, and you discover a problem with version one. So if I’m making some marketing materials, maybe I need to use some information from the actual engineering team or from the manuals for whatever product I’m marketing. I might just copy that engineering data or whatever information over and put it into my marketing materials. And then when we go to produce our training for that particular product, we might say, okay, I need that stuff. I’m gonna copy that from wherever I can find it, which might be from marketing or it might be from engineering, depending on where I look and who I know better or which repository is easier for me to get to. And the problem with that is that if anybody’s made any edits along the way, they have to ensure that those edits are propagated through all these departments. And that doesn’t always happen.

AP: It usually does not happen. You’re being kind.

CH: Yes, I am. So when the engineers find out, oops, we made an error here in the technical manual, we better fix that or somebody is gonna do the wrong procedure or come out with a bad result, they might fix their manual. But are they aware that there’s all this marketing material with that stuff in it? Are they aware that the education team actually copied the stuff from marketing? They may not have even talked to the engineers to tell them they were using that content. And so what happens is you pretty soon have a sort of information entropy, where things start to fall apart, and the information gets out of sync, and you may have inaccurate information through various departments that no one can really trace. So that’s kind of where I see this grow, and it’s pretty much a natural feature of using computers and the more traditional desktop application approach to creating content.
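
The failure mode Chris describes, copies silently drifting apart, is exactly what single-sourcing removes: every deliverable references one shared snippet, so an edit propagates everywhere the snippet is used. Here is a toy Python illustration; a real implementation would use a CCMS reuse mechanism such as DITA conref, and all names and strings here are invented for the example.

```python
# Copy-paste: each department keeps its own copy of a shared passage,
# so a fix applied in one place silently misses the others.
engineering_copy = "Torque the bolts to 40 Nm."
marketing_copy = "Torque the bolts to 40 Nm."   # pasted, now independent
engineering_copy = "Torque the bolts to 45 Nm."  # fix applied in one silo
assert marketing_copy != engineering_copy        # the copies have diverged

# Single-sourcing: every deliverable references one shared snippet,
# so one edit propagates everywhere the snippet is used.
snippets = {"torque-spec": "Torque the bolts to 40 Nm."}

def render(template, snippet_store):
    """Fill {snippet-id} placeholders from the shared snippet store."""
    for key, text in snippet_store.items():
        template = template.replace("{" + key + "}", text)
    return template

manual = "Maintenance: {torque-spec}"
training = "Remember: {torque-spec}"

snippets["torque-spec"] = "Torque the bolts to 45 Nm."  # one fix...
print(render(manual, snippets))    # ...appears in the manual
print(render(training, snippets))  # ...and in the training material
```

The second half is the single source of truth that a reuse strategy aims for: the manual and the training content can never disagree about the torque spec, because neither one owns a private copy of it.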

AP: Absolutely, and you’ve really kind of covered the big picture here really well. And what I want to do is move just a little bit away from that and talk now about, especially for people in the learning and training space, where they might see some of that content overlap. And you’ve kind of touched on one. Anytime that you have a new version of the product or service that you are creating content for, it’s very common to just copy and paste the previous version, create a new file set, and then make your edits and updates in there. There’s one scenario right there where you have used copy and paste, and there’s a very good chance there’s a lot of overlapping information between those two versions that probably should be one set of files instead of duplicated content. So that’s one example I can think of immediately off the top of my head. Where are some other places where learning and training people might see this duplication of content?

CH: Yeah, so, well, the copy and paste happens for a lot of different reasons. Sometimes you’ll have product diagrams that somebody has, or engineering schematics or something like that, that need to be shared across multiple divisions. And you’ll see that stuff sort of get copied around, if you will. I think you make a very good point about new versions of the product, because even in the case where you wrote perfect content the first time, if you ever could do that, and wrote it perfectly for the current release of the product, when the product is upgraded or changed in some way or a new revision is released, that doesn’t mean all the old product disappears. People are still accessing that older content.

And if you start to find issues in the older content that get addressed, maybe through your user support, is that getting pushed up to the newer stuff? Because if the newer stuff did what you described, which is I copied it, I may not even know that that’s now inaccurate in the new release of the manual. Or vice versa: it could be someone using the new product who identifies a problem with our documentation, and we go back and we neglect to fix the old ones. Well, then all the users of the older product are going to run into that issue sooner or later.

AP: Yeah, and then you’ve got this whole layer too. What if you’re delivering to all of these different delivery targets, different delivery formats? You’re using Microsoft Word over here to create perhaps study guides or scripts or something like that. Then you’re also using PowerPoint over here to create slides. You’re copying and pasting perhaps into some kind of software that will help you with simulations or more audio and video kinds of things. So in addition to what you and I’ve just talked about with the different versions, how those get out of sync, if you’re copying and pasting content into all of these different tools that create these different delivery types, then this problem is multiplying rapidly, because you’re gonna have to go in and touch all of that source for all of those different delivery targets: your Word files, your PowerPoint files, your Articulate content, whatever else. So it kind of can explode pretty quickly in your face.

CH: It sure does. And we haven’t even touched on if you’re in different countries translating to different languages, what do you do about all the translated content? And that can quickly overwhelm you as well.

AP: Exactly. Yeah, so basically this problem becomes exponential, from the versions of your product or service, across the different delivery targets that you’re dealing with. And then if you have to localize that content, all of the, shall we say, bad behaviors, or let’s call them inefficient behaviors, that’s less judgmental. 

CH: There you are.

AP: Yeah, less judgy. These inefficient behaviors are then duplicated in every single language that you crank out. So yeah, it’s very ripe for inefficiency. It’s very ripe for errors, because it’s unfair to expect a human being to go through and keep track of all of this. So things go sideways. And you even touched on something else a little earlier.

CH: For sure, yes.

AP: And then in some cases, you are pulling content from other departments, other content-creating groups. And that’s another layer of this exponential explosion: if you’ve changed something and someone quote, borrowed that from your group, are you sure they’re gonna know that you changed it, or that you fixed it after they copied and pasted it into their version? And then think about the poor end users, the content consumers who are getting this information. They’re probably not getting a consistent picture at all about what you’re talking about because of all this copy and paste all over the place. It’s a mess. Yeah. So let’s go on and try and put a more positive spin on this mess and start talking about the process for identifying this duplicated content. 

CH: That’s good.

AP: So what can people do to start kind of taking the pulse of this problem?

CH: I think a lot of it depends on your resources and your organization’s commitment, but there are always things you can do, whether they’re smaller efforts or larger efforts. So, you know, at the very high end, the BMW view, or let’s say the Cadillac view if you’re from the 80s like me, you would probably have a huge budget to be able to implement a whole new set of tools and workflows that allowed you to use all sorts of technologies to do what’s called single-source publishing. And that’s where you author in a format-neutral source. And then you take those pieces, and really you’re creating sort of Legos of content, you could imagine.

AP: I call them puzzle pieces, so yeah. Yep.

CH: There you go, puzzle pieces, Legos. And they fit together in lots of different ways. You can put them together for training. You can put the little pieces together for your user manuals. You can put some of the pieces together for your marketing materials. But the key is that you’re using the same piece in all of those places. And what these sort of advanced tools allow you to do is keep track of all those pieces, use those pieces in all those multiple places, and then still create your deliverables out of those pieces. So instead of authoring directly in PowerPoint when you’re writing a course, or writing in Word when you’re writing a manual, or maybe working in HTML when you’re creating your website, instead of creating content in those sort of single-use formats, you create your content in a neutral format and then have it output to those formats. 

AP: Exactly.

CH: And so you still can deliver those end formats that you need to actually put out in the world, but you’re doing it from a single source of truth. And that’s that single content repository. Now that’s the ideal, that’s the perfect one.
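The “author once, output everywhere” idea Chris describes can be sketched in a few lines. To be clear, this is a toy illustration, not a real publishing toolchain; all the names and formats below are invented, and real single-source systems use structured formats such as DITA plus automated transforms.

```python
# Toy illustration of single-source publishing: one format-neutral
# content block rendered to multiple delivery formats.

warning = {
    "type": "warning",
    "text": "Disconnect power before removing the cover.",
}

def to_html(block):
    """Render a content block for the web deliverable."""
    return f'<div class="{block["type"]}">{block["text"]}</div>'

def to_markdown(block):
    """Render the same block for a Markdown-based training guide."""
    return f'> **{block["type"].title()}:** {block["text"]}'

# Both deliverables are generated from the one source block, so an edit
# to warning["text"] flows to every output with no copy and paste.
print(to_html(warning))
print(to_markdown(warning))
```

The point of the sketch is the shape of the workflow: deliverable formats are generated, never hand-edited, so the source block stays the single source of truth.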

AP: Yeah, and you’re not going to get to what you just described overnight. You’re not going to snap your fingers and have that happen. So yeah, I think you’re headed kind of where my brain was. And that is, you can start small with this, and you don’t even have to think about tools. You can start very small and start thinking about, you know, where is this duplicated information? Just trying to ferret it out. And one way you can do that: you as a content creator have a very good idea of what is in your set of training content. You’re the people who are creating it. You know where the bodies are buried, where things are going wrong, where you’ve noticed that there’s this duplication. You could also work with a consultant like me who has been doing this kind of stuff for years and can help you by asking the right questions and maybe trigger some things in your brain: oh yeah, I didn’t think about that. But there is also technology out there, like your Harmonizer tool, that can help people start to identify that reuse. And I think it’s worth noting it doesn’t have to be things that are exactly the same. Your tool can help find things that are fuzzy matches, that are sort of the same, because that’s equally valuable as well. And I want you to talk a little bit about how that process works because I think that’s important.

CH: Sure. So the tool we developed, which was actually kind of a companion to our conversion work, came out of seeing the same problems. People would come to us and bring those content reuse problems, and they would ask us if we could help them in some way, because when they’re converting content, even if they’re going to move to some neutral format, or they’re just moving from, say, Word to FrameMaker or FrameMaker to something else, that was a lot of the work we were doing, but they would bring us lots of duplicated content. And sometimes that conversion stage is a good time to nip that a little bit or make some headway against those duplications. So we developed Harmonizer as a tool that was very format neutral. It basically extracts all the text from whatever content you have and puts it into blocks, and then it compares every single text block to every other text block, and it’ll tell you which ones are the same and which ones are close, and that close can be pretty far apart, actually. So I could do things like, if I had a sentence that told you, when you go to the store, pick up some milk, and then somewhere else I tell you to pick up some milk when you are at the store, those aren’t exactly the same. And in fact, if you do a word-by-word analysis, they’re completely different. But if you do a Harmonizer-style analysis, we use some linguistic algorithms to be able to tell that, linguistically, those are essentially the same thing, or at least very close in what they’re describing, even though the words and the letters are all in a different order.
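The milk example can be sketched with a simple word-set comparison. This is a rough illustration of fuzzy matching in general, not how Harmonizer actually works; its linguistic algorithms are more sophisticated than comparing word sets.

```python
# Toy fuzzy matcher: sentences that differ word for word can still be
# flagged as near-duplicates if their word sets mostly overlap.

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two text blocks."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)

s1 = "When you go to the store pick up some milk"
s2 = "Pick up some milk when you are at the store"

# A word-by-word diff would call these completely different, but their
# word sets overlap heavily, so they score as near-duplicates.
print(round(similarity(s1, s2), 2))  # → 0.67
```

A threshold (say, anything above 0.6) would group these two sentences into the same near-match report entry, which is exactly the kind of candidate a writer then reviews and consolidates.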

AP: It’s the intent of that sentence, basically. Yeah. Yeah.

CH: Very much, yeah. So we detect that as well and put that into groups. So then you can look and you can say, okay, I’ve got this block of text, it says this. Harmonizer will highlight all the places where they’re different, sort of like a diff tool, so that you can see, oh, I used the word or here and I used the word and in this other place. Or maybe I used one version of our product name in some of the content, and I’m using a different version of the product name in another part of the content. Or maybe I’m comparing two products, and their manuals are 75% the same content; just every now and then the product name is mentioned, and that has to be different. All of those things can really illuminate why you have duplication. It can also help you find those places where maybe you’ve made corrections in one place and haven’t gotten to those other places, because you might see, oh, this paragraph is the same except we added a warning at the bottom: do not do something. We better add that warning in all the other formats that we’ve created. So that’s kind of what Harmonizer does. It’s not a magic bullet. If you have a lot of content, it’ll give you a large report; if you’ve got a modest amount, you’ll get a modestly sized report. It’ll scale to whatever amount of content you want to feed it. What we do is use it very strategically. For instance, we can use it to identify just why you have maybe close but not matching content. So maybe you’re using inconsistent wording in different places. Or if you already have some standard content, we can identify whether there are places where it varies in ways you didn’t expect, so you can check your standard content libraries if you need to. There are all kinds of ways it can be used, but at its core, it’s just giving you those matches and really shining a light on where your content is as far as redundancy.
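The diff-tool comparison Chris makes can be illustrated with Python’s standard difflib; again, this is a generic sketch of the idea, not Harmonizer’s actual implementation, and the sentences are invented examples.

```python
# Word-level diff of two near-duplicate blocks: the added tokens reveal
# what one copy has that the other lacks (e.g. a warning patched into
# only one deliverable).
import difflib

old = "Remove the cover and disconnect the cable."
new = "Remove the cover and disconnect the cable. Warning: do not touch the fan."

diff = list(difflib.ndiff(old.split(), new.split()))
added = [token[2:] for token in diff if token.startswith("+ ")]

print(added)
```

Run over a whole report of near-match groups, this kind of view lets a writer see at a glance which copy is missing the correction and propagate it.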

AP: And one point I want to make here is it really doesn’t matter what tools you’re using to create content. This work is not necessarily dependent on those tools. Like I said, you yourself can kind of do a self-service thing where you start to think more deliberately about where content is duplicated. You can work with a consultant who can help you figure this out. You can use a tool like Harmonizer to help dive deeper and really find this content. So there are all these layers, and the first layer is you can start thinking about it yourself. So there are a lot of options there. Once you have started to identify this duplicated content through whatever those methods are that we just talked about, it’s time to get into a reuse strategy. And you’ve already touched on this really well. The core of that reuse strategy is you have a single source of truth: for every piece of content, every piece of information, there is one version, one format-neutral version, that you can then pull into all your different delivery targets and all your different types of content. Once you know where that duplication is, you can start coming up with this more formal reuse strategy. And I think you also pointed to the copy and paste: that is like the warning light going off. You’ve got duplicated content; there’s copying and pasting going on. That’s what you want to try to eliminate with the single source of truth. Give people a little idea of the benefit of the single source of truth, and I’m talking about both for content creators and for the content consumers, because the benefit falls on both sides.

CH: For sure it does. Content creators know this. We’ve already touched on when there’s a problem found or a change needed in the documentation. Maybe the product’s changing or was updated. If it’s software, who knows? Maybe we’ve added a new menu item. So we need to add that to the documentation. Well, if we’ve got a single source for everything and everyone draws from that source, we update that source and it will flow out into all the other channels without any real effort. Now, you can simulate this with your copy-paste activities, but you have to really formalize how you do copy-paste. So you’ve got to copy only from, say, the source of truth, not from each other, as a starting point if you can’t actually implement a true single-source tool chain. Another area where this really has an impact is the quality of the content you’re delivering to your readers and your consumers. We all know, and I deal with this all of the time, and this should make everyone feel a little bit better, that even a giant company like Microsoft has this problem. I work in SharePoint quite a lot. And SharePoint has a lot of different versions. It’s been around forever. One of the biggest challenges I have is when I go look for answers, and this isn’t to pick on Microsoft, by the way. Every software company, you could probably find some of this. 

AP: Absolutely.

CH: But I go looking in the content for something, and I’ll read it one way in one place. I’ll read something a little bit different about the same feature in another place. And sometimes they’re just describing them in two different ways, because maybe one was written by the engineering team and one was written by the marketing department. Maybe another version was written by the training department. So that’s going to happen. But then I also run into a lot of places where it’s not easy to tell when this stuff was even created. So it might be very old stuff that I’m looking at that is no longer even applicable. All these issues become simplified if you’re doing that single source of truth, because you can start tying together a strategy to deal with that. When you just publish stuff out there and it all gets sort of thrown out in a fire hose to your consumers, that can become a very big challenge for them when there are all these inconsistencies in different language styles or different ways of writing the information.

AP: And if people are using all the different content that’s available out on your website to make a purchasing decision, and it doesn’t just have to be the marketing content, they can be looking at the product content, they can be looking at the publicly available training content. If they are getting mixed messages, different information about what should basically be the same thing, that can be a huge turnoff, and it can hurt you financially, because people will be like, I’m not comfortable buying this product or service because I’m getting mixed messages in the content that’s available out here. The bottom line is people don’t care what department produced it, what your organization is like, what your hierarchy is, what your tree is, whatever you want to call it for your different departments and your management. They don’t care about that. They just want a consistent message, consistent information. They want to get it from wherever they find it, and they want to be sure that it’s the same message they get regardless of what, quote, department’s content they’re touching.

CH: Yeah, when I’m working with your product, your product is really how I see you. I see you through the product. I don’t see you through your departments and your channels and whatever organizational structure you’ve created to manage your company. So I think that’s a really important point you make, to really make sure that that product experience is consistent and clean. And doing this, you know, even if all you’re doing is just trying to make things more consistent, then addressing the redundancy issue can help in ensuring that we’re presenting that unified view of our product to the world.

AP: This conversation really has probably given people a lot of food for thought. There’s a lot to think about when we’re talking about this duplicated content, redundant content. If we want to kind of back up a little bit and give people maybe one or two pieces of advice on where to get started, even if it’s starting small, what are some things that people can start to do now to start thinking about this bigger picture of duplication, reuse, single source of truth? Any recommendations there?

CH: For sure. So the first thing is you’ve got to tame a little bit of your Wild West of content, if you’ve got that. So if I can just go on the corporate network and start willy-nilly looking around and copying and pasting stuff out of anywhere I can find it, which is sometimes the case, that’s probably a big area where you’re creating a lot of content entropy. So you need to think about that. And it may just be a training issue. It may be a network organization issue. But you should start considering how you can make the authoritative repository accessible to everyone and then limit where they’re getting stuff to that authoritative repository. You don’t have to implement a whole new content management system and tool chain to do that. You can do that using permissions, using training, and having regular contact between the groups that create the content that’s getting copied around. So making sure there’s some interface between them so that they can coordinate and know that they need to coordinate these content activities. A lot of times that simple piece just gets overlooked because a lot of companies really treat content as kind of an afterthought. I’ve built the product, okay, hurry up, make a manual, do some training, do whatever, because we’re product-focused. And so it’s kind of natural, but you really need to see your content as an integral part of the product that you’re delivering. So you can get started with just the tools that you have, working on the processes and the consistency with which you apply those tools. You can also start strategizing for the future. So you can look at, okay, maybe we need to figure out, first of all, how much money is it costing us to copy and paste a lot? 
Again, knowing how much duplication you have, and then you can put estimates on, okay, if I’ve got all this amount of duplication, how much does it cost to make a change to this manual if we are to ensure that it gets to all the delivery channels, including marketing, training, all the languages, all the manuals? How much does a change cost? Once you start quantifying that, you might find out there’s a better budget than you think for working on this problem. 
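The cost question Chris raises can be made concrete with back-of-the-envelope arithmetic. Every number below is an invented assumption for illustration; substitute your own counts and rates.

```python
# Rough estimate of what duplicated content costs per year. All figures
# are made-up placeholders, not benchmarks.

copies_per_topic = 4    # e.g. manual, marketing, training, support article
languages = 6           # source language plus five translations
edits_per_year = 20     # changes that must reach every copy
hours_per_edit = 0.5    # find, re-edit, and review one copy
hourly_rate = 60        # fully loaded cost in dollars

# With duplication, every edit must be repeated in every copy, in every language.
duplicated_cost = edits_per_year * copies_per_topic * languages * hours_per_edit * hourly_rate

# With a single source, each edit happens once per language (translation still
# has to occur), then flows to all deliverables automatically.
single_source_cost = edits_per_year * languages * hours_per_edit * hourly_rate

print(duplicated_cost, single_source_cost)  # → 14400.0 3600.0
```

Even with modest placeholder numbers, the multiplication across copies, channels, and languages is what makes the duplicated-content cost balloon, which is Chris’s point about finding a better budget than you think.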

Again, you’re going to have to look at your content itself and figure out how much redundancy there is. So planning that strategy, figuring out how much it’s costing you, all of that can be very helpful, I think. And then ongoing maintenance. How are we going to maintain it? I’ve been to a lot of organizations where they’ll do a big push to clean things up and they’ll say, okay, we’re going to hire someone, we’re gonna get some new tools going and man, we’ve fixed it, right? And so they fix it. And then three years later, they’re in the same boat they were in because they didn’t really follow up on that. They didn’t plan to maintain the content. Nobody was charged with the duty to ensure that we were adhering to the strategies that the tools were providing and nobody really had the responsibility to look at that stuff. So if you’re going to make the investment, you have to also have the follow-through. And a lot of times that involves consultants, because let’s be honest, if this is the first time I ever do this, I’m not going to do it very well. And I usually don’t get a chance to do this 100 times. I’m not going to do this over and over in my organization.

AP: Yeah.

CH: But if you find a consultant, you can find someone that’s done this 100 times for a lot of different organizations. And they already know where all the pitfalls are and where all the trouble is. And they’ll help steer you in the right direction the first time. Because you don’t get a lot of bites at this apple. Like, your company’s not going to say, oh, just keep working on content reuse for the rest of time. They’re going to want to see some progress.

AP: Exactly. Yeah, and it comes down to return on investment. You do not do these kinds of things for fun. You’re doing them for business reasons, and business reasons include making money and getting a return on investment on any kind of investment in technology, and that includes content technology.

CH: Absolutely.

AP: Chris, this has been very helpful. I think this is a good place to wrap up. Thank you so much for your insights. I think you’ve given people a whole lot to think about.

CH: Well, I appreciate the conversation.

AP: Thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post How reuse eliminates redundant learning content with Chris Hill (podcast) appeared first on Scriptorium.

What’s next after LearningDITA? (podcast)
https://www.scriptorium.com/2024/03/whats-next-after-learningdita-podcast/
Mon, 18 Mar 2024

If you’ve taken the courses at LearningDITA.com and you’re interested in starting a DITA project, check out episode 163 of The Content Strategy Experts Podcast where Bill Swallow and Sarah O’Keefe talk about the steps you can take to get funding.

“Showing up with cookies never hurts, but what is your executive’s motivation from a business point of view? What are they trying to accomplish in their goals for this next quarter or month or year, and so on? You need to show them, assuming that you can, that moving to structured content, moving to DITA, and changing tools is going to help achieve those business goals.”

— Sarah O’Keefe

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk to you about next steps after LearningDITA, how to get your boss to sign off on a DITA project. Hey everybody, I’m Bill Swallow.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

Sarah O’Keefe: And I’m Sarah O’Keefe, hi.

BS: So yeah, we’re going to talk a little bit about what to do after you’ve completed your LearningDITA.com courses and you have some DITA knowledge under your belt. So I guess we’ll start off with completing the courses. You pretty much have a working DITA environment after you complete all of the courses on LearningDITA.com: you have a batch of topics, a batch of tasks, a batch of other types of files, a map, you have a publishing scenario, you have some reuse going on. So it kind of makes for a neat little proof-of-concept package. But now that you’ve got that, how do you bring it to management? How do you sell it as, we want to move forward in this direction?

SO: We’re assuming a lot here, of course. I think a lot of the people that do LearningDITA come for different reasons. And a big chunk of it is just, I need to learn this because I want to be more marketable. I want to get a new job. I want to have a chance at the jobs that require DITA knowledge.

BS: Right.

SO: But I guess what we’re focused on here today is this question of, all right, so you’ve run through the courses and you’ve decided that this is potentially a good idea for your company, for your employer. You really want to advocate for “we need to move our content into DITA because it feels like this is a good idea for this particular organization.” So what do you do next? And at that point, you’re right, you’ve got a proof of concept, and that may be enough to show to your peers and maybe your immediate manager, just to let them look at that. Almost certainly the next thing they’re gonna ask you for, though, is to take some of your content, because the LearningDITA ducklings are only gonna get you so far. They’re gonna say, well, how does this apply to our content? So probably you’re gonna have to go off and do some…

BS: Haha.

SO: …test topics or some test content using your actual live content. I think that’s probably step one. The key thing though, I think to recognize is that DITA and structured content is not just a tool. You’re not going to your manager and saying, hey, I need $500 for a piece of software or even a thousand.

BS: Mm-hmm.

SO: You can of course do this potentially with source control, which is at least free in theory, free but not cheap, right?

BS: Haha.

SO: But the effort of, for example, taking five or 10 or 50,000 pages of Word content and moving it into DITA is really significant. And so you’re talking about a big, you know, departmental or even enterprise effort to make this happen. And so the bottom line is that somebody needs to agree that this is a good use of your time and/or resources, which means you need an executive sponsor.

BS: Right. So how would you start moving up the chain to have those discussions in order to, I guess, get to an executive sponsor or how would you frame a pitch to an executive sponsor then? If you are convinced this is the right direction to move in, you have your proof of concept, you are hopefully moving some of your actual real-world content into DITA to beef up that proof of concept to pitch. What are the things we need to start thinking about?

SO: I think that many of us that live in this technical writing, technical communication world tend to be really interested in new technology. Something new comes along, we’re like, oh, this is so cool and I can’t wait to use it and I can’t wait to apply it and it is delightful and fun and new and different and nerdy. That pitch, you know, look at this, this is so cool. That doesn’t work unless you’re selling AI, then it pretty much works. But the question you have to ask is who is the person that’s going to fund this project? And the more money you need, the higher up the chain you’re going to have to go. And what motivates that person from a business point of view?

BS: Haha.

SO: Right? I mean, showing up with cookies and things never hurts, but what is their motivation from a business point of view? What are they trying to accomplish in their goals for this next quarter or month or year or whatever? And you need to show them, assuming that you can, that moving to structured content, moving to DITA, changing tools is going to help achieve those business goals.

BS: Mm-hmm.

SO: So the number one most obvious way to do this is to show cost avoidance, right? There’s a decent amount of research that says that if you’re using desktop publishing tools, you’re probably spending something like 50% of your time doing formatting work as opposed to content work. And the formatting automation that you have with structured content gets you out of that formatting work. So you can basically say, hey, we were spending 50% of our time on formatting. Instead, we’re going to write some transforms, and then we’ll be done. We’ll have push-button processing, which is a pretty clear cost avoidance and a pretty clear gain. And we have a calculator that addresses those issues, which we’ll make sure to put in the show notes.
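The cost-avoidance pitch Sarah outlines is straightforward arithmetic. The 50% figure comes from the research she cites; the other numbers below are invented assumptions to show the shape of the calculation, not benchmarks.

```python
# Back-of-the-envelope formatting cost-avoidance estimate. Only the
# formatting_share figure comes from the discussion; the rest are
# placeholder assumptions.

writers = 8
hours_per_week = 40
formatting_share = 0.5   # share of time spent on formatting, per the research cited
weeks_per_year = 48
hourly_rate = 60         # fully loaded cost in dollars

# Team time currently lost to manual formatting, per year.
formatting_hours = writers * hours_per_week * formatting_share * weeks_per_year

# With automated transforms, most of that time is recovered.
annual_avoidance = formatting_hours * hourly_rate

print(formatting_hours, annual_avoidance)
```

Numbers like these give an executive sponsor something to weigh against the cost of tooling and conversion, which is the point of the calculator mentioned in the show notes.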

BS: Mm-hmm.

SO: But there are other levers that may matter more to your executives and to your leaders than cost avoidance. Time to market is a big one.

BS: Mm-hmm.

SO: Can we get this stuff done faster? Not necessarily cheaper, but can we get it out the door faster? Or do you have these delays of, oh, I still have to reformat it, and oh, now all my numbering is wrong, and I have to go back through and fix everything, and it’s just a nightmare. Can we get a competitive advantage? Can we do a better job of supporting the brand? Can we do a better job of translation and localization and expanding what we’re doing? You need to really understand where the business is going and why. What direction is it going in? It is really, really common after a merger to have a situation where you have two or three or 15 mutually incompatible content development systems. And what you really need to do is bring them all together, in the same way that the products are being brought together and the company is being brought together into one unified thing, so that you can sell products A, B, and C, which used to belong to different companies, as a unified set. Well, you need the content to also be a unified set, which pushes you towards a unified approach. And if DITA can solve that for you, then that’s a story that’s gonna be compelling to a leader who’s dealing with merger headaches.

BS: Mm-hmm.

BS: So finding a way to kind of translate what you need to do to expedite and streamline your work to align with the company goals, or at least the goals of essentially the person with the money who’s going to make this effort happen.

SO: Yeah, and, you know, expedite and streamline is really useful, and in and of itself, doing your job more efficiently as opposed to less efficiently is generally a good idea. I think it goes beyond that into additional factors. So DITA reuse is a great example, right? You walk into this presentation and you’re like, conrefs are the coolest thing ever, and look at these keys, and look at what I can do with scope, right? And you’re…

BS: Haha.

SO: …the people you’re presenting to are like, what, what, what? They have no context. They don’t care. If they’re software engineers, you can maybe talk to them about object-oriented things and how you can, you know, whatever. But no, you walk in there and you say, okay, you know how we have this problem where your content over in this bucket contradicts the content over in this other bucket? And the reason is that we copy and paste from A to B, and then we update A, but we don’t update B. And they’re like, oh, yes. And then right now you can bring up that Air Canada issue with their chatbot that had incorrect information, which is a great example of this, where almost certainly what happened was that the chatbot was fed a bunch of information that wasn’t kept up to date.

BS: Mm-hmm. Yeah, they have no context for what you’re talking about.

SO: Or it was fed incorrect information to begin with. Well, that shouldn’t happen. You should have a single place where you stash all of that information, and then you just push it to all of your endpoints, such as a chatbot. And, you know, solving those kinds of problems so that the company doesn’t get embarrassed slash sued slash held liable for making mistakes with their content is valuable. And that is, you know, different and arguably more important than we can do it better, faster, cheaper.
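(Editor’s note: in DITA, the “single place” Sarah describes is typically a shared topic that other content pulls in by content reference, or conref, so one update propagates everywhere on the next publish. A minimal sketch; the file names, IDs, and policy text are illustrative.)

```xml
<!-- policies.dita: the single source of truth -->
<topic id="policies">
  <title>Shared policy statements</title>
  <body>
    <p id="refund">Refund requests may be submitted up to 90 days
       after the date of travel.</p>
  </body>
</topic>

<!-- Any other topic, or the content feeding a chatbot, pulls it in by reference -->
<p conref="policies.dita#policies/refund"/>
```

Editing the paragraph once in policies.dita updates every deliverable that references it, which is exactly the copy-paste drift problem being described.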

BS: Mm-hmm. So yeah, it’s really avoiding those risks that you have in producing content where you can have inconsistencies. If you’re doing things right, you will write once, use everywhere by reference so that the same copy goes out in every place it needs to go. Are there any other things that we should really start looking at with regard to risk management there?

SO: One of the biggest challenges with making a change in tools, whether DITA or anything else at all, is that people, people who are not consultants really hate change. Actually, we hate change too. We’re just in the business of inflicting it on other people, but when the shoe is on the other foot and somebody’s advising us, we’re just as bad as all of you. Hi, everyone. Yeah. It’s pretty bad.

BS: Mm-hmm.

BS: That is true. That is entirely true.

SO: So, okay, so we all hate change, right? Change is bad. And change is perceived as being risky, right? Because there’s the thing I’m doing right now, which I know how to do, and I know where the problems are, and I know it’s inefficient, but I know how to get around it. It’s all known. 

BS: Mm-hmm.

SO: And when you walk up to me and say, hey, I found this cool new way of doing content and it’s gonna be awesome and we’re gonna solve all these problems and it’s gonna be so great. My reaction as a human is A, I don’t believe you and B, this sounds like change and change is bad. So what you have to do is you have to convince me that making the change is less risky than not making the change. 

BS: Mm-hmm. Yep.

SO: And there are a lot of things you can do to mitigate that, but probably the biggest one is to start small and do a proof of concept and show some stuff and say, look, you know, we have this ongoing problem and I’ve solved it over here, and look at how this just works. And I made this little update and look, it percolated into five different locations automagically, and isn’t this cool. So you start to build that confidence and that trust and that knowledge, that understanding of the technique or the technology or, you know, the thing that you’re trying to convince people to switch to. But the unknown, whatever that unknown is, is always going to be perceived as being riskier than the known, even when the known is bad. Known bad is actually easier than unknown good.

BS: Mm-hmm.

BS: Mm-hmm, right, because you’re asking people to take a step forward in the dark.

SO: Right. And, you know, the dark is bad and I don’t like it. So there you get into other issues. We’re talking here mostly about how you deal with leadership, and leadership is looking at it and saying, you know, is the risk worth it? Is the funding worth it? They have X amount of funding, some number. They have $100 and you’re asking for $50. But there are eight other people also asking for $50, and they have to pick…

BS: Mm-hmm.

SO: …you know, two that are going to get $50 apiece out of those eight projects. So your pitch has to be, you know, you’re competing almost certainly for limited resources within your organization. So it has to be a good pitch. I mean, you have to make a compelling argument, and, you know, conkeyrefs are really cool is not actually a compelling argument.
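(Editor’s note: the conkeyref Sarah jokes about is an indirect content reference: a key defined once in the map points at a file, and topics pull elements through the key, so swapping the target file in the map swaps the content everywhere. A sketch with illustrative names.)

```xml
<!-- In the map: bind the key "warnings" to one file; swap per deliverable -->
<keydef keys="warnings" href="acme-warnings.dita"/>

<!-- In a topic: pull element "hot-surface" from whatever "warnings" resolves to -->
<note conkeyref="warnings/hot-surface" type="warning"/>
```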

BS: Yeah, how does it impact the bottom line of the company?

SO: Yeah, how can I fix these issues that we are wrestling with as an organization? We have localization problems; we’re not managing our content properly, and we get all these problems in localization. We’ve got writing issues; our warnings are not standardized, and that’s gotten us into trouble because, you know, we got sued, and these two documents didn’t agree with each other, and they pointed out the discrepancy, and that had real-world implications. We’re having trouble delivering content that complies with the EU directives, the Machinery Directive or the product documentation directives, because we don’t have enough control over the content that we’re delivering. Those are the conversations that need to happen, and underlying that is, and so if we use DITA and we do reuse and we do this and this and this and we automate our formatting…

BS: Mm-hmm.

SO: We can address these issues, but you have to start with the business problem and not with the feature.

BS: Mm-hmm. Right. Yeah, Conrefs are cool is definitely not a selling point out of the gate.

SO: I mean, it works for me, but you know.

BS: Well, but if you frame it the right way and get the executives on board with the business reasons for moving, you might actually get the executives saying, hey, conrefs are cool.

SO: Right. Now, the big challenge here is that, you know, we’re talking about leadership as this amorphous thing, but it turns out that what’s gonna happen almost certainly is that the priorities change as you go up the line. So your tech comm manager has one set of priorities and a vision or, you know, an amount of stuff that they’re looking at.

BS: Mm-hmm.

SO: And the director above that is looking at something different, because tech comm is just a part of their responsibilities. And the VP above that, again. So you have to understand what messaging is going to work at every level in the organization, and accordingly provide the proper message, or a message that is going to work.

BS: Mm-hmm.

SO: So ultimately, this comes down to know your audience. Know who you’re talking to and what their priorities are, and figure out whether and how. I mean, we should start with, does this actually fit into the game plan? Is this the right solution for your organization? If you’re convinced it is, then how do you communicate that in a way that is understandable to leadership that isn’t interested in content?

BS: And then magic happens.

SO: And then magic happens.

BS: So that’s a big leap from doing a LearningDITA proof of concept course, more or less, to doing an executive pitch. And I know we covered a lot of ground here, and there are a lot of things that we still have not even discussed. But in the interest of time, we do have a lot of resources available to help you start thinking in this direction, put that pitch together, and get the data that backs up your position that you do need to move, if you are looking at a move into DITA. So we will put a bunch of these resources in the show notes. Sarah, do you have any particular ones in mind you’d want to share?

SO: So I mentioned the Content Ops ROI Calculator, and we’ll get that in there. There’s also a chapter that I wrote called The Business Case for Content Ops, which sort of goes through all of these different factors and the risk management issues. We haven’t really touched on compliance, but that’s another key factor that tends to play into this. That chapter is available on our site, and then, you know, the larger content operations book is out there now and available for free. So there’s a whole bunch of interesting stuff in there that might be of use. We’ll post all of that, plus links to some of the white papers that are floating around that may be of use to our listeners. And beyond that, if you’re, you know, working on building this case out, I would say feel free to reach out to us and we’ll do the best we can to help.

BS: And that sounds like a good place to close. Thank you, Sarah. And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post What’s next after LearningDITA? (podcast) appeared first on Scriptorium.

Brewing a better content strategy through single sourcing (podcast)
https://www.scriptorium.com/2024/02/brewing-a-better-content-strategy-through-single-sourcing-podcast/
Mon, 19 Feb 2024 12:34:41 +0000

In episode 162 of The Content Strategy Experts Podcast, Bill Swallow and Christine Cuellar discuss the benefits of single sourcing as part of your content strategy through the example of two things they love: coffee and beer.

“We know companies that have moved away from a do-it-yourself approach because they had maybe two or three different people putting in half to almost full-time work on the publishing system and not on other facets of the company’s core business or the writing. They were simply there to keep everything working. It just blows my mind that on a scale where you have hundreds of writers contributing content, you are saying, Okay, you three people are going to be solely responsible for keeping this thing up and running so that they can produce their content, rather than having a system that’s designed to keep itself up and running.”

— Bill Swallow

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about how you can brew a better content strategy through single sourcing. Hi, I’m Christine Cuellar.

Bill Swallow: and I’m Bill Swallow.

CC: Hey, Bill, thanks for being here today. 

BS: Hey, thanks.

CC: So what I mean by brewing a better content strategy is that both Bill and I really love coffee. We’re recording this fairly early in the morning for both of us, so actually we’re heavily reliant on coffee and other caffeinated sources to enable this conversation. Also, Bill, I know you like homebrewing beer. I like drinking beer. I have no idea how to homebrew, but I do enjoy beer as well. So we just thought that beer, coffee, drinks in general actually have some good analogies for single sourcing, which can be part of your content strategy. And it’s something that’s been coming up more and more in a lot of conversations with clients and people that are interested in content strategy, so we thought this would be a good topic for today. So Bill, I’m gonna kick it over to you for our first really big-picture question. First of all, what is single sourcing? What do we mean when we say that? Let’s kick it off there.

BS: All right, so in a nutshell, single sourcing is writing content once for multiple purposes. It’s about as simple as you can get. It could be authoring centrally, it could be authoring collectively in a group or centrally as a single person for a wide variety of publishing needs, whether it be for different audiences, different output types, or what have you.
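(Editor’s note: if you’re picturing what “writing once for multiple purposes” looks like in practice, in a structured authoring setup such as DITA a single map collects the topics once, and every publishing target reads that same source. File names here are illustrative.)

```xml
<!-- One map, one set of topics; the PDF, HTML, and online help builds
     all read this same source -->
<map>
  <title>User guide</title>
  <topicref href="installing.dita"/>
  <topicref href="configuring.dita"/>
  <topicref href="troubleshooting.dita"/>
</map>
```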

CC: Okay, yeah, that’s great. So how does, what are some ways that single sourcing can start to mimic drinks? Coffee, beer, any of that?

BS: We could take the example of multiple output formats. So traditionally with single sourcing, we’ve been doing that since, I think, the mid-90s. I remember working in Doc-To-Help back in, I think it was 1996, to produce online help and written manuals from the same source using a very high-tech convention called RTF, which was basically the backbone of Microsoft Word at the time.

CC: Ooh.

BS: So that was fun. I had many nightmares about RTF coding. I solved problems in my dreams using RTF. It was a scary time. Yeah, I was essentially fully immersed, let’s say that. But in many ways, to take the same analogy, you’re producing a core set…

CC: That’s when you know it’s really stressful. Yeah, that’s not good.

BS: …of stuff that needs to go to many different places. And it’s a lot like, let’s say, a coffee roaster, since it’s early in the morning and we want to talk about coffee. A coffee roaster is not going to sit there and roast a pound of beans, put it in a bag, send it off, and then roast another pound of beans, put it in a bag, send it off. They’re going to roast a ton of beans, 10 tons of beans, however many they can fit into their roaster.

CC: Yep.

BS: …and do it all at once. That allows them to do a couple of things. One, it streamlines the process and speeds things up, because now they have a wealth of product that they can put in large bags for distribution to restaurants or cafes or what have you. They can put it in smaller bags and send it out to the grocery stores. They can do their online mail orders for coffee that way. Five-pound bags, one-pound bags, what have you. Or they can even grind it up themselves and put it in K-Cups, and people can destroy the planet with those. I’m not a big fan of the K-Cups and pods. But it also helps them create a more homogenous product, because they’re working at a very large scale and producing things in very large batches. With all their beans together in one roaster, they are able to produce a very consistent product.

CC: Yeah, that’s great. That’s a great analogy, and that definitely makes a lot of sense. So when it comes to both coffee and beer, there’s, you know, the commercial option that you just outlined, which is really helpful. And there’s also usually a DIY component. I mean, you can homebrew beer, you can homebrew coffee, of course, and do that in a bunch of different ways. It can just be your coffee pot, or you can get all fancy and do, you know, all the other fancy things you can do with it, all of which I’ve done and all of which are leaving my brain at this moment. French press, okay, that’s one. Anyways, they should have been more top of mind. But is that also an option for content?

BS: It can be. Looking at a commercial solution versus a do-it-yourself approach, it’s not so much a question of which approach you prefer to take. Because, I mean, yes, I’m a hobbyist when it comes to brewing beer. But really, for the past 10 years, I stopped brewing because there were just so many high-quality…

CC: Okay.

BS: …options on the market at that point. I’m like, why am I spending my time doing this when I can just go to the store and pick up one of a thousand different types of beer? But you really need to look at it from the standpoint of how much money you have to spend on a commercial product versus how much time and commitment you have for doing it yourself, if you do it yourself.

CC: Yeah.

BS: The results can vary, but if you put the time and energy into it, you can produce some amazing results, but there is always a hidden cost of time and labor. When I used to actively brew, I brewed with a buddy of mine and we would do it every Monday night. So he would either come to my place with his equipment or I would go to his place with my equipment. And from like six o ‘clock until about midnight, we would be either brewing beer, cleaning equipment, bottling beer, doing whatever. It was a commitment. I mean, it was six hours a week, and literally it was every week. Unless we had something going on and we took a bye week, we were doing that every single week because there is always something that needs to be done in the process.

CC: Hmm. Yeah. Yeah, that’s a big time commitment. And I like that you mentioned that not only was there a big time commitment in actually brewing the beer, but also the cleanup, also the prep work. Yeah, I like that there’s this there’s other factors that you don’t think about that also are involved in doing it yourself? Is that also something that applies to, you know, single sourcing and content strategy? Are there a lot of factors that can come into play?

BS: Oh, absolutely. I mean, if you’re a homebrewer, you have to enjoy the monotony of cleaning. And it’s the same thing if you’re doing it yourself, putting together a publishing system and an authoring system that relies on, let’s say, open source tools and a lot of human care and feeding. You have to really enjoy the monotonous…

CC: Hmm.

BS: …droning kind of day-to-day maintenance work. You know, when you’re brewing, it literally is 90% cleaning, 10% brewing. Because, I mean, you have to start with everything completely sanitized. And once you get the pot boiling, it’s doing its thing for about an hour. You might be adding some hops here and there, or some other flavoring agents, depending on the type of beer you’re producing.

CC: Wow.

BS: But, largely, you’re just waiting for an hour. So while you’re waiting, you’re cleaning other stuff that you’re gonna need later in the process. And then you take five minutes to move to that next step, and then you have to wait for the beer to cool down. So then there’s another round of cleaning. Okay, all the stuff that I used to make this batch of beer now needs to get cleaned. And then you go to put it into the fermenter, and now you have to clean everything else. And the cycle just continues.

CC: Yeah, oh wow.

BS: It’s the same thing with, you know, a do-it-yourself approach. And it’s not to say that it’s wrong or that it’s not ideal, because you can learn quite a lot in a do-it-yourself environment. But it does come at a cost. Actually, we know companies that have, you know, moved away from a do-it-yourself approach because they had maybe two or three different people putting in half to almost full-time work on the publishing system, and not on other facets of the company’s core business or the writing or what have you. They were simply there to kind of keep everything working. And it just blows my mind that on a scale where you have hundreds of writers contributing content, you are saying, okay, you three people are going to be solely responsible for keeping this thing up and running so that they can produce their content, rather than having a system that’s designed to keep itself up and running.

CC: Yeah. Would you say that… because it sounds like with a DIY approach, it can work, but it has to be very intentional, and you have to be very realistic, like you said, about the cost and the time that’s involved. Do companies, I guess, default or kind of slide into a DIY approach without really thinking about it? Because I could see that with coffee. Okay, with coffee, I do enjoy attempting to make lattes and fun stuff with my espresso machine, which is a really crappy one right now, but it’s really fun to play with, and I’ve practiced a lot with it. But still, the best cup that I make does not compare to basically every one of our local coffee shops here. I would 100% enjoy their stuff more than what I make. It’s just fun to play with. But it’s not realistic for me to go and buy, you know, the best coffee from one of the local places every single day. So instead, I have a coffee machine, and I brew stuff here at home every day for my regular coffee addiction. And then when I want to be fancy, I go to a coffee shop. But, you know, I don’t have the capacity to go somewhere else every single day. So that’s kind of why I’ve just…

BS: Yeah.

CC: Not really thinking about it, slid into a DIY approach. Is that also something that happens with companies that they kind of DIY until they realize there is a different way to do it? Is it kind of a default method if that makes sense?

BS: Yes and no. The decision as to why a company might choose to do it themselves rather than purchase a more packaged or commercial solution really varies. You know, you have some companies that, yes, they started out small. They hired someone, perhaps, who had some serious technical chops and was able to put together something very, very slick.

CC: Okay.

BS: But, you know, they were the only ones who really knew how it worked. And as they hired more writers, you have varying degrees of, I guess, capability and willingness to learn how this thing works.

CC: Mmm.

BS: You know, so, for example, let’s say that they’re doing Markdown and they have all of these different scripts that run and fire off and produce all these different outputs. It’s very, very slick. I’ve seen lots of implementations like that, and, you know, they’re actually pretty cool. But, you know, as you hire more people…

CC: Hmm. Oh yeah, that’s true.

BS: You start getting into, why do I have to write and mark down? I always keep forgetting to use this character instead of this character when starting a bulleted list. Or I always forget to, you know, close off the end of my, you know, my title or what have you, anything like that. You know, why can’t I use Microsoft Word? Why can’t we move to just using HTML? Why can’t we move to XML? Like, you know, you start getting a lot of that pushback and the pushback may not be direct.

CC: Mm.

BS: So you have cases at that point where quality slips start making their way into the core content set. And that’s where things get a little hairy. But to go back to your analogy of making coffee at home: there are plenty of really, really good espresso machines out there that you can buy for home, but it will never compete with that $8,000 Italian espresso maker that your cafe of choice in town has. You know that they paid a ton of money for it, and they’ve spent hours and hours and dollars and dollars to train their staff on how to appropriately use it and clean it to produce that same, arguably perfect cup of coffee every single time.

CC: Yeah.

BS: You know, same thing with buying coffee, buying beans or buying grounds. And people will laugh; I make the same comments about certain beer manufacturers. But, you know, you buy something like Folgers. In my opinion, it’s not the world’s best coffee. I just don’t like what it tastes like.

CC: Yes.

BS: But every single time you buy a… they’re not tins anymore, are they? I think they’re more like plastic jugs of coffee. But you buy a jug of coffee, and it’s always gonna be the same every single time. And you can say the same thing about Budweiser. People may say, oh, Budweiser, why would you ever drink that? It’s horrible. It’s like, yes, but it is absolutely consistent. You can buy a Budweiser anywhere in the United States, anywhere in the world.

CC: They’ve evolved. Yeah, yeah, yeah.

BS: Open it up and it will taste exactly the same.

CC: That’s true. Yeah, that’s true.

BS: You know, there are really no differences there. And they put quite a lot of time and energy into ensuring that that product is consistent in every single batch that’s made in every single location across the world, because they have breweries all across the world that produce this stuff, because shipping it from one location around the world is just not gonna work. So all of these different locations have their equipment set up just the right way. Their chemists, yes, chemists, are working to make sure that the pH balance is perfect every step along the way as that beer is being produced. Otherwise, if you’re brewing yourself at home, your equipment may vary. I’ve put stuff together literally with duct tape and string.

CC: Hmm.

BS: I made it, I made a shower head out of a nine-inch tin foil pie pan 

CC: Hahaha! Wow. Yeah, that’s a DIY way.

BS: Because it was available, you know, to sparge, to rinse my grain as the beer was being run off. Or, you know, even if you roast your own beans at home, the level of quality is going to vary, because you are likely using your oven to do that roasting. And if you step away for a minute too long, or if you didn’t get the temperature setting quite right, if you don’t have a digital temperature setting, or maybe your heating element is a little futzy, sometimes it might be 310 degrees, sometimes it might be 332, who knows? There are lots of elements that can go wrong in a do-it-yourself environment.

CC: Yeah, that’s true. And like you mentioned earlier that a lot of that comes down to the people, not only the equipment that you’re using, but also the people. Like, do they know what they’re doing? Do they know why they’re doing it? And especially as you introduce more people, like you mentioned, if it’s you and your buddy that are brewing beer together, that’s another person that’s been added. And, you know, in a scenario where one person’s not as interested or, you know, just doesn’t know as much about the process that can really change things and vice versa. Like if you have two people that both really know what they’re doing and both really enjoy it, that can lead to a really good output. 

BS: It can vary, because yes, we both knew exactly what we were doing, but, you know, you start butting heads. I want to do it this way. No, I want to do it this way. If we do it this way, you’re gonna get this result. I don’t believe you; I think if we do it this way, we’ll get this result. And yeah, we actually tried two different techniques of brewing the same beer, and they came out very, very different. So…

CC: That’s true, yeah.

BS: You know, it is what it is. But yeah, it all comes down to that quality control element. And, you know, generally when you have a bigger commercial system, you can kind of get there a lot quicker. Now, it’s not going to do everything for you, but it’s got the pieces already laid out, and it’s got some recommended workflows and processes for using that system to produce consistent results. Whereas with do-it-yourself, you’re kind of left to your own devices and how well you document your stuff and how well you regulate it.

CC: Absolutely. Yeah, which in and of itself is another time commitment. So we’ve talked a lot about consistency, which is a really important element of this, but personalization is another important value that you can get out of single sourcing. So let’s say, in our coffee or beer analogy, you’re personalizing your packaging for different restaurants and different cafes or whatever. How does single sourcing make that more effective, or what does that look like?

BS: Well, at the core of it, like for example, if you’re putting stuff out to cafes and restaurants, you’re typically not going to use the same level of pomp and flash on your branding packaging that you would if it was going to a grocery store. Because you want that product to pop off the shelf in the grocery store and catch people’s eyes, whereas at the restaurants and so forth, as long as the logo’s on the bag so they know they got the right thing.

CC: Yeah.

BS: It’s usually just a pretty nondescript bag with a description of what’s inside it. But at the end of the day, you’re not producing a different product for these different groups. You’re producing the same product that’s going out to many different people, depending on who it needs to go to. So you may have one conveyor belt that takes the beans down to where they dump them into 25-pound bags or 50-pound bags. And then you have this other conveyor belt that goes off and does the one-pounders.

CC: Mm-hmm.

BS: And so it’s really streamlining from that. You’ve spent the time to build this, I guess, storage heap of beans that you then are distributing to many different people. So at that point, you’re taking from that same source and you’re partitioning it off as you need to for multiple different consumers.

CC: Mm. That’s true.

BS: Same thing with single sourcing. I mean, you have a core collective of content that ideally is all written in the same tone and voice. Aside from all the mechanics of how content gets produced, it needs to be written in the same tone and voice for…

to be able to blend and remix and be able to send it out to different audiences so that it doesn’t sound like, you know, eight different people, even though eight different people may have written the content, it doesn’t sound like eight different people wrote different parts of whatever it is you’re delivering. It’s a little jarring to go from…

CC: Hmm. Yeah.

BS: You know, one style of writing to another within the same paragraph or within the same, you know, chapter of a book or, you know, series of topics in an online help system. It can get very distracting. So, you know, in that case, you do need some attention toward how all these people are developing the content and what tone and voice they’re using. But aside from that, with regard to packing your…

CC: Yeah.

BS: You know, packaging your output from a content standpoint, you have things like templates that drive the look and feel of what the various outputs are going to look like. So templates or, you know, stylesheets or what have you to produce these things. But also behind the scenes, you have other conventions such as variables and conditions. Perhaps you’re leveraging some form of reuse, so that you can kind of mix and match your content, turn things on and off depending on, oh, this is going out to an advanced user, or this is going out for our premium product and this one’s going out for our base-level product. And the base-level product has features A, B, and C, but our premium has features D and E also tacked on. That type of thing. So you’re not rewriting content for these…

CC: Mm.

BS: You know, very many different outputs, but rather you are pulling from a single, you know, managed source of content and, you know, mixing and remixing and turning things on and off to produce that desired result.
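(Editor’s note: in a DITA-based pipeline, the turning on and off Bill describes is commonly done with filtering attributes on the content plus a DITAVAL filter per deliverable. Attribute values and feature names here are illustrative.)

```xml
<!-- Shared topic: flag premium-only content with a filtering attribute -->
<ul>
  <li>Feature A is available on all tiers.</li>
  <li product="premium">Feature D adds scheduled exports.</li>
  <li product="premium">Feature E adds audit logging.</li>
</ul>

<!-- base.ditaval: applied when publishing the base-level deliverable -->
<val>
  <prop att="product" val="premium" action="exclude"/>
</val>
```

The same source builds both deliverables; only the filter file changes between the base and premium outputs.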

CC: Yeah, absolutely. And then so taking that a step further, when you when it’s time to start selling your coffee or selling your beer in a location in a different country or different region, what happens then in that localization process? I mean, I’m assuming all of that is involved plus more. Yeah.

BS: Oh, plus more, because then you have language on the packaging and so forth that needs to change. But more importantly, with any kind of food-based product, and particularly with alcohol, there are different rules that govern how things can be sold, what you can say, what you can’t say on the packaging. We’re pretty loosey-goosey here in the United States where you can say anything. You can put out a package that is the same size product and say now 20% more. And you look, it was a 16-ounce box before, it’s a 16-ounce box now, but now it says 20% more.

CC: What? Maybe they meant air, 20% more air in the package.

BS: I guess, I guess. But you start going overseas, and the nutrition labels need to change. You have to take very different stances when you’re listing ingredients. There are certain claims you can and cannot make on the packaging and in the advertising. And when it comes to alcohol particularly, there are different rules that govern what can go into it that can then be passed off to a consumer, and what you have to disclose and what you can’t disclose. And I go back to one of these things, and it’s not so much a governing rule anymore as far as how strict it is, but there’s the Reinheitsgebot, I hope I am pronouncing that right, which is basically the German purity rule for beer. It basically says that beer can only be made of three components: water, barley, and hops. And they omitted yeast, even though yeast is what does the fermenting, because at the time they created the law, they didn’t really know about it. But yeah, essentially those four ingredients are the only things that can go into beer for Germany. Not so much a rule anymore, but it’s an example of, you know, if you were to produce something and call it something… Yes, if you were to produce beer and call it beer, but you’re making beer with barley and corn and rice, something that, let’s say, Budweiser does, would that technically be beer? Maybe not in Germany. So what do you call it? How do you package it? Can you sell it? Again, it’s not so much a thing anymore, it’s more of a historical note, but it kind of shows the differences in what you can do and what you can say in different countries.

CC: Oh, okay, interesting.

BS: Likewise, when you publish for different locales, you have not only different languages, but different fonts to consider. You have completely different character sets. There’s the Latin character set that we use throughout the United States and Western Europe, but as you move into Eastern Europe, you start needing a Cyrillic alphabet.

CC: Mm-hmm.

BS: Certainly as you move into Asia, now you’re looking at a completely different character set, a double-byte character set, to put these things together. And in some places you’re going to have to change the complete layout of your content as it gets published, because certain languages go from right to left, not left to right. That’s a completely different change, and hopefully you are baking a lot of that into the infrastructure that drives your content production, not doing it by hand every time you need to send something out.
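To make Bill’s point concrete, detecting that a locale needs right-to-left layout is exactly the kind of decision a publishing pipeline can make automatically instead of by hand. This is a minimal sketch, not any particular tool’s API; the function name and the heuristic (checking Unicode bidirectional categories) are assumptions for illustration:

```python
import unicodedata

def needs_rtl_layout(text: str) -> bool:
    """Return True if the text contains right-to-left characters
    (e.g., Hebrew or Arabic), signaling an RTL page layout."""
    for ch in text:
        # 'R' = right-to-left (Hebrew etc.), 'AL' = Arabic letter
        if unicodedata.bidirectional(ch) in ("R", "AL"):
            return True
    return False

print(needs_rtl_layout("Hello, world"))   # Latin script
print(needs_rtl_layout("مرحبا بالعالم"))  # Arabic script
```

A real pipeline would key layout off the declared locale rather than sniffing the text, but the principle is the same: the infrastructure decides, not a person reformatting pages.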

CC: Yeah, I can’t even imagine. Yeah, that would be a lot. Well, so for organizations that may not have adopted this single-sourcing approach yet, what are some factors? I mean, we’ve talked about a lot of them, but what are some either factors or like pain points or experiences they may be having that signal, hey, maybe it’s time to start thinking about this? How would you sum up those indicators?

BS: I think the biggest indicator is that you have a very overworked team of people who are spending their time on everything but their core job. So their core job should be producing content, developing content. It should not be formatting and reformatting content to produce it.

CC: Hmm. Yeah.

BS: You know, it certainly should not be copying and pasting content from one place to another and then making sure that any change to that copy and pasted content is reflected in the two or eight or 16 or 150 different places they pasted it into last time. You know, that’s a lot of busy work and you know, a lot of things that I hear, especially from small teams, is that they reach a point where they are so busy.

CC: Yeah.

BS: And making so little progress on new content development because they are spending all their time prepping for publishing. Prepping for publishing should literally be: the content is done. That should be your prep for publishing. It shouldn’t be, okay, now let’s apply this template and reformat everything. And now let’s send it off to the translator, and oh, we got it back, okay, now we have to reformat it so that it fits in this language because German is now eight pages instead of five. It shouldn’t be fixing these things. Those are things that really should be handled automatically, to allow the content developers to do what they were hired to do, which is develop the content.

CC: Yeah, exactly. Allow them to do not only what you hired them to do, but what they’re more passionate about. That’s where their passion is. That’s why they’re here. I could see it being very discouraging if you’re passionate about the content and you spend almost all your time on formatting and other stuff. That sounds awful. And I’m assuming it leads to burnout and high turnover.

BS: Yeah.

CC: Because you’re not getting to do what you want. You want to write content.

BS: True, although some people do thrive in that environment. They love the fiddly bits, and you’re not going to make them happy by taking that away. But then again, as your company is growing, you’re producing more stuff, you need to produce more content, you need to do it quicker, and you need to do it at a higher quality. You’re publishing at a higher volume, you’re adding more languages. At that point, it’s like, do I keep that person happy?

CC: Oh, yeah.

BS: Or do I focus on what we need to get done?

CC: Yeah, fair. And maybe they can have some say, or you can include them in what the big vision is. But yeah, like you said, you can’t always just make one person happy with the system. There are all these other people who may also not be happy because of not having an efficient process and a way to put out a lot of content at scale in a way that’s still quality.

BS: Mm-hmm.

CC: Still consistent. Yeah.

BS: Yeah, and there is a risk there as well because those who put together the DIY approach, they may love that. You know, I mean, that that’s something that they built from the ground up. That’s their baby, you know, and you’re taking their baby away. That can lead to some big problems. 

CC: Yeah, makes sense.

BS: You know, either you lose that person who has all the publishing knowledge, even though you may be transitioning away from that system. They know how it was set up. They know, I hate to use the analogy, but they know where the bodies are buried in their infrastructure and what made it tick. And you don’t want to lose that knowledge. Instead, you hopefully want to work with them to stand up the new system and give them some governance over how it runs. That might be an approach. But it does get tricky.

CC: Yeah, it makes sense because at the end of the day, it’s still about people. The people that you’re working with, you wanna make sure that they’re, the people on the team that are creating the content, it’s still about them, it’s still about the people at the other end of the screen or book or whatever, whatever kind of content you’re writing. Yeah, it’s still about people and people are complicated. We are.

BS: That’s putting it lightly.

CC: Yeah, that’s my deep wisdom for the day. That’s what comes from five cups of coffee in the morning. And on that note, I think we have exhausted every part of this beverage analogy for single sourcing and content strategy, but it was really helpful even for me to hear. I mean, I knew some of this, but there was a lot of this that I hadn’t thought of in terms of something very tangible like drinking coffee or drinking beer. So thanks Bill for exploring this with me. I also just love talking about coffee anytime it’s possible. So yeah, it’s great.

BS: It was fun.

CC: Well, yeah, thank you so much for being here, and thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Brewing a better content strategy through single sourcing (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 30:25
Our demands for enterprise content operations software (podcast) https://www.scriptorium.com/2024/01/our-demands-for-enterprise-content-ops-software/ Mon, 29 Jan 2024 12:31:23 +0000 https://www.scriptorium.com/?p=22338 https://www.scriptorium.com/2024/01/our-demands-for-enterprise-content-ops-software/#respond https://www.scriptorium.com/2024/01/our-demands-for-enterprise-content-ops-software/feed/ 0 In episode 161 of The Content Strategy Experts Podcast, Sarah O’Keefe and Alan Pringle share their ideal world for enterprise content operations software, including specific requests for how content management software needs to evolve.

SO: “When I envision this in the ideal universe, it seems that the most efficient way to solve this from a technical point of view would be to take the DITA standard, extend it out so that it is underlying these various systems, and then build up on top of that. I don’t really care. What I do care about is that I need, and our clients need, the ability to move technical content into learning content in an efficient way. And right now that is harder than it should be.”

AP: “Oh, entirely. And I would even argue it should go the other way, because there is stuff possibly on the training side that the people on the product content side need. So both sides need that ability.”

SO: Right, so give us seamless content sharing, please. Pretty please.

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. You may have heard that MadCap has added a learning content management system called Xyleme to their portfolio. In this episode, we are providing an entirely unsolicited roadmap to the vendors in this space, including but not limited to MadCap, for enterprise content ops software as we move forward. Vendors, welcome to the show and think of this as your roadmap to success, and call us if you need help. You totally do. Hi there. I’m Sarah O’Keefe and I’m here with Alan Pringle.

Alan Pringle: Hey there, I’m not sure this is the best idea, but we’re about to find out.

SO: Yes, it’s going to be great. We will totally not get in trouble. Alan, let’s dive in and maybe get in trouble as fast as possible. What is the number one item on our list of demands for content ops enterprise software?

AP: I’m going to vote for seamless content sharing, with a little asterisk here: this is not just about us as consultants. I think this is as much about our clients and what we have seen over the past few years in the content operations space. We need some way to author in a component content management system and then turn around and use that information, for example, in a learning content management system. And, well, exactly, I was just getting to that. There are some logistics here. It would maybe be nice to have the same content model underlying all of this, but considering the different authoring audiences, I don’t know if that necessarily has to be the case.

SO: And does that have to be DITA?

AP: I’m really not even sure if it’s possible. We can discuss that right now. It’s really not possible, I don’t think.

SO: Yeah, as far as I know, nobody can do this right now. You cannot take DITA content and efficiently ingest it into a learning content management system. If I’m wrong, call me.

AP: Yeah. That said, I do know some people, including our clients, who are on the learning training side, and they have chosen to use DITA as their model. But that is not true for every learning organization on this planet, not by a long shot.

SO: And they’re in CCMSs. They’re not in “L” learning CMSs. So they’ve, you know.

AP: Exactly.

AP: The LMS is a target. It is not the place where they are actually building the content.

SO: Yeah. And so, I mean, when I envision this in the ideal universe, it seems that the most, you know, efficient way to solve this from a technical point of view would be to take the DITA standard, extend it out so that it is underlying these various systems, and then build up on top of that. I don’t really care. What I do care about is that I need, and our clients need, the ability to move technical content into learning content in an efficient way. And right now that is harder than it should be.

AP: Oh, entirely. And I would even argue it should go the other way, because there is stuff possibly on the training side that the people on the product content side need. So both sides need that ability.

SO: Right, so give us seamless content sharing, please. Pretty please.

AP: Yes, and I’m going to throw the ball to you this time. What’s number two on our list of demands?

SO: Number two on our list of demands is a unified portal for content delivery. So setting aside the authoring issue for a minute, you know, maybe it’s unified, maybe it isn’t. Give the end user a seamless user experience where they’re going in and they can get all the content they need across all the different, you know, technical content types. Now, there are a couple of specialized portal vendors that do have this and have solutions in this area. But if you’re going to position yourself as we are the solution for all things content, then this needs to be in your portfolio in some way, not just, oh, you know, go talk to this other vendor. So I think a unified content portal, again, I don’t really have a strong opinion on how this needs to be done from a technical point of view, other than words like seamless and good customer experience.

AP: I do have some opinions on how technically it should happen, and that is: copy and paste from one tool to another better not be part of this picture at all, because today it is, and it kills me. Especially on the product content side, we got over this hump of automated versus manual formatting. We’ve pretty much handled that. I think on the learning side, they’re starting to understand they should not be futzing and manually touching things. And right now, especially on the training content side, to get things to go to different delivery targets, there’s entirely too much copying and pasting between and among tools. It’s like every delivery portal requires you to do that. This is the 21st century, people, and it should not be happening.

SO: So okay, I would like to revise my opinion too. I do have some demands and they are those. I am co-signing Alan’s demands. Okay, what’s next?

AP: Hahaha! Okay, let’s talk about classification and taxonomy, because you’ve got to be able to label your things to sell different versions. If you’re selling software, you’ve got a light version, a professional version, and maybe an enterprise-level solution. You’ve got to build in that taxonomy, that intelligence. How are you going to do that? And how are you going to do it across multiple content types? That’s tricky, that last bit in particular.

SO: Yeah, and the terrible keyword here is enterprise taxonomy, right? You have to build out a classification system for your content, both for the authors and for the end users. The end users need the ability to say, oh, I bought the lite version, only show me that, not all these enterprise-level features that I don’t have. And how many of us have seen the infamous car manual that says, oh, if you have the XYZ CXE extended edition, you have this feature in your car? Well, I didn’t buy that version and I don’t have that feature.

AP: That just happened to me with a printer. I will not name the manufacturer because I’ve been happy with it overall, but it actually came with a printed user guide, which was shocking for 2023, which is when I bought it. It was like, and then it will do this, and this. And then there’s a little parenthesis later: and this model only. Well, that’s not the model I have, man. You’re killing me. So yeah, that’s not where you need to be with that kind of thing.

SO: Oh man.

SO: Yeah. And you’re not going to run out and buy the upgraded printer or car. That is not happening. Yeah, it’s too late. So okay, we need labels so that we can do versioning. But additionally, in this sort of enterprise content ops demand, we need those labels to be consistent across shared content. So for example…

AP: Too late.

SO: And I’ve seen this happen. In the technical content, we have free, pro, and enterprise. And then in the learning content, we have light, intermediate, and enterprise. And they’re referring to the same thing, but the labels are different. And hey, guess what? That’s not going to work. So fix it and give us a classification system, a taxonomy that we can use across all these different content dimensions. Now again, there are some tools that’ll do this. There are enterprise taxonomy tools; some people are using them, barely. Many, many, many people need to be using them and are not.

AP: There are.

AP: Right, I was about to say many people are using them, and even more should be using them right now. And I will almost give people a pass on this one, almost, because it’s like: get your ops to a certain point, and then this can be the let’s-improve-even-further step. But having it built in from the get-go would not be a bad thing either.
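The free/pro/enterprise scenario above can be reduced to a small sketch: one controlled vocabulary of tier labels shared by every content type, with filtering against it. The tier names come from the conversation; the data shapes and function names are invented for illustration, not any product’s API:

```python
# One vocabulary for ALL content types (tech docs, learning, marketing).
TIERS = ("free", "pro", "enterprise")

# Hypothetical topic records, each tagged with the tiers it applies to.
topics = [
    {"title": "Getting started", "tiers": {"free", "pro", "enterprise"}},
    {"title": "Advanced reporting", "tiers": {"pro", "enterprise"}},
    {"title": "SSO configuration", "tiers": {"enterprise"}},
]

def topics_for(tier: str):
    """Return only the topics a customer on this tier should see."""
    if tier not in TIERS:
        # Catches label drift like "lite" vs. "free" across departments.
        raise ValueError(f"unknown tier: {tier!r}")
    return [t["title"] for t in topics if tier in t["tiers"]]

print(topics_for("free"))  # ['Getting started']
```

The point of validating against `TIERS` is exactly Sarah’s complaint: if the learning team tags content “light” while tech comm tags it “free,” a shared vocabulary turns that mismatch into an error instead of a silent gap in the customer’s docs.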

SO: Yeah, and related to this terminology, the words that you use for different things. If my learning content talks about a door and my technical content talks about a doorway or an entry point or an I don’t even know, then that’s not going to work. So you need to call the thing what it is and do that consistently across all of your content.

AP: Including your marketing content, because if you’re talking about brand and consistency and voice, this is a huge part of that. I’m sure your marketing department would be delighted for there to be some controls, some kind of corralling, to be sure people are consistent and give a consistent brand image in the way we refer to things.

SO: All of it. Yeah. Yeah, and it also ties into some, you know, typically some protection for trademarks and those kind of branding and those kinds of things. And this isn’t, you know, the focus of this, but if you are translating or localizing your content, you have to do this work in all your languages, not just your source language.

AP: 100%, and if your source language is crap, the translation is going to be crap too, as far as consistency and anything else.

SO: It’ll be crappier. It’ll always degrade slightly. So yeah, okay. And then what else have we got in our unified hallucinations slash vision?
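The door/doorway problem Sarah describes is mechanically checkable. This is a minimal sketch of a terminology check; the term list is invented from the conversation’s example, and a real check would draw on a managed termbase rather than a hardcoded dictionary:

```python
# Map deprecated variants to the approved term (illustrative data only).
PREFERRED = {
    "doorway": "door",
    "entry point": "door",
}

def check_terminology(text: str):
    """Return (variant, preferred) pairs found in the text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in PREFERRED.items() if bad in lowered]

print(check_terminology("Open the doorway and locate the entry point."))
# [('doorway', 'door'), ('entry point', 'door')]
```

Running a check like this on source content before translation is the cheap version of Alan’s point: inconsistency fixed in the source language is inconsistency you don’t pay to fix in every target language.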

AP: There’s one more. Yeah, it’s like we want everything. The last one, that’s yours. I’m gonna give that to you.

SO: Oh, so, you know, we’ve talked about unifying technical content, marketing content, help content, maybe UX content, those kinds of things. But there are two other missing pieces, which you touched on marketing, that’s one, and that’s a big one. And the other one is knowledge base, support content. So you know, where are those in this unified vision? All of these things are…

AP: Yep.

SO: …from an end user’s point of view, they look at all of this content, and yet all of it is being done in point solutions, in dedicated, this is only for the knowledge base, this is only for marketing, this is only for tech comm, this is only for whatever. And so we need to unify all this stuff so that there is in fact a unified customer experience. I don’t see a whole lot going on here with knowledge bases. If you look at marketing content, there are a couple of vendors that have ways to take the technical content and push it over or integrate it into the web CMS. But in general, this is much more challenging than it should be. And depending on your web CMS, you may or may not have a path for this at all, other than put it side by side or something like that. So I would…

AP: And news flash, guess what? The people reading your content do not give one about how you classify this as sales or marketing or KB or whatever. They just want the information, and they want it right then and now, in a way they can get to it very quickly. They don’t care if you think this is quote marketing content. Just give it to them, and be sure it’s correct, please. P.S. That’s also very important.

SO: Yeah, and I mean the reality is that people’s websites reflect their org charts. There are all these point solutions, and different people own different chunks of the website or subdomains or whatever. But okay, fine, if you’re going to have these acquisitions and tell me how great it’s going to be, then show me the results. This is what we want.

AP: Well, we are asking for the world, so we might as well get all of our demands out here, and that is certainly one of them. All these tools really support these increasingly false classifications based on org charts and whatever else, but the end result, the end content result, shouldn’t reflect those things. There should be unification there, not these weird distinctions based on the way people report to each other within the company, because your customers don’t care.

SO: So while we’re making friends and influencing people, we did also come up, as always, we did also come up with a list of things that we do not care about. So what you got?

AP: As always, as always.

AP: Yeah. I do not care that you have four or five different solutions that do different things under your brand, especially if they don’t talk to each other. If they’re just multiple tools speaking to different audiences, how is that really any different from different people owning different things? There’s a disconnect there for me, a huge one.

SO: Yeah. We have single vendors with lots of tools, which may or may not integrate. We have multiple vendors with individual tools, which again do or do not integrate. The degree of difficulty in integrating these various tools does not appear to be particularly tied to whether they live under the same roof or not. Fix the integration. I don’t really care about the ownership. I understand that from a business point of view you do, and that’s fine, but fix the integration. And so to my vendor friends who are currently apoplectic: have a drink, whatever, of choice. But your mission, should you choose to accept it, is to address our pain points and our customers’ pain points and actually deliver on the challenge of unified content. And I am so looking forward to seeing progress in this area. Alan, any closing words?

AP: I think I am going to throw back to the Willy Wonka and the Chocolate Factory movie character Veruca Salt and say, “I don’t care how, I want it now.”

SO: Thank you for listening to the Content Strategy Experts Podcast. I have nothing to add to that. Brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

What do you want to add to this wish list? Leave your thoughts in the comments below, or let us know on LinkedIn!

 

The post Our demands for enterprise content operations software (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:41
Rise of the learning content ecosystem with Phylise Banner (podcast) https://www.scriptorium.com/2024/01/rise-of-the-learning-content-ecosystem/ Mon, 22 Jan 2024 16:30:35 +0000 https://www.scriptorium.com/?p=22321 https://www.scriptorium.com/2024/01/rise-of-the-learning-content-ecosystem/#respond https://www.scriptorium.com/2024/01/rise-of-the-learning-content-ecosystem/feed/ 0 In episode 160 of The Content Strategy Experts Podcast, Alan Pringle and special guest Phylise Banner talk about the limitations of the learning management system, the rise of the learning content ecosystem, and more.

I think about enterprise-wide applications. Consider the tools that are used to generate help solutions. Let’s just use Jira as an example. You have a knowledge base, enterprise-wide, and everyone at the organization has access to ask a question or search the knowledge base, or something like that. That’s where I want to go, that’s what I want to see. I want my learning experience platform to be like that. I want a knowledge base that I can tap into any place, anytime, anywhere. And then, have my mastery checked in the ways that I want to have it checked.

— Phylise Banner

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Alan Pringle: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we’re talking about how people in the learning space are addressing challenges in their content operations. What do those changes mean for learning management systems? Is this the end of the monolithic LMS?

Hey, everybody, I’m Alan Pringle. Today, we have a special guest, Phylise Banner. Phylise, welcome. Please tell us a little bit about yourself and your background. 

Phylise Banner: Sure. Thanks for having me, Alan. My name is Phylise Banner. I’m a learning experience designer. I have, I want to say, over 25 years. I did the math the other day, actually. It’s about 27 years in higher education, corporate, non-profit, and government learning design. Before that, I worked in data visualization and information design. I came into this field in a little bit of a different way, although there are other folks who came into it the same way that I did, considering this from an information perspective rather than from a teaching perspective.

The minute I started working in the field, I was fascinated by educational theory, and pedagogy, and philosophies, and andragogy, and heutagogy. And techno-heutagogy, thanks to my friend Bill Pelz, there. But throughout the years, I have watched technology evolve alongside learning theory, and I’m fascinated by that. I have been in the content strategy space this whole time, both on the information design and data visualization side and going over into learning design. I’ve had a focus on content strategy all along. I’ve known folks at Scriptorium for probably 25 years.

AP: Well, probably so because we’ve been around since ’97. And we have crossed paths in conferences probably more times than we can tell people. Indeed.

Well, with your background, you’re the perfect person to talk to about this, in the learning management system, LMSs, and what’s going on with them because we’re certainly seeing a shift. And you’ve got your feet even more firmly planted in the learning space than we do. I’m very interested in your perspective. I think a good place to start, especially for people who may not necessarily be in the learning space, maybe even a little more content focused, let’s start with a quick definition of what a learning management system is and what it does for an organization. 

PB:  Oh, I didn’t know that was going to be on the test, Alan. 

AP: Curveball.

PB: Curveball. I’m not going to have the ultimate, perfect definition of what a learning management system is. If we want to talk about a content repository and what different content repositories look like: overlay a content management system with registration, the ability to create courses, to offer courses, and to show progress through those courses, whether it’s simply content, or content interaction and assessment. I would say those are the features that differentiate a learning management system from a content management system.

Early on, when learning management systems started to become more widely available, the joke was all it is is a content management system with the ability to register thrown on top of it. But there are so many pieces that are built into learning management systems these days, which is why the behemoths got to become behemoths. With student privacy, the data that’s being collected, learner privacy, the interactions between student information systems. Setting up the databases behind the scenes, so that it would be possible, back in the day, for a student information system, akin to Banner, Banner is one of the systems, to be able to talk to the data in the learning management system. 

AP: Sure. The people that use these LMSs, and I’m talking more about the trainers, the learning people, what’s the general process for creating content and getting it into one of these LMS systems? What’s their process? Or, does it vary from system to system? 

PB: It varies from system to system. It also varies from practice to practice. If we want to talk in any learning space, imagine a training session … We’ll talk about a physical learning experience, where you’re all designated to meet in the same place. You all show up in the same place. Someone walks in the room, drops a bunch of material or folders on the desk, and walks out of the room. That’s how most learning management systems are used. Unfortunately, it’s the way they were designed. It was upload your file-

AP: A dumping ground, essentially. 

PB: Exactly. A dumping ground with no context. That’s the same as someone just dropping … I used to do that when I did training. I’d come in, I’d drop it on the table, I’d walk out and say, “That’s what you’re doing to your learners.” If you don’t provide context for your information, that’s exactly what you’re doing to your learners. 

The process depends on the tool that’s being used. So we’ll say, am I using one of the big tools. Let’s say it’s Canvas, or Moodle, or Brightspace, or even if it’s Teachable, or Kajabi. It depends on how the LMS itself, or this learning platform, is enabling you to structure a learning experience and upload content, create assessments, enable interactions. The instructional design process or practice needs to happen first. 

AP: Right. 

PB: We need to design this learning experience. We need to consider how we want the learners to progress through that, how we want them to communicate with each other, with an instructor, with themselves. It’s sort of there’s no right answer to your question. 

AP: Sure. That’s true, even on the content side too. It leads into what I want to talk about next. It sounds like dumping ground, or maybe better than dumping ground, there’s still going to be some challenge and obstacles, especially to assign any intelligence to all of this content that you’re putting into these systems. What kinds of things, in general, do you see these people who are creating this learning content, what kind of hoops are they jumping through? What kind of workarounds, what things are they doing to get things to work better in these systems? 

PB: I’m going to roll it back a little bit, and talk about when learning content has typically developed, and shared, and reused.

AP: Yeah. 

PB: Because that reuse is something that we didn’t think about very much. Not everyone. 

AP: You’re not the only industry, either.

PB: Right. 

AP: Learning folks, we’re not slamming you at all because, trust us, it is a problem everywhere. 

PB: But coming from an information design space, reuse was always in the back of my mind, and classification’s always in the back of my mind. 

AP: Sure. 

PB: Having always known what a library system could do, what a database could do, how classification could help organize any type of information. Taking a look at learning management systems, and the ability to tag content and content types has been missing. 

AP: Yeah. 

PB: All along. I remember when I could first build a course in WordPress, and was able to program the heck out of that backend, and classify learning content, classify activities as activities. I also remember Angel, the learning management system, where we could do that within a learning object repository. And then, Blackboard acquired Angel so that went away. 

But, I think the struggles we’re up against now to make things talk to one another, our learning content repositories, our learning management systems. If we’re using these old, big solutions, there’s Lectora. I don’t want to go through and just bring out all these names. 

AP: Brand name salad, yeah. 

PB: Brand name, yeah, salad. But the newer tools are really taking into consideration how we might reuse content. How we might want to, how we might need to. 

Some of the things you and I have talked about in the past, and other folks at Scriptorium, are the possibilities of even going as far as using micro-content. Or the DITA learning terms, to really tap into those frameworks to become a little bit more consistent with tagging our content so that we can reuse it. 

One of the things that I see, one of the biggest challenges I see right now is you’ve got the training department, and the marketing department, and the documentation department not able to share content, using different systems. You see this all the time, Alan. 

AP: We do. We do.

PB: That’s your job. How do we make that go away? 

AP: You’re speaking my language. Yes, you are. 

PB: Yeah. 

AP: We have noticed, we have more clients from the training space now. They are really up against what you just talked about. Reuse and the single source of truth, those are two things that really, a lot of them, their hair is on fire because they are being forced to do copy-and-pasting for different versions. Copying and pasting from one system to another. 

PB: Right. 

AP: It’s my observation, and you can tell me if this is unfair, that a lot of tools marketed to the learning groups seem very closed and do not play well with others, at all.

PB: Completely. Completely. 

AP: Yeah. 

PB: We see that changing a little bit.

AP: Good.

PB: Once we started becoming comfortable using APIs and getting things to talk to one another. But the thing that’s still missing is that centralized database of information. 

You’ll hear the term learning experience platform being thrown around a lot these days. The way I have seen them used, I have never seen one used to its full potential. If we want to talk about how are we including or taking into consideration informal learning, what I learn in my kitchen about my job just because I happened to learn something that has something to do with something else I’m …

AP: Sure. 

PB: Just these tangents and things like that. And, how we capture them.

I am going to call out a product. I want to call out Docebo. The folks at Docebo know I love them. Theirs is the best approach to a learning experience platform I’ve seen, given the standards that exist in the learning space. You’ll need to find someone who’s more versed in SCORM than I am, but it’s the standard for exporting and importing across different platforms. But that’s just taking a package, downloading it, and putting that package somewhere else. It’s not letting one assessment, or seven questions from one assessment, talk to a different learning experience. 

AP: Yeah. I compare a SCORM package almost to an ebook, like an ePUB file, which is basically a container, a ZIP file really, a container file, full of HTML files. A SCORM package is very similar. It is just a container for a lot of files. 
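The container analogy can be shown concretely. A minimal Python sketch (the manifest stub here is illustrative, not a complete SCORM manifest):

```python
import io
import zipfile

# A SCORM package is at heart a ZIP archive whose root holds an
# imsmanifest.xml describing the course structure, alongside the
# content files themselves. Build a toy package in memory and list it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("imsmanifest.xml", "<manifest><!-- illustrative stub --></manifest>")
    pkg.writestr("index.html", "<html><body>Lesson 1</body></html>")

with zipfile.ZipFile(buf) as pkg:
    names = pkg.namelist()

# Like an ePUB, it's just files in a container.
print(names)
```

Handing over that one file hands over everything inside it, which is where the intellectual property concern comes in.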

I will tell you, one of our clients has been concerned about SCORM packages from an intellectual property, IP point of view, because the second you let that go and it’s just manually uploaded or imported into a system, it can be hard to retain control. But you’ve already mentioned APIs; there are ways to make virtual SCORM packages, almost like an API, where you can hold onto the content. With a traditional SCORM package, if I just hand it over, I’ve just given you my stuff.

PB: Yeah. 

AP: What if there’s an update, what if something’s outdated, whatever-

PB: Exactly. 

AP: It’s a big mess. I have also noticed that we will create automated transformation processes to basically create SCORM packages so people can put content into an LMS. The problem is LMS A likes a slightly different version of SCORM than B. Yeah, it is a standard, but there are flavors within that standard, we have observed.

PB: Exactly. 

AP: Yeah. 

PB: Oh, exactly. Yeah. 

AP: That’s another big pain point. But what I’m hearing from you is that two things are going on. Number one, some of the vendors are getting wiser about letting people create smarter content. Number two, people are starting to move to those platforms and realize that maybe the older way, having just that LMS sitting in the middle and that’s pretty much it, is not the way things need to be. There needs to be connectivity, a wider ecosystem of tools that’s not just in your department. It needs to be cross-department, in a lot of cases, in organizations. 

PB: Absolutely. I think about enterprise-wide applications. Consider the tools that are used to generate help solutions. Let’s just use Jira as an example. You have a knowledge base, enterprise-wide, everyone at the organization has access to ask a question or search the knowledge base, or something like that. That’s where I want to go, that’s what I want to see. I want my learning experience platform to be like that. I want a knowledge base that I can tap into any place, anytime, anywhere. And then, have my mastery checked in the ways that I want to have it checked. 

AP: Sure. 

 

PB: A lot of times, the learning management systems talk about being really focused on the learner, and more adaptive. I’ve seen adaptive systems, especially now with generative AI being so widely available. 

AP: I wondered if that was going to come up. There we go, the requisite AI mention. 

PB: We’ll get there again. The adaptive pieces, what I’m seeing are in content and serving up content. 

AP: Yeah. 

PB: Adaptive learning means you’re giving me different content, maybe because of something I’ve searched for. But are you giving me a different assessment? Are you giving me a different option to interact? This is where I see the future of learning experience platforms going. 

AP: Sure. 

PB: The experiences that I have will be different, will change. I haven’t seen it done well yet. I want someone to show me. 

AP: Well, even on the content side of the world, because at Scriptorium we’re focused on the product content side. You talk about, “We need to deliver omnichannel content, we need to deliver what people want at the time they need it, in the format that they want.” Yes, that sounds great, but not everybody is doing it. So again, this is not just about learning folks. This is a problem that’s universal. Yeah, I think there is a lot of room for improvement. 

Wherever content is, your source has to have that intelligence built in that lets you do that adaptive content on the fly. A quiz based on your location: you’re at this particular branch, or this particular hospital, so you’re going to get this training. If you don’t have that intelligence, that metadata, built into your source content so it can then be processed by the various systems, you’re sunk. That goes right back to this: you’ve got to start during the creation process and get that intelligence built into that content so you can do the adaptive things that you are discussing. 
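A hypothetical sketch of that location-driven selection (all names and metadata values here are invented):

```python
# Hypothetical sketch: content tagged with location metadata can be
# selected at delivery time. All titles and values here are invented.
topics = [
    {"title": "General safety quiz", "location": "any"},
    {"title": "Branch fire-exit quiz", "location": "branch-12"},
    {"title": "Hospital hygiene quiz", "location": "hospital-3"},
]

def select_training(user_location):
    # Prefer location-specific content; fall back to the generic set.
    matches = [t for t in topics if t["location"] == user_location]
    return matches or [t for t in topics if t["location"] == "any"]

print(select_training("hospital-3")[0]["title"])
```

The selection logic is trivial; the hard part is getting that metadata onto the source content in the first place, which is the point being made here.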

PB: Yeah. You talk about being in that place, or that space, and being served what’s appropriate in that moment of learning need. I’m fascinated by location-based tools. Lidar, iBeacons, like when I walk past this, I might need to learn something different in order to do something past this point. I think all of that is really important.

Let’s go into AI.

AP: Yeah.

PB: We touched on it. We don’t know what might come next.

AP: Yeah.

PB: I’ve embraced it. I love playing in this space. I talk about dreaming, drudgery, design, and development, and anything that can help with the drudgery piece is always welcome in my book. 

AP: It’s hilarious you said that because I was about to say we see it as another tool. If it can handle the drudgery of content creation, there’s several things I can think of. It could help you sort. It could help you … Yes, people still index things. Why not let AI take a whack at it? It may not be perfect, but then you can go clean it up. Any kind of pattern matching, that sort of thing, I think it does very well.

Now, we can quibble about whether you should be going out to open sites and dumping your corporate information in there. 

PB: Right. 

AP: But if you’re in a closed large language model that is specific to your company, your organization, why not let it look at your stuff and find relationships that you probably don’t have the time to go dig around and find, and it can? It’s just another tool. Do I think it’s going to replace content creators in any space right now? The only space where I think it might is if you are someone who is cranking out low-quality content: people who do, shall we say, not entirely truthful reviews on various sites, things like that. Things that can be put together fairly quickly. I think there might be problems for those kinds of people producing low-quality content. But when you’re talking about the spaces you and I are in, I see it more as a tool and not a replacement. 

PB: Absolutely. A lot of what I love about this community is that we talk about documentation and training on new products, where, well, nothing exists yet. 

AP: Yeah. 

PB: We can’t tap into existing content to generate this content. In the learning space, I see so much potential for different types of tutors based on information that we have, existing knowledge. 

You talked about intellectual property earlier, and that’s a big deal. 

AP: Very. 

PB: On the higher ed side, there’s the open education movement, open educational resources; open education is about enabling more access. I’d love to hear your thoughts on what open access looks like in this content space, the struggles we have, and maybe what advice you have for protecting intellectual property while sharing content. Creative Commons licensing is a beautiful thing, and being able to share learning content would be so helpful, but we don’t go there. Companies spend so much money creating from scratch the same trainings that other companies are creating.

AP: Right. 

PB: I just think about all the compliance training I’ve written in my lifetime. 

AP: What you’re talking about is a tightrope, and it’s a very difficult tightrope because we are a profit-based society, unfortunately. This is business. It can be very hard to give things away. 

I’m going to toot Scriptorium’s horn here for two seconds, in this regard, because what we did is create a WordPress-based site called learningdita.com, to teach people about an XML specification, the Darwin Information Typing Architecture, which, by the way, can be a very good fit for learning content. The source files are out on GitHub; you can download them, do whatever, and then you can take the classes for free. This was our thought on that: we are proving our own bona fides in this space, the DITA space, by putting these courses together, but then they benefit people too. And I’ll be blunt, they also benefit our clients, because instead of paying someone for an introduction to DITA course, people can take it at their own pace, through this self-paced learning that’s online, get a baseline that way, and it also saves the client some money. It doesn’t even have to be our clients; anybody can go out there and take advantage of this free training and not pay for it.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

You really have to think very carefully, bigger picture, about how this could pay off. Getting content out there, providing some open source training to people, can help you indirectly. It can prove your competence in topics. That’s the angle that I’m coming at it from. I don’t know if that exactly answers your question, but that’s where my brain has gone. 

PB: No, I like that. I wonder, can people reuse and reshape it? If it’s on GitHub, it’s there and someone could take it-

AP: They could take it, and adapt it, and they can take that source.

PB: Yeah. 

AP: Basically, the site that we have, a learning management system that sits on top of WordPress, is ours. That is just one instance of how you could use this content. If people wanted to take that content and then do something more print- or PDF-based, or some other format, they can. We’ve even had some other people in our line of work, in different parts of the world, take that GitHub content, translate it, and then create their own instances, in German and in French. That same content is out there and it’s been localized. If you want to, you can go to their LearningDITA sites and do it in those languages, if you’re more comfortable in, say, German or French. 

PB: There’s nothing to stop someone from taking it and charging for it, either. 

AP: If they wanted to, they could.

PB: Yeah. 

AP: Again, everything that you said, these are the kind of considerations you have to think about when you put things out there for free.

PB: Yeah. 

AP: It’s like you’ve got to let your child go out into the world and do their own thing. I’m comfortable with that, because at the end of the day, a more informed world about DITA, in our case, that is a possible customer for Scriptorium down the road. That is very business-y and maybe even a little repellent to put it so bluntly, but there you have it. There is a case for providing free, open source information, even from a for-profit corporation. 

PB: Absolutely. Absolutely. For those of you that are listening that are not familiar with Creative Commons licensing, I highly encourage you to go out and take a look, to see what the different types of licenses are. Because there’s that non-commercial use that I would recommend, in many cases. But, I like to think about learning content and the levels at which we would share it. 

AP: Yeah. 

PB: I would love to see more collaboration. Not just across departments, but across organizations. 

AP: Yeah. 

PB: I don’t know how we do that. 

AP: And again, there’s goodness to be had, but sometimes in a profit-driven situation, long-term thinking is not the motivator. Short-term profits are the motivator, so it gets very sticky in there. 

PB: Yeah. 

AP: Unfortunately. Because at the end of the day, if you’re a for-profit corporation, you’re not there for the sake of giving things away. You’re just not. With education, I think it’s even a little stickier perhaps, because you are talking about trying to improve people, their knowledge, to give them more information. Where do you draw that line? 

PB: I’m going to stop you there and say okay, if we separate this out into knowledge, skills, behavior, attitude-

AP: See, this is the learning person talking right here, and I’m going to sit back and let you do it. 

PB: Knowledge, skills, behaviors, attitudes. Let’s think of, what if for skills, because yes there are some skills that are unique, but what if we shared that learning content? I can’t tell you how many times I have reinvented the wheel.

AP: Oh, sure. 

PB: Every other learning designer has done the same exact thing.

AP: Yeah. 

PB: That’s our job, to continuously reinvent the wheel.

AP: Yeah. 

PB: Maybe, we need one giant learning experience platform, where we can have skills. I would say, knowledge, skills … Think about how we’ve learned, and how you want to learn, and how you will learn in the future. I’ve heard people come out and say, “I don’t want to learn from a robot.” You already have been for years and you may not have known that, but you have. The expertise that we’re relying on in any learning experience when we introduce an instructor, we need to factor that in. What does it mean if instructor A has this content that they’re delivering as part of a learning experience, or instructor B has that content and they’re delivering this learning experience? Are they going to be two different experiences? In my mind, yes. They’ll be different, depending on the individual’s expertise that they’re bringing. But is that going to get minimized, will that go away? 

I’m rambling on here, Alan. I’m so sorry. But now I’m thinking of influencers, and influencers online, and product influencers. They’re educators, too. 

AP: They are. 

PB: Where does that content lead us when we’re learning from TikTok, or we’re learning from Instagram?

AP: Sure. 

PB: Gosh, I’m all over the place, here.

AP: No, it’s a valid point. I know a lot of people who run to YouTube for a video to learn how to do things. This drives me back to the question I would like to wrap up with: with all of these changes that you’re talking about, all this sharing that needs to be going on, all this reuse that should be going on, what does that mean for the LMS, from your point of view? 

PB: Well, this is a dinosaur I would like to see hit by a meteor tomorrow. I have never been a fan of the learning management system. I think that the information repository with the ability to customize the interactions you want is what an LMS needs to be. Too many times, I have been forced into designing, developing and delivering a learning experience around the limitations of the learning management system.

AP: I have seen clients do exactly what you just said, and they hit a breaking point and they say, “No more.” 

PB: Yeah. They’ll sit there and say, “No more,” until someone offers them a solution. What is that solution going to look like? What I see that solution looking like is a lot of pieces that fit together. It’s app salad, strung together with a central content repository that can be classified and searched.

There is a learning experience platform out there where you can just create these adaptive learning experiences, and there is no tagging, there is no metadata. I know that they’ve used large language models to generate results, but I still don’t love it. 

So, the future for me: the meteor may not hit. It may be a slow death. I’d like to be there when they bury Blackboard. Just throw a handful of dirt on that LMS. But I’d love to see someone come up with a solution that helps us stop reinventing the wheel, helps us invent a new form of transportation that we don’t even know about, to push that metaphor a little too far. 

AP: No, but I think that’s a very good place to end it. Future thinking, some positivity, but there’s some real work that needs to be done before that.

PB: Absolutely. 

AP: Phylise, thank you so much. This conversation went to some really interesting places that I didn’t expect and that is always a plus on a podcast like this. So thank you so much for your expertise, we deeply appreciate it. 

PB: Oh, thanks for having me. I love you folks at Scriptorium, I love the work that you do. I love the way that you educate folks. And maybe, someday, we can partner and solve this problem together.

AP: A lot of people would be very happy if we did, indeed. 

Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Rise of the learning content ecosystem with Phylise Banner (podcast) appeared first on Scriptorium.

Tips for moving from unstructured to structured content with Dipo Ajose-Coker https://www.scriptorium.com/2024/01/tips-for-moving-from-unstructured-to-structured-content/ Mon, 08 Jan 2024 12:31:42 +0000 https://www.scriptorium.com/?p=22282 https://www.scriptorium.com/2024/01/tips-for-moving-from-unstructured-to-structured-content/#respond https://www.scriptorium.com/2024/01/tips-for-moving-from-unstructured-to-structured-content/feed/ 0 In episode 159 of The Content Strategy Experts Podcast, Bill Swallow and special guest Dipo Ajose-Coker share tips for moving from unstructured to structured content.

“I mentioned it before: invest in training. It’s very important that your team knows first of all not just the tool, but also the concepts behind the tool. The concept of structured content creation, leaving ownership behind, and all of those things that we’ve referred to earlier on. You’ve got to invest in that kind of training. It’s not just a one-off, you want to keep it going. Let them attend conferences or webinars, and things like that, because those are all instructive, and those are all things that will give good practice.”

— Dipo Ajose-Coker


Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

This is part two of a two-part podcast. I’m Bill Swallow. In this episode, Dipo Ajose-Coker and I continue our discussion about the top challenges of moving from unstructured to structured content. 

So we talked about a lot of different challenges, and I don’t want this to be some kind of a scary episode for people. Let’s talk about some tips you might have for people, as they do approach this move from unstructured content to structured content. 

Dipo Ajose-Coker: Yeah. Now, I would always say the first thing is start small and then scale up. You need to take one example of each type of manual. Where I used to work, we had user manuals, pre-installation manuals, service manuals, maintenance manuals, and so on. Some of them are similar in that they’ve got similar types of content, we’re just removing parts of it. But some of them are really radically different. So we took one user manual, one service manual, and one pre-installation manual, three major types of content. And then you convert that and test it to breaking point. Then, through the back-and-forth of fine-tuning that conversion matrix, you’re more confident that, when you then throw the rest of the manuals in there, you’ll have a lot less cleanup. I’m never going to say that you’re going to have zero cleanup, you will always have cleanup. But you will have a lot less to do in cleanup, in manually going to look for those areas where the conversion didn’t work. 

I mentioned it before: invest in training. It’s very important that your team knows, first of all, not just the tool, but also the concepts behind the tool. The concept of structured content creation, leaving ownership behind, and all of those things that we’ve referred to earlier on. You’ve got to invest in that kind of training. It’s not just a one-off, you want to keep it going. Let them attend conferences or webinars, and things like that, because those are all instructive, and those are all things that will give good practice. And share that in between. Maybe have a train-the-trainer type of program, where there’s one person who’s your champion within the company, and who does all the conferences, and does all that. And then comes back, summarizes, and trains the rest of the staff. 

Your migration must be detailed in the planning. You’re basically saying, “Step one, we’re going to do this. Step two, we’re going to do this.” I create phases out of those, because you might have to repeat a whole phase again at a different point in time. One of the phases, for example, is verification of the content. Was what I put in what came out? When I compare my Word document and I compare the XML of it, does it match? And then, you’ll do a few things, and then you’ll publish. But you’ve got to verify again, because some of those mechanisms, like I said, pushing content at publication, picking the wrong key, using the wrong DITAVAL, would create different content. So again, you’ve got to do that verification again. You’ve got two verification phases, in that case. 
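That first verification pass, checking that what went in matches what came out, can be partly automated. A minimal sketch, assuming the source is available as plain text and the converted topic as XML (both samples are invented):

```python
import xml.etree.ElementTree as ET

def text_of(xml_string):
    # Flatten all text nodes of the converted XML in document order,
    # normalizing whitespace so markup differences don't matter.
    return " ".join("".join(ET.fromstring(xml_string).itertext()).split())

# Invented sample: the text extracted from the source document, and the
# XML produced by the conversion.
source_text = "Install the unit. Connect the power supply."
converted_xml = (
    "<task><step>Install the unit.</step> "
    "<step>Connect the power supply.</step></task>"
)

# The verification check: does the converted content carry the same text?
print(text_of(converted_xml) == source_text)
```

A real verification phase would also diff topic by topic and report where the texts diverge, but the principle is the same.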

BS: Yeah, I think that’s actually a really good point. Because we also see that, even when you have a smooth migration of one particular content set, once you move on to a different manual, there might be something unique about that one that suddenly, everything goes sideways when you try migrating. And you don’t have a home, or you don’t have a structure planned for a certain piece of content that you probably didn’t realize existed. 

DA-C: I’d say also, you’ve got to be flexible. No matter how much planning you put into place, the plan is always 100% correct until you start executing it. And it’s at that point that you’ve got to be flexible and be able to say, “Okay, well things did not turn out right. Let’s adapt to that.” And by the end of that phase, we’ll be able to take a look back and say, “Okay, well this went wrong at this point. Can we fine-tune it? Or is it something that we should just anticipate will always go wrong?” If you know that it’s always going to go wrong, you’re better able to plan for that. You know that you just need to add that step to the next phase, to check that this was as expected. 

Look at the long-term benefits. Take that translation example, with that first boom, bang: “We already paid for the translation six years ago. Why do we have to pay for it again?” The long-term benefit is that, six years ago, you paid 100 grand for your translation, say. And then, every year, you were paying 20 grand because of updates. So that’s six years of 20 grand, 120, plus your 100 initial cost. Then, you switched over to DITA, where they’ve promised you your translations are only going to cost you 10 grand a year from now on. Well, that first hit is going to still be maybe not 100 grand, but let’s say 80. People balk at that and say, “Well, you said it’s going to be 10.” No. Because for the next six years, you’re only going to be paying 10. So in the long term, it is eventually costing you less. Apply that to whatever part of the scenario you want. You’ll find that, long term, it’s best. 
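The arithmetic in that example works out as follows (figures in thousands, taken from the scenario above):

```python
# Unstructured: 100 up front, then 20 per year for six years of updates.
unstructured = 100 + 6 * 20

# DITA: roughly 80 for the initial conversion hit, then 10 per year.
structured = 80 + 6 * 10

# The first-year cost is higher than people expect, but the total is lower.
print(unstructured, structured)  # 220 vs. 140
```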

If you look at what’s happening today, and I will only mention this once, ChatGPT and training large language models and all that. Well, training large language models on structured content has proved far more efficient than just hoovering up content that does not have semantic meaning attached to it through metadata. You know, attributes that you add on saying, “This is author information. Or this is for product X, version Y, but there’s a version X as well available.” If you look at it in the long term, those companies that have already moved to DITA are going to be better able to start quickly switching their content, repurposing it, feeding it to their large language models. Using it to train their chatbots. Their chatbots are better able to pick up micro-content. 
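A hypothetical sketch of why that metadata matters downstream: a topic carrying semantic attributes becomes a self-describing chunk that a retrieval or training pipeline can filter on. The element and attribute names here are illustrative, not a real DITA topic:

```python
import xml.etree.ElementTree as ET

# Illustrative structured topic with product/version metadata attached.
topic_xml = """
<topic product="X" version="Y" audience="administrator">
  <title>Resetting the device</title>
  <body>Hold the reset button for ten seconds.</body>
</topic>
"""

root = ET.fromstring(topic_xml)

# The chunk carries its metadata with it, so a pipeline can answer
# "which product, which version?" instead of guessing from raw text.
chunk = {
    "text": root.findtext("body"),
    "metadata": dict(root.attrib),
}
print(chunk["metadata"]["product"])
```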

If you look at Google today, you search for something and you get this little panel. You know, that YouTube video that tells you which section of the video answers your question. That’s micro-content. And having structured content, because you’ve got smaller, granular pieces of information, enables you to provide that sort of granularity of answers. Your users are going to be happier in the long term.

You need to, let’s say, plan for compliance. We’ve already mentioned that. Look at how you’re going to manage your terminology, because that’s another aspect. How are you going to, first of all, tag it? Making that decision is your information architect’s job. Which element are you going to use, uicontrol, or are people still going to be using bold italics around that? And how are you going to enforce that people don’t use that non-standard markup instead of the correct elements? 
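`uicontrol` is the DITA element for user interface controls. A toy check for the enforcement question, flagging bold-italic runs that should probably be semantic markup instead; real enforcement would live in Schematron rules or CCMS validation, not a regex:

```python
import re

def find_formatting_misuse(xml_string):
    # Simplistic heuristic: report text wrapped in <b><i>...</i></b>,
    # which an author may have used where <uicontrol> belongs.
    return re.findall(r"<b><i>(.*?)</i></b>", xml_string)

# Invented sample: one misuse, one correctly tagged control.
sample = "<p>Click <b><i>Save</i></b>, then <uicontrol>Close</uicontrol>.</p>"
print(find_formatting_misuse(sample))
```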

Localization is another area where you need to, first of all, warn all your stakeholders. If there are people that are going to push back … explain. Give the example that I just gave, that in the long term your translations will end up costing less, the turnaround time will be faster, and so on. And then there were those issues we used to have in that world: there was an update while it was out for translation, and then we had to pick up the PDF and highlight all those points that changed in between those two translations. That used to be such a headache for us. 

BS: Those were the worst. 

DA-C: Totally. And your CCMS is able to do that for you, in that it’ll send only the changed content. It can lock out content, things that you don’t want translated. 

There’s nothing worse than this: you send your translations out knowing that all your UI variables have been pre-translated as string files, so that importing those strings puts the correct term inside of those tags. And then your translators decide, “No, I think that’s a better translation for that UI label,” and you’re causing a whole load of trouble that’s going to come up and catch you later. I’m speaking from experience, again. Those are things that will get changed during a translation, and your system can lock them out. 

Another top tip is to invest in a quality translation service provider. Having a translation service provider that understands structured content is better than one that is just used to translating words all the time. They’re better able to understand the concept of “this topic is reused, so when I’m creating my translation, I must also translate with reuse in mind.” Not breaking tags in content, not moving things around in the content, all of that training needs to be present on your translation service side as well. 

And, you’ve got to leverage your technology for efficiency. Major tip there is create workflows, create templates. Templates will help your authors know that, “Well, for this topic type, these are the sorts of information types that I need to put into it. This particular topic needs a short description, and this one doesn’t.” So by picking the right template, they’re guided. They can concentrate, they can focus on creating their content. 

Workflows. Oh God, workflows. That’s another big one: review and approval workflows. What has been reviewed, what has been approved? If you’ve got content that’s already been approved, and then somebody goes and makes a change to that already approved content when it was not due for a change, that will cause problems during your audit. Because remember, you said you could prove to them that this topic was at version X, and that you didn’t touch any other topics. But then you sent everything off, and an SME made a change to one of the topics because they saw a mistake in there. 

Well, that’s not a good enough reason when it comes to audit: “I saw a mistake, so I made the change.” No, you need to follow engineering change management processes, which say that for every single change … I’m talking about regulated industries here. For every single change, I must have a reason for change. “I saw a typo in the text and I just decided to change it” is not a good enough reason. If you saw that, then you must create a defect and add it to the change log that you’re submitting, to say, “We changed these. Oh, and by the way, we were trying to fix this error. But as we were going through, we saw that somebody did not put any full stops in the sentences in this topic, so we decided to raise that as an improvement opportunity, and we added it to the docket.” So when the audit asks why those other topics, beyond the ones initially analyzed as needing to change, also got changed, well, we created a ticket for that and put it in there. 
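That rule, no change without a documented reason, can be sketched as a workflow guard. Everything here is hypothetical, not a real CCMS API:

```python
# Hypothetical sketch: in a regulated workflow, no change to approved
# content is accepted without an associated defect or improvement ticket.
class ChangeRejected(Exception):
    pass

def submit_change(topic_id, new_text, ticket=None):
    # Refuse any change that lacks a documented reason.
    if ticket is None:
        raise ChangeRejected(f"Change to {topic_id} has no associated ticket")
    return {"topic": topic_id, "text": new_text, "ticket": ticket}

record = submit_change("topic-42", "Corrected sentence.", ticket="DEF-1234")
print(record["ticket"])
```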

So leveraging workflows will also allow you to force things to go to the right person. How many times have you forgotten to send it through to legal?

BS: Yeah. 

DA-C: Using the final approval workflow, make sure that okay, well the initial engineers are excluded from that because they’ve already done their workflow, but we’re sending it for that final boss-level approval, and legal can finally sign off on it. Those are the things that are parts of what your tool can do.

Your tools can also help you find out what went on where. By being able to roll back, “Well, we made this change. We thought it was an improvement, but eventually it was just a stop-gap, we’ve made a better one. Let’s roll back to before, and then create that new one that documents this.” Well, your toolset, your CCMS is able to do that for you. We used to have to do this, again talking from experience, going into the archive database, looking for one that was roundabout the date of the change that we made, picking that one out, unzipping it. And then, the whole load of trouble. 

BS: I remember doing that. 

DA-C: Use and leverage technology. Yeah. 

BS: I remember doing that quite a bit, especially when we’d have someone from legal running down to the engineering floor and saying, “Hey, we need to find X version from X date, and see if it contains this particular sentence.” 

DA-C: Yeah. Yeah. 

BS: That was always fun. 

DA-C: Oh, yeah. Totally. 

BS: And then, needing to roll back and then reissue all the other following versions with the correct change. 

DA-C: That was always a nightmare. I can remember, there was one particular incident where someone, again, had gone off on holiday. Again, ownership of documents and so on. This change had to be made. There was a stop shipment, which means there was a defect found and the regulatory body said, “You’re not allowed to sell any more until you fix this, and you make sure that it’s all done.” So action stations, everyone. This person’s on holiday, so we go into the archives, look through, find what we thought was the right one. Only, that person had not checked in the real last version. So the corrections were made to the last but one version. And then, when you published it, some of the information that was supposed to be in there was not in there. But we were looking for that specific phrase, we found it. We thought, “Yeah, everything’s good.” Only by the time it goes out and gets off to the regulatory body, then they say, “Well, what happened to all these other changes then?” 

So investigation goes on, and then you’ve got to find out why. Those are all parts of the reason that pushed this organization to say, “Look, we need something that handles this a little bit better.” We had a stop-gap interim period where we introduced an SVN system, but that was on a local computer, and we were able to recreate repositories on everyone’s machine. But that relied a lot on discipline as well. People checking in stuff. And you could always break locks. I spent so much time fiddling with the SVN system on every update. It was just a lot, too much. The CCMS was able to resolve, let’s say, 80% of all those kinds of issues. I’ll never say that a tool is 100%, but it does help quite a lot. 

BS: Yeah. Having had some SVN or Git collisions in the past that we’ve had to unwind. Branches, upon branches, upon … Yeah. Having a system that can at least manage some level of that automatically is a godsend.

DA-C: Totally. 

BS: Well, Dipo, thank you very much. I think this will pretty much wrap the episode. But thank you very much for joining us. 

DA-C: Oh, thanks for having me. 

BS: Thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links. 

The post Tips for moving from unstructured to structured content with Dipo Ajose-Coker appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 16:24
Challenges of moving from unstructured to structured content with Dipo Ajose-Coker https://www.scriptorium.com/2023/12/challenges-of-moving-from-unstructured-to-structured-content-with-dipo-ajose-coker/ Mon, 18 Dec 2023 12:48:20 +0000 https://www.scriptorium.com/?p=22276 https://www.scriptorium.com/2023/12/challenges-of-moving-from-unstructured-to-structured-content-with-dipo-ajose-coker/#respond https://www.scriptorium.com/2023/12/challenges-of-moving-from-unstructured-to-structured-content-with-dipo-ajose-coker/feed/ 0 In episode 158 of The Content Strategy Experts Podcast, Bill Swallow and special guest Dipo Ajose-Coker discuss the challenges of moving from unstructured to structured content.

“I think we could make broad categories of challenges as tools, technology, people, and methodologies, and I think we’ll just dive into these because they’re not necessarily independent; some of them flow one into the other. One of the most complex and challenging parts is implementation. Changing over to a new tool also involves changing processes and training the staff. Basically, some documentation teams struggle with that initial learning curve.”

— Dipo Ajose-Coker

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the top challenges of moving from unstructured to structured content. This is part one of a two-part podcast. Hi everyone. I’m Bill Swallow, and today I have a special guest. I have Dipo Ajose-Coker from MadCap IXIA. Dipo, hi.

Dipo Ajose-Coker: Hi there, Bill. Thanks for having me on.

BS: Can you let our listeners know a little bit about yourself?

DA-C: Yeah, I’ve got a background in languages and IT. I did a bachelor’s in that and then, well, almost 20 years ago, I made the move to come over to France, and as teaching doesn’t pay that much, I thought I’d retrain and do something that still combines languages and informing people, and I found a master’s program for technical writing and that’s how I got into that. I did my master’s and I’ve been working in medical devices and financial technology companies as a technical writer and as a technical editor. Then a couple of years ago I got that itch to change professions again. I wanted a little bit more creativity in my writing, and so I went to content marketing, and so now I’m a product marketing manager for MadCap Software, representing MadCap Flare, MadCap Central, and MadCap IXIA CCMS.

BS: Excellent. Today we’re going to be talking about how you might be moving from unstructured to structured content and what some of the, I guess, challenges are in that move. I guess we’ll jump right in. I’ll just ask you what is one of the key challenges that people face?

DA-C: Yeah, I think we could make broad categories of challenges as tools, technology, people, and methodologies, and I think we’ll just dive into these because they’re not necessarily independent; some of them flow one into the other. One of the most complex parts, the most challenging parts, is the complexity of implementation. Changing over to a new tool also involves changing processes and training the staff. Basically, some documentation teams struggle with that initial learning curve. You’ve got to learn a new markup language, you’ve got to learn a new way of writing. Then you also need additional help, mostly from IT. You’re getting teams involved that never used to be involved in helping you put in your FrameMaker or whatever it is that you’re using. You didn’t need your IT department to set up Microsoft Word, for example, where that used to be the writing tool; setting up a CCMS involves a little bit more of a lift that documentation teams might not be experienced with or be comfortable with.

BS: The implementation really checks all of the complication boxes, doesn’t it?

DA-C: Totally. You’ve got so many more people involved and you’ve got time scales and everything as well to consider.

BS: I guess let’s dig a little bit into that. You mentioned conversion, learning a new markup system. What goes into that type of an effort?

DA-C: Okay, let’s look at the first thing. Everyone goes to school and learns to write English, French, whatever language it is, but then when you want to start moving to structured content, it’s usually an XML-based language, XML markup, we say. It’s not real coding, but it is still learning a new vocabulary, if you want, a new syntax, a new way of expressing yourself. The fact that it’s structured then means that, as you do in your own language, you have a certain way of creating a sentence. You have subject, verb, object, and so on in a particular order, and it gives you a particular meaning. That also applies to markup languages. Writers have to learn, in effect, a new language, a new way of expressing themselves that is valid and that the machine can understand at the end of the day, because when you start writing in XML, we are writing for machinery. So you’re learning a new syntax, and a new vocabulary as well.
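To make that concrete, here is a minimal example of the kind of structured markup being described: a DITA task topic. The element names come from the DITA standard; the topic id and content are invented for illustration. Note how the grammar dictates the order, much like subject-verb-object in a sentence: title, then short description, then the task body.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<task id="replace-filter">
  <title>Replacing the filter</title>
  <shortdesc>Replace the filter every six months.</shortdesc>
  <taskbody>
    <steps>
      <!-- Each step holds a command; the grammar will not let you
           put a step before the title, for example. -->
      <step><cmd>Power off the unit.</cmd></step>
      <step><cmd>Remove the old filter and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```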

BS: I guess coming from that angle in learning to essentially write in a different language, there would be some cultural and probably some workflow changes that would need to happen there.

DA-C: Absolutely. Learning that language for some people might be easy, and there’s lots of courseware out there that can get you into that way of writing, but it does involve classes and training entire teams, and not everyone might be open to retraining in a new way of writing. Once you have trained those writers and they’ve got up to a certain level, you can only do so much training. Afterwards, the rest comes with experience. Then another big change that your writing teams will have to make is that ownership, that question of “I own this content, this is my…” Owning the source content is something of the past; it’s a cultural change that has to happen within the team, in that we’re writing for a team, we’re just contributors now. We contribute to a pool of information, and you have to learn a way of writing so that the content you put into the pot can be used by other people.

My style of writing things might differ from somebody else’s style of writing things. All of those have to start disappearing in the way that the writers actually create that content, and that’s a big change for a lot of people. I’ve worked in teams where during the summer holidays someone says, “Well, okay, look, if there’s any changes, I’ll make them when I come back,” and even if there’s an emergency, they’ve locked down their files, you don’t have the latest versions and so on. You’re having to wait for that person to come back. For your teams, I suppose, one of the ways that you can make the medicine go down better is to let them know that they can own the output.

You own what you put together, and in structured content, in DITA, you have the concept of maps and bookmaps, so, well, they own that, because they’re the ones that have decided which topic goes before which, and so on and so forth. Then when they press that button, the PDF or the HTML output that comes out of it, they can sign their name to that. However, in the creating of the content, you must start thinking “I’m writing for a pool,” as they used to have in newspaper poolrooms. Everyone would contribute, and then in the end you have a whole newspaper.
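The maps mentioned here are themselves small XML files that assemble topics into a deliverable. A minimal sketch of a DITA map (file names invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>Product X User Guide</title>
  <!-- The order and nesting of topicrefs is what the map's owner
       decides: which topic goes before which. -->
  <topicref href="introduction.dita"/>
  <topicref href="maintenance.dita">
    <topicref href="replacing-the-filter.dita"/>
  </topicref>
</map>
```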

BS: I think that would probably go doubly for any content that certainly is going to be written for reuse so that you are absolutely writing for your team and not for just your particular need.

DA-C: Exactly.

BS: All right, so going from old to new, let’s talk a little bit about data migration.

DA-C: Now, this part of it is, I think, one of the most complex and the longest parts of that migration from unstructured to structured. You’ve got to make decisions as to how you’re going to convert that content. Are you going to bring in an outside consultancy or are you going to do it one at a time? You’ve got to make decisions as to whether you’re going to continue updating content that is being migrated, whether to use a production and staging server, whether to wait for that pause. If you are lucky to work in a company that does not do Agile, for example, and you have big breaks in between product releases, you could say, “Okay, well we’re going to take that time to then create all the new content.” Do you also want to convert all of your content? If there’s stuff that you’re not going to be updating, this is your chance to get rid of all that stuff.

Just don’t convert it and know that whatever you find inside of your CCMS is what has a life and is able to continue living. Then you also have to consider that no matter how much help you get, whether you’re writing it yourself or getting a conversion done by a consultancy, there’s going to be some cleanup to be done because if your content was written so well in the first place in Word that you could create a matrix, mapping it directly to DITA, there was no real point moving over to DITA.

Basically, that content was good enough as is, so you are going to have to come back and go over the stuff and change strategies as you go along and think, “Okay, well, we thought we’d be able to reuse this, but actually maybe it’s best to have a branch of this or create a duplicate of that topic.” You’ve also got to think a little bit further forward as to how that content is going to be localized, it’s going to be translated, and some of your reuse decisions must also consider that part of it, as well. In that, is it something that is translatable or should we have separate topics, and so we’re able to translate them differently depending on the context and so on. I think that that shows just some of the aspects of that complexity of that data migration.
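The reuse decisions being weighed here usually come down to DITA content references; a minimal conref sketch (file and id names invented). The branch-or-reuse question is essentially whether a reference like this still reads correctly, and translates correctly, in every context it appears in:

```xml
<!-- warnings.dita: the shared source topic -->
<topic id="warnings">
  <title>Shared warnings</title>
  <body>
    <p id="hot-surface">Do not touch the surface until it has cooled.</p>
  </body>
</topic>

<!-- Any other topic reuses the paragraph by reference
     instead of duplicating it: -->
<p conref="warnings.dita#warnings/hot-surface"/>
```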

BS: Yeah, the localization angle is a big one because even if you had a perfect migration, the way that the content is now essentially tagged is going to be different than how it was tagged before. Even if the text doesn’t change, there’s still going to be some segmentation problems, so you’re not going to get that 100% match that you were looking for the first time out. It’s something that we actually caution a lot of our clients with, as well. It’s like, “Expect to take a hit on the first localization pass. You’ll get a lot of leverage, but it won’t be a hundred percent, and then from then on you’ll see a huge improvement.”

DA-C: Yeah, totally. Real-world experience, this is what we went through when I was working with a medical device manufacturer, and we planned pretty much what we thought for everything, and we had that in mind, all the advantages. Oh yeah, drop in translation costs and so on, and that was what was communicated to the engineering teams who were the ones that eventually paid for the technical publications and so on, you know the way companies work, different departments, different budgets and so on. Then we converted everything and it came to that first release and we sent them what we sent out for translation. We got that translation quote back, and it was just a little under what the initial translation was, whereas what we were doing was just an update of some of the content, and we had some explaining to do in that.

“Oh, yes, well look…” Because of the way, as you said, segments are different. If you look at the code for a paragraph in Word, you’d put a bold on there, and then that segment goes off into the translation memory, and it doesn’t matter whether it’s bold or not; the words, that paragraph, is there as one segment. However, in XML, your bold is actually an element, b elements before and after, and when the translation management system starts looking through, it basically cuts off at that point where it encounters a new element.

It used to encounter a p and then end with a /p, whatever. With this newly migrated content, it’s going to start off with possibly a p, and then it’s going to come up with a bold, and then possibly an italics and end italics, and then a uicontrol if you were doing things properly, and things like that. Each of those becomes a segment, and so the translator then ends up with, “Well, it matches, but this changes.” Those fuzzy matches do cost you a bit more. Think of when we had to go back to engineering and explain all of that: further translations will cost a lot less, but this first one, you’ve got to be prepared to take that hit.
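As an illustration of the segmentation effect, take a hypothetical sentence (exact behavior varies by translation management system, and many modern tools treat inline tags as placeholders rather than hard breaks, but matches still degrade on the first pass):

```xml
<!-- In Word, this sentence went to translation memory as one segment:
       Click Save to store your changes.                              -->

<!-- The same sentence in DITA carries inline elements: -->
<p>Click <uicontrol>Save</uicontrol> to store your <b>changes</b>.</p>

<!-- A TMS that breaks on inline boundaries may now see pieces like:
       "Click " | "Save" | " to store your " | "changes" | "."
     so the old 100% match comes back as fuzzy matches instead. -->
```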

BS: Absolutely. Actually, speaking of costs, I’m sure there are others that we could mention here.

DA-C: Oh, yeah. Well, apart from training costs, which we’ve already brought up: while there’s free training, it’s never 100% free, because you are paying your staff while they’re doing that training, and so they’re not producing content, so it’s not free. You’re paying someone to do that, but you really should invest in formal training for your staff. Then there are the initial setup costs: the cost of the software, and the cost to your IT department of putting in place all of these things. You might need to pay for someone to create the publication outputs that you need if you don’t have that expertise in-house.

You might need to also invest in a content delivery system, because you were delivering PDFs before, but part of the whole content strategy is to have everything on a portal, on a website, and so, well, there’s maybe a cost that’s going to be added on to that. There’s the cost of the conversion. Either you’re paying a consultancy to do it or somebody in your team is going to be doing it and not working on the project that they’re normally working on, but these are all costs that will be in there. Some of them can be quite high and some of them would be just normal, one-off costs and so on. We’ve already talked about the translation.

BS: I guess let’s talk a little bit about the challenges of maintaining your consistency, because once you move to structured content, yes, structure has a series of rules. You can’t have this element before this element, and a lot of the systems enforce that for you, but what are some of the other things that you need to be careful about when it comes to consistency?

DA-C: Many teams, many organizations, think that once we’ve got this thing in there, it’s self-policing, if you want, in inverted commas; you don’t need an editor, you don’t need someone to go over that. That’s being overly reliant on the tools. However, you need to know that even if you have these rules on the order of elements that are allowed to be used, you might not want a particular element to appear in a particular type of content. For example, you have short descriptions, a particular type of content that you can add to your topics, but it’s not always appropriate. Well, between the user manual for product X, which is being written by Tech Writer One, and the same thing for another product within the same company, which is being written by a different person, one or the other might decide to include a short description, and they’re both valid.

They’re both valid topics. However, why does one have a short description and not the other? You need that editor, you need someone who’s there to be able to take a look at that sort of thing and to help harmonize content across the different content types that you have. You would maybe have an information architect who’s there not just to help set up that order of elements and help your writers learn how to use them, but also who’s there to show good practice, who maybe has a session every month to just say, “Okay, well, this is the best way to do this,” or “We found these examples. Could we make sure that we’re all following the guide for this type of manual, and this is the way we do it?”

Terminology is another big one, and you can either enforce it using third-party tools that can plug in, or you’d have someone in there making sure that you’ve used this term. When you’re creating terminology lists, it’s not just a list of approved terms. You should also be looking at terms that are not approved.

BS: Absolutely.

DA-C: That must not be used.
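Checks like these can be partly automated on top of the grammar. A sketch of a Schematron rule set, a common companion to DITA validation (the specific rules and message wording are invented), that flags a missing short description and a banned term:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <!-- Harmonization: every task should carry a shortdesc. -->
    <rule context="task">
      <assert test="shortdesc">Task topics must include a short description.</assert>
    </rule>
    <!-- Terminology: report occurrences of a non-approved term. -->
    <rule context="p">
      <report test="contains(., 'login to')">Use "log in to", not "login to".</report>
    </rule>
  </pattern>
</schema>
```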

BS: Absolutely. I would probably also mention the classic need for style, tone, and voice as well, especially now that you don’t have writers who own their manuals, “This is my manual. I wrote it from cover to cover, it has my voice, or it has my interpretation of the corporate voice in there.” But now you have a situation where you do have that reuse of individual topics in a myriad of different places, and if that style of that tone or whatever changes from one topic to the next, it’s going to be pretty jarring to someone who’s reading the whole piece.

DA-C: Yeah, a simple example is you have a writer who likes to use, “Please do this before you do that,” another writer who just goes, “Do this, do that.” If you are reading from one to the other, that can be really jarring and you might even take offense because you’re so used to the pleases and thank yous from one author, and then you get into this topic, which is actually a troubleshooting one, and you find you get this tone that they’re telling you off, whereas it was just a difference in style that should have been enforced globally.

BS: Yeah, equally jarring going from one topic to the next, active voice, passive voice, active voice, passive voice.

DA-C: Oh, yeah.

BS: Let’s see. We’ve got translation challenges, consistency challenges, some cost implications there, migration, overall cultural issues, and just the overall complexity of doing all of that work. Is there anything else we should mention here?

DA-C: Regulatory compliance.

BS: Ah, yes.

DA-C: I’ve worked in regulatory for pretty much all of my technical writing career, so that’s maybe about 14 or 15 years of the 18 that I used to be a tech writer. Adhering to industry-specific regulations can get very complex, and while there’s the promise of having a CCMS with version control and being able to prove that this output was created using this version of this topic (I could get that whole list out and prove it to you), if it’s not integrated within the quality management systems of the entire enterprise, then you’ll find that certain departments will not accept that as proof. Also, there are the mechanisms between your source files and what you can produce with DITA: you’ve got different ways of compiling your final output, and there’s stuff that you use variables for and stuff that you’re referencing by keys, and so it’s going to use this version as opposed to that version.

You can also push content in at the point of publication, so you don’t see it in that source. However, when you do publish it, then you see this new word in there. How do you prove to the regulatory department that all that content is sane, it is sound, it meets the requirements and so on? That was another really complex thing that we had to deal with. But by integrating the tools with each other, linking topics to requirements, for example, so you always have a requirements database (even if you’re using Jira, that’s your requirements database if you want), if you can link those two things as a starting point, then whenever a requirement changes, you know which topics are impacted. When you have to do a regression analysis, a topic impact, a change impact analysis, you’re better able to prove to the relevant departments that, “Well, you changed this requirement. One, we’re sure that all the topics that did refer to that requirement were analyzed and we made the necessary changes, but we’re also sure that we didn’t create any fallback impacts on other topics in the entire manual.”
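One lightweight way to record the topic-to-requirement link described here is in the topic’s prolog metadata; a sketch using DITA’s resourceid element (the requirement ID, application name, and topic content are invented):

```xml
<task id="calibrate-sensor">
  <title>Calibrating the sensor</title>
  <prolog>
    <!-- Ties this topic to a requirement in the tracker, so a
         change to REQ-1042 can be traced to impacted topics. -->
    <resourceid appname="jira" id="REQ-1042"/>
  </prolog>
  <taskbody>
    <!-- ... -->
  </taskbody>
</task>
```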

There’s a lot of complexity there that means you really need to strategize from the start on how you’re going to respond if you’re in a regulated industry, but then there’s also the part where it can help you. It’s a very interesting use case that I saw where we’re mapping DITA XML to machinery standards, and so a company that is an OEM manufacturer is able to supply the exact information required by each of the different subcontractors that they have by mapping that to the iiRDS machinery standard. That is a very interesting use case where regulatory compliance is enhanced by being able to map those two standards and being able to push the right information based on the metadata attributes and things like that that are tying both together. You’re easing some of the workload, the heavy lift that used to go on there.
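On the DITA side, that metadata-driven delivery typically rests on profiling attributes plus a filter file applied at publish time; a sketch (attribute values and recipient names invented; the actual iiRDS mapping would live in the delivery pipeline):

```xml
<!-- In the content: profiling attributes mark who needs what. -->
<step audience="service-technician">
  <cmd>Calibrate the sensor before reassembly.</cmd>
</step>

<!-- subcontractor-a.ditaval: a filter applied when publishing,
     so this recipient only receives technician content. -->
<val>
  <prop att="audience" val="operator" action="exclude"/>
  <prop att="audience" val="service-technician" action="include"/>
</val>
```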

BS: Very cool. I think this is a good place to wrap up, but we’ll be continuing this discussion in the next podcast episode. Dipo, thank you.

DA-C: Thank you very much for having me, Bill.

BS: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Challenges of moving from unstructured to structured content with Dipo Ajose-Coker appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 23:32
Design thinking & equity in design with guest Dee Lanier (podcast) https://www.scriptorium.com/2023/12/design-thinking-navigating-equity-in-design-with-guest-dee-lanier-podcast/ Mon, 04 Dec 2023 12:45:09 +0000 https://www.scriptorium.com/?p=22252 https://www.scriptorium.com/2023/12/design-thinking-navigating-equity-in-design-with-guest-dee-lanier-podcast/#respond https://www.scriptorium.com/2023/12/design-thinking-navigating-equity-in-design-with-guest-dee-lanier-podcast/feed/ 0 In episode 157 of The Content Strategy Experts Podcast, Sarah O’Keefe and special guest Dee Lanier discuss design thinking: what it is, what it isn’t, and obstacles and ideas for equity in design.

“Design thinking is not a model first. It is a mindset that incorporates a strong inquisitiveness. What’s happening here? Who are the people that are being affected by whatever problems that are happening here? And what don’t I know that I need to learn before proposing any solutions? That’s design thinking in a larger understanding.”

— Dee Lanier

Related links:

Dee’s top 4 design models:

  1. IDEO, 1978
  2. Stanford d.school, 2005
  3. Liberatory Design, 2016 (updated 2021)
  4. Solve in Time!, 2019 (solveintime.com)

Books:

Contact Dee:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

In this episode, we’re talking about design thinking with a special guest, Dee Lanier. Hi, everyone. I’m Sarah O’Keefe, and welcome, Dee.

Dee Lanier: Hi. Thank you so much for having me.

SO: It is great to have you. For those in our audience who don’t know, we literally met on a plane. So we were both headed to San Diego for different reasons, and had a really great discussion. And then I decided that that discussion really needed to be recorded, so, here we are. And thank you for being here.

DL: It was a fantastic conversation, and so I’m happy to continue it now.

SO: So, we complained a lot about AI and the state of the universe and a bunch of other things. But Dee, you’re a published author and a consultant, running around doing cool workshops. Tell us a little bit about what you do and how and where.

DL: The where part will probably be the most difficult because it’s literally all across the country, and sometimes internationally. But I am oftentimes brought in to do human-centered design, also known as design thinking work, helping organizations tackle challenges that they are experiencing, and then come up with some form of contract or goals. And then coaching them longer term in executing on their stated goals, and really being one who can infuse some form of instruction and help and supports in some cases. But also just being responsive to the roadblocks that they’re experiencing, some of their communication challenges, things of that nature, and helping them see their goals through. And then celebrating what they have accomplished as well as setting up some of their longer term goals that need to be evaluated over the course of three to five years.

SO: And so, this really sounds a lot like what we do here at Scriptorium, except where you’re talking about design thinking and human-centered kind of approaches, I’m deeply afraid that we are more about the systems and the tools and the software and the, I guess, automation centered approaches. But how do you define, for this audience that sits more on the techie software side of the world, how do you define design thinking for them, for us?

DL: Sure. Well, I feel like I have to always start off with helping people understand what I don’t mean by design thinking. And that is, if your brain lights up, and I’m sure some listeners say, “Oh, I know exactly what design thinking is,” what immediately comes to mind is a model or a process. That is what first comes to mind. And I would venture to say it’s either coming from IDEO’s model, established in 1978, or it’s Stanford d.school’s model, established in 2002 or 2005, something of that nature. So by and large, what they’re thinking of is a model and they’re thinking of a fairly recent phenomenon. And I like to say first and foremost, design thinking is exactly what it sounds like. It is thinking like a designer.

So if you’ve ever been in contact with any form of designer, someone who does graphic design, industrial design, interior design, you start to notice that these people think differently. And I would say it’s not just that they think in a manner that is different from other people. They literally slow down and they ask questions and they seek to understand. And that really is the goal, the seeking to understand before proposing any solutions.

So with that, I say, “Well, design thinking is not a model first. It is a mindset. And that mindset incorporates a strong inquisitiveness about what’s happening here. Who are the people that are being affected by whatever said problems that are happening here? And what don’t I know that I need to learn before proposing any solutions?” So, that is design thinking in a larger sort of understanding. And then if you’re curious about models, I could share a couple, because you can Google search at least 10. Which again becomes something that sometimes blows some people’s minds when they’ve been introduced to design thinking through a particular model.

SO: Well, we’ll take your top three or four and stick them in the show notes. And I wanted to touch… I mean, it’s interesting, right? Because we go in and we will look at things, and a lot of times we’ll say, “That’s not actually the problem. That’s the symptom.” Right? You see these issues, but you have to figure out what’s the root cause. And so I think really at the end of the day, there’s a lot of overlap there.

And I know that one of your focuses in addition to this design thinking lens and this really understanding the stakeholders and the organization and how they need to change to address the issue that you’re dealing with, is that you have a strong focus on design equity or equity in design. And I wanted to touch on that. I mean, I think most of us are familiar with the really obvious problems like you ask a search engine for images of a CEO and you get a collection of white men with good hair. But your practice goes way beyond this. And so what I wanted to ask you was, how do you look at equity in design? And what are some of the issues that leak into that work in ways that are not as obvious as my really dumb CEO example?

DL: That’s not a dumb example, that’s an excellent example. Or even just doing a Google image search on good hair or professional hairstyles versus unprofessional hairstyles, and then we’ll see what you discover. But that is part of it, even doing that, starting with an investigative practice or a prompt to get the conversation going. But equity in design, or, as I like to say, elevating equity in the problem-solving process, is twofold. The first is making sure that you’re actually gathering the people that are most proximate to whatever pain is being experienced as part of the process. And so it is not just an expert or consultant who’s coming in, who’s taking inventory of whatever’s happening, and then going off to the side and developing whatever their solutions are, and then coming back to the team and saying, “This is what I got. This is what you hired me as a professional or as an expert to do.”

I think that there’s a need for that in certain instances, but when you think about problems or challenges that affect a community, it requires that the community is engaged in identifying what is the root problem, what is the core of the problem. And being a part of the process for describing not just what the problem is, but also gathering the research so that they can see for themselves, so that they can also share the anecdotes of their experience and their exposure to whatever the challenge is. And then them also ideating and being a part of, “Well, we could do this, we could do this, we could do this, we could do this.” And bringing in their thoughts, their brilliance, but really it’s because they’re bringing their pain to the table, and they want to be a part of the solutions.

Because then lastly, their solution is proposed, goals are set, and there’s some action planning and some execution of those things. And if they’re a part of that process all the way through, then that eliminates the blame game of, “Well, this outsider told us we should do this. We never understood or agreed with that. We attempted it, it didn’t work. And I could have told you from the very beginning it never would’ve worked.” It separates that us-versus-them mentality, and instead invites everyone who’s really deeply vested in seeing that problem overcome to be a part of the solution. So, that is a long-winded answer to part one: that is what it means to elevate equity in problem solving.

Secondarily, it is literally taking on particular topics that are related to equity in whatever the setting. And so whether that be anti-bias work, which is what I’m oftentimes brought in to do. Or sometimes it is giving a distinction between individual and collective bias versus different forms of discrimination. Anti-racism work. And then also there’s an opportunity, and I see this last category primarily in schools, and that is civic engagement. And so it’s identifying a problem, understanding what the big problem is, and then spending the time with the collective group to problem solve.

But part of the work that I do in the pre-work is really listening well to leadership. And then having them help me identify who are the other people I should be talking to, to learn what the core issue is. So then we propose, “Okay, this is where we’re going to go with this next.” And it may be starting with bias or it may be going into anti-discrimination. Or it may go into, “Okay, it seems like this is an issue that is particularly related to racism, and we need to do some anti-racism training, not just in the sense of me giving you a bunch of terminology and building up your lexicon and helping you have a better understanding of what these things are, but really being a part of problem solving, identifying the particular challenges that are being experienced, typically by people of color, within your organization. And then how can we rectify those issues?”

SO: It’s interesting because in many cases, I think in the projects that we come into, nine times out of 10, the people on the ground, the line employees that are in the trenches doing the work, have a really, really good understanding of what the problem is and how to solve it. They know. I mean, they know what’s wrong and they know how to fix it, and they’ve already figured it out. But, as somebody or other infamously said, “You get more credibility when you commute on an airplane.” Because we’re outsiders coming in, we get additional credibility, even though we’re potentially saying the same things that your staff, your long-term employees, are saying.

And I’ve had, I mean many conversations where I would say to somebody, “Okay, you’re absolutely right about the problem here, and this is exactly the solution. You’re absolutely right about the solution, and this is what we’re going to propose. Now, would you like us to give you credit for it?” And 100%, I’ve never had anybody say anything other than, “No, you have to take credit for this because if I propose it will not get done.”

DL: Very, very interesting.

SO: It is an uncomfortable place to be. Right? But basically what they’re saying is, “Look, Sarah, we are going to leverage your credibility as an outsider to get the thing that we all agree we need.”

DL: Makes sense. Makes sense.

SO: Okay.

DL: Right. Makes sense.

SO: I mean, I can accommodate that, assuming… I mean in the scenario where we all agree that that is the right answer. But it is very upsetting to have person after person after person say, “I know the answer, I just can’t get them to listen to me.”

DL: You’re right. Well, and we may differ in approaches as well as how we differ in particular work that we do, in me more doing design thinking, you doing systems design. But what I like to do is help equip the community with the skills and the actual data that they need to move forward. Which is to say, “If you’re going to argue with this, know that you’re arguing against what the data says. And we are looking at the data.”

So if we can, attempting to be careful with my words, not to be taken in a different sense, but if we can objectify the scenario a bit… Which sidebar, when I do anti-racism work, part of the reason why I work more as a facilitator and guide the process is because it’s also extremely harmful for me to experience microaggressions, even in someone’s question. If I am being looked at as the expert who has the knowledge base, who has to respond to you, when you raise your hand and you have a critical question that also comes across like a confrontation. That can be incredibly challenging.

So instead, it can be set up where there are small groups, and those small groups, in collaboration with one another, are utilizing the same level setting of background knowledge that was not only given, but really facilitated. Because what I do is I try and propose questions and give the tools for people to discover on their own. And then we come to agreements, “Is this what we all saw? Is this what we all heard? Is this what we all understand? Any objections to that?” So I’m objectifying the scenario a little bit to say, “If you are having an argument still, it’s not with me.”

Because that can, for me as a facilitator, as a person of color, trying to lead a workshop that is oftentimes for the sake of helping the people of color within that community to not feel abused, I don’t want to experience the same abuse that they’ve been experiencing. I know why I am there. It’s typically due to a scenario, something that happened. And so, let’s have a conversation about what happened, and then let’s have some conversation about what else is happening. And then, what is your community most interested in tackling primarily? And then let’s discover how to do that. If I can stand more on the side and help lead in that regard, then I also protect myself. And that is honest and real.

SO: Yes. And thank you for doing this work because the whole thing just makes me twitch. Just listening to this, it sounds painful.

DL: Yes. Yes, it can be very painful, ’cause as I’ve been saying, part of what I do is anti-racism work. Well, I’ve had to learn a lot even in doing that. Now, my background is totally in this field. My undergrad and graduate work is primarily focused on race relations from a sociological perspective. But knowing about something does not make it easier in a situation where you find yourself being tokenized in the moment, experiencing a microaggression in the moment, noticing someone centering on themselves and their experience, and then confronting you to have to try and counter what they are saying because they see you as the enemy in this setup.

All of that is hard, so I’ve had to learn some things. Had to learn some things such as being mindful in the moment. I will give a shout-out to Rhonda V. Magee and her book, and I want to name the title properly: it’s The Inner Work of Racial Justice. Which is to take a deep breath and pause when experiencing something offensive in the moment. How do we stay professional when we notice that something that is being said or done causes harm, whether it’s to me directly or to others around? And how do we address that situation? So, doing the inner work.

Secondly, making a huge point to level set, to say, “What we’re going to do is attempt to make sure that everyone has the same baseline understanding.” So therefore, if I am brought in to do anti-racism work, I first have a large conversation about the concept of race. Because we’re not going to talk about an ism if we don’t understand the structure in which it’s being built upon.

And I ask three questions and give time and space and also some resources, so that a group can investigate on their own and say, “When was race created? Why was race created and how was race created?” So again, once those things are being investigated and discovered, they’re not doing battle with me, they’re doing battle with research, they’re doing battle with history. They’re doing battle with what is real and not what’s imagined. There are people in that room who could say, “I can tell you this right now,” but there are others in the room that need to discover that for the first time. So, that is part of what I do.

And then the next thing I do is ensure that we don’t move on with doing design thinking through these particular challenges, until we have set some expectations and some commitments from the people that are in that space individually. Because we’re going to work corporately, but we are going to need to individually agree on some things. And so, those things become things that I can always call back on and say, “Remember you said that you would commit to the following.” And so if there’s any need to address any issues, it’s based on their commitments, not the thing that I’ve imposed upon them.

And then of course, I’ve already brought up bringing in definitions of terms so that people aren’t just going off of their understanding of a concept. But at least we’re all utilizing the same definitions, as we talk and discuss them. But then what everyone is able to bring to the table is their experience with those particular concepts.

Those are things I attempt to do to create safety, in a sense, for the participants as well as for myself. But safety cannot be demanded or controlled in a sense of saying to the group, “This is a safe space.” Who says it’s a safe space? Safe for who? And how do we know? But we can do certain things to attempt to create safety. And then we can always stop and pause and call back to, are we actually doing what we committed to do or are we doing something different now?

SO: And so, some of the conversations that we had when we first met were actually revolving around some of these concepts you’re talking about, in terms of safety and bias. But what actually led us off was AI, right? We started in on this question of, “Oh, well, what does it look like to start to bring AI into some of these settings?” Whether it’s to support design work or it’s to support corporate training, K through 12 education, or anything else. The AI is out there, the tools are happening. What do you see? I mean, what’s your sort of capsule view of what’s going to happen, as we go forward with these tools in a variety of settings?

DL: Part of our conversation was acknowledging that AI and the various tools that exist, they’re not going away. We know that that is the case. I wanted it to kind of feel like, “Oh, let’s see if this is a trend that will fizzle away, like Wordle and Bitcoin.”

SO: Wait, one of those has actual value, and it’s not Bitcoin.

DL: I see what you did there. Exactly. But there are billions of dollars being invested in by big corporations. So part of what I do is try and say, “Well, let’s effectively utilize AI or let’s attempt to effectively utilize AI in a research process.” And so that is skill development, much of what it requires to not only participate in design thinking, but then to slow down, stop after what would oftentimes be like a rapid prototype. We quickly, within a very condensed timeframe, came up with what our proposed solutions to whatever said problem is based on this very limited amount of time.

But now that we have more time, extended time, we need to fill in the gaps with what is missing. Some of those may be interviews, and some empathy mapping. But it also requires deeper research. And part of that research requires understanding the tools that exist and how to use them effectively, and being mindful of things such as the bias that exists within them. And so, that becomes a whole workshop in and of itself.

We are going to deep dive into AI because, similarly to when we were talking about race and racism, people come to the table with varying degrees of understanding. And what ends up happening is some people presume that others know exactly what they’re talking about when they say whatever they say. Or there are others that have very, very strong opinions on certain things, and it’s clear in certain cases that they actually haven’t done much research, nor have they actually participated in or critically evaluated something from using it. But they heard something on NPR, they watched something on CNN, they listened to Fox News, and now they have opinions. And I always say opinions matter, but they’re not more important than research.

And so, having people actually deep dive into research. And that includes just starting off with, I’ve got three companies to name for you: Google, Microsoft, and Amazon. What do they all have in common? They are big data corporations. So, let’s start there. And I say, “One has invested $10 billion here, another has invested $4 billion there, another has invested $2 billion externally here. And who knows how many dollars they’ve invested internally for the development of their tools. And they own the space of data. It’s not going away. What kind of data do they have? How is that data being utilized? How can you be mindful of those things? And then how can you utilize these tools effectively, while also being mindful of the ways in which, if you’re not careful… You are the contributor to the data, and so you can be bringing your bias to the table as well.”

And so yeah, it’s a big, big discussion that still ends the way I think all design thinking activities should end. And that is concluding with some commitments. And so whether it is revolving around the particular challenge that people are experiencing or around AI and the challenges that it presents, I always bring up a fourfold framework for goal setting. And that is: what is it that we are trying to prevent, correct, improve, and excel in? And if we can set our feet on those four foundational pillars, then they become our guide as we continue to move forward. And AI is now just another part of that.

SO: So Dee, thank you. We could probably keep talking for a couple of hours, and I would appreciate that, and I suspect our audience would as well. But if people want to reach out, what’s the best way to find you? And we’ll, of course, also embed information in the show notes.

DL: Sure, sure. Thank you. Well, my website is Lanier Learning, my name, L-A-N-I-E-R, lanierlearning.com. I can also be emailed at dee@lanierlearning.com. I’m still on Twitter, or X, or whatever that thing is called, @DeeLanier. You can also find me on LinkedIn @DeeLanier. So my name is easy to find, and I would love to hear from some folks.

SO: So, Dee has a book out there in the world called Demarginalizing Design, which I would strongly recommend. And we didn’t have time to get into this, but it has some really interesting workshop techniques around how to get people engaged doing different kinds of things. Not just talking in a small group, but doing some more creative things, using what I believe is called Solve in Time.

DL: That’s correct.

SO: So, that’s out on your website. We will get all of that into the show notes. And I hope that we’ll have an opportunity to have some further conversations about where this mess is going.

DL: We’re all learning, right? Absolutely. Well, hopefully we will have more opportunities such as this. Maybe we’ll even find ourselves on another plane together, having a conversation.

SO: Seems likely. So Dee, thank you so much for being here. And with that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Design thinking & equity in design with guest Dee Lanier (podcast) appeared first on Scriptorium.

Ask Alan Anything: Resolving pain in content operations (podcast, part 2)
https://www.scriptorium.com/2023/11/ask-alan-anything-resolving-pain-in-content-operations-podcast-part-2/
Mon, 27 Nov 2023 12:31:44 +0000

In episode 156 of The Content Strategy Experts Podcast, Alan Pringle and Christine Cuellar are back discussing more pain points that Scriptorium has resolved. Discover the impact of office politics on content operations, what to do when your non-technical team is moving to structured content, and more.

“Here’s the thing. Skepticism is healthy. If people are trying to poke holes in this new process, sometimes they can actually uncover things that are not being addressed. That is real, that is useful. So don’t confuse that with people who were being a-holes and just being contrary for the sake of being contrary. Those are two different things, and you’ve got to be sure you understand those two things.”

— Alan Pringle

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. This is part two of a two-part podcast.

I’m Christine Cuellar. And in this episode, Alan and I are continuing our discussion about pain points, pain points that Scriptorium has resolved over the years. And we have a lot more to talk about. So Alan, let’s get right into it if you’re ready. Are you ready for round two?

Alan Pringle: Well, I took the first bit okay, I think. So let’s go ahead and knock it out.

CC: All right, let’s do this. Okay, so let’s talk about some more interpersonal pain points. So let’s talk about office politics. How does office politics impact content operations problems that might already exist?

AP: Oh, office politics affects every operation. Not just content, every operation at every level. And you have to be savvy and know how to play the game. If you have had experience with that at the company where you are, or at another company, it can be very valuable to understand how to read people, how some things are left kind of unsaid, how to infer things.

It’s also understanding that when you have a C-level executive with a priority, say they want content to be like X, that all of a sudden probably becomes a priority for you, even though it may not have been one in your mind. Because the person who has the money sees it as the priority.

So there are lots of things that you have to bridge and address, and it can be a minefield, absolutely. But if you’ve had experience with it before, or again, if you’ve worked with a consultant who has seen these things before and we have seen politics at Scriptorium, lots, it’s inevitable because humans are political beings. It’s just how it is.

CC: Yeah. What are some common office politics or sticking points for content operations? Is there anything unique to content struggles?

AP: This goes back to a little bit about what we talked about in the previous episode, in regard to finding the common communication method, a common language of speaking. Be sure you’re not talking at each other, that you’re talking with each other when you were talking about these things.

And again, this is not just about content. But what is about content is, content does often not quite get the attention that it should. So you may have to spend a little more time explaining its value as we discussed earlier. And that can be a sticking point here.

CC: Yeah. And if you didn’t have a chance to check out the earlier podcast, it is in the show notes, and I do recommend it. Because Alan also shared some specific metrics that you can have on hand to help communicate the value that content brings to your organization. So definitely recommend checking that out.

How about a pain point where people, whether that’s the technical writers or other people involved in this whole process, don’t really want to be helped? They’re kind of happy with what they’re doing. Maybe the reversal is true, they don’t see the need for the change, and maybe managers or executives are the ones pushing that change. How do you navigate that?

AP: Well, you have to find advocates at every level. Even though you’re saying some people may not see the value or are not feeling the pain, I bet there are other people who are sitting back looking at this. Content creators are saying, “This is crap. We need to fix this.”

If they can get other people on board, that’s how you do it. It’s more of a lateral thing. You’ve got coworkers explaining to you why we need to do this. That is much more effective than a top-down “you will do this.” Although sometimes you may have to play the “you will do this” card. And if those things aren’t done, it may be time for some personnel changes, perhaps.

Yeah, that’s not pleasant, but it can get there sometimes.

CC: Yeah, no, that makes sense. Do you feel like once they see the value of what’s trying to be done, or once they see a coworker that’s really motivated by this and sees the benefits, even if this one individual doesn’t, do you mostly see people being won over to the cause, quote-unquote the cause?

AP: Not always, but here’s the thing. Skepticism is healthy. Because if people are trying to poke holes in this new process, sometimes they can actually uncover things that are not being addressed. That is real, that is useful. So don’t confuse that with people who are being a-holes and just being contrary for the sake of being contrary. Those are two different things, and you’ve got to be sure you understand those two things.

But I can tell you I have seen, even on two projects within this past year, where I sense skepticism from certain people and I saw them change over weeks and months. It happens. It absolutely happens. And that’s when you know you’re headed towards success. Because people who were like, “I don’t think so,” are like, “Okay, I see this.”

People who now champion what you’re doing, that’s really rewarding and it will really guide you to success.

CC: And I’m sure that that really helps them, that they were able to question and bring honest questions, and feedback, and concerns about, I don’t know how this is going to work, that kind of stuff. They were able to bring that to the table and have that addressed to the point where they’re now fans. Like you said, they’re champions of … That sounds like a safe environment for them as well. Hopefully that resolves their concerns.

AP: That’s what you want. I mean, that is ideal. And it does happen. Absolutely, it does happen.

CC: Yeah. All right, so how about a pain point where you realize that your team wants to or needs to move to structure, but your team isn’t technical. Do you have any thoughts or examples about how that is navigated? Because that sounds painful.

AP: It is. And it doesn’t happen overnight. Again, we are talking about a situation where you need to win people over, help them understand the bigger picture. And this is where, for example, a proof of concept can speak volumes. Where you take a slice of content and use it, set it up in the new process or quasi-new process, close enough where you can demonstrate the change. Where you can demonstrate the value. That’s one tool that can be very effective in communicating things and bringing people on board.

Also, you got to remember, you cannot throw a completely different way of doing things on anybody, in any circumstance, at any job, not just content creators and say, “Here’s some new tools. Go do it this way.” No, you’ve got to have some knowledge transfer. You’ve got to have training that’s tailored to all the different levels, all the different users of the system and how they’re going to use it. So all of that is vital.

And again, I’ve repeated this probably ’til I’m blue in the face in past events and podcasts. When you are budgeting for a project, never, ever, ever leave out training, always have budget for training, or you’re going to end up with a system that nobody can use. What’s the point?

CC: Absolutely. And like you mentioned earlier, if the worst-case scenario happens and there is some turnover, I mean, we try to avoid that at all costs and try to win people over. But if that…

AP: That can be healthy. I will argue sometimes turnover can be healthy. If someone realizes that they are not going to be a good fit for this new process, maybe it is a good time to bring someone in who can.

Yes, the loss of that institutional knowledge, the product knowledge, the service knowledge, the process knowledge, I am going to fully acknowledge that is a big loss. It is painful. But big-picture wise, sometimes changes like that are exactly what you need to get things moving.

CC: Yeah, yeah, that makes sense. And if you have training, if you’ve budgeted for training like you mentioned earlier, it sounds like that could be something that not only helps navigate the transition, but it can also help new team members that come maybe six months, a year, or even more so after the changes already happened. It’s an asset that, it’s good to have in place from here on out.

AP: Yeah, I’m glad you mentioned that. Because there are multiple ways to navigate what you just mentioned. You can set up a “train the trainer” scenario, where a consultant or an expert comes in and basically gives people within the organization the knowledge they need to then spread the good news to other people. And people who were maybe hired even six, eight months, a year down the road. So you have got those resources internally.

You can also record training and use that as a resource as well. So there are ways to address that. But it is important. You’re right. It’s not just about the transition, it’s about helping people when they’re introduced to the process as new hires.

CC: Yeah. So shifting to some technical pain point questions, tell us about some scenarios or maybe some ideas you have when technical obstacles come up that weren’t discussed in the discovery stage. So this is probably particularly when a consultant is brought in. But during that initial assessment, there was a lot more hiding under the surface than was realized. How do you navigate that?

AP: I would hope there’s not a lot of that going on, because that means discovery probably wasn’t as deep as it should have been. It does happen. But I’m going to hope and cross my fingers that we’re talking about some things around the edges, edge cases, things like that.

When you have edge cases, you have to say, okay, do we need to spend time and money for the system to address it, or is this edge case a one-off, and things need to be reconfigured with this edge case, so it’s not an edge case? That’s one way of looking at it.

But if you find enormous gaps where you have completely glossed over something, there’s part of me that feels like discovery went a little awry. That’s where my brain is right now. And that’s like, did the consultant, did we do our jobs here? What happened here? I would take a hard look at that and there would need to be some soul-searching there for sure.

CC: So that’s a good point. That for the most part, the way that a consultant guides that initial assessment should flesh out the major problems. That’s what I’m hearing.

AP: I really hope, I really hope. Because a lot of times, if you’ve done this as long as we have, that sounds boastful, but it’s just a matter of fact.

CC: 1997, so yeah.

AP: Yeah.

CC: It’s been a long time.

AP: Yeah. Your antenna goes up and you’re like, I hear that, but I know that also means X, Y, and Z. So that’s where a consultant can be helpful. Because they can pick up on things that, on the surface, may not mean anything to someone who is mired in the pain. But it really means something to someone who’s seen this stuff before and can pinpoint, oh, if I’m hearing that, that means these things are also probably true. Let’s go digging around on those things.

CC: Okay. Okay. So how about a situation where you need a specific output, but the current authoring and publishing systems don’t support it? And there’s really no way around that.

AP: This is a signal that is not just about that output. It probably means your ops are not where they should be. Because good content operations, they are going to allow you, enable you to deliver to a yet-to-be-specified delivery format. That is the crux, the joy of good content ops. They are going to be basically future-proof.

If you’ve got things set up in a way where your content source is, let’s call it format neutral, and then you apply different transformation processes to it to create all the end results, delivery formats you need, one more delivery format shouldn’t be a huge burden if things are set up well.

Now, you may have to add another layer of intelligence, some new information into your source content to deliver that. But beyond that, you should be more or less ready for the unknown. That’s where my brain is anyways. I mean, to me, good content ops are not just about the here and now. It’s also about what’s coming down the road in 18 months.
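The format-neutral pipeline Alan describes here can be illustrated with a minimal sketch. The topic structure and transform names below are hypothetical, purely for illustration: one neutral content source, with an independent transform per delivery format, so adding a new format means adding one more transform rather than reworking the source.

```python
# Minimal single-sourcing sketch: one format-neutral topic,
# multiple delivery formats produced by independent transforms.

topic = {
    "title": "Replacing the filter",
    "steps": ["Power off the unit.", "Remove the cover.", "Swap the filter."],
}

def to_html(t):
    """Render the topic as an HTML fragment."""
    items = "".join(f"<li>{s}</li>" for s in t["steps"])
    return f"<h1>{t['title']}</h1><ol>{items}</ol>"

def to_markdown(t):
    """Render the same topic as Markdown; the source is untouched."""
    lines = [f"# {t['title']}"] + [f"{i}. {s}" for i, s in enumerate(t["steps"], 1)]
    return "\n".join(lines)

print(to_html(topic))
print(to_markdown(topic))
```

A yet-to-be-specified delivery format, in this sketch, is just one more `to_whatever` function over the same source dictionary.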

CC: And speaking of that, what happens when your content processes, you just outgrow them? Okay, two questions in there. One, that’s a pain point that was brought up a lot is, what happens when you outgrow your processes? So there’s that. But then also, number two, can you create a solution where you don’t outgrow your processes? Is that even possible?

AP: In theory, I think baseline, you can create something that is somewhat future-proof. I do believe that, and I’ve seen that happen. Especially if your content is structured, and it’s got a lot of intelligence, I’m going to use the M word, metadata, built into it. So you can slice, dice, and present that content in many, many different ways to many different audiences, versions, levels, whatever it is that you need at the end.

And then it also gets into content as a service, where other systems can pull in that content and use that intelligence to create what the end user, the reader, or whoever, what they need. And gives them exactly what they need, and often in real-time.

So yes, theoretically, you can do that. But like I said around the edges, you may have to add a little bit more intelligence here or there to your source to be sure that you can address that new delivery format. So yeah, you can do it. But nothing on this planet is foolproof, as much as I would like it to be.

But having structured, intelligent content that is filled with metadata, if you have that as a core, you can take that a really, really long way. A really long way.
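The “slice, dice, and present” idea Alan describes, metadata on structured content driving audience-specific delivery, can likewise be sketched in a few lines. The component pool and `audience` field here are hypothetical, not any particular system’s schema:

```python
# Sketch: filtering a pool of content components by metadata
# to assemble audience-specific deliverables.

components = [
    {"id": "intro",    "audience": {"novice", "expert"}, "body": "Welcome."},
    {"id": "quickref", "audience": {"expert"},           "body": "CLI flags..."},
    {"id": "tutorial", "audience": {"novice"},           "body": "Step by step..."},
]

def assemble(pool, audience):
    """Return the bodies of all components tagged for this audience."""
    return [c["body"] for c in pool if audience in c["audience"]]

print(assemble(components, "novice"))  # intro + tutorial
print(assemble(components, "expert"))  # intro + quickref
```

The same pool serves any number of audiences, versions, or channels; a content-as-a-service endpoint would apply exactly this kind of metadata filter at request time.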

CC: So Alan, are there any other pain points that we haven’t covered in our list that I’ve grilled you on? Is there any other kind of pain point that you’d want us to address right now?

AP: The only thing that I want to say to kind of close this up is that change is a people problem. Don’t consider it a tech problem. That’s kind of my overarching advice, based on all these questions that I’ve heard at this point. And looking at it simply through the lens of tools and technology, I think you’re basically guaranteeing you’re going to have your backside handed to yourself. That’s what I think.

CC: Yeah, that’s a really good way to phrase that. I like how you phrase that. Because that also applies to other aspects of the organization, not just content. But it has a big impact here.

AP: Basically, basic change management applies here. Good project leadership applies here. Yes, a hundred percent.

CC: Absolutely. Well, Alan, thank you so much for letting us grill you on this. Especially because you didn’t have the list in advance. You didn’t know what we were going to bring up today, so thank you for being here.

AP: Sure. It was interesting.

CC: Sure. It brought up a lot of really happy memories of resolving all these things very easily.

AP: And some unhappy memories as well. Yes, it did.

CC: All of the above. Yeah.

AP: Yeah.

CC: Well, yeah. Thank you so much. And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Ask Alan Anything: Resolving pain in content operations (podcast, part 2) appeared first on Scriptorium.

Ask Alan Anything: Resolving pain in content operations (podcast, part 1)
https://www.scriptorium.com/2023/11/ask-alan-anything-resolving-pain-in-content-operations-podcast-part-1/
Mon, 13 Nov 2023

In episode 155 of The Content Strategy Experts Podcast, Alan Pringle and Christine Cuellar dig into pain points that Scriptorium has helped organizations resolve since 1997.

“The amount of time content creators spend on formatting and for little payoff, it’s just… the numbers don’t add up. Especially in the 21st century now that we have so many automated ways to publish things to multiple channels, if you are futzing and tinkering with formatting trying to deliver to multiple channels, I can say with a great degree of certainty, you are absolutely doing it wrong.”

— Alan Pringle

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re sharing stories about pain, specifically pain points that Scriptorium has resolved over the years. This is part one of a two-part podcast. I’m Christine Cuellar, and with me, I have Alan Pringle. Alan, thanks so much for being here.

Alan Pringle: I think you’re welcome, but I may regret it based on the format of this particular podcast.

Christine Cuellar: Yes, you may. So Alan has no idea about what we’re going to talk about today other than we’re talking about pain points. He has not seen the notes. I’ve instead collected data from our team about lots of pain points that Alan and the team have resolved over the years. So there’s going to be a lot of pain in here, Alan, but hopefully there’s going to be a lot of resolution as well. There’s hope.

Alan Pringle: We can only hope so, and I thought of a little subtitle for this. We can call it AAA, Ask Alan Anything, with very deep apologies to Reddit AMA, and to the American Automobile Association. So yes, this is-

Christine Cuellar: I love it.

Alan Pringle: … the AAA talk, and I’m frightened.

Christine Cuellar: Let’s do it. I’m so excited. I’ve been looking forward to today. Not looking forward to your pain, Alan, just these pain points. Anyway, it’s going to be great. It’s going to be so great. Okay.

Alan Pringle: We’ll see about that. Yeah.

Christine Cuellar: So generally we have some main reasons for why people come. They come because they’ve experienced a lot of mergers or acquisitions, and they’re trying to consolidate different ways of approaching content operations, or they have a lot of localization requirements. There’s a lot of big-picture challenges for why people come to us.

But yeah, we’re going to go ahead and pick Alan’s brain on some specifics. So let’s kick it off with this one, Alan. Can you tell us about a time or about some pain that was involved when you and the team moved a customer or a client from disconnected document systems to a unified system? Tell us how that went.

Alan Pringle: The thing is, this could be many, many people. Here’s the thing. I think you always need to rewind before you talk about the systems because you need to lay the groundwork before that, and that goes back to the pain points you were talking about. Okay. You ask the client, “What pains are you having?” And then, from there, you go, “Why are you having those pains? What can we do to stop those pain points, make things better,” and then you pick your system?

So that’s not a super fun answer, but there’s always this temptation to dive directly into tools, and I’ve said this a zillion times in presentations here, panels, and wherever else, don’t do it. Think about your requirements first. And pain points are a great way to dig out and tease out those requirements. But get those in place first, and then pick the tools that are going to help you address them the best.

Christine Cuellar: Yeah, absolutely. Yeah, start with what you need before trying to make a tool decision. That totally makes sense.

Alan Pringle: Yeah.

Christine Cuellar: How about this pain point? Dealing with manual formatting when authors have to manually format things all the time. Has that ever come up before?

Alan Pringle: All the time. The amount of time content creators spend on formatting and for little payoff, it’s just… the numbers don’t add up. And especially in the 21st century, now that we have so many automated ways to publish things to multiple channels, if you are futzing and tinkering with formatting, trying to deliver to multiple channels, I can say with a great degree of certainty, you are absolutely doing it wrong.

Why are you inflicting that upon yourself? Stop it. So yeah, don’t do that. Please don’t do that because it’s not a good use of your time, mostly because your reason for creating this content is to educate, help the people who are reading it. You need to spend the time on that, not on the formatting. It’s just not a good use of your time. It’s just not.

Christine Cuellar: Yeah. Do you have any examples of a company that was held back by the time that their team was spending on manual formatting? So maybe they were trying to translate into new languages or rebrand. Any examples of-

Alan Pringle: Oh, yeah.

Christine Cuellar: … how that went wrong?

Alan Pringle: I mean, again, it’s not so much that it goes wrong. It is that what they are doing is just not sustainable, and it is costing them so much more money. I mean, you think about it, if you have your source content, and because my primary language is English, really my only language is English. I’m going to say your source content is English. So all the time that you’ve spent formatting and getting that ready, you have to apply that to every single language. That effort becomes exponential. It’s multiplied again and again and again.

Please, why are you doing this to yourself? Don’t do that. You need to have a system where your formatting and your source language is as automated as it can be, and then that automation will then apply to the localized content as well. It really is just kind of stupefying to me to see people continue to spend so much time on formatting on source content, much less when they have to localize for, you know, how many different locations.

It is just, again, the money and the time, and then there’s the delay because, say, you ship out to your primary language. Again, I’m going to say English. It’s not always that way, but that’s just because I speak English. And then three or four or six months later, you’re shipping out the other languages. Why? You need to get that window down to almost simultaneous shipment so you won’t have this huge delay because if you have that huge delay, that is an income stream that your company is not getting from those markets that need the translated content. There you go.
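Alan’s point that the effort is multiplied per language is easy to see with assumed numbers. Every figure below is illustrative only:

```python
# Illustrative arithmetic: manual formatting repeats for the source
# language plus every locale. All numbers are assumptions.
languages = 12                   # target locales
manual_hours_per_release = 40    # assumed hand-formatting effort

# Source deliverable plus each localized deliverable:
total_manual = manual_hours_per_release * (1 + languages)
print(total_manual)  # → 520 hours per release
# With automated formatting, that per-locale repetition largely disappears.
```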

Christine Cuellar: Yeah. Yeah. So in a nutshell, for organizations that have had this as their primary pain point, you know, the writing team is spending way too much time manually formatting things, and they don’t actually get to do their job, which is write the content. What’s the big-picture fix to that? I know it’s probably different for each person and each organization, but where’s square one?

Alan Pringle: There’s lots of square ones here, so I got to be careful. There’s not a one-

Christine Cuellar: Yes.

Alan Pringle: … size fits all. If you are working in more traditional content development ways, when I mean by that desktop publishing, templatize. Your formatting should be coming from a template, creating a repeatable process, and that template can be applied to your localized content as well.

If you have outgrown desktop publishing, and that does happen, you need to look at structured content, and that means there is no formatting in your source content. It is applied automatically later on. When you do that, it basically takes it out of the author’s hands completely, and automated transformation processes apply it. So those are two go-to’s right there on how to possibly address that problem.
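A minimal sketch of the “no formatting in your source content” idea: the author writes structure only, and a transform applies presentation automatically. The element names and HTML output here are illustrative assumptions, not any particular standard.

```python
# Structured source with no formatting; presentation is applied later
# by an automated transform, outside the author's hands.
import xml.etree.ElementTree as ET

source = """<topic><title>Replace the filter</title>
<step>Power off the unit.</step>
<step>Open the access panel.</step></topic>"""

def to_html(xml_text):
    """Transform structured source into formatted HTML output."""
    root = ET.fromstring(xml_text)
    html = ["<h1>%s</h1>" % root.findtext("title"), "<ol>"]
    for step in root.findall("step"):
        html.append("  <li>%s</li>" % step.text)
    html.append("</ol>")
    return "\n".join(html)

print(to_html(source))
```

A second transform over the same source could produce PDF, help output, or anything else, which is how one set of source content feeds multiple channels.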

Christine Cuellar: Gotcha. How about this pain point? An organization is being asked to personalize content, or they’re being required to personalize content, but they have to rely on manual work to make that happen.

Alan Pringle: No. Just like I was talking about formatting, it causes me pain to hear about people who are basically copying and pasting content over and over and over again to make slight variations of content for different audiences. It happens all the time. Again, please don’t do that to yourself if you can help it. This can be basically the thing to help push you into improving your content operations.

It’s, again, a question of efficiency, a question of reuse. There may be a core of content that pretty much stays nearly the same, or static. It’s just that there are bells and whistles on the edges that need to change based on location, on audience, or whatever else. What you’re going to have to do is build in that intelligence. So you have got some content that’s being reused, and then you have flagged the stuff that is specific to a particular audience or whatever else. When you start getting into building in that kind of intelligence, you’re talking about structured content, usually XML, not always XML, but usually XML.

So you can build in that intelligence that says, “Okay, this is my common core of content. Then here are things that are a little bit different for all of these different things.” And you can have this huge matrix of things that are different: audience, location, product version level, whatever else. And then, based on those things, you can put in that intelligence and turn certain content on and off when you create whatever delivery points you have, whether it’s print, online, or whatever else these days. Lots of choices there too.
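The “turn certain content on and off” matrix Alan describes can be sketched like this, with hypothetical element and attribute names loosely in the style of conditional profiling:

```python
# Elements carry an audience attribute; publishing filters them.
# Unflagged elements are the common core and always survive.
import xml.etree.ElementTree as ET

source = """<task>
  <p>Insert the battery.</p>
  <p audience="expert">Recalibrate the sensor manually.</p>
  <p audience="novice">Contact support if the light stays red.</p>
</task>"""

def profile(xml_text, audience):
    """Keep common-core paragraphs plus those flagged for this audience."""
    root = ET.fromstring(xml_text)
    keep = []
    for p in root.findall("p"):
        flag = p.get("audience")  # None = common core, always kept
        if flag is None or flag == audience:
            keep.append(p.text)
    return keep

print(profile(source, "novice"))
# → ['Insert the battery.', 'Contact support if the light stays red.']
```

Real systems extend the same idea to product, version, platform, and more dimensions at once.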

Christine Cuellar: Yeah, that’s great. Is that something… Okay, just because this is kind of top of mind because we’ve been talking about this a lot recently. Is that something… If people are interested in pursuing that, should they look more at content as a service? Where would you recommend they dig for more information on creating that kind of a system?

Alan Pringle: Again, I would say go backwards and think about your requirements, what those things are. Personalization as a requirement. Yes, content as a service, and let’s explain what that is. When you have built intelligence into your content about audience, product variant, version, whatever else, you can connect systems together in a way where the system that is going to present the information to the end users, to the content consumers, can pull the information that it needs from the repository where you have stored your content with that intelligence built in. Yes, content as a service is great.

It sounds great, but you don’t start there. People don’t just say, “I need content as a service.” They may, but I don’t think it’s something that comes to mind immediately. What they’re thinking is, “I need a way to personalize this content for my different users so they get exactly what matches the version of the product that they’re using, for example.” Or, “I need the people who are taking this course in this learning management system to get things zeroed in on the way that they have their software configured and they’re trying to learn about it.”

Christine Cuellar: Yeah.

Alan Pringle: To me, there’s a distinction there. Yes, content as a service is a way to solve those problems, but I don’t think people generally go there top of mind. “That is what I need.” They think more about, “I need personalization.” Content as a service is a way to get it. That’s how I would like to frame it anyway.
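A toy sketch of the content-as-a-service idea discussed here: a delivery system asks the content repository for exactly what matches the user’s profile. The repository, keys, and wording are all invented for illustration.

```python
# Invented in-memory "repository"; a real one sits behind an API that
# portals, LMSes, and other delivery systems query at request time.
REPOSITORY = {
    ("troubleshooting", "v2", "admin"): "Reset via the admin console.",
    ("troubleshooting", "v2", "user"): "Restart the app from the home screen.",
}

def content_service(topic, product_version, profile):
    """What a portal might call to match the user's exact configuration."""
    key = (topic, product_version, profile)
    return REPOSITORY.get(key, "No matching content for this profile.")

print(content_service("troubleshooting", "v2", "user"))
# → Restart the app from the home screen.
```

The point is that personalization is the requirement; this request/response pattern is just one way to satisfy it.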

Christine Cuellar: Yeah. No, that totally makes sense. They need that personalization, but they need to not be relying on some person or a group of people going in and manually making all those changes because that’s just not… that’s not feasible to keep up with.

Alan Pringle: Oh, people do it all the time, and then they end up all having breakdowns because it’s just not sustainable. Yeah.

Christine Cuellar: Oh, yeah. Oh, yeah.

Alan Pringle: It’s awful.

Christine Cuellar: Especially as you grow. And yeah, I can see that’s a major scalability issue in so many different ways.

Alan Pringle: Yeah, yeah.

Christine Cuellar: So do you have any examples of that in action, of companies that need to personalize content and have been set up for success now? Even if it’s an unnamed example or stories you can share there?

Alan Pringle: Yes, and I’ve got to be careful here because I don’t want to get too much into it-

Christine Cuellar: Yeah.

Alan Pringle: … to identify the customer. But yes, we have done things where the end user is getting information, whether it be from a web-based portal, for example, that matches exactly their customer profile. So yes, we have done this, and I know you probably want more details than that, but-

Christine Cuellar: No, that’s fine.

Alan Pringle: … I don’t want to go too deep into it, but yes.

Christine Cuellar: Yeah.

Alan Pringle: We have done it. We are doing it as we speak. As we record this, we are working on projects trying to do that very thing. So that’s very much in our wheelhouse, indeed.

Christine Cuellar: Okay. No, yeah, absolutely. That’s great. That’s a great example. Okay, so let’s switch gears to another pain point. What has it been like for… or what do you recommend for people who are struggling with the pain point of inconsistent content? Either inconsistent content or maybe inconsistent ways of creating the content.

Alan Pringle: Well, inconsistency can exist on many levels here. It can be the tools that you are using. Not everybody’s using the same thing, maybe because of a merger. It can be the way that the content is organized. It’s not the same from one product to another or one service to another. It can also get down to the content itself: the way that people describe things. You are not consistent in what you call this widget in this document versus how you describe that widget and what it does in this online document over here. So there are multiple layers of inconsistency here, and it can even come down to how you use certain terms and terminology.

You’re not consistent in how you do that. And again, there are technologies that can help with all of those things. For look and feel, templatization can help make things more consistent, or you can move to structured content and have your formatting applied automatically to take care of that consistency. There are ways to basically enforce word choice, controlled vocabulary tools, to be sure that you’re using the terminology in your company consistently, or that different authors and content creators and content contributors are using a term consistently.

So again, there are lots of layers here, but there’s a way to solve all of them, so that basically you can use tech to take that burden off of you and you’re not having to always think about those things all the time. Having tech provide you a helping hand. And I dare say there’s a point we’re reaching now where even artificial intelligence, AI tools, can help with some of these things too.
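A toy version of the controlled-vocabulary checking Alan mentions. The term rules are invented examples; real tools are far more sophisticated (stemming, context, style rules):

```python
# Invented controlled-vocabulary rules: deprecated term -> preferred term.
TERM_RULES = {"log in": "sign in", "e-mail": "email"}

def find_inconsistencies(text):
    """Flag deprecated terms so a human doesn't have to hunt for them."""
    hits = []
    lowered = text.lower()
    for bad, preferred in TERM_RULES.items():
        if bad in lowered:
            hits.append((bad, preferred))
    return hits

doc = "E-mail us if you cannot log in to the portal."
print(find_inconsistencies(doc))
# → [('log in', 'sign in'), ('e-mail', 'email')]
```

Run across a whole body of content, a check like this surfaces exactly the kind of inconsistency work nobody wants to do by hand.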

Christine Cuellar: Yeah.

Alan Pringle: So as much as I get so tired of hearing about AI and all the irresponsible talk about it, you can also look at and frame AI as a tool to help you make things more consistent. It can help. For example, maybe look across a vast body of content and find where there are things that are not consistent, so you, as a human, don’t have to go and do all that horrible, crappy work.

Christine Cuellar: Yeah, yeah. And going back to something you said earlier about this. You mentioned, okay, so using a merger as an example, people using all these, trying to consolidate these different systems or trying to just work in these different systems after mergers.

Is it common for people to be, for lack of a better word, putting up with dealing with a bunch of different systems until the pain is just absolutely unbearable and they have to reach out, like putting up with this for years or something like that? Is that pretty common, or do you feel like this is a pain point that is painful enough that people reach out pretty quickly when it crops up?

Alan Pringle: I hate to keep saying it’s not one size fits all, but it’s not. Some people recognize the problem earlier than others. Some people just kind of put their heads down to the grindstone and deal with it and grit their teeth. Other people, especially if you’ve got somebody new coming in who’s maybe done things a little differently before, and they see these things, and they’re like, “Oh my God, what are you people doing to yourself? Stop.”

Christine Cuellar: Yeah.

Alan Pringle: It can be-

Christine Cuellar: Fresh eyes.

Alan Pringle: … a catalyst like that.

Christine Cuellar: Yeah.

Alan Pringle: So…

Christine Cuellar: Okay.

Alan Pringle: Yeah, fresh eyes. That is really a dull answer, but that’s often what happens.

Christine Cuellar: No, that makes sense. Yeah, no, that makes sense. And does it make the problem worse if people put it off, put off consolidating systems, or does that not really matter?

Alan Pringle: Oh, I think it does. I mean, think about it. What happens if you ignore a plumbing problem in your house? Is it just going to go away by itself? No, it most certainly is not.

Christine Cuellar: It’d be nice, but yeah.

Alan Pringle: Yeah. I mean, just think about it. “Oh yeah, I’m going to ignore the fact that I have got a dripping hole in my ceiling or there’s water pouring down my wall. I’m just going to ignore it and hope it goes away.” I don’t think that’s the best way to handle that. And that’s true of content operations as well.

Christine Cuellar: It’s the ostrich approach, right? “If I can’t see it, it’s not real. We’re fine.” Yeah, that doesn’t ever work out. That kind of leads me into another pain point that actually came up quite a lot. A lot of people mentioned executives or managers not valuing content, which seems like it would be related to this. That was a pain point that we have often seen. Can you talk a little bit more about that?

Alan Pringle: There is an issue where people who create content and their contributions sometimes are not quite understood, or they’re overlooked by executives. A lot of executives are focused on numbers. That is their language.

Christine Cuellar: Mm-hmm.

Alan Pringle: They don’t care about the tools that you’re using. They don’t care about anything but, for example, that people are getting the content they need and not calling a help center, and costing money. That’s when they care about content. They’re looking at it from a different lens.

Christine Cuellar: Yeah.

Alan Pringle: So if you’re going to communicate to them about content, you’ve got to talk metrics, you’ve got to talk numbers, you’ve got to talk money, and that’s where sometimes content creators fail. They don’t look at things that way. So that’s sometimes where a consultant can come in handy and start to help you speak “C-level-ese”—

Christine Cuellar: Yeah, yeah.

Alan Pringle: … basically to kind of bridge that gap between, “This is what’s broken versus this is how we can fix things, and it will increase productivity and better metrics.” Less money spent, better results, that sort of thing.

Christine Cuellar: Yeah, that makes sense. And it sounds like bridging that gap of communication both ways, you’re both helping executives understand the value and helping content people communicate their value. Is that accurate to say?

Alan Pringle: That is fair, and again, not one size fits all. There are some executives, especially that have come up through the ranks of content. They get it.

Christine Cuellar: Yeah.

Alan Pringle: They totally get it. So there are some people, and those people are great to work with. Sometimes, people need a little education, and I’ll just leave it at that.

Christine Cuellar: Yeah, yeah. No, that makes sense. And so you mentioned metrics. What are some metrics that content individuals can have just on hand to start communicating their value to their team, to their company?

Alan Pringle: One thing you can do is figure out the dollar value you can place per hour on what it costs for a content creator to develop and distribute content. Find a way to find out what that amount is, what that dollar value is. Then, take a look at, for example, what happens if you automate publishing and cut out 80 or 90% of that work. What’s that worth? What’s the dollar value on that? What’s the dollar value on getting closer to simultaneous shipment of localized content?

When you get your product, your service, whatever, out there to other markets in this very, very global environment now, everything’s so interconnected. If you get things out to all the different markets almost at the same time, how much more money are you going to pull in than if you had to wait three or four months for the localized version to get out there to those customers? So think about things like that.
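Alan’s “talk numbers” advice lends itself to back-of-the-envelope math. Every figure below is an assumed example to show the shape of the calculation, not a benchmark:

```python
# Hypothetical inputs; replace with your own team's numbers.
writers = 10
hourly_cost = 60                 # assumed loaded cost per creator, USD/hour
formatting_hours_per_week = 8    # assumed manual formatting per writer
automation_savings = 0.85        # automating publishing cuts ~80-90%

weekly_savings = (writers * formatting_hours_per_week
                  * automation_savings * hourly_cost)
print(f"${weekly_savings:,.0f} per week")       # → $4,080 per week
print(f"${weekly_savings * 50:,.0f} per year")  # → $204,000 per year
```

Numbers like these, framed in dollars rather than tools, are the kind of metric executives actually respond to.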

Christine Cuellar: Yeah, those are great. Those are really helpful examples. And do you have any specific recommendations on how those should be communicated? Is that something that should be in a big kind of company team meeting? I know that’s probably a case-by-case basis, but-

Alan Pringle: Well, again, I mean, what is the executive team’s preferred way of communicating? There’s your answer right there.

Christine Cuellar: Yeah. Yeah.

Alan Pringle: If they don’t like email, why are you going to send that … email? Don’t do that.

Christine Cuellar: Don’t send an email that doesn’t communicate your value. 

Alan Pringle: No. If they like spreadsheets, put that mess in the spreadsheet. It depends on the audience. You need to find a common ground with the people that you’re creating these stats for and share it in a way that they can absorb and appreciate whatever that is. And me telling you what to do here is not as helpful. You need to do some digging or have your consultant work with you to figure out the best way to communicate that and do it that way.

Christine Cuellar: Yeah, absolutely. That makes sense because ultimately, I think if you can communicate… Because content really does have real business impact and real business value, and so it’s just about communicating that.

Alan Pringle: Yeah. And this is… And it’s not even just in regulatory situations. Yeah, content in regulatory situations matters a whole lot because if it’s non-existent or wrong, somebody’s going to die or get injured.

Christine Cuellar: Mm-hmm.

Alan Pringle: Even beyond that, even if you’re not in a regulated environment, there are contributions content can make to keep customers happier, to keep down support costs, and many other things. Not everything is tangible. A happy customer, that can be hard to quantify. But a happy customer not calling your support line, you can quantify that. So that’s-

Christine Cuellar: Yes.

Alan Pringle: … one way to look at it.

Christine Cuellar: Absolutely. I know just personally, for me, I’m much more likely to continue or stick with a company where I can do… I can be pretty self-sufficient. If I have problems, I can look it up and deal with the problem myself. And if I do have to contact support, it’s a quick call that gets resolved easily.

That’s just… I think that is how people make their purchases nowadays. And I know that’s more of a consumer mindset rather than business-to-business. But a big part of the consumer experience is: can I get what I need just with the content that you already have out there in the world?

Alan Pringle: Yeah. And I think it’s worth mentioning here that when people go to your site and look at the content that’s available out there that’s associated with whatever product or service they’re considering, it could be support content, it could be a help portal, it could maybe even be training content. They are not just looking at your marketing to make a decision here.

Christine Cuellar: Yep.

Alan Pringle: There are other content types that come into play. And anything that’s out there that the public can get to and see, believe it or not, that’s marketing content, and you need to treat it as such and understand its value as well.

Christine Cuellar: Yeah, absolutely. Well, Alan, I think that’s a really good place to wrap up. Clearly, we could talk about pain all day because we have a lot more to…

Alan Pringle: It’s my job, what can I tell you?

Christine Cuellar: Yeah. Yeah. So we are going to continue this discussion in the next podcast episode. Alan, thanks so much for being here with us today.

Alan Pringle: I haven’t run away, so let’s-

Christine Cuellar: Yeah.

Alan Pringle: … get to the next episode.

Christine Cuellar: Thank you for listening to the Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Ask Alan Anything: Resolving pain in content operations (podcast, part 1) appeared first on Scriptorium.

How machine translation compares to AI
https://www.scriptorium.com/2023/10/how-machine-translation-compares-to-ai/
Mon, 30 Oct 2023

In episode 154 of The Content Strategy Experts Podcast, Bill Swallow and Christine Cuellar discuss the similarities between the industry-disrupting innovations of machine translation and AI, lessons we learned from machine translation that we can apply to AI, and more.

“Regardless of whether you’re talking about machine translation or AI, don’t just run with whatever it provides without giving it a thorough check. The other thing that we’re seeing with AI that wasn’t so much an issue with machine translation is more of a concern around copyright and ownership.”

— Bill Swallow

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Christine Cuellar: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about the parallels between AI and machine translation. Hi, I’m Christine Cuellar, and with me on the show today I have Bill Swallow. Bill, thanks for coming.

Bill Swallow: Hey, thanks for having me.

CC: Absolutely. So for non-technical people like myself, what are we talking about when we say machine translation, is that like Google Translate? What are we talking about there?

BS: Google Translate is a form of it.

CC: Okay.

BS: But essentially, yeah, it’s a programmatic way of translating from one language to another.

CC: Okay.

BS: It’s been around for quite a while and we see it commonly in Google Translate and other online uses, but it’s actually been around for quite some time.

CC: Okay. So I know that as AI has become the biggest topic in 2023, we’ve often compared it to machine translation. I know we’re going to talk about that throughout the episode, but can you give just a little intro to why they’re compared so often?

BS: Yeah, I think it boils down to really where machine translation started.

CC: Okay.

BS: So I’m not going to give you years, because they’re not at the top of my head, but it basically started out as a rules-based program. So people sat down and wrote these if-then-else statements, essentially, to basically say, if you come across this phrase, then it’s translated in this way for this language.

So they started out with that rules-based approach, and they’ve beefed up the rules and they’ve beefed up the processing. And of course, they improved the examples on the backend of the finished translation and modified that so that the translations kind of became a little bit better over time.
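The early rules-based approach Bill describes, “if you come across this phrase, then it’s translated in this way,” amounts to a phrase lookup with a fallback. This sketch uses an invented, tiny phrase table:

```python
# Invented English-to-German phrase table; real rule systems held vast
# numbers of rules plus grammar handling, but the if-then-else core
# looked like this.
RULES_EN_DE = {
    "good morning": "guten Morgen",
    "thank you": "danke",
}

def rule_translate(phrase, rules):
    phrase = phrase.lower().strip()
    if phrase in rules:                      # if we have a rule...
        return rules[phrase]                 # ...then apply it
    else:                                    # else: no coverage
        return "[no rule for: %s]" % phrase

print(rule_translate("Good morning", RULES_EN_DE))  # → guten Morgen
```

The brittleness is visible immediately: anything outside the rule set fails, which is why the field moved toward learned models.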

CC: Okay.

BS: Then they switched over in many cases from the rules-based to more of a machine learning model, which then, basically it’s like early AI. So it started to learn patterns and it started to learn about context a bit based on the words and phrases that were being used and could draw additional inference from that.

CC: Interesting.

BS: And essentially that started to develop more and more until we got to an AI use case. So it’s something where you actually get this robust use of machine translation. So it’s actually using a lot more learning models in that translation process. And the machine translation process is a little odd because you can do it out of the box, so something like using Google Translate where it basically uses its own Google index as a resource for translating a lot of that content. But a lot of translation companies and a lot of companies that employ machine translation, whether they are translators or not, some companies do it in-house on their own. They will basically train their machine translation against their own content and their own translated store of content so that it brings back their approved wording, their approved language models.

CC: Gotcha. I can totally see how that is a big parallel to AI right now as we’re talking about having an internal AI versus just throwing content in ChatGPT. That makes sense. You mentioned there was a transition into machine learning. When that happened, how did people react? Was it really similar to how people are reacting with AI? Was it split? What was that acceptance like?

BS: Yeah, I think there were some parallels there. Just as with what we’re seeing with AI now, there’s a lot of concern from people saying, oh, the machine is going to essentially make my job obsolete because it can now write these blog posts, it can write these screenplays, it can develop these characters, it can produce these images. With machine translation, there was a similar kind of fear where translators were like, oh, it’s going to reduce my margin. It’s going to put me out of a job. We saw that to some extent in the very beginning, but what we’ve found over time is that, no, people are still required to go in, proofread that machine-translated content, clean it up, make it more appropriate, and essentially improve what’s on the backend that the machine translation is pulling from so that things are improved over time.

CC: Yeah, process updates and that kind of thing. Improving the bank of-

BS: Right, improving the phrases, getting rid of things that are no longer said in certain areas because language is ever-evolving.

CC: Yeah, that’s true.

BS: You need to be able to keep up with those changes.

CC: That’s true. And how far ahead would you say that machine translation is compared to AI? Is it five years in the future so we can maybe see what might be coming? I know that’s probably really hard to quantify.

BS: Let me get my crystal ball.

CC: Yeah, yeah, there we go. Give us an exact answer.

BS: I’d almost say that they’re on two parallel, but different paths.

CC: Okay.

BS: And I think we’re going to see a lot more blurring of the lines. Those paths are going to start to come together a little bit more. I mentioned that machine translation is leveraging AI to a good degree these days because it’s the next step in that form of machine learning. It’s no longer a core programmatic learning model, but more of an adaptive one, so it basically influences its own way of learning about things going forward. And AI is employing machine translation in many ways. There was a video floating around LinkedIn of this new utility, and I think Sarah spoke about it on a previous podcast with Stephan Gentz. You basically record yourself saying something, and it will machine translate that content, use your tonal voice to basically re-speak it, and then re-sync the video so that it looks like you’re speaking a completely different language.

CC: That’s crazy.

BS: It’s nuts. I watched the video a few times. I don’t know either language. I think they used French and German. I know enough German to be dangerous, and I know enough French to order a meal.

CC: It’s the priority.

BS: But the German I found was actually pretty spot on from what I could understand of it. And I know Sarah speaks pretty much fluent German, certainly more than I do, and she only found really one mistake, I think.

CC: That’s crazy.

BS: It’s crazy. So there are cases where things are being employed, and I think we’re going to see a lot more of that.

CC: Okay.

BS: On the machine translation side, we’re certainly going to see it adopting more robust AI models so that it can continue to build and improve how machine translation is being done. On the flip side, I do think that AI will be leveraging more of the linguistics modeling that is baked into machine translation so that it can do a better job of representing essentially the human construct of language.

CC: Wow. That video example that you gave and that Sarah shared before, I feel like that’s one of those things that 50 years from now, kids will be totally used to, and we’ll look back and say, I remember back in my day that was a big deal. It’s just mind-blowing that this kind of stuff is happening. So speaking of those kinds of innovations and industry disruptions, we’ve talked a little bit about how machine translation and AI are on parallel paths merging together. What are some of the ways that these disruptions have been really different for the content industry?

BS: I don’t know if there’s any real difference in how they might be disrupting the industry or how they might be employed. There are differences as a practical matter: when would you deploy a translation management system versus when could you use… Well, AI is kind of a really nebulous term. It could mean anything. It could mean ChatGPT, it could mean image rendering software. It could mean really anything. With machine translation, we’ve seen it become more of a daily utility. So you come across a news article in another language; if you’re using Google Chrome, you might have the option to translate the page. If you’re not using Google Chrome, you can go to, for example, translate.google.com and just provide it the URL or copy and paste a paragraph, and you can basically get an idea of what that website’s talking about. But we’ve seen it become more baked into applications as well. Certainly, there’s a whole industry around providing translation services, and we’ve seen machine translation pick up the pace on round-tripping translation work.

So before, you would have someone sit down and actually translate a block of text, and they would use translation memory, which is essentially a store of what was translated last time, to kind of pull from and pre-fill the translation, and then that way they can fill in the gaps. That’s a very, very high-level view of translation memory, but machine translation essentially takes that to the next level, where it will pre-process the translation for you and provide you with something that’s maybe 95% there. And then you would get someone who is an expert in the language and the subject matter and the target locale, because we know that Spanish is different depending on where you are in the world, for example. And they would proofread it, clean it up, and probably commit that back to whatever the machine translation is using so that it uses that reference next time rather than having to go through that again.
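The translation-memory loop described above, match a new segment against stored pairs, suggest the closest translation, and commit the human-reviewed result back, can be sketched in a few lines. This is a deliberately simplified illustration; the data shapes, threshold, and function names are assumptions, and real TM systems do far more sophisticated fuzzy matching:

```python
import difflib

# Hypothetical, simplified translation memory: a dict of source -> target
# segments. Threshold and names are illustrative assumptions.
def tm_lookup(source, memory, threshold=0.8):
    """Return (best stored translation, similarity) for a source segment,
    or (None, best_score) if nothing is close enough to reuse."""
    best_match, best_score = None, 0.0
    for stored_source, stored_target in memory.items():
        score = difflib.SequenceMatcher(None, source, stored_source).ratio()
        if score > best_score:
            best_match, best_score = stored_target, score
    return (best_match, best_score) if best_score >= threshold else (None, best_score)

def commit(source, post_edited, memory):
    """Store the human-reviewed translation so the next lookup reuses it."""
    memory[source] = post_edited

memory = {"Press the power button.": "Appuyez sur le bouton d'alimentation."}
suggestion, score = tm_lookup("Press the power button twice.", memory)
# A near match: the stored French is offered for the post-editor to fix up.
```

The commit step is what makes the system improve over time: each human correction becomes the reference for the next lookup.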

We see it baked into applications as well. So there are some gaming applications that will auto-translate a chat on the fly so that depending on, no matter where you are in the world, you can actually still understand what these players are saying. So if you’re on a team and someone’s saying, go now, and you don’t speak their language, you have no idea what they’re saying.

CC: Yeah.

BS: But the chat translation can kind of help. It’s not perfect, but it’ll help. With AI, I see it moving into a similar role. Right now we’re looking at it as, oh, look what this thing can do. It can write me a limerick. It can create a photorealistic image of whatever I choose to think up. I give it a description and it creates something; it might be what I’m looking for, and it might not, and there are flaws there as well. But I see AI being baked more into the backend of a lot of the tools that we use on a daily basis: to help with more robust search and query activities, to be used as an editor or a checker on the backend, to be a starting point for developing something new.

So whether it’s a piece of code or, in our world where we do structured authoring work, something as easy as “give me a framework for a new task that I need to produce,” it will lay everything out. That kind of harkens back to a template, but you can say, give me a task based on what I’m writing about here in this section, and it can pull some pieces in and fill things out. I also see it as an aid for finding resources that already exist so you’re not reinventing the wheel. So essentially, it’s going to be baked into a lot of different utilities, some of which we use now and some of which we haven’t thought of yet, that will make our lives easier.
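The “give me a framework for a new task” idea can be sketched as a deterministic template generator. The element names follow the DITA task topic type, but the helper itself is a hypothetical illustration, not a real tool; an AI-assisted version would fill in the steps from the surrounding section instead of taking them as input:

```python
# Hypothetical sketch of laying out a task framework. Element names follow
# the DITA task topic type; the helper name and inputs are assumptions.
def dita_task_skeleton(title, steps):
    """Lay out a minimal DITA task topic with one <step> per action."""
    step_xml = "\n".join(f"      <step><cmd>{s}</cmd></step>" for s in steps)
    return (
        f'<task id="{title.lower().replace(" ", "-")}">\n'
        f"  <title>{title}</title>\n"
        f"  <taskbody>\n"
        f"    <steps>\n{step_xml}\n    </steps>\n"
        f"  </taskbody>\n"
        f"</task>"
    )

skeleton = dita_task_skeleton(
    "Replace the filter", ["Open the cover", "Remove the old filter"]
)
```

The writer still supplies the substance; the tooling only removes the blank-page step.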

CC: Okay. So what are some of the pitfalls that we fell into during machine translation that we can avoid with AI? Do you have any red flags or things to watch out for based on how things went the last time, essentially?

BS: Yeah, I think the biggest one is to not take what it provides for granted.

CC: Okay.

BS: So regardless of whether you’re talking about machine translation or you’re talking about AI produced whatever, is to not run with whatever it provides without giving it a thorough check. I think the other thing that we’re seeing though, and it wasn’t so much an issue with machine translation, is more of a concern around copyright and ownership.

CC: Okay.

BS: So who essentially owns the rights to these things?

CC: Yeah.

BS: And it kind of goes back to, well, what was the model used to kind of create them in the first place? Was it using a public domain model or was it something that was trained only on a private store of information?

CC: Yeah. So looking into the future, do you see private AI as maybe the best way to move forward? Not that there aren’t use cases for public AI too, but do you see private AI as more of where we’re going to head?

BS: I think it’s inevitable. We’ve seen cases already where companies have uploaded examples of their own code to see if they could get a public AI model to write more code based on it. And unwittingly, they basically let their own IP out into the wild, so now everyone can use what those people uploaded in the first place. So that’s an oopsie. Based on cases like that, I think people are going to start employing a private model, basically a walled garden where they can train and develop their own corpus of information, whether it’s images, code, text, what have you, and use that to produce things using AI.
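The walled-garden idea, answering questions only from a private, in-house corpus so proprietary content never leaves the organization, can be sketched with a toy retriever. The scoring here (plain term overlap) and all names are illustrative assumptions; a real private deployment would use a locally hosted model and proper embeddings, but the key property is the same: no external service call:

```python
# Hypothetical sketch of the walled-garden idea: rank documents from a
# private, in-house corpus by term overlap with a query. Nothing here calls
# an external service, so proprietary text never leaves the organization.
def retrieve_private(query, corpus, top_k=1):
    """Return the top_k private documents sharing the most terms with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

corpus = [
    "Internal style guide for API error messages",
    "Quarterly sales figures for the EMEA region",
]
hits = retrieve_private("error messages style", corpus)
```

Keeping both the corpus and the retrieval inside the wall is what prevents the "oopsie" of training a public model on your own IP.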

But I still think that, yeah, there’s going to be a public model; I don’t see that need ever going away. Just as we have public models for everything else that we use on the internet, I think we’re going to see AI have its own footprint there as well. We might need to be careful while using it. There might need to be more guardrails attached, but I don’t see that going away.

CC: Yeah, that makes sense. And you mentioned the concern for people’s jobs. I know that of course is a concern right now with AI as well. And you mentioned that at the beginning of machine translation, you did see a little bit of job loss, but overall, those experts were still needed to manage the content and make sure that everything being created is accurate. So what would you say to people who are really concerned about that right now? Do you think that’s going to be really similar for AI? Are there any differences you can think of?

BS: I think this is actually a good learning point from machine translation, because yes, some people lost their jobs initially when machine translation came out. In hindsight, it was a bad decision to let people go saying, oh, a machine can just do it. Because it was very clear out of the gate, once machine translation really started being used, that people are still needed. They’re still needed to clean up what the machine translation is producing. They’re still needed to do new translations into new markets in new contexts with new terms. A machine just can’t invent things and have them be correct for a very specific target audience.

To do any kind of translation correctly, you need to know the subject matter. You need to know the language that’s being spoken in, the flavor of language for the locale in which you are targeting that content, and anything else about that locale that might influence jargon or anything else that might need to be employed. So yeah, I see a similar warning, I guess, for people who are looking at AI and saying, oh, we can reduce our staff by employing AI. It’s like, no, you’re going to augment your staff and they are going to need to learn new skills because they are going to need to learn how to leverage AI to produce basically more and better work. It’s a utility, it’s not a replacement.

CC: Yeah, I liked how you phrased that. I think that that’s a good perspective for employers, for writers, for anyone who is worried about the job climate right now, I think that’s a good way of looking at AI.

BS: And as of right now, we know that AI is being used to generate articles on the web. There are a lot of websites that are using AI to just basically pump out post after post after post, article after article after article. And you can tell immediately once you start reading it that it was not written by a human.

CC: And at the end of the day, it’s still humans connecting with humans. So whatever content we’re putting out there, it needs to be valuable to people that are reading it. It needs to have a purpose, it needs to be doing something. It needs to just be humans communicating with humans. So those big content pumping blog posts, all that kind of stuff, that does bother me because it’s just content for the sake of getting content out there. And there’s humans at the other side that actually need information. So I think this is a really good perspective to have for how to leverage AI in the same way that we’ve leveraged machine translation, how to automate processes, how to have a starting place for people when you’re writing, but not to just make it all about machines and not people. So Bill, is there anything else that you can think of when you’re thinking about machine translation? Any other comparisons between that and the rise of AI? Anything else that you wanted to share before we wrap up today?

BS: I’d say approach it both optimistically and cautiously.

CC: Yeah, that’s really good, especially with the concerns that you mentioned about copyright. We do have an article that Sarah O’Keefe wrote and recently updated as well about AI and the content lifecycle. So we’ll post that in the show notes. Also, some other interviews and information that we’ve provided about AI. So all of that will be linked in our show notes. And Bill, thank you so much for joining the show and talking about this today. I wasn’t in this space while machine translation was happening. It’s really interesting to hear about the parallels because they really are very similar in a lot of ways, and it’s cool that we have some takeaways from both.

BS: Thank you.

CC: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post How machine translation compares to AI appeared first on Scriptorium.

ContentOps edited collection: Content operations from start to scale (podcast) https://www.scriptorium.com/2023/10/contentops-edited-collection-content-operations-from-start-to-scale-podcast/ Mon, 09 Oct 2023

In episode 153 of The Content Strategy Experts Podcast, Sarah O’Keefe and special guest Dr. Carlos Evia of Virginia Tech discuss the upcoming book ContentOps Edited Collection: Content operations from start to scale. This is a free collection of insights from leading industry experts that will be available in October of 2023.

“This is going to be a free book. We are not going to become rich and famous with this book because we decided that we wanted to make the content in the book accessible for everybody who is interested in learning about content operations. It’s going to be published as an open-access book by Virginia Tech Publishing.”

— Dr. Carlos Evia

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the ContentOps Edited Collection: Content operations from Start to Scale. 

Hi, everyone. I’m Sarah O’Keefe. I’m delighted to welcome Dr. Carlos Evia to our podcast today. Based at Virginia Tech, Dr. Evia is a Professor of Communication, Associate Dean for Transdisciplinary Initiatives, and Chief Technology Officer in the College of Liberal Arts and Human Sciences. He’s also Director of the Academy of Transdisciplinary Studies and affiliated with the Virginia Tech Centers for Human Computer Interaction and Communicating Science, and also a member of the Stakeholder Committee for the Virginia Tech Center for Humanities. In his copious free time, aside from these things, he has been involved with work on DITA standards and especially the Lightweight DITA initiative. So Carlos, welcome aboard. I’m glad you found 20 minutes or so to join us here.

Dr. Carlos Evia: Hello, Sarah O’Keefe. It’s been a while, so good to catch up with you.

SO: It is good to catch up with you. So tell us about this new content ops book. You spearheaded it, and I guess I should mention that, about a million years ago, I contributed to it. I don’t actually remember what I wrote, so this could be a problem. So tell us about the book.

Dr. CE: Well, it’s new. It’s new to the world because it’s coming out next month, and by next month, I mean October of 2023. But it’s a book that has been about 10 years in the making. And some sections of the book really read like creative nonfiction, because there are characters that are people in real life, and surprise, you are one of those characters. The idea for the book started some 10 years ago, when we would meet at conferences. And I don’t even remember what happened first, if I invited you to come visit my class here at Virginia Tech or if I saw you at the STC Summit and was like, “Oh, wow. She’s very smart. I have to invite her to come to my class.” But I don’t know, I guess we were already chatting and talking to each other. I cannot claim that we were friends then. I would dare to say that now we’re friends. We’ve had many meals together, family involved, so I guess that counts as friends.

SO: I certainly hope so.

Dr. CE: So you and Alan Pringle in Scriptorium published a very handy book that I used for many, many years in my classes, Technical Writing 101. And you had three editions?

SO: Yeah.

Dr. CE: Yeah. So after the third edition, I started chatting with you on Twitter, when it was called Twitter and not X or whatever it’s called now. And I said, “Oh, wouldn’t it be nice if we write a new version of that book because I have been using it in my college level classes for many years and I have ideas on how to expand it, how to improve it.” And we have been talking about it for many years. And then finally before the pandemic in 2019, we were together at a conference in your neighborhood. It was in Durham. And we sat down and we said, “Okay, let’s finally start thinking about it.” And we made an outline and then we both realized that we could not call ourselves technical writers and that we could not write another edition of a book called Technical Writing 101, because what we were doing was way more than just technical writing.

Yes, of course, what paid the bills was doing technical writing, but you were doing more sophisticated things. I was teaching more sophisticated things that were not just writing about technical subjects. So we brainstormed many ideas on what to call it, and we ended up with: how about content operations? That’s a thing, and people are talking about content ops. And then the pandemic hit. And when the pandemic hit, everything stopped. And I remember that we had nothing better to do. We would get into endless Zoom conversations, and we started inviting people, and we invited Patrick Bosek to chat with us about it. And he said, “Wait a minute, if you’re talking about content operations, we have to bring Rahel Bailie.” And we brought Rahel.

And I guess at the time the idea was that we were going to have a book with four authors and you were going to write some chapters and I was going to write some chapters and Rahel was going to write the introduction and Patrick was going to write something. And then we were like, “What if we invite more people?” And we started making a list of topics that we wanted to cover and we ended up inviting more people. And this is where we are. The book became an edited collection with several chapters written by experts in industry who had something to say about how content operations is impacting the work that they do, not just in our home neighborhood of technical communication, but also in marketing and other forms of more persuasive content.

And finally, the book after those delays, and there were a couple other delays that we can talk about later and we will talk about those later, finally, it’s coming out next month. And I was able to see a draft of the cover. I think I shared with you the draft of the cover and yeah, it’s coming out. Oh yeah, important thing to mention. This is going to be a free book. We are not going to become rich and famous with this book because we decided that we wanted to make the content in the book accessible for everybody who is interested in learning about content operations. So it’s going to be published as an open access book by Virginia Tech Publishing.

SO: So I think this means that if you want an electronic copy of it, it will be freely available. And if you insist on print, then presumably people will have to pay to get the actual physical print edition.

Dr. CE: That is correct. I think the print version will be an on-demand print service, and it’s not going to be very expensive. But there will be, I think, EPUB and PDF versions that will be downloadable from the Virginia Tech Publishing website.

SO: And I appreciate that Virginia Tech Publishing did this because of course, academic publishing is notorious for these $500 science textbooks and they’re apparently doing it all wrong, and I appreciate that. So this is great.

Dr. CE: We didn’t want to go in that direction, on purpose, because we know, based on the kind of books that you have published with Scriptorium and the kind of work that I have published about DITA and Lightweight DITA, that we have readers in parts of the world who just cannot buy a book but are very interested in these topics. That’s why I appreciate that you and all the other people who made contributions to the book accepted and signed the agreements to have this released as open access, with awareness that there won’t be any sweet money coming to you in royalties for the chapters that you contributed.

SO: Well, I’ve done a number of commercial books that had royalty agreements associated with them, and I can assure you that the delta between that and what we’re doing with this book is far smaller than you might hope. I mean, it’s never been a big moneymaker. So in addition to Rahel and Patrick, I don’t want to leave anybody out, but I did want to mention that we brought in Kevin Nichols to talk about customer experience in content ops. Jeffrey MacIntyre is dealing with personalization. We’ve got Loy Searle on localization and content ops. Kate Kenyon did a really good chapter on governance, and then we’ve got some really interesting forewords and epilogues and afterwords from some other luminaries in the industry. So it was a really fun project to work through.

Dr. CE: Yeah, I’m very grateful that it started during the pandemic. I would just email people, some of whom we knew from conferences and some of whom we didn’t. Somebody would make a recommendation, and I would knock on their virtual doors and be like, “Hi, I have this project that is going to be free and you won’t be making any money out of it, but people will know about content operations. Do you want to write something?” And they said yes. So that was very generous.

SO: So the intent here is to put a stake in the ground and sort of say, “Okay, this is what we think.” This is where we think content operations is, what it is, and how it connects to all these other aspects of content, of communication, what it looks like to have a content lifecycle that has all these tentacles into all these other pieces and parts. Customer experience is a great example, because once you know what your customer journey needs to look like, you can connect that to: I need this kind of content, and therefore I need this kind of content lifecycle. Who’s the target audience for this? Who do you think should be reading this book?

Dr. CE: Well, the way that we started conceiving the idea and what eventually became the book, and it goes back to when I first met you and I invited you to come and visit my class. And again, you were very generous to drive all the way from Durham to Blacksburg to talk to a class of 20 students who were learning about DITA. And I didn’t pay you, I just bought you dinner, and I really thank you for that. That was like, gosh, how many years ago, 13 years ago or something like that.

When I was learning and putting in practice the things that I learned when I was in graduate school and also my experience being a technical writer in industry, I always applied the things that I knew to my classes and I was reading and doing the traditional approach of exposing myself to new ideas, going to conferences. But I realized early on in my career as a professor, which I’ve been doing this gig for like 23 years now, don’t tell anybody, that one of the best ways to bring fresh ideas into the classroom was to invite guest lecturers.

And in particular, in the case of technical communication and the type of technical, enabling content that we do, I realized that bringing guest lecturers from industry, and particularly consultants, was the best way, in my opinion, to expose my students to practices and knowledge that were not in written textbooks, that were not even in academic journal articles, because that was not the work of people in academia. So I think the book is structured like that. It is the equivalent of a guest lecture: somebody who comes to your classroom, in the case of people in academia, and presents their ideas and gives you some pointers on how to implement this in your content work.

And on the other side of the spectrum, we have people in industry, and this will also be the equivalent of having somebody who is a guest and comes to give a presentation about a new topic that people might be interested in. And from the work that Rahel and I were doing for a couple of years when we were working on our chapters for the book, we realized that there’s a lot of interest from many corners of the content universe on the topic of content ops or content operations, be it because people think that is related to dev ops or design ops or many other ops that are out there, or because people want to get an operational model on how to tackle enterprise level content.

So if you’re in academia, what I hope is that this book helps you expose your students and yourself to perspectives from experts in industry when it comes to technical content and marketing content and many other aspects of persuasive and enabling content. And if you’re in industry, I hope that this also helps you continue your learning or start expanding your learning on topics related to the content lifecycle that go beyond just planning how to do things in a content strategy, but really developing a good governance model for content operations that really keeps everything, we hope, under control, but we know that things are never going to be under control, and that’s when we are probably going to have to write a new book in a few years.

SO: Well, yeah, I mean, it’s funny that you talk about the intersection of academia and industry or practice. I mean, first of all, I live in Durham, North Carolina, so Virginia Tech is actually not that far, and it’s this really pretty drive through the mountains. So no particular trouble there. But I think the really important thing about this is that the work that you’re doing at Virginia Tech paying attention to this question of how do we apply, how do we look at what people are doing out there in the world and then intersect that with the rigor of the academic inquiry and practice and all the rest of it, I think is really important and unusual.

There’s not actually very many professors. There’s a few, but there’s not very many academics out there that are looking at this kind of information through a practical lens in addition to the study of rhetoric and all these other underpinnings that I think are important to the practice of whether it’s technical communication or any kind of communication. So I’m always happy to come and talk to students. They have a habit of asking questions that I can’t answer because they are much better grounded, really, than I am in the theory. I know an awful lot about how to make things happen, but anything I’ve learned about the theory that’s underlying it is kind of incidental to what I’ve done.

So it’s always interesting to hear those voices and hear people talk about the research that they’re doing, especially the grad students, but everybody, and the questions that they’re asking as they’re getting all this foundational learning. And you talk about being a professor for a while, it is very, very unusual for somebody in our age cohort to, we’ve had a longstanding argument about who’s older, but we won’t get into that just now. But we fall into the same generation certainly, and I think our birthdays are like a year apart or something dumb. And I think we decided I’m older, although for a while I thought you were older and that was awesome. Anyway.

Dr. CE: That might be correct.

SO: But the thing is that for us, a generation ago when we were in school, in college, there wasn’t a whole lot of any of this. There wasn’t really the study of TechCom or, I mean, there was certainly rhetoric but not rhetoric as applied to TechCom and enabling communications. And so people like me tend to be very poorly grounded in the academics and the preceding research that has gone into this. So I appreciate being able to do that. So how do you define, what’s your best definition of content operations and how that fits into the world?

Dr. CE: Well, the book actually borrows Rahel’s definition; I think I have a coffee mug here with her definition that she mailed me. She talks about how you have your content strategy, and I guess at this point people kind of know what a content strategy is. The listeners of this podcast probably know about content strategy, or maybe they’re interested in it. That’s the plan of how you develop, maintain, publish, sunset, or revitalize content. So Rahel’s definition says that content operations is the implementation of that strategy.

A good example that she has been using for years: think about if you’re an architect and you make the blueprints for a house. That’s the strategy, that’s the plan. But ain’t nobody telling you in those plans how to live in the house, that you have to change the air filters of the air conditioning, that you have to clean the toilets. Nobody’s telling you that. So that’s the operational part of it, and that’s the content operations component. Other people, and sometimes I’m in that camp, see content operations as bigger than that, including the process of developing, implementing, and revising the content strategy.

So I think it’s a combination of knowing who is available, what is available in resources, and what is missing or needed to really keep a healthy lifecycle of content. That includes the planning; the actual writing and creating, I was going to say filming, but nobody uses film anymore, the actual recording of videos and audio; the publishing; the evaluation and assessment; and then making new versions or just putting to sleep content that nobody cares about. So it’s really about how to live in that house that you created, with all the daily and monthly and yearly transactions that need to happen that, when they sold you the idea of the house, were not considered. But based on the work from experts like you and the people who wrote chapters for the book, we are offering lessons that say, “Hi, we have lived in houses, and we know how to take a look at those operational components that you might not even consider now that you’re starting your strategy.”

So I think that’s a complicated way to tell you what I see as part of operations, but it’s heavily influenced by the work of Rahel Bailie, who was very generous to write the introduction to the book. And then last year, when the book was this close to being ready, Rahel and I sat down and said, “This is missing something. It’s missing a chapter that talks directly to content developers, not their managers, not people who are at the high level of strategy or governance, but people who are actually going to create the content. How can content operations help you or create challenges for you?”

So Rahel and I went on a months-long adventure of writing this long chapter; at one point we decided it might be a separate book altogether. But we created this new chapter, included now in the final version of the book, that speaks directly to people who are going to be creating content, and about thinking of the work you’re doing as part of a system, and not just, “I’m here in my lonely cubicle or working from home because hashtag remote work forever and I don’t talk to anybody else.” So that’s the whole process of thinking about operations in a systems approach.

SO: Yeah, that’s interesting. And looking back at some of this stuff, back in the olden days, there was really this concept that as a content creator, technical writer, whatever, I had ownership of a particular book or document or set of documents. I’m the writer of the admin guide and you are the writer of the user guide; I’m going to go learn admin things and write them down, and you’re going to go learn user things and write them down, and then we’re going to have this big, complicated print production process. And I know an awful lot about press checks and blue lines that I haven’t used in 25 years. But I think one of the reasons that we really need content ops is because the concept of authorship has fragmented, right?

I’m not writing a 500-page admin guide. In fact, it’s pretty unlikely that the organization is writing a 500-page admin guide. We might be writing 500 topics worth of admin stuff, but I’m writing a hundred of them and you’re writing a hundred of them and a couple of other people are contributing bits and pieces. And then we put it all together as the sort of, here’s the help for the admin person, and we put it online.

So the print production process is gone, the press check process is gone, the physical production is gone. And we are fragmented in the sense that nobody has the overarching view of what this set of content is. There’s no me as the owner of this book, which, by the way, from a psychological point of view, introduces a whole set of other complications. But because that owner doesn’t really exist anymore, our systems have to be better so that the five of us, or the 27 of us who are all writing three topics, can contribute in a consistent and useful manner. Your systems don’t have to be as good when you’re relying on single individuals.

Dr. CE: And it might be that the system has people who are in charge of ensuring that the user experience of those who need the content is going to be good and satisfy the information needs. And that’s not the job of the developers. I mean, as the writer, as the creator of videos or audio, it might not be your job to ensure that whatever website, app, product that comes out of that machine that generates the content is going to satisfy the needs of a human being. And it might be that it’s not your job as the creator to be in charge of managing that whole process.

So that’s why the systems approach matters, thinking and being aware. It doesn’t have to happen at the big enterprise level, as you know, because that’s the job you do every day at Scriptorium. Even small organizations, I don’t want to say corporations, have adopted these models of creating reusable chunks of content that, based on the metadata and the connections behind the scenes, are reassembled into different deliverables for the needs of different audiences, in different contexts and in different models.
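The reuse model described here is often implemented with DITA maps and profiling metadata. A minimal sketch, assuming a DITA toolchain; the element names are standard DITA, but the file names and audience values are invented for illustration:

```xml
<!-- Hypothetical sketch: one pool of reusable topics, assembled into
     different deliverables by metadata. File names and audience values
     are invented for illustration. -->
<map>
  <title>Dryer 3000 documentation</title>
  <!-- Profiling attributes let the same topics be filtered per audience
       at publish time, producing different deliverables from one source -->
  <topicref href="installing.dita" audience="technician"/>
  <topicref href="drying-basics.dita" audience="end-user"/>
  <topicref href="troubleshooting.dita" audience="technician end-user"/>
</map>
```

At build time, a filter file (DITAVAL) includes or excludes topics by those attribute values, so the technician guide and the end-user help come out of the same repository.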

So it’s not just the work of a lonely writer. It’s a combination of approaches. And I think that content operations really looks at that lifecycle. Like you have said before, not every implementation of content operations is going to be super high-tech and mega efficient. Your content operations approach might be based on the budget and the scale that you have. It might not be the prettiest, but at least you want to have, not that you can always achieve it, some sort of control over your content publishing structure, instead of just writing something on a piece of paper and seeing how far it goes if you send it like a paper airplane. So yeah.

SO: So you mentioned the machine and the systems, and I don’t think we’re allowed to have podcasts this year without talking about AI. So do you think that the, I’m trying to avoid using the word fad. Do you think that the rise of AI, and especially this sort of 2023, all of a sudden AI is everywhere and everything is AI-enabled and everybody’s talking about AI, do you think that’s going to change content ops? How is it going to change content ops? What do you think?

Dr. CE: I think it has already changed it. Remember I told you there were a couple of moments when we stopped publication of the book and revised it. The first one was when Rahel and I decided we wanted to write a chapter that talked directly to content developers. And the second was when Patrick Bosek and you and I sat down in one of those meetings and said, “We cannot publish a book on content operations without talking about AI and particularly ChatGPT,” because it was the boom, everybody was talking about ChatGPT and all the conference presentations were about ChatGPT. The book was already going to print when we said, “Wait a minute. We need to open it up and revise Patrick’s chapter, which is about the technology that supports content operations, and include a statement about ChatGPT.”

So I honestly think that artificial intelligence has already impacted and changed the work of content operations. It might not have affected, like you said, all the content operations implementations of the world because some might be with limited budget and limited scale. But I think that there are many use cases that are happening right now.

The main consideration is this. It’s not about learning to use the tools. It’s not about seeing how much money you can invest in having AI create your content. It’s about the person who supervises and is in charge of the whole operation, or the persons, if it’s a large team, considering the ethical implications of using artificial intelligence and deciding: “I’m going to use AI for this, to summarize this, to create this. What are the chances that by doing this, I put some of my users at a disadvantage? What are the implications if, by doing this, I completely run my bulldozer over the diversity of my readers, of my users, and damage their perception of their interactions with whatever information products I’m creating?”

So I think the big conversation has to be not whether AI is going to impact content operations or content, because it’s already impacting them, but how we supervise and bring this into the cycle of content operations in an approach that doesn’t leave people at a disadvantage. That includes not leaving content creators or content managers at a disadvantage, and it’s concentrated on the ethical perspectives on the use, implementation, and feeding of artificial intelligence tools. So I think that’s where the conversation is really going to go in the near future.

SO: That’s interesting. And in addition to that, there’s the question of trust and reputation. Say you develop a reputation for generating junk because I asked ChatGPT to write my bio, it made up a bunch of stuff, and I just used it because why not. We’re already seeing search degrading because of all the AI-generated stuff. So in addition to the ethical issues, there are some really interesting questions around the efficiency you get out of generated content: is the plus of gaining that efficiency greater than the minus of the trust and reputation problems you’re going to have if you’re not very, very careful? You could generate the content and then review it and clean it up and fix it, but then you just gave back your efficiency gains. So is it really a net positive?

I do think that gisting and summarizing can be very useful, but I have some real concerns about when you go in and tell it to explain how to operate a medical device. First of all, people don’t use ChatGPT to ask how to operate a medical device, but if and when you do, be careful, because it might make some stuff up. And yesterday I heard somebody say, on a podcast I was listening to, and when I figure out who it was, I’ll dig it out and put it in the show notes, that when we create enabling content for new products, we are in the business of creating new content, and ChatGPT does not do a very good job of creating new content. It only reissues what it has. So if you’re creating something brand new, somebody has to do that work. And I don’t think the person or thing doing that work is going to be an AI-enabled large language model.

Dr. CE: There are many tests and forms of evaluating the content created by human beings or created, I mean, it’s not really created, it’s assembled by artificial intelligence. But I am old school when it comes to some of my metrics. And I know that some people have challenged this, and I know that some people have come up with better approaches for evaluating the content, the quality of technical content. But I go back to IBM’s Developing Quality Technical Information, and I want to be sure that the content either created or written or produced by human beings or by artificial intelligence is easy to use, easy to understand, and easy to find. And I send people back to reading the second edition of IBM’s DQTI.

And that is pretty valid today because you can have a machine generate paragraphs and paragraphs of content, and you can have very nicely machine-generated DITA tags that give it some structure, and you can have ChatGPT help you write the XSLT to do a beautiful HTML5 transformation. Your content might look like it’s good, but it has to be measured by whether it’s really helping human beings. Otherwise it’s just garbage, regardless of how pretty the code behind the scenes is, which is not necessarily that pretty, because ChatGPT doesn’t know much about DITA and doesn’t know how to establish the difference between a task and a general topic. But that’s a conversation for another day.

SO: And I mean, that’s probably a good place to leave it. I think we’ve raised more questions than we’ve answered.

Dr. CE: That’s what I do.

SO: But the book is going to be out shortly, we hope. So October 2023. And we’ll include a link in the show notes that will point you over to wherever it is that you’ll be able to order or pre-order it from. So we’ll set all of that up. I remembered who it was that talked about new content. It was Jack Molisani in our podcast from a couple of weeks ago, so I’ll add that link. And Carlos, thank you. This has been really interesting as always. Glad to see you. And sounds like we need to talk some more about what’s going on here.

Dr. CE: Yes, indeed. Thank you very much, Sarah.

SO: Thank you. And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post ContentOps edited collection: Content operations from start to scale (podcast) appeared first on Scriptorium.

Applications of AI for knowledge content with guest Stefan Gentz (podcast)
https://www.scriptorium.com/2023/09/applications-of-ai-for-knowledge-content-with-guest-stefan-gentz-podcast/
Mon, 25 Sep 2023

In episode 152 of The Content Strategy Experts Podcast, Sarah O’Keefe and special guest Stefan Gentz of Adobe discuss what knowledge content is, what impacts AI may have, and best practices for integrating AI in your content operations.

“As a company and as a content producer who’s publishing content, you are responsible for that content and you cannot rely on an agent to produce completely accurate information or information that is always correct.”

— Stefan Gentz

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. 

In this episode, we welcome Stefan Gentz from Adobe. Stefan is the principal worldwide evangelist for technical communication. He’s also a longtime expert in the space with knowledge of not just technical communication, but also localization and globalization issues. He’s here today to talk about the opportunities and applications of AI in the context of knowledge content.

Hi, everyone. I’m Sarah O’Keefe. And Stefan, welcome.

Stefan Gentz: Hello, Sarah. Nice to be here and thanks for inviting me.

SO: Always great to hear from you and look forward to talking with you about this issue. So I guess we have to lead with the question of knowledge content. What do you mean when you say knowledge content?

SG: It depends a little bit on the industry. Generally, there’s enterprise content, and there are multiple areas within enterprise content. We all know marketing content, that beautiful content on marketing websites and in advertising and so on, but there’s also a huge amount of other content in an enterprise, and what kind of content that is depends a little bit on the industry and which sector we’re looking at. A lot of content is also shared, produced across multiple industry verticals.

If you look at software, hardware, and high-tech like semiconductors and robotics, we have content like getting started guides, user guides, administrator guides, tutorials, online help, FAQs, and so on. But we also have things like knowledge bases, support portals, and maybe API documentation. And you will find similar content in the automobile and industrial heavy machinery industries, where you also have user manuals, maintenance guides, things like that, but also standard operating procedures, troubleshooting guides, safety instructions, parts catalogs, and so on.

And when we look into industries like BFSI, banking, financial services, and insurance, we have content like regulatory compliance guidelines. Of course, also policies and procedures, but also things like accounting standards documentation or terms and conditions, and again knowledge bases and support portals, and training portals for employees, partners, et cetera.

And in healthcare, medical pharma, we have a lot of similar content, but we also have things like citation management, clinical guidelines, the core data sheets, CDS, dosage information, product brochures, regulatory compliance guidelines again, SOPs, maintenance guides and so on. And we have in other industries, things like installation guides, user guides, flight safety manuals in aerospace and defense, technical specifications of products, kinds of products and so on.

So there’s a huge amount of enterprise content produced in companies, and marketing content is probably just a fraction of the content that is produced in other departments: classic technical documentation, training departments, and knowledge content producers generally. I think you originally mentioned product content, which also fits, but I like to call it knowledge content because it’s a very broad term that covers not only knowledge bases, as many people think, but all the content that carries and transports knowledge from the company to the user of that content.

SO: Yeah, I’ve also heard this called, I think we’re all searching for a term that encompasses the world of… It’s almost like not-marketing, not persuasive, the other stuff, other.

SG: Non-marketing content.

SO: I’ve heard it called enabling content in that it enables people to do their jobs, but of course, enabling has some pretty not so great connotations.

Okay, so we take your knowledge content and we wanted to talk about what it looks like to apply some of these recent AI innovations into the context of knowledge content. So what are some of the opportunities that you see there?

SG: There’s a huge number of opportunities for companies using AI. Maybe we can break it down into two areas. And let’s not talk about creative gen AI like Adobe Firefly or Midjourney, engines that are used to produce visuals, images, and graphics; let’s talk about written content here.

So I see two areas: one is authoring, where content is created, and then there’s the area where content is used and consumed, whatever the consumer might be, maybe a chatbot interacting with an end user, or maybe even other services that use the content. When we think from the content consumer perspective, a chatbot is definitely an area where AI can help to find content better, give better answers, and maybe also rephrase content in a way that is appropriate to the content consumer. If I’m talking to, let’s say, ten-year-old children, or to a grownup with a university degree, they might have different expectations of how they want the content presented to them in terms of language, voice, and tone.

SO: Right. The 10-year-old understands the technology and you don’t have to explain it to them.

SG: That might be, of course, true. Yeah, maybe they don’t even need the chatbot. So that’s the content consumer perspective, which AI can help to find better results, more fitting results, and produce nicer answers.

But there’s the other field, where content is created: authoring. And I see a lot of opportunities there. At Adobe, especially in Adobe Experience Manager Guides, our DITA CCMS for AEM, we are implementing a lot of AI functionality. I’m not sure how much I am allowed to talk about that, but we showed a couple of things at the last Adobe DITAWORLD in June, where we presented some of the features that we’re implementing into AEM Guides, into the authoring environment.

One is, for example, that the engine checks the content an author is creating and compares it with the repository of content that is already there. Then it makes suggestions like, “Oh, I understand that you’re trying to write that safety note, but there’s also a small snippet of content with a standard safety note in your company; maybe you want to turn what you’re currently writing into a content reference, a conref. Or maybe that single term you’re writing, you could turn into a keyref, because there’s already a key definition in your DITA map.” Things like that.
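The two suggestions described here map onto standard DITA mechanisms. A minimal sketch; the conref, keyref, and keydef elements are real DITA, but all ids, file names, and the product name are invented for illustration:

```xml
<!-- 1. Reuse the company's approved safety note instead of rewriting it
     (hypothetical topic and element ids): -->
<note type="warning" conref="common-notes.dita#common-notes/std-safety-note"/>

<!-- 2. Reference a term indirectly instead of typing it: -->
<p>Connect the <keyword keyref="product-name"/> to the power outlet.</p>

<!-- ...which is resolved by a key definition in the DITA map: -->
<keydef keys="product-name">
  <topicmeta><keywords><keyword>Dryer 3000</keyword></keywords></topicmeta>
</keydef>
```

If the product is renamed, only the keydef changes; every keyref picks up the new name at publish time.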

So the idea is to assist the author in leveraging the possibilities the technology offers in a more intuitive way. Instead of thinking for maybe minutes, “I remember I had already written a note for that, or I had already written that safety note,” the system will assist you and give you the suggestion: “Hey, this is already there. You can reuse that content.” That is authoring assistance.

We also showed, I think, some sort of auto-complete. You’re starting to write a sentence, and a small popup comes up giving you a couple of suggestions for how you could continue it. We have all known this predictive typing thing for quite a few years, but usually it is built on classic statistical engines that try to predict what you want to write. Our solution will take the repository of content that is already there in the database as the base for making suggestions, which will fit much better than a statistically calculated probability of how you probably want to continue the sentence.

So this kind of authoring assistance with auto-complete and predictive typing, that gets much better when you have an AI engine that understands your existing content and can build these suggestions on top of that. That is definitely one area.

SO: We’ll make sure to include a link to that presentation, which I actually remember seeing, in the show notes. So for those of you that are listening, it was at DITAWORLD 2024 and-

SG: 2023. You’re quite ahead to the future.

SO: I’m sure there will be an AI presentation at DITAWORLD 2024, however-

SG: Oh, I’m very sure. Yeah.

SO: Yeah. So this year, the 2023 presentations had this demo of some of the AI enablement that’s under development, and we’ll get that in there for you.

SG: Yeah. So these two areas are definitely ones where AI will help authors in the future, but there are many more things. For example, when you think in terms of DITA, you have that short description element at the top, and an AI engine is pretty good at summarizing the content of a topic into one or two sentences. If you try to do that as a human being, with your topic in front of you, maybe ten paragraphs, a couple of bulleted lists, and a table, trying to find two sentences that are the essence of the topic and making two nice sentences, “This topic is about dah, dah, dah,” that is quite hard. An AI engine can do it in two seconds.
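The workflow Stefan describes, an AI-suggested summary that the author reviews, targets the DITA shortdesc element. A minimal sketch; the topic content and ids are invented for illustration:

```xml
<!-- Hypothetical sketch: an AI-suggested two-sentence summary placed in
     <shortdesc> for the author to review and approve. -->
<topic id="lint-filter">
  <title>Cleaning the lint filter</title>
  <shortdesc>This topic explains how to clean the lint filter. Cleaning it
    after every cycle keeps airflow high and drying times short.</shortdesc>
  <body>
    <!-- ...the ten paragraphs, bulleted lists, and table that the
         suggested summary condenses... -->
  </body>
</topic>
```

The shortdesc then does double duty: it appears in link previews and search results, so a good machine-drafted, human-approved summary pays off beyond the topic itself.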

This is another area where AI will help people get the job done faster. They can take that suggestion or not, or rephrase and rewrite it if they want, but they can take it as a starting point at least: a short description summarizing the content.

There’s also rewriting content, maybe for multiple audiences. A couple of months back, I bought a tumble dryer from a German household appliance company, and it came with that classic technical documentation explaining in long written sentences how to use it. There are sometimes better concepts for that, for example, a step list. So I copied and pasted three or four paragraphs and said, “This is classic documentation. Can we write that in a simpler way, maybe as a step list in DITA?” And I got back a step list, with the steps described in those paragraphs broken down into step one, step two, step three, and so on. That made the content much more consumable and accessible.
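Recast as a DITA task, the result looks roughly like this. A sketch only; the steps are invented stand-ins for the tumble dryer prose:

```xml
<!-- Hypothetical sketch: long prose instructions rewritten as a DITA
     task with a step list. -->
<task id="start-drying-cycle">
  <title>Starting a drying cycle</title>
  <taskbody>
    <steps>
      <step><cmd>Load the laundry into the drum and close the door.</cmd></step>
      <step><cmd>Select a drying program with the program dial.</cmd></step>
      <step><cmd>Press the start button.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Because the task topic type constrains what a step may contain, the structure itself pushes the rewritten content toward one action per step, which is part of why it reads better than the original paragraphs.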

And so one could use AI here and say, “Okay, here’s a section in my DITA topic with the legally approved, official technical documentation content,” then duplicate it and let the engine rewrite it as a step list, maybe for the website. Then I could even duplicate it again and say, “Now let’s rephrase that for multiple audiences. I have that TikTok-generation person in front of me, and they want to be addressed in a more personal, looser, more fun language; please rewrite the content for this audience.” And the engine will rewrite it and say, “Yeah, hey, yo, man, you can put your dirty clothes into the tumble dryer, not the dishwasher, and you will have a lot of fun watching it rotate when you hit the start button.”

That’s, of course, an extreme example, but you can create multiple variants of your content for different audiences very easily. A lot of people are talking about doing that on the front end, on the website, for example. I see it more from a responsibility perspective, on the authoring side, where an author is doing that and approving it, so to say, checking whether the information is really correct, whether the steps are really in the right order, and then it goes, checked for different audiences, into the publishing process. Because that responsibility sits, in my view, not with the AI engine but with the author, who needs to make sure that the content is still accurate and correct.

SO: And I think that’s a really important point, because at the end of the day, the organization that is putting out the product and/or the content that goes with the product can’t say, “Oh, I’m sorry. The AI made the content wrong. Too bad. So sad.” They are still responsible and accountable for it, which actually brings us very elegantly into the next topic that I wanted to touch on: what are some of the risks, some of the potential challenges and red flags that you see as we start exploring the possibilities of using AI in our content ops?

SG: That’s a very important topic to talk about, because even very advanced engines like ChatGPT come with certain challenges and problems. There is, of course, the question we just talked about: is the information correct, or is it inaccurate, or is it maybe even just invented by the engine? People usually call that hallucinating. The engine will generate content, and it will continue to generate content as long as you want, and it will invent things.

I was throwing some content at ChatGPT and said, “I want to write a nice blog post or a LinkedIn article. Can you give me some quotes that fit the content I have provided you?” And it provided me five or ten quotes that sounded like some CEO would have said them, and it even gave some names. Then I was asking, “Is that person, John something, real? And is that a real quote?” “No, I invented that, but it might fit. It could have been said by someone.”

SO: It could be real.

SG: Yeah, it could be real. That was basically the answer ChatGPT was giving. That comes with a huge problem, because as a company and as a content producer who’s publishing content, you are responsible for that content, and you cannot rely on an agent to produce completely accurate information or information that is always correct, because it will always generate content and will not let you know that it generated that content.

It’s extremely difficult for a human being even to check that content and say, “This is probably correct, and this might be just made up, and this might be an invention from ChatGPT, generated on a statistical probability that this content will probably fit.” And that is a big problem. You don’t have that when you let ChatGPT write code like JavaScript or maybe even DITA XML. There it’s pretty accurate, because it’s based on a certain framework, like a DTD or a JavaScript standard document, that explicitly declares or defines how something needs to be structured and which element can follow which other element and so on. But for looser content, it’s extremely difficult for an author to distinguish that.

And this is why I also say there’s no danger that human writers or content producers will lose their jobs because of such an engine. No, the role will change. Maybe we use these engines more to generate content, but we as authors become more the reviewers and editors of that content. It’s a little bit like machine translation, where you have a machine translation engine translate your content, but then you need to do post-editing to make sure the translation is really correct and the correct terms are used and so on. We will see a similar development with gen AI for text-based content in the future, for sure, when it comes to all kinds of content production, maybe technical documentation, maybe knowledge bases, et cetera.

SO: So then can you talk a little bit about the issues around bias and the ethics of using AI and where that leads you?

SG: An AI engine like ChatGPT, for example, is of course trying to create unbiased content. I don’t have an example for written content right now, but we were talking about that example of the lady who provided a photo of herself and then asked the generative AI engine, “Please make me a professional headshot photo for an interview letter.” And it created a nice photo with nicely made-up hair, a nice dress, and a nice background; it looked very professional, like a headshot from a professional photographer. The only problem was that the photo showed a person with blue eyes and blonde hair, while the person who provided the original photo to be beautified was an Asian person with a different look.

And that brings up the discussion of the bias of an engine. Maybe it was fed and trained with 5 million photos of professional business people from a Caucasian background, maybe just 1 million photos from an Asian background, and maybe even fewer from an Indian background or whatever. Then the engine makes statistical calculations and says, “You want to turn that into a professional business photo? Based on my training data set, I will make you a Caucasian-looking person.” And that is a huge problem.

And this is where governance of AI-generated content will maybe even become a full-time job one day, where we say we need to make sure that the content an AI engine is generating is really appropriate and culturally sensitive and is not biased, taking all kinds of other factors into consideration. Maybe an AI engine is not yet able to do that.

SO: Yeah. So the question of what goes into the training set is really interesting because of course, it is a little unfair to blame the AI, right? The AI is, in its training sets, reflecting the bias that exists out in the world because that’s what got fed into it.

And I don’t want to go down the rabbit hole that is deep fake videos and synthetic audio, but I will point out that just earlier this week, I saw a really, really interesting example of an engine where somebody took a video of themselves speaking in English and talking about something. Actually, they were sort of saying, “Hey, I’m testing this out. Let’s see what happens.” And then the AI processed what they said, translated it and regenerated the video with them speaking first French and then German.

And so it was, I don’t want to say live video, it was synthetic video of a person who spoke in one language and who was then transformed into that same person speaking fluently in their voice in a different language that they do not in fact speak because the content was machine-translated, and then they used the synthetic audio and video on top of that to generate it.

I mean, my French isn’t very good. It sounded plausible. The German sounded fine. I heard one mistake, but he sounded like a fluent German speaker, and there wasn’t any obvious weird rearrangement. They somehow matched it onto there. It was quite impressive and it was fun to watch. And then you think about it for a split second, and you realize that this could be used in many different ways, some of which are good and some of which are not.

SG: Yeah. I mean, we had some really ugly examples here in Germany where a political party was using gen AI photos to convey a certain political message, and then it came out that these photos were not from the actual events they claimed to show, but were AI-generated.

So there’s a lot of danger in there, and we will also need to adapt as societies and human beings to get a better feeling for what is generated content and what’s not. That will become increasingly difficult, but the awareness that what we get presented as content, especially when it comes to images, may be generated is something we’ll need to develop more strongly than ever before. Photoshop has been around for a long time. We all know that photos can be Photoshopped, but with this new approach of generative AI, that awareness becomes even more important.

But when we talk about ethics, I know we are probably running a little bit over time, but there’s another aspect of ethics that I see as something we need to discuss in more detail in the future. We feed the engines with existing content, maybe content that is the intellectual property of someone. And then this engine leverages the knowledge in that content to produce new content. Then, especially in the context of university content, research content, and so on, who’s the owner of that newly created content? Whose intellectual property is it? And what if content is generated that is very clearly a rephrasing of existing content that is protected by licenses?

So there’s also this ethical discussion that we need to have and that will for sure maybe even need some regulation on the government level in the future.

SO: Right. And the answer right now, at least in the US is that if the content was generated by a machine, you cannot copyright it. That implies that if I feed a bunch of copyrighted content into the machine and produce something new out of the machine, that I have essentially just stripped the copyright off of the new thing, even if it’s a summary of the old thing or a down-sampling or a gisting of the old thing, the new thing is not subject to copyright unless there is significant human intervention.

So yeah, I think that’s a really good point because there’s a big risk there. And there’s also the issue of credit. I mean, if I just take your content and say it’s mine, that’s plagiarism, but if I run it through an AI engine and plagiarize from millions of people, then it’s suddenly okay. That seems not quite right. Okay, so yes, tell us-

SG: A plagiarism engine that checks the content is probably very useful in the future, yeah.

SO: Yep. So lots of things to look out for. And I think it sounds as though, from what you’re saying, you see a lot of potential benefits in terms of using AI as a tool for efficiency and recombination of content.

So if you join me in, I’ve already moved on apparently to DITAWORLD 2024, so if you look ahead a year or so, what do you see as the opportunity here? How are companies going to benefit from doing this, and what kinds of things do you think will be adopted the fastest?

SG: I think, coming back to the beginning, there are basically these two areas: authoring, that is content creation, and content consumption. These are the two fields where companies can and will benefit in the near future, as soon as enough of these new features have found their way into the tools themselves.

Faster content production is definitely one part, but that also means that authors need to learn how to create content with AI engines, the art of prompting being a keyword here, and to detect the voice and tone of generated content. It’s relatively easy after a while to identify that content was written by ChatGPT, for example, because the standard way ChatGPT generates content is sort of always the same, and you can easily recognize it after a while. This will bring some job changes and means that companies will need to adapt before they can really benefit from it.

People, authors, and content creators need to learn how to get the right content out of an engine through prompting, prompt engineering, how to write proper prompts, and that will take some time and training and so on, but then it’ll really speed up the content production process a lot. And the second benefit is with content consumption: providing better customer experiences by having more intelligent chatbots that give better, right-fitting answers, or maybe assisting readers of a long blog post on a website by giving a short summary of it, and things like that.

So there will be many benefits for companies using AI, even just in this specific area of content, knowledge content. There will be other areas as well, of course, such as detecting patterns in financial data, research, and so on. But when we talk about the content we are discussing here today, the biggest benefits will probably be in content production, which also includes, for example, translation.

SO: Yeah, I think I agree with that, and that sounds like a relatively optimistic note to wrap things up on. Stefan, thank you so much for all of your perspectives on this. You’ve obviously thought about this carefully and you’re sitting inside an organization at Adobe that is actually building out some tools that are related to this, and I’ll be interested to see what comes out.

Tying back to that, the DITAWORLD 2023 recordings are available, and we’ll put those in the show notes. There were a couple of presentations in there, this was back in May, June, that addressed the state of AI and some of these similar kinds of considerations along with that. I’m not sure if it was exactly a demo, but there was a discussion of what the AEM Guides team is thinking about in terms of product support. So we’ll make sure to get that into the show notes.

Scriptorium has a white paper on AI and we’ll drop that in there, and then I think there will be more discussion about this going forward. So thank you again for being here, and we’ll look forward to hearing more from you.

SG: Thank you.

SO: And with that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Applications of AI for knowledge content with guest Stefan Gentz (podcast) appeared first on Scriptorium.

Adapt to evolving content careers with guest Jack Molisani (podcast), Mon, 11 Sep 2023

In episode 151 of The Content Strategy Experts Podcast, Bill Swallow and podcast guest Jack Molisani discuss how content careers have changed through the pandemic, layoffs, quiet quitting, and AI, and what you should do to stay ahead of the curve.

“Rather than applying for a job […] you want companies to come to you and say, ‘Hey, will you come work for us?’ The only way they’re going to do that is if you write articles, if you’re speaking at conferences, and if you position yourself as an expert in your field.”

— Jack Molisani

Related links:

LinkedIn:

Transcript:

Disclaimer: This is a machine-generated transcript with edits.

Bill Swallow: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking with the president of ProSpring Technical Staffing, and executive director of the LavaCon Conference, Jack Molisani, about how content roles have changed over the last year and what you should do now to advance your career in content. Hi everyone. I’m Bill Swallow.

Jack Molisani: And I’m Jack Molisani.

BS: Jack, lovely to have you here again.

JM: Always a pleasure.

BS: So I guess let’s just get right into it. You’ve seen things from both angles, from a conference organizer doing content conferences and also with running a staffing agency for content people. How have roles changed over the past year or so?

JM: What’s really been amazing over this past year, we’ve seen mass resignations, tech layoffs, quiet quitting, all at the same time. Never seen that before.

BS: That’s crazy. So do we know what’s contributing to that?

JM: Let’s take each one in turn. I do believe that a lot of companies did layoffs in the first quarter because they over-hired during the pandemic. Right. So everyone said, oh my God, Oracle’s laying off 10,000 people. Well, they hired 40,000 during the pandemic, so there’s still a net gain of 30,000 jobs. I also believe that a lot of tech companies or companies in general were worried about a recession and they started trimming the fat off their payrolls in advance. But here’s the deal. I believe that if enough companies start laying off out of fear of recession, they’re going to cause the very recession they were afraid was coming.

BS: It’s funny how that works.

JM: Yeah. And then with the great resignation, I think companies did so much with so little for so long that people just got tired of it. And once companies started hiring again, even with the tech layoffs, companies are still hiring, people said, the heck with this, I’m jumping ship. I’m finding something with a little more work-life balance in addition to compensation. But really, it’s the work-life balance that I’ve been seeing people change jobs for.

BS: So does that mean it’s also a factor as far as the quiet quitting is concerned?

JM: That’s a little different, yes, because the fear of recession applied to candidates just as much as it did to companies. So maybe people are tired of where they’re working, tired of being abused, but instead of just quitting and finding a new job, they say, the heck with it, I’m going to do the absolute minimum possible without getting fired. And that’s where the quiet quitting came from. I did some research on this, and there are actually five top reasons why people quiet quit. Would you like to hear them?

BS: Sure.

JM: One, toxic work culture. Right. Now, I have the pleasure of having a really great team at my company, as do you.

BS: Absolutely.

JM: And I personally have never experienced a toxic work culture, but I’ve heard many, many stories from people who have. Two is job insecurity and reorganization. With every acquisition and reorg comes the possibility that you’re going to lose your job, so people are being proactive and resigning. Three, high levels of innovation, which I thought was an interesting reason why people changed jobs, not just quiet quitting but also the great resignation. Because you’re constantly doing something new, and companies are, what’s the word I’m searching for? Demanding shorter development cycles. Produce, produce, release, release, release. Do you remember the good old days when we released software once a year?

BS: I do remember those.

JM: Right. I know we both have a little gray in our temples, or in our beards in our case. But yeah, now they’re innovating: release, release, release. And that gets tiresome. Four was failure to recognize performance. How many times have we heard technical communicators complain because they’re not recognized for all the work they do? And part of my response to that is, are you letting people know how good your work is? Are you doing a corporate newsletter for your department? Are you letting people know how much you just saved the company? That’s a whole other podcast. And fifth was poor response to COVID-19. I still know companies, managers who are insecure managing a younger workforce, who want to see people in their chairs, and personally,

BS: Yeah. Butts in seats.

JM: Yep. Butts in seats. And I tell my people, I don’t care where you work, when you work, how you work, as long as you get your work done. Right. If you can do that at home in your pajamas at two o’clock in the morning, you go.

BS: Just be on that call later this afternoon.

JM: Exactly.

BS: Yeah. No, I hear that. So I guess we’ve talked a little bit about the negative trend that we’ve seen with resignations, quiet quitting, and general layoffs. As far as the roles that we’re seeing out there now, how are they starting to differ from what we’ve seen, let’s say, let’s even go back a few more years, pre-pandemic versus post-pandemic? What are we seeing here as far as the roles, content development, content ops? Anything new and exciting going on there?

JM: I’ll start with the happy news that we did not see the mass migration of jobs to India and other countries that people were fearing, or the jobs left and came back, right? So yes, you might be able to get a cheaper writer, strategist, or UI designer elsewhere, but if it takes them five times as long and it’s half as good, you’re really not saving money. We’re both based in the United States, so I’ve got a US-centric viewpoint on this, but I did see jobs come back to the US. I’ve seen both more and less specialization at the same time. It used to be you’d see a job opening that said technical writer needed, must have FrameMaker and RoboHelp. Now it’s like, all right, yes, we want someone who does structured authoring. We don’t care which tool. Because really, once you’ve got the concept of structured authoring down, it doesn’t matter what content management system or structured authoring tool you’re using. If you know one, you can pick up the others.

BS: Right.

JM: Now that said, a majority of the world is still not doing structured authoring, but you asked what trends are happening, and that is definitely a trend: people who are hiring are looking forward, and even if they’re not doing structured authoring now, they’re looking ahead. So if they’re going to hire someone with those few precious headcounts, they’re going to make sure that person is prepared to move forward and knows what’s on the horizon, coming down the pipe.

BS: Gotcha. So as far as things on the horizon or coming down the pipe are actually being forced through the pipe as we speak. Let’s talk a little bit about AI and,

JM: Oh, let’s not.

BS: Its impact.

JM: Okay, so I’ve got two,

BS: I have my own thoughts on this as well, but let’s go. Let’s hear from you.

JM: I have two completely divergent views on AI, maybe three on a good day. One, I think there are very, very valid uses of AI. For example, 23andMe: we now have millions of people who’ve mapped their genomes. Take 20,000 people who have male pattern baldness and 20,000 who don’t. Give it to an AI and say, find the difference in the genomes. Brilliant use of AI. All right. Now, how do we apply that as technical communicators or content strategists? Right. I saw a chapter do a presentation on AI for technical writers, and the first thing the speaker said is, I’m not a technical writer, and I’m using AI to come up with ideas for blogs on LinkedIn. What?

Now that said, I can see a valid use for AI in content development. For example, you’re doing structured authoring. You just wrote 100 topics. Ask the AI to populate your meta tags for you. Or here’s a CMS with 400 topics: read them all, find out which ones are sufficiently similar that we could combine them and reuse, cut down our translation costs, cut down our maintenance costs. Brilliant use of AI. None of the tools are there yet, though.
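As a rough illustration of the kind of topic-similarity analysis described here (this is a hand-rolled sketch, not a feature of any real CMS; the sample topics and the 0.7 threshold are invented for the example), a bag-of-words cosine similarity is enough to surface candidate pairs for a human to review:

```python
# Sketch: flag pairs of topics whose word-overlap similarity suggests
# they are candidates for consolidation and reuse.
# Topics and the 0.7 threshold are illustrative, not from any real system.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between simple bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def reuse_candidates(topics: dict, threshold: float = 0.7) -> list:
    """Return topic-ID pairs similar enough to review for merging."""
    ids = sorted(topics)
    return [(x, y) for i, x in enumerate(ids) for y in ids[i + 1:]
            if cosine_similarity(topics[x], topics[y]) >= threshold]

topics = {
    "t1": "Remove the toner cartridge and shake it gently before installing",
    "t2": "Shake the toner cartridge gently before you install it",
    "t3": "Connect the projector to a power outlet and press the power button",
}
print(reuse_candidates(topics))  # → [('t1', 't2')]
```

A real deployment would use embeddings rather than word counts, but the point stands: the AI only proposes the pairs; a writer still decides what actually gets merged.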

BS: No.

JM: Right.

BS: No.

JM: So I see value, but here’s another thing. I saw another blog on LinkedIn the other day saying AI is going to increase your efficiency, that the main way we’re going to save money is by increasing efficiency. And my thought is, if I’m documenting a new printer from Canon or Epson or Kodak, how is that going to help me talk to a subject matter expert about how to maintain this thing? Right. I distinctly remember doing maintenance documentation on an LCD projector and discovering that if you didn’t put your finger down on the screw as you unscrewed it, the spring would go spring and the screw would go flying across the room. And the only way you know that is by doing it.

BS: Yep.

JM: And I believe that we as a technical profession document things that don’t exist yet. That is our reason for existence, and how are you going to get that to an AI?

BS: Yeah, I think we’re on the same wavelength here, because I take a similar approach. I don’t consider AI a valid content generation tool for technical content, but I see it as being useful behind the scenes. So facilitating, like you said, search, coming up with keywords. If you need to do any kind of SEO prep for content, it can spider that, look across your other content, and find a good keyword or key phrase that’s going to make this stand out. It can certainly aid in populating search for content that’s going out to the web. But I don’t really see it writing these procedures for you, because, exactly, you need that level of preciseness on things that maybe aren’t yet documented, so it doesn’t have anything to rely on to explain them. So that’s interesting.

JM: Your audience has not been able to see me nodding my head during your entire answer. Another thing I want to comment on, I think people are grossly overusing the term AI. I was just speaking with a vendor yesterday about their system that’s supposed to be AI-enabled and a structured authoring system. And I said, well give me an example of the AI that you’ve added. And she goes, well, it will create a table for you. I go, that’s not AI, that’s a wizard. And we’ve had them for decades and she was the marketing person. So when I really pressed her on what part of that was artificial intelligence, she couldn’t answer me. She goes, I’ll let you talk to the engineer.

JM: So I think there are a lot of buzzwords going around and a lot of people talking about something they don’t deeply understand, just using AI because it’s popular now. Now, one more thing I want to add. We mentioned LavaCon, which is a conference on content strategy. Was it four years ago? Everybody was talking about chatbots. Oh my gosh, chatbots are going to be the next delivery platform, everyone’s got to structure their content for chatbots. The next year, crickets. I have to say, from my little perch, looking into my little crystal ball at where the industry is going, I said, oh no, this is going to be a flash in the pan. And I kind of feel the same way about AI at the moment.

BS: Yeah, I think it has a little more staying power because it’s more than just a delivery format. But what we’re talking about with regard to AI now is probably not what we’ll be talking about next year, or five years later. It’s going to be a very different beast with some very different applications. Most people now are looking at it and saying, oh yeah, this is something that will generate a few paragraphs for me, and we’re talking about ChatGPT here. But there are other cases where you can create artwork and such with AI as well. Again, it’s just pulling a bunch of different representations together, smoothing the edges, and saying, ta-da. Whether it’s good or not is of course in the eye of the beholder, because the AI doesn’t know what’s good or bad; it’s just going to do what you told it to.

JM: Agreed. The other thing that concerns me about AI is hallucinations.

BS: Yes.

JM: I ran ChatGPT and asked it to create tweets for all my speakers, and one of the tweets was about someone who’s not even speaking at the conference. A friend of mine had an AI write her bio and it said that she had a PhD in mathematics when she didn’t.

BS: Oh, I have a PhD as well, in case you didn’t know.

JM: So yeah, so,

BS: And I don’t. I don’t.

JM: However, that’s creating a whole new job: fact-checking and editing the content that an AI… Now let me add one more thing, then we’ll go on to the next question. Another valid use of AI that I thought was really clever: one of the banks would write an article and then run it through an AI and say, rephrase this as a CFO would want to read it, in terminology a CFO understands. Rewrite this in terms of how a financial analyst would want to read it. Rewrite this as a consumer would want to read it. I thought that was a brilliant use, again, not generating from scratch, but taking an existing dataset and transforming it for a target audience.

BS: That’s an interesting perspective. Okay, so we talked a lot about AI. We talked a bit about companies starting to look less for tools experience and more for structured authoring and, I’d say, non-tool-specific skills. So what other trends are you seeing with companies looking to hire content professionals? Are there any other things they’re looking for? And what can people do, I guess, to start sprucing up their resume and their experience to look for that next big gig?

JM: So my answer to this is going to be a complete non sequitur, and you’re not going to see it coming. Are you ready?

BS: Alrighty, let’s hear it.

JM: Take a class in improv.

BS: Improv.

JM: Improv comedy. I’ve studied improv, and I’ve taught a workshop on it at the STC Summit. Interesting thing about improv: I’ve heard people tell me, oh, I could never think fast enough to do improv, which is interesting because the first thing they teach you in improv is to stop thinking and start listening. Right. One of the things I discovered after taking a class in improv is that I’ve never been thrown off by a question I wasn’t anticipating, because part of the whole concept of improv is “yes, and.” You take whatever your partner, boss, whoever gives you and go, yes, and, and add to it. Right. And that’s also a great way, if there’s someone on your team whose idea you don’t like, to go, oh, yes, and we can also do this, without just saying, oh, that’s the stupidest thing I’ve ever heard. That, and take a class on public speaking, like at Toastmasters.

Even though a lot of the work we’re doing is remote, I see that 100% remote is probably going to start whittling down. And we’re going to have to either come back to the office occasionally or be visible, right? Speak at conferences, speak at meetups, speak at your local STC chapter. Because rather than you applying for a job, and we’ll come back to applicant tracking systems in a second, write that down, you want companies to come to you and say, hey, will you come work for us? And the only way they’re going to do that is if you write articles, if you’re speaking at conferences, if you position yourself as an expert in your field.

BS: So raise your own profile, I guess, out there, LinkedIn, whatever, and build those skills to start putting yourself out there a little bit more.

JM: Oh my gosh, I cannot go on LinkedIn for a week without seeing a blog post from Bill Swallow, unless that’s AI-generated and I should not be impressed.

BS: Oh, I don’t know.

JM: Okay. So I mentioned applicant tracking systems.

BS: Yes.

JM: Real briefly, and I actually have an article and a whole presentation on this that we can send out in the show notes at the end. Real quick: when you apply for a job through a website, it goes through an applicant tracking system. Originally that was just a way to track where you are in the process. Well, with the advent of things like Indeed for mobile, where you can create a profile and every time you see a tech writer job or a content job you go apply, apply, apply, apply, suddenly companies are getting hundreds of resumes that are not even remotely qualified. So companies added artificial intelligence to their applicant tracking systems to weed you out. So 99% of the applications that you submit will never be seen by a person, because they’re comparing your resume to see if it matches both the job requirements and the job description. Right.
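Real applicant tracking systems are proprietary and far more elaborate, but the screening Jack describes boils down to something like crude keyword matching. As a purely illustrative sketch (the required terms, resume text, and 75% cutoff are all invented for the example):

```python
# Illustrative only: real ATS filters are proprietary and more complex.
# This sketches the crude keyword screening that can reject a resume
# before a human ever sees it. Terms, resume, and cutoff are invented.

def keyword_score(resume: str, required_terms: list) -> float:
    """Fraction of required job-posting terms found in the resume text."""
    text = resume.lower()
    hits = sum(1 for term in required_terms if term.lower() in text)
    return hits / len(required_terms)

required = ["structured authoring", "DITA", "content strategy", "XML"]
resume = "Senior technical writer: DITA, XML, structured authoring, API docs."

score = keyword_score(resume, required)
print(f"{score:.0%}")  # → 75%
if score < 0.75:
    print("auto-rejected before a human sees it")
```

Which is exactly why tailoring your resume to the posting's wording, or bypassing the filter through a human contact, matters: the screen matches strings, not qualifications.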

So the first thing I tell people: stop applying for jobs through websites. Go to LinkedIn, find somebody who works there, even if it’s a recruiter, because every single recruiter has a LinkedIn profile, and say, hey, I see you have an opening for X, Y, Z. May I send you my resume? They’ll do one of two things. They’ll go, sure, send it over. Or they’ll go, no, go ahead, apply online and I’ll keep an eye open for the application. But now you have a human who can fish your resume out of the spam folder because you are qualified for that job. So that’s another thing I’ve seen change over the past few years: an explosion of AI applicant tracking systems weeding people out. In fact, I personally know five people who got jobs from personal referrals last year, and they did not get a single interview applying for jobs through websites. So again, work your professional network.

BS: Sound advice. I will actually echo that point about the resume spambot approach to responding to job postings. A few years ago, I posted a job for, well, not a content developer, but a developer of content systems. Basically I needed a plugin developer, and I made the mistake of including Java as a desired skill, and I actually got several resumes from baristas.

JM: Oh, yes, yes. Absolutely. Yes. Yeah.

BS: And I was looking to see if there was anything, I mean, it piqued my interest because I had to look and see if there was anything in there that indicated that these people were interested in moving into some kind of a development role. And it’s like, no, no, they’re just interested in making really good coffee, which is fine, but it’s not what I’m looking for. Although I’d love the coffee.

JM: And I don’t even think that’s the end of it. I think they now have AIs where you can say, anytime a job opens that matches my resume, submit me. So you’re not even clicking apply, apply, apply anymore. It’s an AI automatically submitting you to a job you’re not qualified for and an AI automatically rejecting you. Madness.

BS: So the computers are taking over.

JM: Yeah, Skynet.

BS: Excellent. Well, I think this is a good place to leave things, Jack. Thank you very much for talking, and actually, I’ll give you a moment to kind of plug LavaCon since that’s coming up as well.

JM: Oh, thank you. So this is our 21st year. We’ve survived two recessions and a .com crash. It’s the LavaCon Conference on Content Strategy. We do have a track on integrating AI into your content strategy, more specifically the benefits and liabilities of integrating AI into your content strategy. But it still covers content strategy and user experience. We’re going to be in San Diego in October. We have a discount code for your listeners. Anybody who registers using Scriptorium as a referral code gets $300 off registration, and it’s at lavacon.org.

BS: Excellent. Jack, always a pleasure.

JM: Thank you for having me.

BS: Thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Adapt to evolving content careers with guest Jack Molisani (podcast) appeared first on Scriptorium.

How to choose a content model with guest Patrick Bosek (podcast), Mon, 14 Aug 2023

In episode 150 of The Content Strategy Experts Podcast, Alan Pringle and special guest Patrick Bosek of Heretto talk about choosing a content model, factors to consider, and when you should think about customization.

“There’s a valid use case for almost every approach that’s out there. There’s no way around that. I think what it really starts to come down to is making sure that you’re matching the 18+ months [ahead] to the decision you’re making now.”

— Patrick Bosek

Related links:

LinkedIn:

Transcript:

Alan Pringle: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about choosing a content model and the pros and cons of customizing it with Patrick Bosek of Heretto. Hey everybody, I am Alan Pringle, and today we have a special guest. It’s Patrick Bosek of Heretto. How are you, Patrick?

Patrick Bosek: I’m good. How are you, Alan?

AP: Good. Let’s talk a little bit about content models today, and let’s start really at the beginning, choosing one, do you pick a standard? Do you create your own? What do you do?

PB: I guess if you’re looking for my advice, which I suppose you are since I’m on this podcast.

AP: Correct.

PB: It’s obviously use-case specific. Everybody goes into the process of creating a content model, and the first step they look at is, “What do we need this thing to do?” Well, not everybody, let me back up. People who make good decisions when choosing a content model start with deciding what they need it to do in the long run. The thing I’ve seen over the 15-plus years I’ve been working in this industry is that there are organizations where somebody chooses a content model because they think it’s cool and it works for what they’re doing right now. And then there are organizations that look at the total scope of what they’re going to need to do, both in their group and probably beyond their group for the organization in the long run, and then embark on a much more formalized process for choosing a content model.

Very often you’ll see that where you start, how you start, is pretty impactful on what you choose. The very iterative approach, it’s almost an Agile approach in a lot of ways, like, “Oh, this works for this, let’s move it forward and we’ll do it this way, this way, this way,” does that land people on structured content? Not commonly. Typically, that lands people on proprietary tool sets built on whatever’s in that proprietary format. There are a lot of wikis out there with their own structure in the background, which could be either text-based or HTML-based. MadCap’s got their own proprietary standard. And there are a few other tools built on things which are standards, but proprietary ones.

But I think those are probably more on the side of things that are intentionally selected. Very rarely do you see an organization that takes this iterative path go into it and then choose something that is structured, because that stuff is typically less available to just put your hands on and start creating something that you can then spit out as a PDF or a website. Knowing how you’re going to start, and knowing what set of problems and what time horizon you’re trying to cover, that’s like getting ready to get ready to choose your standard, if choosing your standard is your first step. And knowing where you are there is really critical: how is it that we’re making this choice?

AP: No, I think you’re right. This is like any kind of business decision: you’ve got to think about what your requirements are. And it’s not a situation where you should only look at the next three to six months. Yes, you may be on fire and have something to do, but you’ve got to be careful and balance things out and think, “Are we going to go with something that’s much narrower and very focused on one use case? Or are we going to go a little wider and accommodate things that might be, say, 18 months or two years down the road?” For example.

PB: Yeah. I think that’s fair. The thing I would add to that is that there’s a valid use case for almost every approach that’s out there. There’s no way around that. What it really starts to come down to is making sure that you’re matching the 18-plus months to the decision you’re making. And so I’ll give you a really good example, I think it’s a really good example anyways: if you’re just writing a README file for a microservice that you’re setting up, and it’s going to be maintained with the code, and nobody who doesn’t actually have their hands on the code is really going to be referencing it, using true structured content for that makes no sense.

AP: It’s overkill. 100%, yes.

PB: Yeah. And it doesn’t really integrate well with the delivery or with the end user. It would be a bad experience all around. It makes a ton of sense to just use what is supported by the repository that you’re putting it into, which is by and large GitHub, Bitbucket, or GitLab, and they all support some flavor of Markdown. For README files, by and large, if you’re choosing something other than Markdown, you should have a really special case. On the other end of the spectrum, though, if you’re thinking, “Okay, this is going to result in a large set of content, which is intertwined, and pieces of it move at different speeds over time, and different audiences access it in different ways and potentially get different pieces of content based on who they are,” at that point, you can’t do that with Markdown without doing a lot of custom stuff.

AP: I was going to say, there are some people who might tell you that you can do that, but I’m pretty sure you should not. How’s that?

PB: Okay, fair. That’s an important distinction. You shouldn’t do that with Markdown. This is a tangent, but I love tangents. I was looking through the software documentation, I can’t remember which company it was, it might’ve actually been one of the Git providers, and they’re still in Markdown with a lot of their content. But they’ve gone so far with pieces of this to customize it for this platform or this audience or that, they don’t have tags in there, but they have things that are tags in there. I think they’re square brackets with a percent sign or something, and then a name after it, and then an end one too. And I’m like, “These are just tags.”

The thing is, once you get to a certain level of sophistication, where you effectively have to put metadata in your content to tell your content how to behave in different circumstances, or to expose information to other systems like search systems or AI systems that isn’t the same information you’re exposing to the end user, you have to do it with tags. There’s no other way to do it, because that’s all tags are: a way of putting information into a document that isn’t rendered directly to the user.

AP: Yes. You’re adding intelligence into your content.
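As a hypothetical aside, the kind of ad-hoc markers Patrick describes really are tags: a processor has to find them, act on the metadata they carry, and strip them before the reader sees anything. A minimal Python sketch of that pattern, where the `[% ... %]` syntax and the `audience` field are invented for illustration:

```python
# Sketch of ad-hoc "tags that aren't called tags" in a Markdown-ish source.
# The [% audience=... %] marker syntax is invented for illustration.
import re

source = (
    "Install the CLI.\n"
    "[% audience=admin %]Run the migration with elevated privileges.[% end %]\n"
    "Restart the service.\n"
)

MARKER = re.compile(r"\[% audience=(\w+) %\](.*?)\[% end %\]", re.S)

def render(text, audience):
    # Keep marked spans only for the matching audience; strip the markers
    # themselves so they never reach the reader.
    def repl(match):
        return match.group(2) if match.group(1) == audience else ""
    return MARKER.sub(repl, text)

print(render(source, "admin"))
```

The marker carries information that is never rendered directly, which is exactly the job a tag does; the difference is that nothing validates this syntax or guarantees every processor handles it the same way.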

PB: You may not like angle brackets, but if you have this case, you’re going to use some kind of tag at some point. And this actually relates in a funny way, I’ll stop my tangent in a second, I promise, to another conversation I had with one of DITA’s founding fathers, or one of the longest-standing people on the TC, Eliot Kimber, whom you know and I’m sure everybody listening to this knows. And I love challenging him with stuff, because he has such good answers to everything.

And so I thought I would play devil’s advocate and be like, “Well, why not use a text based format? Why not use Markdown?” And we started going through some stuff and he was like, “Well, why not use DITA if you have all those cases, if you’re going to do all this stuff, if you can do all this stuff, why wouldn’t you just get that out of the box? At what point does it make any sense to ask the question, why not use Markdown plus this, plus this, plus this, plus this, plus this?” And all you’re doing is rebuilding DITA, which frankly is probably my position, not probably, it is my position anyways. But it was interesting to watch how Eliot got there and the way that he positioned it was so very Eliot and it was… I don’t know. I loved it.

AP: At some point, if you start with something maybe a little more boxed in, that’s not a technical term and you keep having to add things to it to do what you need to do, and that happens a lot, that may tell you you’re possibly a little too constrained, perhaps.

PB: Well, I don’t know if I would use the term boxed in, because I think boxed in implies a structured starting point that has limits around you. Whereas I think the reality is that a lot of the text-based formats especially are so open, there’s no standard. There are generally accepted practices, if you want to call them that, but you can put anything you want in there and you can build a processor that processes it, literally anything. And I’ve seen so many bespoke things put into these formats over the years that you realize there is no box and you can do whatever you want with them, which in some ways is the beauty of them. But as you scale, more people have more ideas, and there’s no box, so they can add whatever they want and they can add onto the processor. And then some of those people go to other places, and some people didn’t document what they did, and they forget why they did it, and blah, blah, blah, blah, blah. And you have system creep. And the problem is, the system creep is built into your fundamental content structure.

AP: Your choice is enabling what you just said. Exactly.

PB: Right. That aspect, the fact that there’s no separation of concerns there, when you’re building this stuff directly into custom stuff, directly into your core content in a way which isn’t patterned and isn’t based on a larger set of standards and rules, means that you’re evolving something which is innately going to become brittle eventually.

AP: Sure. And as you add business requirements, that brittleness you’re talking about can basically become magnified, from my point of view. Say, for example, you have a merger and you’ve got two companies doing similar things, yet they’ve got two entirely different content models and two entirely different tool chains. At the end of the day, are you going to keep both of those things? I’m going to guess not. At that point, you’re going to have to go through a process of figuring out what you’re going to do. Is it going to be survival of the fittest? Are you going to do some bake-off? Are you going to have someone come in, maybe, and take a look and say, “What should we pick?” There are some options there.

PB: And so the merger is a great example, and it’s a clear vision of when two different content infrastructures are going to collide and something’s going to have to win. But I almost think that the merger, people tend to feel like it’s really distant. Nobody goes into work every day and thinks about mergers except bankers.

AP: The people who make the money from them.

PB: Right. But people in tech pubs, they don’t think about the merger until it hits them. But the thing that isn’t distant is product evolution. And a lot of products will… You’ll start a new project, and this will be its own product maybe, and it’ll build up, build up, build up, build up, and then you’ll realize it needs to be merged into this other thing. Or it can go the other way, where you’ll start this module in the product and it will build up, build up, build up, and you’ll realize, “Oh, it needs to be separated out.” And these are mini mergers.

AP: Yeah. Absolutely. Internal mergers.

PB: Totally. And the thing you’ll see here is that when you’re keeping content isolated to the product, and there’s no box, no standard rules that go across all the different products, then when you have to bring them back together, maybe somebody on this product team decided, “Oh, we’re going to add this MDX component or this thing or this thing,” and it doesn’t work now. Or it conflicts with somebody else’s version of something very similar, because they’re not talking to each other, because they’re not on the same product team. And that can become a fundamental problem. And even beyond that, while those are two separate things, you’re still one company.

AP: Yes. Siloed tech stacks, essentially, more or less content tech stacks.

PB: And siloed user experiences. You go to the documentation for this product or module or whatever it may be, and it’s got this structure and this interactivity and it looks like this. And you go to this other one and it’s like, “Oh, okay, well this is the same colors, but it functions very differently, the navigation, all this stuff is just separate.” This element of consistency, when you don’t have accepted standards across the organization, shows up. It shows up in efficiency, it shows up in user experience, on both the customer and the employee side.

AP: The customers do not get a consistent experience, they don’t get consistent messaging. And I’m sure the marketing folks will be really happy about that when you are basically giving two different flavors, yet you’re the same company.

PB: Totally. Well, two is probably a best case scenario.

AP: Indeed.

PB: I think it might be more like 40 in some cases.

AP: If what I’m hearing from you is right, should thinking bigger always be in your mind when you’re talking about modeling then? Or is that unfair?

PB: Okay. I guess we’re returning to the question of choosing a content standard or a content model. I think being aware is what’s important. Most organizations have a general concept of trajectory, what things look like, what the culture of the organization is going to look like. And not everybody needs scale; not everybody needs consistency across many parties, because they’re just never going to be there. There are plenty of hardware companies, software companies, any kind of company out there that’s just never going to have more than three writers. That is a situation. And in those cases, do you really have to think bigger? No, probably not. Should you? I guess that’s a different question.

AP: It’s a balancing act. I think that’s the best way I would put it. You’re right, if you’ve got three content creators, that presents a different set of challenges and problems to solve versus having a team of three digits. It’s a completely different beast. It is.

PB: Totally. The reality is that you can get to know two other people very, very well, and you can read all their stuff, and just by the nature of that, you can stay on the same page.

AP: And then of course, the consultant in me says, with that group of three, what if your company takes off and you have all this growth? That three could become nine, or it could become 12 or 15. You never know.

PB: Yeah, for sure. That’s the big question that I think that organizations have to wrestle with. If you know that growth is coming, in my view, it’s irresponsible not to choose something that will facilitate that growth. But if you don’t think that growth is coming or it’s not on the horizon, it might be the responsible thing to choose whatever is going to work with relatively low implementation friction and a good customer experience for your small group at that time. And then once you start to see that growth coming, be proactive in terms of transitioning to something which is going to support that growth.

AP: And that comes to a point I want to make here. If you do decide on a, let’s say, smaller solution, and I don’t mean that in a pejorative way, a smaller-scale solution, if you do go that route because it’s a good fit, I would suggest you have your eye on an exit strategy then and there. When you make that choice, think about your exit strategy, where you might need to go next, and how you could map where you are now to the new thing. Does it have to happen immediately? No. But I would recommend you have that filed away in the back of your head, because you may need it sooner than you think.

PB: Totally. I think that’s absolutely fair. The reality is that when you’re trying to do complex content at scale, you choose the axes that you want to put complexity on. It could be personalization, it could be multiple versions, it could be multilingual, and I could keep going; there are all these different ways that content can become more complex. Regional is a great example: this content applies to this region versus this region, which again is personalization, but a special form of it. When you’re in that circumstance, you really have to choose something that’s going to support that. And it doesn’t really matter if you’re one or 100 authors. You need to recognize that that’s your circumstance, where it’s like, “Okay, we’re going to have a personalization requirement.” Or, “We’re going to have a complex versioning requirement.” Or, “We’re just going to have so much content, whether it’s a high content-to-writer ratio or highly collaborative work, that we need structure to support that.”
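As an illustrative aside, the personalization and regional cases Patrick lists usually come down to attribute-based filtering, in the spirit of DITA conditional processing. A minimal Python sketch, with element and attribute names invented for illustration:

```python
# Sketch of region-based profiling: one source, filtered per audience.
# Element names and the "region" attribute are invented for illustration.
import xml.etree.ElementTree as ET

src = ET.fromstring(
    '<steps>'
    '<step>Plug in the unit.</step>'
    '<step region="eu">Use the included CE-rated adapter.</step>'
    '<step region="us">Use the included UL-rated adapter.</step>'
    '</steps>'
)

def profile(tree, region):
    # Drop any element whose region attribute exists and does not match.
    for parent in tree.iter():
        for child in list(parent):
            if child.get("region") not in (None, region):
                parent.remove(child)
    return tree

eu = profile(src, "eu")
print([s.text for s in eu.iter("step")])
```

The same single source yields a different deliverable per region, which is the kind of intelligent selection that ad-hoc formats have to reinvent from scratch.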

Think about the physical world: why do skyscrapers have more structure underneath them than houses? It’s because they need to be bigger. And so when you know that you have these situations, you have to match your content model selection to those things. And so when you’re starting to think about, “Okay, what content model is going to do that?”, unless you’ve got a really specialized case or you’re in an industry that’s had a content model specifically built for it, aerospace is the one people throw around a lot.

AP: JATS for technical journals, things like that.

PB: JATS for journals. That’s a great one. The reality is that DITA is the gold standard for that stuff. DITA teams, from three to 300, are highly performant when they’re well-trained, and they can build anything to any size you need in terms of content. There isn’t an upper limit for good DITA implementations. And part of that is one of the words we said we weren’t going to say today: the ability to specialize DITA. That’s DITA’s secret sauce, and I think a lot of people don’t realize how important it is. This ability to extend DITA without breaking what you’re currently doing is enormous. The business value there is beyond.

AP: Just to give people some context, before we got started, we were talking about not trying to go too deep down the whole DITA specialization path and what it is. For a quick, 10,000-foot summary: specialization is a way that you take existing elements that are in the DITA standard and build new structures based on things that are already in the standard. And that’s how you customize. That is probably an oversimplification, but I want to throw it out there for people who are not familiar with the term. It is just a fancy way, in DITA speak, of saying customize the DITA model.

PB: There’s one key thing you didn’t mention, and it’s the only thing I think you really need to know about specialization: when you take an element in DITA and you specialize it, all of the DITA processors understand your new element to be a version of the old element. “Why does that matter?” Well, it matters because if we have a Scriptorium content model, and in our thousand-person Scriptorium company someone named Tony decides they’re going to add new functionality that needs new structure, they specialize off of the base content model. Even if that content comes back into reuse in other parts of the organization that haven’t implemented specific functionality for Tony’s new element, because those elements are derivatives of the underlying elements, they just get treated that way. You can have structured, planned, asynchronous evolution of the model across a large enterprise that doesn’t break all the different delivery mechanisms, because those mechanisms are based on the fundamental cross-enterprise understanding of the model. That thing there, that’s what makes DITA enterprise grade and everything else not.
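A hedged sketch of the fallback behavior Patrick describes: in DITA, a specialized element records its ancestry in its `class` attribute, so a processor with no rule for the new element can fall back to a rule for the base element. The `mycorp/warning` specialization and the rule table below are invented for illustration:

```python
# Toy processor demonstrating DITA-style class-attribute fallback.
# The "mycorp/warning" specialization of topic/note is invented for illustration.
import xml.etree.ElementTree as ET

el = ET.fromstring(
    '<warning class="- topic/note mycorp/warning ">Hot surface.</warning>'
)

# This processor only knows the base element, not the specialization.
rules = {"topic/note": lambda e: f"NOTE: {e.text}"}

def process(element):
    # Walk the ancestry from most to least specialized; use the first rule found.
    ancestry = element.get("class", "").split()[1:]  # drop the leading "-"
    for name in reversed(ancestry):
        if name in rules:
            return rules[name](element)
    return element.text  # last resort: plain text

print(process(el))  # NOTE: Hot surface.
```

Even though this processor has never heard of `mycorp/warning`, the element still renders sensibly as a note, which is why specialized content can flow through delivery mechanisms that haven’t been updated for it.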

AP: Absolutely. To say that it puts the extensible in XML, even though I know I’m mixing DITA and XML here. It is super, super extensible. People talk about, “Should I pick an open standard like DITA, or should I do a custom model?” Well, from my point of view, you can have both, based on the discussion that we’re having. That’s where my brain is going right now. I will say X years ago, actually X decades ago, I remember creating custom models, mostly in SGML. There was no standard out there to support what we were trying to do. 20, 25 years later, we have DITA, and if that had been available when I was creating those models nearly 30 years ago, you better believe we probably would’ve picked it, because it probably did 85 or 90% of what we needed the custom model to do. So it takes care of that problem. And as you said, you can customize it without breaking the bigger picture. And that’s a big deal. That’s a really big deal.

PB: Yeah. It’s enormous. It is the business case for why you go through the upfront implementation to do DITA. Look at content operations implementations that have been around for a decade or more and are still modern and still delivering an ROI year over year: they’re all DITA, every single one of them. There’s no such thing as the 15-year-old Markdown implementation. It’s the same thing with the wikis; they go through cycles. If you want something that’s going to serve you today and in the long run, and you have the ability to do the upfront work, it’s going to work. Going back to your SGML comment, I think one of the best ways that DITA was ever positioned to me, and this goes back a long way, a friend of mine basically said, “The reason they invented DITA was so they didn’t have to do the first million of customization on every project for SGML.”

AP: I agree 110% with that, having lived through what you just said. Yes, 100%.

PB: It was just reinventing the wheel every company, and it was a really expensive wheel and people were like, “Let’s stop doing this.”

AP: Right. It is cost savings off the bat, because like I said, if it gives you the majority of what you need, you can customize it and flex it to make it do what you want to do. I want to quickly investigate the flip side of that coin: are there times when you should not be customizing or specializing your DITA model?

PB: When you don’t need to. In a lot of ways, I think specialization, especially day one, less is more.

AP: Right.

PB: Yeah. You should have a really, really good reason for specializing. And I think the thing that’s really challenging about specialization is that, if you think about it in terms of other technologies, it’s one of the four features that should convince you to go with DITA, but very few organizations use it day one, and very few organizations should use it day one. The reality is that over time, you’re going to find cases that you just can’t efficiently support in other ways. And the alternative to having specialization is something like HTML classes, or some other XML attribute that you throw onto something, or some tag thing you invent in Markdown. But it’s super difficult to validate that and to make sure it’s used consistently. And none of that stuff effectively translates back to the rest of the publishing pipeline in a way that is consistent. There’s not a strong process for it. You have to invent the process and the pattern, and then actually do the thing you want it to do. Should you specialize day one? Sometimes. I would ask your friendly neighborhood consultant about that one.

AP: I’ll tell you right now, sometimes when it comes to metadata, starting early with that is a requirement, that is based on some past project experience. Yes.

PB: That’s fair. And I think the way that you end up managing the metadata, because metadata is a really… We might have to decide what you mean by metadata, because I can make anything metadata.

AP: Things you shouldn’t, by the way. Yes, you can.

PB: There’s definitely a conversation around, where does your organizational intelligence and taxonomy and terminology weave into your content model?

AP: Yep.

PB: I think there are cases where that is specialization and there are cases where it is not. Sometimes it’s really something you want to use more standardized taxonomy mechanisms for, or maybe you want to use on-document metadata, et cetera.

AP: Yeah. There are layers there. You’ve got some choices and you can have other tools carry that burden too, that play well with your system. That’s a possibility as well.

PB: Totally. Yeah.

AP: Yeah. The only other thing that I’ll add to this is that just because you’re doing something the way you are now doesn’t mean it’s the right way moving forward. And you should not knee-jerk decide you must customize structure to match the way you’re doing things right this second. I would pause and look at things very hard before you decide, “Absolutely I must customize, because we’re doing it this way.” Well, if you’re doing it this way now, for example, are your delivery formats going to be the same as they are right now? Is that going to lend itself well to all these different new online formats and things like that? Is it going to lend itself well to talking to other systems via API? I could go on and on. Basically, take a deep breath and decide if what you’re doing now is truly something you need moving forward. There’s a chance you may need to compromise or rethink the way you’re doing things, and it may be in a way that the DITA structure already supports with no customization whatsoever.

PB: I want to break down your point about doing it now for a second, because I think this is really important. There’s doing it now in terms of what you are publishing now, your target publish outputs. And then there’s doing it now in terms of your internal practices: how you’re actually creating the content, what’s going into the content, how the content is coming together, how the content is moving. Do you have a distributed model where writers really don’t talk to each other that much, other than at the water cooler, but they write their own books? Do you have a collaborative model? Doing it now can be so many things in the background.

AP: It’s not just publishing, it is also creation. That’s a very good point. Absolutely.

PB: In terms of doing it now for publishing, one of the things that I think is really critical is understanding the trajectory of your publishing and where it’s going. If you implement structure properly, there shouldn’t be a lot of publishing cases you can’t handle, generally speaking. And if there are, that’s typically where specialization comes in, if you need more semantics, more data typing, more of this to pull something out, because a lot of publishing cases that are more on the upper ends of complexity, what they’re really doing is, they’re doing intelligent selection. They’re saying, “Give me the things like this that connect to this under these conditions.” Something like that. When you’re looking at your outputs now, having a general concept of trajectory I think is really important. But then, the core point that I want to make here is that, a strong separation between doing it now on the backend and doing it now in terms of publishing is what you need going forward.

And this is where almost every wiki-based or HAT-tool-based system, or anything else that’s write-it-and-publish-it, WordPress for example, breaks down: there’s no separation, or very little separation, between what you’re doing on the back end and what shows up on the front end. You can’t evolve those two things independently. And that rigidity means you get stuck and can’t do the things you need to do on either side. When you’re building a new content operations ecosystem and you’re redoing these things and thinking about, “What are we going to do in the future?” I would say, even more than the content model you choose, you need to choose a content operations ecosystem that has separation of concerns, where you can evolve the different components independently without breaking one or the other.

AP: That is really good advice. And I really like your front end and back end distinction, because I think that is very important and it’s very easy to conflate those two things, by the way, especially if you’re working in an environment that already combines those things. I think that’s really, really good advice. And before we wrap up, is there any other smart point you want to leave our listeners with in regard to picking a content model?

PB: Smart points? I don’t know. I don’t know if I do those. Do I do those?

AP: Well, you just did one with the back end front end distinction, so if we want to leave it there, we certainly can.

PB: How about I build on that just slightly?

AP: Sure.

PB: Just to make sure it’s really complicated.

I think that front end and back end is one level of maturity when you’re thinking about the separation of systems in a content operations ecosystem. But one of the things you’ll see is that a lot of organizations will evolve to the point where it truly is an ecosystem. Think about it this way: you have your centralized content repository, which is where most of your authoring is done. It’s where your prose is written, et cetera. And then you have your primary front end, which is typically a website, but it’s probably mobile-ready and whatever else as well. And you have other front ends too, so you have separation of front end and back end in that way. That’s what we were just talking about.

But it’s very common that you start to see there are other systems which integrate with the back end. You might have a system that manages your API documentation, which is typically generated, it’s not written. However, the usage information around the API documentation, that provides the developers the context and the instruction to know what they’re looking at, that’s all written. That goes into the content repository. Now you need a connection between those two things, and you need some kind of a mechanism where they’re going to be able to play nicely together to some extent, at least to get the information generally to the front end without having multiple experiences where a user has to bounce back and forth between raw reference and then guided more learning style content. You might also have a system which holds information about your product, so it could be product configurations, it could be product specifications, it could be all different kinds of things there.

And that information is oftentimes going to come over in a tabular format. And tabular formats are a really interesting thing, because a simple tabular format can be represented as a CSV, which is not a great idea for data exchange, but any tabular format can always be represented in a tag-based format. Even the most complicated Excel stuff is basically XML under the hood, or it can be exported as XML. You start looking at that and you go, “Okay. Well, what if we’re going to start mixing tabular data from other systems into the flow of content that we’re producing for the information experiences down the line? How are we going to support that in the future, and how is that going to come together?”
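A small hedged sketch of that point: any simple tabular data that fits in a CSV can also be expressed as tags, which is what makes it safe to exchange between systems and mix into a structured content flow. The column and element names here are invented for illustration:

```python
# Sketch: converting CSV-style tabular data into a tag-based representation.
# The spec/model/voltage/weight names are invented for illustration.
import csv
import io
import xml.etree.ElementTree as ET

raw = "model,voltage,weight\nA100,110,2.4\nA200,220,3.1\n"

specs = ET.Element("specs")
for row in csv.DictReader(io.StringIO(raw)):
    rec = ET.SubElement(specs, "spec")
    for field, value in row.items():
        # Each CSV column becomes a named element, so the data carries
        # its own semantics instead of relying on column position.
        ET.SubElement(rec, field).text = value

print(ET.tostring(specs, encoding="unicode"))
```

Once the table is tags, the product data can be validated, transformed, and pulled into downstream information experiences with the same pipeline as the rest of the content.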

When you start thinking about these things, you realize your most basic content operations is a word processor on a desktop, and then you move to front end and back end. But eventually, if your organization demands it and your customer demands it, you’re going to end up with an ecosystem, and there’s going to be content going back and forth between systems. There’s going to be something that pulls together information, data, and content, and then pushes it out to experiences. You’re going to have multiple experiences.

AP: What you’re describing is content as a service. That’s what I’m hearing.

PB: Yeah. Content as a service is one of the things that comes out of this. I just think that when you step back and ask, “Okay, what does our organization look like in terms of the information that, in a perfect world, our customer could access, and access in a seamless way, without going to different experiences and having to navigate around when it should all be together? We have these five things that should be together, so let’s put them together.” That’s a thought exercise that’s worth an afternoon when you’re about to decide how you’re going to build your next content operations ecosystem. Because if there’s nothing else I can promise you, it’s that whatever you choose when you set up a content operations ecosystem, even if you don’t actively choose, is going to be with you longer than you think.

AP: Absolutely. And I think that’s a good place to end and a good caveat to choose wisely and choose well. Patrick, thank you very much for your time. We appreciate it.

PB: Yep. Thanks for having me.

AP: Thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post How to choose a content model with guest Patrick Bosek (podcast) appeared first on Scriptorium.

Content operations for elearning content (podcast)
https://www.scriptorium.com/2023/07/content-operations-in-elearning-content/
Mon, 31 Jul 2023
In episode 149 of The Content Strategy Experts Podcast, Sarah O’Keefe and Christine Cuellar discuss the unique challenges, opportunities, and considerations of content operations with elearning content.

“As an instructional designer, as a person who’s creating this learning content, you start thinking about, ‘How do I deliver this effectively? How do I ensure that learning actually takes place?’ That’s our goal here. We want the people to learn the thing.”

— Sarah O’Keefe

Transcript:

CC: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about content operations in e-learning environments and elearning content. Today on the podcast, I have with me Sarah O’Keefe. Hi Sarah, how are you doing?

Sarah O’Keefe: Hey, Christine. I’m doing well.

CC: Thank you so much for being here and talking about this. This is a topic that’s coming up more and more, so I’m excited to dive more into the unique challenges and opportunities of content operations in e-learning content. I guess to get it started, what does the shift from in-person to digital look like for the classroom environment?

SO: Clearly, the trend for this year is AI. Nobody’s going to deny that, but I think the number two trend that we’re seeing is a sudden interest in content ops for learning content. Let’s talk a little bit about the history of this. Back when Gutenberg… No, sorry. Sorry. I’m capable of doing a podcast without talking about the printing press, I think. Before we get to digital content, let’s backpedal a little bit and think about classroom training. You go into a classroom at a particular location, at a particular day and time. You have a physical environment. You have a bunch of people in the room with you, so you’re the instructor and you have eight or 10 or 15 or 45 students sitting in front of you.

Although some of this applies to school instruction, what I’m focused on and what I have some experience with is adult learners. That’s probably also worth noting at the outset. Many of you, I think, are familiar with school environments, but what we’re talking about here is adult learners coming in to do some sort of probably corporate training. I walk into the classroom, and we have a particular kind of computer setup because I’m doing software training. I’ve got a bunch of learners in front of me, and some of them are tired because it’s 8:00 AM. I’m cranky because it’s 8:00 AM and I traveled and all the rest of it. In the olden days, we had this setup where you would travel to a location, bring everybody together in a room, and for two or three or five days you would do a class together.

There’s some really interesting stuff that happens when you have a group in a room learning. You get interesting kinds of group dynamics. You’ve always got a class clown heckler, and you can sometimes turn them to your advantage. But additionally, you have physical issues with the classroom. The monitors are terrible, the computers are slow. There’s some commotion outside the classroom that’s going on that’s distracting. The fire alarm goes off at 11:00 AM. You have a variety of learners with different kinds of motivations, but you’ve got this physical environment that you’re dealing with and this requirement to bring everybody together all at once in the same place. Now, as we move towards e-learning where learning content is being delivered online, you can take your classroom and have a classroom online. Everybody comes together in a Zoom or some other kind of video meeting and you’re presenting to them.

In a lot of ways, it mimics what’s going on in the actual classroom, but there are some advantages and disadvantages. The big one is that people get distracted. They have screens open. They go off and do their thing. They drop because they have to take another meeting. They’re at home, they’ve got a barking dog. Now, there are distractions at the office training location also. While online training does not require people to travel, it introduces time zone issues. Nearly always, instead of doing let’s say a three-day class all at once, we would do several sessions of two hours a day spread out over a lot more time, because we don’t have to cram everything into three days; we didn’t fly in the instructor. That’s an online classroom. Then you start thinking about asynchronous training where instead of me presenting in the online classroom, all that stuff gets prerecorded.

Your job as the learner is to go watch the video that I did and then work through the handouts and exercises and things. Then do maybe some sort of interactive online thing, and then maybe there’s a test. There’s a set of assessment questions to show whether or not you’ve learned the material. Then when that happens, you introduce all sorts of other distractions. But the complexity here is that the difference between an in-person classroom environment and some sort of asynchronous online training is actually pretty extreme when you think about it. You take away that in-person interaction. You take away the group dynamics. You don’t necessarily have your buddy that you’re nudging and passing chocolate to and all this. As an instructional designer, as a person who’s creating this elearning content, you start thinking about, “How do I deliver this effectively? How do I ensure that learning actually takes place?” Which is our goal here. We want the people to learn the thing.

CC: Yes.

SO: A lot of the tools that are available to me as a classroom instructor are not available in a digital environment, but there are other things that are available. Recorded video’s a really good example because it means that you could go back and watch the video again, or we could provide closed captioning or subtitles for the video. You could speed it up or slow it down. You could have little glossary terminology information that pops up in the video so that as I’m using some weird jargon-y word, it pops up the definition. Lots of stuff you can do. But ultimately, elearning content changes more than other kinds of enabling content. If we compare learning content to techcomm content: in techcomm, the shift from a printed book to a PDF to something like online help has added some interactivity and some other things, but the reading experience of a printed book versus some text online is really not that different.

Whereas in a classroom when you talk about learning and training as a process, there’s a whole bunch of stuff that goes on in the classroom that is very, very different in digital. As we start thinking about how do we move and how do we deliver effective e-learning, we have to think about all these issues around the instructional approach, the modality. Is it in-person? Is it not in-person? Is it synchronous or asynchronous, and all the rest of it? That makes for some pretty complex content. Then we have to think about the content itself, which is of course, where we live. Because I’ve done a lot of training in my days, but I’m not an instructional designer really. But I’m interested in this question of how do I make a learning experience effective? Then we come around to how do we do that in the context of all these cool tools that we have in the content world?

CC: That makes sense. What I’m hearing is that there’s a unique tension for instructional designers where you want to create a somewhat customized experience… Because as you said, getting people to learn the thing is the goal. Creating learning content that’s going to be effective is the goal. How do you balance the flexibility of being able to create a tailored training deliverable when you are trying to create a more scalable content development process?

SO: Probably the instructor has an outline of some sort. These are the objectives for the class. These are the things I need to communicate to the students. In a classroom environment that looks… Maybe look one way, I’m going to do a little lecture. I’m going to define some stuff. I’m going to have them do a group project. Put two people together, or three, have them work on some things. There’s a lot of different tricks in classroom management. In the e-learning environment, especially if it’s asynchronous, it’s on demand. I can’t really do that. I can’t tell you to go work with your partner sitting at the bench with you because you don’t have a bench or a partner, so you have to do something different. But if you step back and look at it, you have your learning objectives. I want people to learn how to log into the database.

Great. In a classroom setting, I can do that. We’d probably have a sandbox of some sort. They can log in, they can try it out. We can show them how to set their password and show them all the really dumb password rules. The really dumb password rules are the same across the board. Just because you’re in an e-learning class, they don’t change. Those 18 bullet points of you have to use at least one special character, but not these special characters; it has to be more than eight characters, but less than 27; that thing. That content is the same, and so I think the trick becomes to identify the things that are the same and the things that are different. What content is the same, and can I basically just deliver it in the same way? What content is different? For the classroom, it might say, “Spend five minutes explaining X here. Cover these five bullet points.”

In e-learning, it’s, “Run the video,” or some sort of an interactive environment where they can do stuff. The objectives are the same. The way that you deliver may be different. I think the really interesting part about this is identifying that pretty carefully and then plugging it in. This one’s only for e-learning and this one’s only for classroom, or this one’s only for a certain kind of audience. To take the dumb database login example, am I talking to users or am I talking to database administrators? If you’re a database admin, you probably have a different set of options than you do if you’re a generic user. Do we have a different class or let’s say a different lesson on logging in? Or is it the same lesson but the admin gets a couple of extra paragraphs about weirdo things that they’re allowed to do but we don’t show those to the user when you’re doing a user-level class? You have that sort of conditionality potentially.

But I think the real key here is to focus less on the form of the delivery and more on what is the backbone of the class. What are the learning objectives and how do I deliver those learning objectives in different kinds of modalities, and different delivery mechanisms? Also, where’s the overlap? If I’m teaching you how to use a particular kind of corporate software, probably lesson one across the board is how to log in for every single class. Unless of course… There might be a basic class and an advanced class. In the advanced class, we assume you already know how to log in. But it’s really, really common to have a series of classes. You’re a bank and the tellers get one kind of training, and the bank manager gets a different kind of training, and the… I’ve run out of banking roles that I know about. But you… Mortgage officers!

CC: Yeah, there we go. That’s one. I was like, “I had nothing.”

SO: You think about it though, and how to log into the banking system is probably going to be pretty much the same, and delivered in lots and lots of different classes as lesson one. That’s great, but you need a system that allows you to write the canonical how-to-log-in once, and then use it over and over and over again across all these… Not just all these different audiences, but all these different delivery mechanisms. Whether I’m in the classroom or I’m online or I’m here or there, I want that “here’s how you log in and here’s our password policy” content to be the same content delivered everywhere, so that all my people learn what they need to learn in whatever learning environment they’re in.

Right now, and I said this was one of our trends for this year, what we’re hearing from the people that are coming to us and talking to us about learning content is, “Yeah, I have how to log in procedure or a how to log in lesson, but what I actually have is 10 copies of it, or 20 because they’re all stashed in different systems and I have no way of actually managing them. I just make a copy and make the version for the teller, or I make a copy and I make the version for the database admin. I can’t share, I can’t link them. I can’t do anything other than make copies.”
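A single canonical login topic reused by reference, rather than copied into 10 or 20 systems, is what DITA’s content reference (conref) mechanism provides. This is a minimal sketch, not any particular customer’s setup; the filenames and IDs are hypothetical:

```xml
<!-- login-canonical.dita: the one canonical "how to log in" task -->
<task id="login">
  <title>Log in to the banking system</title>
  <taskbody>
    <steps id="login-steps">
      <step><cmd>Open the application and enter your username.</cmd></step>
      <step><cmd>Enter your password, following the password policy.</cmd></step>
    </steps>
  </taskbody>
</task>

<!-- teller-course.dita: lesson one pulls in the canonical steps by reference -->
<task id="teller-lesson-1">
  <title>Lesson 1: Logging in (teller training)</title>
  <taskbody>
    <steps conref="login-canonical.dita#login/login-steps"/>
  </taskbody>
</task>
```

Because every course references `login-canonical.dita#login/login-steps` instead of holding a copy, an update to the canonical topic flows into every class at the next publish.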

CC: Which creates a lot of clutter, I guess you would say, in the content system. I’m sure that that leads to inaccuracies. That could lead to… That’s just also a lot of busy work on the part of the instructor. If it’s already done once, why repeat it a bunch of times? What I really like about what you’re saying is there’s a huge piece of intentionality that ties back to what are our goals for all of this elearning content? What are we trying to accomplish and what do we want people to take away from this? Then that is informing what content gets created and how that content gets developed and produced.

I like that because I’m sure as organizations grow and develop, they’re trying to catch up with learning content and get people what they need while they’re doing a myriad of other business functions, trying to keep things going. Taking a step back to really assess what your learning content is doing and where it’s going seems like a really valuable piece of this process. However, that also sounds like there’s a lot to do within that. What options do people have when it comes to managing their learning content? Is it basically a one track that you recommend? Are there a ton of options? Where do people get started when they’re trying to move in this direction?

SO: It’s tricky because we have to actually think about managing learning content and maybe separately managing learning. Let me start with the second one. When you talk about managing learning, it’s once I put this class together, whether e-learning or classroom or anything else, let’s say that there’s a requirement that you take a particular class and you take a particular assessment or test and you pass it at a certain level. Learning management or learner management tracks that. Have you taken the class? Did you take the assessment? Did you pass? Are you off the hook for sexual harassment training for this year? That type of thing. It’s kind of like a front-end learner experience, learner interaction. Also, there’s some really interesting things you can do around learner behavior. Everybody’s watching this video, but they all watch it at double speed, and it’s pretty clear that they’re just trying to get through it as fast as possible.

Then they’re all passing the assessment at nine out of 10 questions correct. That indicates that either your content is really good or the questions are too easy, or who knows? But a learning management system, an LMS, allows you to track those kinds of things. If you think of a school… We’re talking about adults probably. But if you think about a school, you have attendance and grades and tests and report cards, all that stuff is learning management, basically. That’s the front end. That’s where I as a learner and then the instructor as a teacher interacts with the system. Separately from that, we have the backend, which would be probably the learning content management system or an LCMS. Sometimes this is done in component content management systems. An LCMS is a content management system tuned for learning content, and a CCMS is a component content management system, which could be used as an LCMS.

CC: Oh, okay. But it’s not necessarily specifically an LCMS?

SO: It’s not necessarily explicitly, “Hey, I was built for learning content,” but maybe it is.

CC: That makes sense.

SO: Then you’ll find some LCMSs that say, “We’re totally a CCMS.” So, welcome to my world. We are creating learning content and we are delivering it into all these different delivery channels and delivery experiences like synchronous learning online and asynchronous elearning and classroom, maybe. Probably not. I don’t do a lot… Aside from the pandemic, which is a big aside from. But classroom training is rare these days. It used to be everything, and now everything’s online, which is a whole other thing. You can make effective online training, but it’s not easy. It’s much easier to pick a fun, dynamic, entertaining instructor and put them in a room. That’s how you make good training in a classroom environment. It’s just that it costs a fortune and people have to travel and they have to be in the same room.

There’s all these constraints, and it’s super expensive. We have our learning content management of some sort, and now what we want to do is go down the line of all the standard content management systems and think about how we’re going to do this. What information can I reuse across multiple delivery channels, multiple audiences, and multiple places in my system? Where do I have information like my user versus admin distinction where I need to use some sort of conditionality? This paragraph should only go over here. The canonical old-school example of this was a test and an answer key. The students get the test. The instructor, we hope, is the only one that gets the test with the answer key. But that’s really a conditional text problem. How do I suppress the answer key? Rather than making two copies of the test, you have one copy of the test. When you render it for the student, you don’t show the answers. Of course, now we can put it in a learning management system and have it present the question to you.

You check the box or you type in your answer or you do whatever, and then it says that was correct or incorrect because the system has that data. Components: how do I break down a class into lessons, learning objectives, and then learning objects that go with that? How can I mix and match and repurpose those learning objects to put together what I’m trying to do? You think of this as just this puddle of instructional content of learning objects. Then I want to sequence them in a certain way. They have to build on each other. You can’t go around telling people how to do SQL commands before you teach them what a relational database is. There’s sequencing implied there, and there are prerequisites and hierarchy. If you’re doing hands-on kind of training, hardware training, you very often have prereqs like, “Here’s the equipment that you need to do this. You need a screwdriver and you need this and you need that and you need the other.”
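The test-and-answer-key scenario Sarah describes is classic DITA conditional processing: profile the answer with an attribute, then filter it out with a DITAVAL file when rendering the student version. A minimal sketch, with hypothetical IDs and attribute values:

```xml
<!-- quiz.dita: one source topic; the answer is profiled for instructors only -->
<topic id="lesson-3-quiz">
  <title>Lesson 3 quiz</title>
  <body>
    <p>Which SQL statement retrieves rows from a table?</p>
    <p audience="instructor">Answer: SELECT</p>
  </body>
</topic>

<!-- student.ditaval: applied at publish time to suppress the answer key -->
<val>
  <prop att="audience" val="instructor" action="exclude"/>
</val>
```

Rendering with `student.ditaval` produces the test; rendering without it (or with an `include` rule) produces the instructor copy, so there is never a second diverging copy of the test.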

You need physical objects and you need to make sure everyone has them at hand or has them in their class or whatever. If you’re doing e-learning, you probably have interactive components. Again, instead of an instructor lecture, you’re going to have a video, or you’re going to have maybe a hands-on environment where people can get into a locked-down, safe environment where they can play around with stuff, but it won’t break anything. It’s a fake environment where they can try out certain things without being worried that they’re going to inadvertently transfer $2 billion out of their bank account, which nobody wants. There are all these different options out there on the backend to create all these learning objects and then think about how you’re going to deliver them in an optimum way for all your different kinds of channels, whether it’s online or my beloved and long-lost classroom and all the rest of it.

CC: We do have more information about optimizing content operations, about learning content. We’ve been doing some blogs and other podcasts, so we’ll have those available in the show notes. Sarah, for someone who’s hearing all of this for maybe the first time or maybe they’re just starting to become aware of this whole new way of thinking about learning content and they’re wanting to move to this approach, I’m sure they’re in the middle of everything that they’re doing already. They’re in the middle of producing content. It’s an overwhelming prospect, so where would they get started?

SO: The ideal answer is, of course, to call us up and bring us in to help you. But assuming you’re not quite ready for that today, I would actually suggest that you go look at our LearningDITA site. If you go to learningdita.com, you’re going to see an online e-learning environment. Now, I’m not going to tell you that it is necessarily the best possible, most amazing experience in the world, but it’s effective. Here’s the key: you can look at that site, and if you dig into the About page and how the site was put together, it will tell you where the files for that site live, because they’re all open-source.

There’s a whole bunch of, in this case, DITA XML underlying the site, which then is pulled into a stack that involves WordPress and LearnDash, which is, as I said, a learning management system, an LMS, that sits on top of WordPress. You could take a look at how that’s put together and how the source files are then transformed into the learning experience for e-learning. Of course, we can also, from that, do PDF handouts. I think we do have some slides in there and all these other things. I think that might give you a reasonable idea of what it looks like to think about learning content as being flexible objects that you can remix and repurpose.

CC: That’s great. We’ll have LearningDITA linked in the show notes as well. It’s really easy to check it out. It’s completely free and that’s a great idea. Sarah, thanks so much for talking about this. Is there anything else you can think of that you want people who are interested in learning more to know? Is there anything you feel like we haven’t covered or any other nuances about e-learning content that you’d like to address?

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

SO: E-learning content or learning content is complex because we’re not just dealing with the question of how do I get it on a printed page, but also the question of how will a learner engage with this content. I think that focusing on that question, focusing on how do I make this as effective as possible across all these different delivery mechanisms, is probably the key to making this work. Then secondly, the universal theme that we’re hearing from our learning content friends is: we can’t keep up. There’s too much stuff. There are too many deliverables. There’s too much change. Everything is going really, really fast. What we’re describing here, a component-based approach to managing learning content, has the potential to address that and to help you manage the velocity that you’re being required to manage. Finally, I’ll also say that we didn’t touch on localization and translation. We do have the ability within an environment like this to support localization in a reasonable manner. That’s another potential reason that you might need to go in this direction.

CC: That’s great. Thank you so much, Sarah, for being here. I really appreciate your time, and-

SO: Thank you!

CC: … letting me pick your brain about this. This was great.

SO: Anytime.

CC: Thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Content operations for elearning content (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 26:26
Anthony Olivier unpacks the MadCap acquisition of IXIASOFT (podcast) https://www.scriptorium.com/2023/07/anthony-olivier-unpacks-the-madcap-ixiasoft-merger/ Mon, 17 Jul 2023 11:26:11 +0000 https://www.scriptorium.com/?p=22003 https://www.scriptorium.com/2023/07/anthony-olivier-unpacks-the-madcap-ixiasoft-merger/#respond https://www.scriptorium.com/2023/07/anthony-olivier-unpacks-the-madcap-ixiasoft-merger/feed/ 0 In episode 148 of The Content Strategy Experts Podcast, Anthony Olivier, founder and CEO of MadCap Software, and Sarah O’Keefe discuss the MadCap acquisition of IXIASOFT, what’s on the horizon for the merged organization, and explore predictions about the impact of AI in the content industry.

“By acquiring a DITA-based CCMS, it allows us to offer not just an unstructured XML-based solution with cloud-based content management, but also offer a structured authoring solution for our customers who want to make that transition.”

Anthony Olivier

Related links:

LinkedIn:

Transcript:

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. Hey, everyone. I’m Sarah O’Keefe, and in this episode, I am joined by the MadCap founder and CEO Anthony Olivier. Welcome, Anthony.

Anthony Olivier: Thank you, Sarah. Happy to be here.

SO: It’s great to have you on board. Tell us a little bit about MadCap and how you made that happen, because I know there’s a fun backstory there.

AO: Yeah, it goes back to when I was CEO of a company. Prior to MadCap, we were called eHelp Corporation. We were the founders and developers of RoboHelp and, at the time, RoboDemo. I sold the company to Macromedia, I believe at the end of 2002, if everybody remembers Macromedia, the creators of Flash. They bought eHelp not necessarily because of RoboHelp, but because of our RoboDemo product at the time. It was a Flash-based product and they were really interested in Flash-based technology. They acquired eHelp for RoboDemo.

They quickly rebranded that as Captivate, and decided at that time that RoboHelp and the technical authoring industry were not core to their strategy. This gave us an opportunity, or gave me an opportunity, to push the market forward and say, look, if you’re not interested in it, we’re going to create a new technology, a new generation of RoboHelp, built from scratch: something that was more future-proof, XML-based, and that continued that technology forward under MadCap.

That’s what led to the birth of MadCap in 2005. I’ve been in the industry a long time. A lot longer than MadCap’s been around. This has been now what we live and breathe, technical authoring.

SO: Right. Since all of us are all related in some way, somewhere in the middle of all of that, Macromedia gets acquired by Adobe, and then Flash goes away, but Captivate is still here.

AO: That’s correct. That’s exactly right. Shortly after that acquisition of eHelp by Macromedia, Adobe went and purchased Macromedia. And then the rest is history at that point. I mean, now there’s FrameMaker, RoboHelp, and Captivate. It’s been a definitely interesting journey, but definitely a small industry per se.

SO: Here you are in 2005, you launched MadCap. It’s been, I thought 15 years, but more like 18, right? MadCap’s humming along and you’ve got Flare and all the ecosystem of products that goes with Flare. I mean, I think pretty clearly a happy and fairly passionate Flare user base like some other products I could mention from 20 years ago, but will refrain. And then suddenly one day this past February, there’s this announcement that, oh, by the way, we’ve decided to purchase IXIASOFT and its DITA XML content management system. Please explain.

AO: Absolutely. We’ve been in the industry long enough, you and I, and we’ve been living and breathing this market for a very long time. I think that we recognize that it’s not a “one size fits all” solution for companies. Some companies want structure. Some companies want unstructured. We’ve recognized that from the beginning, that it’s not a one size fits all. Clearly there’s a market for structured authoring. There are a lot of companies that do it. There are a lot of companies that offer DITA-based tools and CCMSs. There’s definitely a market for that, and a pretty big market. Taking a little bit of a step back, our retention rate at MadCap prior to the IXIASOFT acquisition was about 90%. 90% of our customers stay with MadCap for the long haul.

Now, if you exclude those companies that downsize, let’s say because of a reduction in force or things like that outside of the control of the organization, the remaining customers that leave MadCap, although a small percentage, were typically leaving to go to something that’s more structured, more controlled, something that had the benefits that DITA offers as a structured authoring solution: more in line with compliance, larger content development teams, large amounts of content, and a need for CCMS functionality.

If customers are leaving MadCap, that’s where they were going. Acquiring a DITA-based CCMS allowed us to offer not just an unstructured XML-based solution with cloud-based content management, but also a structured authoring solution for our customers who want to make that transition or who have growing teams. We know that the needs of an organization change as it needs to be more compliant. That was the reason behind the acquisition. Acquiring the IXIA CCMS allowed us to solve a couple of problems, or gaps, let’s say.

It allowed us to participate in the structured authoring market and offer a migration path for those MadCap customers that are very happy with MadCap, very happy with the service and support that they’re getting, but needed something a little more powerful, needed the DITA-based structure, and needed the CCMS capability. We offer that path for them to move along without having to go to market and shop for a different solution. The third thing is we get to retain those customers. As I mentioned before, we have a retention rate of 90%.

It’s a lot higher if you exclude, as I said, reductions in force, but we get to retain those customers within the MadCap family by having a DITA-based CCMS as one of our offerings. Acquiring IXIA provided an almost future-proof growth plan for our customers who decide to go with MadCap, whether they decide that they want to go unstructured first, get all their content from Word and other formats into the ecosystem, and then grow with us as their needs grow, as they do acquisitions, as their teams grow, and then want more of the structure that a DITA solution offers. That’s pretty much the evolution of why we decided to purchase a DITA-based CCMS.

SO: When you look at this, that’s sort of the, well, I don’t know about tactical, but sort of the big picture strategic view of how those two product sets can fit together or how you can provide a market fit for your customers. Stepping back from that a little bit, where do you see the industry going?

Where do you see the growth happening and do you see these products, and I don’t mean just MadCap and IXIA specifically, but the various products that cover this marketplace? How do you think they’re going to evolve? Looking at this with 20, 30 years of experience and having seen all the different things that have happened, where do you think this is going in the next five or 10 years?

AO: A couple things. One of the things we’re working on currently, and one of the first initiatives post-acquisition, was: how do the products talk to each other? How do they integrate better with each other? How do you move from MadCap solutions to a DITA-based CCMS? How do you leverage all the advantages that a DITA-based CCMS has without having to recreate your content?

How can we make that transition for customers a lot easier? That’s our first order of business: to tackle the movement between the products, and then look at the strengths and weaknesses of each of the solutions, and how we use the strengths of each solution to fill those gaps. What we’ll start seeing over the short term and longer term is the products feeling more like an integrated workflow.

SO: Do you see people using both, like a single customer that would have instances of both products, or is it going to be a “one or the other”?

AO: I see both. During the due diligence process, in the discussions with IXIA, we looked at the customer base of IXIA. Now, granted, they’re a lot smaller than MadCap in terms of customer base, but we actually saw a fair amount of overlap of customers in very large organizations. IXIA has very, very large customers. I mean, you’re talking about SAP, Siemens, Toyota. I mean, those very, very large organizations with hundreds and hundreds of licenses of the IXIA platform. We actually saw there was actually a fair amount of overlap between our customers and theirs.

Siemens is a perfect example. I’m not sharing anything proprietary, but there are actually some divisions in Siemens that use MadCap and have been MadCap customers for a very long time. But Siemens is a very large IXIA customer. Coming back to my initial point, there’s no “one size fits all.” There are going to be certain divisions that are fine with using MadCap products and having more of this unstructured authoring environment without the CCMS capabilities.

There are going to be certain departments within these very large organizations that are very compliant and need to adhere to very strict guidelines in terms of how they’re authoring the content, how they’re managing that content. There’s definitely going to continue to be this overlap between the customers, and we’re not going to try and push or force anything down the customer’s throat in terms of what they should or shouldn’t be using. It’s really, if you have a problem, we can solve it for you no matter what your needs are.

For example, if Siemens decides, hey, these divisions that are using MadCap want to start looking at moving towards more of a structured authoring environment and having more of the CCMS capabilities, then we can make that transition really easy for them. But if you don’t want to do that, absolutely. They use both. But we want to be able to share the content between both.

Bring the benefits of content reuse no matter what you’re using, if it’s MadCap legacy products, so legacy in the sense that MadCap existing products versus the IXIA CCMS and the DITA-based solution that we have now.

SO: I think I’m not allowed to do podcasts anymore without asking about AI. I’ll ask you, when you look at the trends and where the industry is going, do you have at this point a perspective on what AI is going to do to your business and/or a strategy that you can share in broad strokes as to how you’re going to integrate that?

AO: Yeah. I mean, that’s a really good question, Sarah. AI is definitely becoming more and more prominent. If you’re not thinking about AI and how it could affect, or is going to affect, the workflow and how you’re creating content, then you’re probably going to be behind the eight-ball pretty quickly. We’re definitely thinking about AI. We’re already working on AI integration into our products in terms of authoring, allowing the author or content developer to leverage AI in creating content.

I think it’s going to just make the technical author’s job a lot more efficient. They’re going to be able to do more, create more content, which is a good thing for us. We want more content. We want customers to be able to create more content, and more valuable content, more effectively and efficiently. AI is going to allow that. I see it being a positive for technical authors. It’s a matter of embracing it, and how you integrate it with the solutions that we have is what’s going to make the difference.

If we sit there and try to ignore it, then it’s a problem. If we ignore it, it becomes a potential risk to the author’s role and function within the organization, and I think we can all end up losing. I think it’s embracing it that’s going to be the important thing. It’s going to change the way we do things, but I think in a positive way.

SO: Looking forward at where this is going, and you’ve obviously got a huge amount of work to do in terms of product integration and alignment and just all the usual things that go with merging two companies, but whether it’s inside the organization, the now combined organization, or broadly in the industry, where do you see the biggest challenges that we’re facing or that you’re facing as you move forward in this space?

AO: I think the biggest challenge, which is actually an opportunity, is that companies are looking at content development, or content in general, as becoming more and more valuable to an organization. People are not going out there and making decisions by talking to salespeople as much as they used to. A lot of people want to make decisions on their own, and a lot of that comes down to reading the content: making decisions based on the content that’s out there, whether it be web-based content, instructions, user guides, things like that that make a prospect decide whether a product is a viable solution for them or not.

That’s the role the content plays. I think that for us, the gap between sales and marketing and what we call content development, the traditional technical authoring, is going to start blurring. The biggest challenge is getting the technical authors to start embracing and seeing that they actually play a role in sales and marketing, as well as, from a top-down level, from a CTO level, a CIO level, a CFO level, even a CEO level, recognizing that the content being produced by the organization is driving a lot of those decisions on the sales and marketing side.

I think that that’s where we see the industry going a little bit more, the blurring of the lines between sales, marketing, and technical content. That brings an opportunity, but it’s also a challenge because we’ve got to start thinking about things a little bit differently. It’s not just about disseminating information, it’s also about selling the product or the services that we’re documenting. The other challenge, and I think this is just generally something that we’ve always faced, is obviously resources. The biggest challenge is hiring quickly enough to facilitate the growth and innovate on new ideas.

We’ve always been very good on the innovation side, but keeping pace with that I think is always the challenge. We’ve seen with ChatGPT and AI, it’s very, very fast-paced. We need to be able to keep pace with that, and we need to provide our users, our customers, MadCap customers, IXIA customers with solutions and features and functionality that keep pace with what’s going on on a macroeconomic level.

SO: Yeah, which is interesting because when you look back at, again, 20 years ago, we thought we were going pretty fast. When you compare the velocity from 20 years ago to where we are now, there’s just absolutely no comparison, and it shows no signs of slowing down. I mean, things are just getting faster and faster and faster.

Velocity is an interesting one. With the AI stuff that’s coming out right now, I worry about trust and reputation, because of course, ChatGPT has informed me that I have a PhD, which I appreciate, but there’s stuff like that happening. What’s that going to look like, to produce content and make sure that it’s accurate?

AO: Right, absolutely. And that’s where I think you cannot replace the human aspect behind the technical author, the content developer’s role. You can get as much content as you want from ChatGPT, but there’s the verification of the accuracy of the content, making sure it makes sense, making sure there is some post-editing and review process that goes into it. There’s always going to be that role, right? There are certain industries that have to make sure that it’s 100% accurate. You just can’t afford to have inaccuracies or misinformation.

SO: I do appreciate my medical devices having accurate documentation.

AO: That’s right.

SO: Well, I appreciate your time. This has been really interesting, and I’m looking forward to seeing what the combined company is going to do. Because of course, you’re coming at the problem or the challenges of technical content from I guess somewhat different perspectives. It’ll just be really interesting to see how those combine and what comes out in the mix when it’s all said and done. Anthony, thank you for coming on and answering all my cheeky questions, and we will look forward to seeing you at the events coming down the pipe.

AO: Great. Thank you, Sarah.

SO: With that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Anthony Olivier unpacks the MadCap acquisition of IXIASOFT (podcast) appeared first on Scriptorium.

“Why do I have to work differently?” (podcast, part 2)
https://www.scriptorium.com/2023/07/why-do-i-have-to-work-differently-podcast-part-2/ (Wed, 05 Jul 2023)

In episode 147 of The Content Strategy Experts Podcast, Alan Pringle and Christine Cuellar continue talking about how teams adjust when content processes change, and tools you can use to navigate the question, “Why do I have to work differently?”

This is part two of a two-part podcast.

“We had a client a few years ago refer to us as content therapists, and that’s not far off. […] We provide a sounding board. We’re a sympathetic ear. We help give you the opportunity to bounce off concerns, problems, issues, and offer feedback. It’s a relationship where we are going to listen and give guidance, because again, we’ve been through this before with other people. Let’s apply that knowledge and make your life as easy as possible during, frankly, what can be a very tumultuous time.”

— Alan Pringle

Transcript:

Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. This is part two of a two-part podcast. Hi, I’m Christine Cuellar.

Alan Pringle: And I’m Alan Pringle.

CC: And in this episode, Alan and I are continuing our discussion about how teams adjust when content processes change and tips that you can have in your tool belt for navigating the transition successfully. Alan, how often do you see organizations that give thorough training after a new system has been implemented?

AP: You have to. I’ll put it to you this way, if we’re involved, we’re going to be a huge proponent for that. Because I think it is horribly unfair, horribly unproductive to just budget for the technology, not thinking about the people that have to use the technology. Again, we go back to people, which is what you started with at the very top of this podcast. People are the thing, don’t buy the tech and forget to show the people how to use it. Again, you’re going to fail if you go down that path.

CC: That’s a good point. And that goes back to another preemptive activity, which is to make sure that the budget includes room for training, because I’m sure that people get into a situation where they haven’t budgeted for that, but then a consultant is advocating for training. What do you do? Do you delay launching all this kind of stuff?

AP: It’s got to be a line item along with the technology and the migration and whatever else, absolutely.

CC: Which makes sense because then you’re making the most out of the system that you just heavily invested in because you’re bringing your team up to speed much faster, and I’m sure they’re going to be a lot more optimistic about the transition as a whole once they’ve been fully trained on it.

AP: Yes. And don’t forget, there may be employees you haven’t hired yet who will need training, so think about them too.

CC: True. That’s true. That could set up a training process.

AP: You might want to have people in your organization who can then turn around and offer that training. You may want to record sessions if you have a third party providing the training and then share those recordings later. So think about people you haven’t even hired yet when it comes to training, because they’re going to need help too when they come on board.

CC: Absolutely. From the consultant perspective, I know in our podcast a few weeks ago, Bill had mentioned that a consultant paints the clear picture of why the change benefits everyone, and that’s one of the big advantages and we’ve touched on that already here. What else does a consultant do when we come in to help navigate this transition?

AP: We had a client a few years ago refer to us as content therapists, and that’s not far off.

CC: That’s a great example.

AP: We provide a sounding board. We’re a sympathetic ear. We help give you the opportunity to bounce off concerns, problems, issues, and offer feedback on those things. So it’s a relationship where we are going to listen and give guidance in a lot of cases, because again, we’ve been through this before with other people. Let’s apply that knowledge and make your life as easy as possible during, frankly, what can be a very tumultuous time.

CC: Yeah, absolutely. I’m curious about the changes in the staff that happen during a transition. Do you see a significant portion of people that just get overwhelmed by the change and leave the company? Is that a common situation or is that more in extreme cases of where the transition’s maybe not being handled well or it’s just a tough case?

AP: I’m going to bust out that consultant answer: it depends, because it really does. We have seen that happen. I have seen that happen. There are some people who are just not going to be a good fit in the new process, and it’s time for them to unfortunately move on. Is that usually what happens? No. There are a lot of people who are very interested in adapting and changing and making things better, because they realize, too, that if the process is more efficient, it cuts out some of the gross formatting scut work they have to do and gives them time to really write content that’s going to help the person who’s reading it. They can spend more time on the quality of that content and less time on formatting and other things that can be huge time sucks.

CC: That makes sense. So I like that term content therapist. You mentioned that the consultant sometimes has the unique ability to be able to say the exact same thing and it has a [different] impact. I’m sure that includes delivering bad news. So how are consultants able to share bad news in maybe a more effective way than current team members or management are able to do?

AP: Maybe because it’s more compartmentalized coming from a third party, sometimes people will react differently to it. I think the bad cop angle also comes into play when you have vendors involved or you’re making a selection among vendors. The consultant often will know, from past experience, what tools are better fits in certain situations and with certain company cultures, and we can say, “Yes, this is a good fit here. This is not a great fit here. You really need to ask them about this because they have not been really good about this particular feature set in the past and you need it. So push them on this when evaluation time comes.” So that bad cop angle is not just about working with the client. It can also be about making sure the vendor fit is as good as it possibly can be when it comes time for tool selection.

CC: Okay. So maybe there’s someone listening to this that is in the very, very early stages of considering a big tool shift or a big content process change in their company and they’re trying to understand what to expect. What can you share about how long to expect their team to really fully adjust and get comfortable in a new system? What should they expect there?

AP: There are going to be degrees. You’re going to have some of these people who are early proponents who help bring people on board. They’re going to be part of that shift very early, maybe even before you get the tools completely in place, they’re going to be helping with that. Then you’re going to have others who may be more stragglers, but that’s what you have training for. That’s how you can help them by showing them this is how, in your role, you should be using this tool, this is the best practice for this situation, that sort of thing, which training can really help with quite a bit.

CC: Okay. And then what are some tips or tricks you have for bringing a team member that may be particularly struggling with that transition on board? I know you mentioned that sometimes people just aren’t a good fit for the new system and that can be hard, but before you make that determination or before they come to that conclusion, what are some tips you have for winning someone over to the new system if all the stuff that’s worked for the other team members hasn’t worked for this person?

AP: Find out what their pain points are, what they don’t like about what they’re doing and show how the new system can address it, that’s one way that you can possibly convert them to your new process. Again, you have to be very careful here not to offer up a cookie cutter solution. People are all very different and people react to things very differently. Just because something worked with one person in your organization doesn’t mean it’s going to apply well to someone else. So don’t try to apply a one size fits all situation when you’re dealing with people who are struggling to adjust. That will probably backfire unfortunately.

CC: So there may be some people that would come around, they just need a different approach.

AP: And again, this is where the content therapist can help. A consultant can say, “We’ve seen this kind of situation before. This worked fairly well. Maybe try this to get these people on board.”

CC: Okay. And what do you think are some signs that it would just be better to part ways if the transition’s just not going to be a good fit?

AP: You really have to measure and try to be as objective as possible. Is what I’m seeing realistic, valuable feedback that something in this new process isn’t as good as it should be? Or is this just absolute hard-line recalcitrance, someone digging in for the sake of digging in and not changing? You’ve got to make that differentiation, and it is not an easy thing to do sometimes. So again, you can tell I’m being a little bit hesitant here. We’re talking about people and emotions. Even though this is often driven by business, this still becomes a very emotional decision, an emotional situation for people, and you can’t let that slip by you when you are a manager or someone driving this kind of change.

CC: I think that’s a really good perspective and like you said earlier, that a lot of this, this competency ties into their career identity, their role, it’s a big deal. So to just brush past that would be incredibly invalidating and discouraging. So I think that that’s a really good perspective to help us remember. Again, it’s about people, people are the reason behind the technology, even the business, it’s all about people. So on the positive side of times that this has gone well, can you give any examples, even if they’re unnamed examples of team members that successfully navigated the transition and are thriving in the new system?

AP: I can think of one in particular. We are working with some folks, and have been for about a year and a half now, on learning content. And early on there was someone in the group who got it, who understood it. Even though I don’t think he’s management, he really got the big picture and was able to help bring other people on board. He was very involved in one type of content; someone else, who was involved in a parallel line of content, not quite the same, was clearly not as on board. But watching the two of them interact, and then interact with some of our consultants, it was great to see the enthusiasm of this one person who got it start to spread to the other people on that call, especially the one who was working on that parallel track who didn’t at first seem quite as on board. Watching her come on board with help and input from us and her coworker was a great thing to actually watch happen, and it was very rewarding.

Again, it wasn’t just us, it was someone else in the organization who understood the big picture and was able to help communicate that and get someone else that he worked with to understand that, that was a great situation.

CC: Yeah, that’s a great example. I know as consultants, we’re brought in for this unique transition and then once the transition, the training is complete, that’s kind of the end of the project. But are you ever able to see, down the line in recurring projects, team members that you worked with initially during the transition that are now years established in either their new role or just the new tools that they’re using, I mean, you’re able to see how they’ve adapted?

AP: Oh yes, because just because we’ve wrapped up the primary implementation and the training doesn’t mean the work stops; there are often things that need to change down the road. Just like you change systems because of business requirements, newer business requirements may require tweaks to the new system, optimizing it to handle them. So we’ll often come in later and help them make some changes, and we will work with people who are now living and breathing the system as if it’s something they’ve been doing their whole lives. That’s not uncommon.

CC: Yeah. And it’s encouraging to hear that people do adapt, the transition can and very often is very successful. It is, though, a big change.

AP: And it’s a long process. You are not going to snap your fingers and have this happen in two months, you’re not. Just realize it can take months to get something like this done, and you can’t rush it just to get the tech implemented and forget about the people angle. That happens. You’ve got to be very careful not to think everything is about implementing the tech and the people are secondary. I would advise not to fall into that trap. It is a very easy trap to fall into, don’t do it.

CC: Yeah, I like that. And I think you brought up a good point too, that knowing that it’s a very long process and it’s not just wrapped up in a couple of months is a good perspective to keep in mind, because even just the length of a transition can sometimes be wearing on a person or on a team.

AP: It can be, but on the flip side, I would say sometimes it can be a gift when you have the time, because it gives you the time to communicate why you’re doing what you’re doing and then build it and then train people on it.

CC: Yeah.

AP: There’s this very fine balance you have to strike. You can’t let things drag on forever with analysis paralysis; that can happen, and I have seen it happen. On the flip side, you can’t rush things and basically get six months of work done in six weeks; it doesn’t work that way either. You’ve got to find that sweet spot and let reality, especially reality based on things that consultants have already experienced, give you a more realistic view of when things are really going to get implemented and when people are going to buy into your new system.

CC: Yeah, absolutely. Well, Alan, are there any other things that are coming to mind that you want either the content creators going through a transition, managers or anyone else involved in the process, is there anything else you can think of that we haven’t covered yet that would be good to keep in mind for successfully navigating a transition to a new system?

AP: Making this kind of transition can be a pain point itself. Figure out how to communicate and explain why you’re having that pain and maybe the system or the support for the system will be better because you were able to articulate that.

CC: And I love that. Really, as we’ve talked all about this, it’s sounding to me the biggest tools that are going to set you up for success are communication and holding space to hear about and accept feedback for what people are going through. So really, it’s all about people and the approach to helping navigate the transition, is just being a really kind human to get people through this, that’s what I’m hearing from you.

AP: And unfortunately, sometimes kind humans have to make difficult decisions.

CC: Yeah. And that’s hard.

AP: And that’s hard.

CC: Yeah. Yeah. So I think these are really good tips for how you can navigate that transition even if it’s really difficult.

AP: Right.

CC: Well, thank you so much, Alan. I really appreciate you taking the time today. Anything else that you can think of before we wrap up?

AP: I think people probably have had their fill of me for this episode.

CC: Not at all, this was great. Well, thank you so much for being here and thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

Ready to have experts guide your team through changes in your content operations? We’d love to connect!


The post “Why do I have to work differently?” (podcast, part 2) appeared first on Scriptorium.

“Why do I have to work differently?” (podcast, part 1)
https://www.scriptorium.com/2023/06/why-do-i-have-to-work-differently-podcast-part-1/ (Mon, 19 Jun 2023)

In episode 146 of The Content Strategy Experts Podcast, Alan Pringle and Christine Cuellar talk about how teams adjust when content processes change, and how you can address the question, “Why do I have to work differently?”

This is part one of a two-part podcast. 

“One of these kinds of business drivers can be a merger or an acquisition. When you end up combining two companies, you can have two separate workflows. Both of them are not going to win — they’re just not. […] But again, I mean, I have a lot of sympathy for these people. A lot of times they are asking this for legitimate reasons. ‘Why is this happening?’ ‘Why am I having to do this?’ That’s when you’ve got to help them step back and look at the bigger business situation.”

— Alan Pringle

Transcript:

Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about how teams adjust when content processes change, and how you can address the question, “Why do I have to work differently?” This is part one of a two-part podcast. 

Hi, I’m Christine Cuellar.  

Alan Pringle: And I’m Alan Pringle.

CC: Alan, thanks so much for being here today. So, I want to pick your brain about this because we always talk about how people are the “why” behind technology. So, I want to specifically focus on the people and how they adjust to a change when we come in and help a company completely restructure their content operations, because of course that’s a big transition. Maybe a company’s restructuring their content operations, transitioning to a different CMS or CCMS, or implementing one for the first time. How do teams react to the dilemma of a team member saying something along the lines of, “I don’t understand why I have to change. I don’t understand why I have to work differently. I’ve been producing good content for years. Why do we have to make a change?” Where is this coming from and what’s your experience dealing with this?

AP: In defense of that person, they may have become experts in using a particular tool, a particular process, and really gotten their use of that tool down to a fine science. They are using it to its maximum potential. So their professional identity is somewhat focused on their competence and their ability in that tool. And if someone comes in and says, “Guess what, we’ve got to change things up,” I can understand the disconnect. “Why? I’ve been doing this well. Why?”

Well, you touched on the whys a little bit. A lot of times it can be a situation, for example, where the current content and the way it’s put together is no longer supporting the business goals of the whole company. It’s a bigger picture thing. So, people get hung up in their world and their focus on this particular little track of content and the processes and the tools, but maybe that track isn’t fitting the big picture anymore.

So that can be one reason why things need to change. And one of these kinds of business drivers can be a merger or an acquisition. When you end up combining two companies, you can have two separate workflows. Both of them are not going to win. They’re just not. That’s redundant. That can be a reason for that. But again, I mean, I have a lot of sympathy for these people. A lot of times they are asking this for legitimate reasons. Why is this happening? Why am I having to do this? That’s when you’ve got to help them step back and look at the bigger business situation.

CC: Yeah, that makes sense. How much does their role change? I mean, I know every case is different, but what could a content creator expect in their role change? Are they minor changes? Is it top to bottom, a completely different process of doing things? What does that typically look like?

AP: It really depends on the situation and how the processes are running now. Some processes may be more efficient than others. Some may need a whole lot more help to help meet those business goals. So there is that situation. There’s this entire spectrum basically you have to look at and what you kind of need to do is figure out ways of mapping how people are doing things or that mindset to the newer, more efficient content operations. Help people understand this is going to become this, that’s going to become this, and so on. So it’s almost like a content modeling exercise. It’s more of a process model matching where you are saying, “This is how you were doing it, this is how you’re going to be doing it.”

And if you can start making those connections and explaining those things without diving immediately into the tech, because a lot of times that is a huge turnoff and runs people off. Don’t jump in talking tools first.

And there have even been some times where we have gone in as an organization, a consultancy, and maybe talked a little too high up the tech scale, and we realized it and course-corrected and brought things back down. You can talk about things without getting too techy, too mired in the tools upfront, and I think that helps give more of a comfort level as you start talking about change, which is a scary thing.

CC: Yeah, that’s a really good point. Actually, I feel like I’ve experienced that a little bit. I’m starting to use some systems on our team, not very many, not nearly as much as the other technical experts on the team, but I’ve used Oxygen a little bit, and that was only after hearing what it does, everything we say about structured content, all the benefits and the vision behind that. I feel like that has really helped, because the technical aspect of it was a challenge for me. When I was just trying to publish my changes, I know it was such a simple thing to do, but it took me forever; I had to get help from the team. But because I knew the vision behind it and the purpose of why we were using this tool and why we write it this way, versus just pulling that into a Google Doc, that kind of thing, which I’m used to, that really helped.

AP: Right. No, and what you’re describing is very much moving from a model of collaborative authoring and reviewing in a Google Doc, simultaneous shared editing, to an XML authoring tool, which is what Oxygen is. You were writing things in small, modular XML chunks that we then remix and put together to create different things. In the case you’re talking about, it’s a book that we have out, the Content Transformation book. Years ago when we were doing books, we would do it in desktop publishing. Now we’re doing it in XML. That shift you’re describing is very much what you have to do, but it was on a much smaller scale because it was really just you in this case.

CC: Yes.

AP: And some organizations, you may be talking dozens of people, and imagine what you went through times 12, 24, 36 people. That’s daunting. You have to be very careful how you approach that.

CC: That totally makes sense, and I like that you tied that back to earlier: you have to explain that change, explain what’s happening and the purpose behind it. Speak at a higher level first so everyone can feel comfortable before you move into the technology. Because I really feel like that’s also what helped set me up for success. And I would say I use it very minimally; that’s not a big part of my role. But if this is going to change the core functions of your role, I could see how people would be pretty intimidated or frustrated, or feeling lots of things about a massive change like this.

AP: And again, everybody’s going to have a slightly different perspective. There are going to be some people who recognize the big picture immediately and say, “I get this. I understand why we need to make this change.” Some people may have been doing something similar in another job, so they have experience with this. If you can get these people to act as proponents, evangelists, I kind of hate that word, but it’s actually a pretty good fit in this case, who can help others see that big picture when you are making this kind of shift, that’s great, because it’s not just a third party, a consultant, an outsider telling you what you have to do. It’s someone that you’ve worked with and that you know saying, “Yeah, this makes sense to me. This is what we’re going to do.” And they can bring people on, turn the tide, and get people to understand why this change needs to happen.

CC: That’s a great point. It builds a lot of confidence when someone you know has already been through the process and it ended up being a success, or they found their way to being comfortable in their role through it. I hadn’t thought of that. In other situations where that transition as a whole has been navigated really well for the team, what do you think the key points are that set the team up for success, either changing that “why do I have to change?” mindset or avoiding it altogether?

AP: I mean, basically, it’s good communication. That sounds so elementary, maybe not a helpful answer, but it absolutely is. You have got to go in there with very open communication, a very open mindset: let me lay it all out on the table for you, let me explain everything to you. Going in there with a more dictatorial style, “This is how it’s going to be, it’s this way or the highway,” good luck with that, because you’re going to need it.

CC: Yeah. No one responds well to that, and you don’t have your opinions and perspective validated. And like you said at the very beginning of the podcast, feeling frustrated with the change or overwhelmed by it is completely valid, because change is hard. I like that you brought it back to communication. Communication is, I wouldn’t say simple, but a straightforward answer that may seem like a simple solution, and yet so many areas would be improved with it. It’s just hard, I think, for people to communicate. So how do you as a consultant help empower good communication?

AP: There’s a very sad truth behind consultancy, and that is you can go in as a third party and say the exact same things people in-house at that company have been saying for weeks, months, and no one’s been listening to them, but you come in there and say the same thing or rephrase it a little differently and all of a sudden light bulbs go off. I know it is maddening to the employees who were screaming, “I’ve been saying this the whole time,” but that’s just life. It’s how it is.

There’s one other angle here where I think consultants are helpful, and that is figuring out the reasons people may be resisting change. The whole “competency in my tool set” thing we talked about earlier in the podcast is one of them, but there are some other, shall we say, more negative, nefarious things that a consultant can generally spot from a mile away. There are going to be some people, and only some, this is not everybody, who may have created processes or made things more difficult to basically justify their existence, to make themselves look more valuable than perhaps they are. I know this sounds terrible, but I have seen it multiple times. They have created something convoluted so they can make themselves the hero.

CC: The ones, yeah.

AP: They are sometimes going to try to sandbag your project to keep things from happening because they are threatened by it. That, to me, is distinctly different from the very valid, “I’m really good at this tool. This has worked well. Why are we changing?” Those are two very distinct things and I don’t want them to be conflated because they’re different, but there can be some negative things going on when you’re changing processes and you need to be aware that that reality is there.

CC: That’s a good point, because I’m sure most organizations don’t expect that. You certainly don’t hope for that. So that’s a good warning flag to know about. And you mentioned that as a consultant you can see it a mile away. What are some of the ways you can spot those things more readily than the organization itself can?

AP: It is because we have gone in and done it so many times. How many times are you going to change processes in your career? Maybe once, maybe twice; I guess it depends on how much you float around. A consultant, though, will do it multiple times in a year with multiple different clients, sometimes concurrently. You build all this experience, and as you build it, you can hone it and use it to help other people. That’s the difference. And even if you don’t hire a consultant to come in during a time of process change, if you can bring someone on board, maybe even as an employee, who has been through this before and can act as a mini consultant in the sense that they’ve been through it before, that can also be a very valuable way to help with this kind of situation.

CC: That sounds like it. So looking at management’s perspective now, whether that’s high level management or direct supervisors that are hearing this feedback, what do you recommend they do? How do you recommend they respond when they hear this kind of feedback from the team?

AP: Again, it comes to communication. You have to explain, these are the drivers for why we’re doing what we’re doing. This is why the current things don’t work anymore. I need your help to get things more in sync with where we are headed as an organization. That’s one way you can do it. And notice, I didn’t mention tech in there at all. Don’t lead with tech if you can help it. It always comes up, like I said. People who are very good at a tool, the first thing they’re going to do is say, “What tool are we going to use?” A lot of times you may not know the answer to that question when you’re getting rolling because maybe you need to assess the situation, have the consultant figure out what’s the best fit for you. So again, don’t race to the tools and don’t let people in your organization race to tools as the primary part of that conversation. It won’t end well.

CC: That’s a good point to keep in mind that people will probably want to know that, which would make sense. I mean, if I was going through that big of a change, that would probably be my first question, too.

AP: And sometimes the answer to that is I don’t know yet. We’re working on it.

CC: That makes sense. But I like the emphasis on having that transparency, having that good communication to say even if we don’t know what the tool is yet, here’s some of the reasons this change is happening. It also sounded like there may be some proactive or preventative communication that managers can have to share the vision, or maybe giving a heads-up about the change even before the process starts, just letting them know.

AP: Absolutely, before. You don’t just announce this. This needs to be a very deliberate, thought-out process. And part of that deliberate process is the communication and getting the ball rolling. Basically, it’s like a pre-project kickoff. You have got to start talking to people before you even start doing anything with the consultant or with new tools or whatever the new thing is. Get those communications rolling early and make them as two-way as possible. As a manager, a director, you need to take in feedback, synthesize it, and figure out what you can do to mitigate the concerns people are having. And you’ve also got to filter: are these legitimate concerns, legitimate worries people are having, or is this somebody just sandbagging things because they refuse to change? You’ve got to make that call. Again, nobody wants to believe there are people in the organization who will do that, but usually there’s at least one, unfortunately.

CC: Yeah, that’s hard. And I’m sure that’s why it’s helpful to have a third party, like you said, whether that’s a consultant or whether that’s just another employee that’s been through the process or some kind of outside voice that can help with that, that can really take the pressure off of trying to identify that that’s going on while you’re also just trying to navigate the change. Because I’m sure there’s a ton on a manager’s task load just trying to navigate the change.

AP: Right. But even so, you can’t let communications stop. That can’t be what goes away because you’re in a jam.

CC: Yeah.

AP: Again, it won’t end well. Keep those communications up and running as long and as hard as you can.

CC: Yeah, that’s a good point. And we do have a lot of other podcasts and articles that touch on this subject. So if you look at our website, there’s a lot of change management articles because that’s kind of what this process is called, right, Alan? Change management is how we refer to how team members navigate the transition, but also how the logistics of the transition happen.

AP: Right. And another component of this is not just communication, it’s also training. People need to know how to use the new processes, the new tools. You can’t just put something together and say, “Have at it.” It doesn’t work like that. There are going to be best practices that are very specific to your organization and your use of that tool set. And you need to communicate those thoroughly to your staff. And training is one way to do that.

CC: Yeah. All right, I think that’s a good place to wrap up for now, but we will be continuing this discussion in our next podcast episode. So Alan, thank you so much for being here.

AP: Absolutely.

CC: And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

If you need a consultant to guide your team through a transition, we can help!


The post “Why do I have to work differently?” (podcast, part 1) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:39
Optimize learning and training content through content operations (podcast) https://www.scriptorium.com/2023/06/optimize-learning-and-training-content/ Mon, 05 Jun 2023 11:23:33 +0000 https://www.scriptorium.com/?p=21958 https://www.scriptorium.com/2023/06/optimize-learning-and-training-content/#respond https://www.scriptorium.com/2023/06/optimize-learning-and-training-content/feed/ 0 In episode 145 of The Content Strategy Experts Podcast, Bill Swallow and Christine Cuellar discuss the impact content operations has on your learning and training content, and how to make the most out of this valuable asset. 

“If the company is looking to implement something within a specific time frame for a very specific business need, and that gets delayed at the beginning when training is being developed, it’s going to snowball down. So, your six-week delay on getting content out the door might turn into a six-month delay on getting the program rolled out.”

— Bill Swallow

Related links:

LinkedIn:

Transcript:
Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about why content operations is really important to think about for your learning and training content.

Hi, I’m Christine Cuellar, and with me today I have Bill Swallow. Hi, Bill!

Bill Swallow: Hey there.

CC: Thanks for joining us.

BS: No problem.

CC: So we’ve been talking a lot more about learning and training content. I know it’s been coming up in a lot of new projects, client conversations, and I’d love to dig more into it and understand just from a really basic perspective, what is learning and training content? What do we mean by that?

BS: I think probably most people are fairly familiar with training content in general; it’s content that guides you through learning something. But the scope of that is broadening quite a bit, and it’s actually been broad for quite some time. You have everything from instructor-led classes to textbooks to online learning and learning assessments. There’s a myriad of different types of training out there, and increasingly, organizations are looking for better ways of managing all of that information they’re constantly churning out for a variety of different audiences.

CC: Okay. When we talk about learning and training, are we talking about the educational space or are we talking about any space that has training?

BS: It really could be anywhere. The educational space is a good one, so certainly institutions of learning, whether it be schools, universities, or what have you, but a lot of corporations have a good deal of training content as well, particularly in areas of manufacturing where people really need to be instructed on the correct ways of performing certain operations; otherwise, it could risk injury or death. And then of course, you have all the regulated industries as well, whether it be in any kind of manufacturing, any kind of development, or even in finance or healthcare or what have you. There are very specific things that people need to do in a very specific way, so it’s important for them to have all of this training content so that their people know what they’re supposed to do and how they’re supposed to do it.

CC: Gotcha. Okay. And then to define what we talk about with content operations, what do we mean by that? Because I know that’s a term really similar to content strategy, you see it in a lot of different industries and a lot of different places. So, what do we mean by content operations?

BS: Down to its essence, content operations is the way that you approach writing, editing, distributing, publishing your content. So it’s the how of what you’re doing. So it really encompasses the entire spectrum of working with content.

CC: Okay. So I know we’ve talked about in previous podcasts that this applies beyond just product and technical content. This applies to learning and training, this applies to marketing. Any content that you’re creating falls under content operations. So what are some of the unique challenges that you have to think about when you’re specifically producing learning and training content versus other kinds of content that we’ve talked a lot about?

BS: The most unique challenge with learning content is getting your arms around the sheer scope of information that’s required.

CC: Really? Okay.

BS: A lot of people don’t understand exactly how much work goes into producing a series of trainings, whether online, instructor-led, self-paced, or what have you. And what we’ve heard from a lot of different companies is that there needs to be a more efficient way of managing that process, so that people aren’t writing the same thing five, six, seven, eight times, freeing them up to make sure that the content is correct rather than making sure that everything is formatted absolutely perfectly for every single place it needs to go.

Likewise, there’s the case where the training might be provided in multiple different ways. The same exact training could be written down so that someone can read it and understand what they need to do. It could be delivered in an instructor-led class, whether that be in person or online, and it could be as part of an e-learning sequence where people go into a self-paced portal and take the training there. And what we’re seeing is that there’s a lot of manual work to make sure that all of the information is updated in all of those different places. A lot of times it comes down to these tools that they’re using just don’t talk to each other very well so they have to cut and paste or copy paste from one place to another. And then when something gets updated, they have to remember all the different places where they’ve copied and pasted this information.

CC: Yeah. Which is probably not going to happen. I mean, there’s probably something that’s going to get missed, or it just would take a lot longer.

BS: Yeah. Or they need lots of different steps of approval for each piece of content that they’re developing, which also takes time.

CC: Yeah. And like you mentioned earlier, a lot of the content in these trainings has life-saving information about how to operate things correctly or do things correctly. When the stakes really can be life and death, you want to be sure you have the most accurate information in those trainings, because even missing just one spot could be really crucial. And it sounds like having content operations in place to create your learning and training content makes you more scalable, because you’d be able to deliver more content faster. It also helps with your delivery time, because maybe you wouldn’t have to go through all of those stages of approvals if some of the tools take that burden off of the writers and the managers. Is that accurate to say, do you think?

BS: I’d say it’s fairly accurate.

CC: Okay.

BS: I think what’s more important here is that content operations really is a mix of different things. It’s a process for how you’re developing your content, it’s having the right tools in place for the right job, and it’s having a very specific workflow at every stage of the content development and delivery procedure. I don’t want to say it’s an assembly line, but it’s more of a complete agreement on what we’re doing and what we’re using to do it, and making sure that each thing being done in the content development and delivery process is done to maximize the benefit being provided. So first and foremost, it’s getting the content correct and making sure that you’re not putting wrong information in there. The other is making sure that you’re not spending time rewriting something that was already written, and not having to spend hours upon hours fiddling with a particular layout for a particular piece of delivery. And finally, it’s being able to make sure that once the content is ready to go, it gets to where it needs to go as efficiently as possible.

CC: Yeah. I noticed that you mentioned having the right tools doing the right function is something that we focus on in content operations, and that completely makes sense but I didn’t even think of that before. People have good tools in place but they’re not using them correctly, or they could actually have a better fit that they don’t even realize. Is that a big problem that we encounter a lot?

BS: It’s fairly common. It’s not a horrible problem but it does cause a bit of churn, especially when you’re trying to share content with other people, because one person may be doing one particular thing and another person might be doing something quite different.

An easy way to look at it is developing a Word document, let’s say. One person is handed a template and they follow that template to the letter. They use every single style in there. They tag everything exactly how it’s supposed to be. And the other person just goes in there and writes and formats it and maybe chooses styles here and there based on whether it looks good to them. So they don’t necessarily follow the template. Now, if you want to move content from one document to another or you need to update the template, one document’s going to reformat very well, and the other document is going to require an awful lot of cleanup.

CC: So that leads me into another question, can you walk me through what a typical content project looks like for learning and training content when someone’s looking to get better content ops for their learning and training content? It sounds like that project probably starts once they’re experiencing a lot of pain in the process, so there’s probably a good amount of learning and training content that’s already been created and I’m assuming has to be moved over. Can you walk me through that timeline and what people can expect initially and how the project proceeds?

BS: Sure. And you’re right, the projects usually begin with someone identifying a very big problem that isn’t being solved with immediate fixes. So they’ve tried a few things, they’ve made some slight improvements, but they’re still seeing that a lot more needs to be done, and they may or may not know what needs to happen to make the changes they need to see. A lot of times people will reach out because they are becoming increasingly overwhelmed with the amount of content they are producing. Other times you’ll see people reach out when they go through some type of merger and suddenly have training content coming from two, three, five different organizations that all needs to be aligned into one particular brand, one particular focus, or what have you. Or they’re just changing up their complete tool set and they’re looking at, “Okay, we have groups A, B, and C using different tools and we want to use a completely different architecture for developing our content. We need help getting our arms around this.” So there are a lot of different reasons, but a lot of it comes down to understanding that they need an efficient way to improve their processes.

CC: Okay. And you mentioned a couple of people that often reach out, and it sounds like the people who are experiencing the most pain and not getting it resolved are the ones who do. What roles do the people who generally reach out for better content operations, or for a solution they aren’t really sure exists, tend to have?

BS: It does vary. We hear from everyone, from those who are producing content, who understand there’s a problem and they’re poking around for ways to make things better, all the way up to some executive level person or director level person who’s in charge of making things better and needs some help figuring out how they’re going to make this happen.

CC: Yeah. What are some of the things that they may have noticed? I’m sure they’ve heard complaints from their team, but if they’re not actually experiencing the pain day-to-day, what are some of the ways that it gets, I guess, big enough that they start to notice?

BS: I think the big one there is making sure that they are delivering content on time. So if they are constantly behind in rolling out training to various different groups, that’s certainly a problem because, like we talked about earlier, you don’t want to be in a situation where someone doesn’t know how to perform a specific operation and someone gets hurt along the way, especially in those cases. It’s somewhat easy to forgive someone for some amount of data loss or lost time or something like that, but it’s quite a different thing when you’re sending ambulances to the office or to the facility. So we want to make sure that’s not happening.

And also, it can have to do with being able to roll out programs and new initiatives. So if the company is looking to implement something within a specific time frame for a very specific business need, and that gets delayed at the beginning when training is being developed, it’s going to snowball. So your six-week delay on getting content out the door might turn into a six-month delay on getting the program rolled out.

CC: That’s true. So in the big picture they’re just seeing content being delayed, things starting to slow down and in turn slowing other business processes down?

BS: Hopefully they’re just starting to see it.

CC: Yeah, yeah. Hopefully it’s very, very early on. So on the flip side of that, when a company is able to implement strong content operations in their learning and training content or really throughout their whole organization, what are some of the benefits they get to see aside from just it’s less painful? Which I know is probably the biggest benefit because that’s why they’re coming for content ops in the first place.

BS: I think it really depends on what the goal of the improvements are for a particular organization. But generally what they will start seeing is things being more efficiently done and all the players involved know what they’re supposed to do, how they’re supposed to do it, where to look for things, who to contact for things, and what the next step in the process is going to be. So ultimately there’ll be a better idea of how they’re producing this throughout the entire life cycle of the content chain.

CC: Okay. Yeah, that sounds great. That sounds like a lot less burden on the team as well. I’m sure that everyone involved appreciates that. It just sounds a lot better.

BS: One other aspect of making these improvements is being able to reduce the amount of time that a lot of this work takes, because it is a very tedious process to develop all of these different types of training. A team can reduce the amount of time spent authoring content, for example, by reusing content rather than copying and pasting it, being able to take what someone else has already written and use it wholesale rather than copying, pasting, rewriting, and so forth. If they have solid templates in place and writing practices that support that template use, then they can see a lot of publishing time reduced as well. And likewise, chances are they’re probably translating all of this content for many different audiences as well. So the more they have their arms around developing the source content, the easier it’s going to be to get the translation work done.

CC: Yeah, absolutely. One other question I was thinking of was that it sounds like a big part of this is tool selection, making sure you have the right tools in place that are helping out the whole process and automating what you can. What is involved in the people side of things, as far as people adjusting to a different form of content operations? What does that adjustment look like?

BS: It can be tricky. People in general, and it’s not necessarily a bad thing, are resistant to change. They’ve been working some way for five, six, seven years or longer, and suddenly they’re being asked to work a different way; it can be a little daunting. And at times it’s easy to sit back and say, “I don’t understand why I have to work differently. I’ve been producing good stuff for years. Why do I have to do it differently?” And sometimes what we do is take a look at the whole picture and try to paint a very clear picture of why the change benefits everyone.

And there also has to be communication and understanding that it’s going to be a give and take. You may lose your favorite authoring tool, or you may not get to write or rewrite the content 100% in your own way; there may be a very specific way of writing now. But the goal of the training is really what is going to drive the change. What is needed to deliver this training? Who needs it? Why do they need it? Why does it need to be, for example, completely consistent across the board no matter where it’s delivered? Looking at that bigger picture and the bigger wins is a good way of framing it.

CC: Yeah. Absolutely. If you as a listener are interested in learning more about how we do all of this at Scriptorium, we are going to be at some more learning content conferences in the future, such as TechLearn in September of 2023. So there’s going to be more opportunities for you to meet our team, talk more about this and ask more specific questions.

Bill, is there anything else you can think of when it comes to learning and training content that you want to be sure people understand about why content ops helps this content and these processes so much?

BS: I think the biggest takeaway is not so much looking for small wins but it’s looking at how you can make your training development process as efficient as possible and as effective as possible. Both go hand in hand. You can’t sacrifice one for the other. If you sacrifice effectiveness for efficiency, then you’re just really good at pumping out bad training content.

CC: That’s true.

BS: And likewise, if you sacrifice it the other way, you’ve got really good training content that’s going to be available to people at some point in the future.

CC: TBD. Yeah. Yeah, that’s a really good point. Well, thank you so much. I really appreciate you taking the time to talk about this and just help us understand more about learning and training content and content ops.

BS: Thank you.

CC: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Optimize learning and training content through content operations (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:55
AI: Rewards and risks with Rich Dominelli (podcast) https://www.scriptorium.com/2023/05/ai-rewards-and-risks-with-rich-dominelli/ Mon, 22 May 2023 11:18:01 +0000 https://www.scriptorium.com/?p=21938 https://www.scriptorium.com/2023/05/ai-rewards-and-risks-with-rich-dominelli/#respond https://www.scriptorium.com/2023/05/ai-rewards-and-risks-with-rich-dominelli/feed/ 0 In episode 144 of The Content Strategy Experts Podcast, Alan Pringle (Scriptorium) and special guest Rich Dominelli (Data Conversion Laboratory) tackle the big topic of 2023: artificial intelligence (AI).

“I feel like people anthropomorphize AI a lot. They’re having a conversation with their program and they assume that the program has needs and wants and desires that it’s trying to fulfill, or even worse, that it has your best interest at heart when really, what’s going on behind the scenes is that it’s just a statistical model that’s large enough that people don’t really understand what’s going on. It’s a model of weights and it’s emitting what it thinks you want to the best of its ability. It has no desires or needs or agency of its own.”

— Rich Dominelli

Related links:

LinkedIn:

Transcript:

Alan Pringle: Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. Hi everyone, I’m Alan Pringle. In this episode, we are going to tackle the big topic of today, artificial intelligence, AI. And I am having a conversation with Rich Dominelli from DCL. How are you doing, Rich?

Rich Dominelli: Hi Alan. Nice to meet you.

AP: Yes. We have talked back and forth about this, and I expressed I have a little bit of concern about touching this topic. There is so much bad coverage of AI out there right now. Clickbait-y, garbage-y headlines, breathless reporting, and I’m hoping we can kind of temper some of that and have a discussion that’s a little more down to earth and a little more balanced. And let’s start talking about what you do at DCL, and then we can kind of get into how AI connects to what you’re doing at DCL.

RD: Sure. As you know, Data Conversion Labs has been around since 1981, and we are primarily a data and document conversion company. My role at DCL is as an architect for the various systems there, which covers a wide variety of topics, including implementing workflows, doing EAI-style integrations to obtain new documents, and also looking for ways of improving our document conversion pipeline and making sure that conversions are working as smoothly and automatically as possible.

AP: And I’m hearing a lot about automation and programming and I can see AI kind of fitting into that. So what are you seeing? How are you starting to use it? And you may already be using it at DCL.

RD: So AI is a very broad term, and I feel like it’s something that’s been kind of shadowing my career since the dawn of time. Back in the Reagan era in the 80s, when I was graduating from high school and looking to start my college career, I was told at the time not to enter computer science as a field, because computer programming had maybe two or three years left, and then computers were going to be programming themselves with CASE tools, and there wouldn’t be any careers for computer programmers anymore except a couple of people here and there to push the button to tell the computer to go. That obviously hasn’t panned out.

AP: No.

RD: Although I feel like every few years this is a topic that starts cropping up again. But at DCL we have used what we would call machine learning more than AI. And I guess the differentiation there is machine learning is using statistical analysis to process things in an automated fashion. For example, OCR and text to speech were both pioneered by Ray Kurzweil.

AP: And OCR is, just for the folks who may not know.

RD: Sure. Optical Character Recognition. Taking text or printed words or even handwriting, analyzing it, and generating computer-readable text out of it, taking that image of a file and converting it to text. As I said, Ray Kurzweil did some early pioneering work on that in the late 80s and early 90s, and eventually worked on models of the human mind and comprehension. And I think that’s what people are envisioning now when they say the word AI. But even the panorama mode in your camera is a version of machine learning and AI. It takes the ability to stitch images together smoothly and processes that automatically.

Other places at DCL where we use AI on an ongoing basis: we do natural language processing, looking at unstructured text and trying to extract things like references, locations, entity recognition, where we have a block of text and buried in that block of text is a reference to a particular law, or a particular document, or a particular location or person. So that type of work we’ve done. We use it for math formula recognition, if we have an academic journal that has a large number of mathematical formulas, for example. We also do some work for the patent office, and patent applications frequently have mathematical or chemical formulas in them.
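In its simplest deterministic form, the reference extraction Rich describes is pattern matching. Here is a minimal rule-based sketch, not DCL’s actual pipeline (which uses trained NLP models); the patterns and sample text are invented for illustration:

```python
import re

# A simplified, rule-based sketch of reference extraction. Real entity
# recognition, as described above, uses trained statistical models, not
# hand-written patterns like these.
PATTERNS = {
    "law":    re.compile(r"\b\d+\s+U\.S\.C\.\s+§\s*\d+\b"),
    "patent": re.compile(r"\bU\.S\. Patent No\. [\d,]+\b"),
}

def extract_references(text):
    """Return each matched reference with the category it matched under."""
    found = []
    for category, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            found.append((category, match.group()))
    return found

sample = ("The claim cites 35 U.S.C. § 101 and relies on "
          "U.S. Patent No. 5,123,456 for prior art.")
print(extract_references(sample))
```

The limits of this approach are exactly why statistical NLP earns its keep: hand-written patterns break on variant phrasings, which is where machine learning takes over.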

AP: Sure.

RD: Pulling that information out and recognizing that it is there to be extracted would be an application of AI that we use all the time.

AP: With the large language models that we’re seeing now, a lot of them are kind of reaching out and people can start experimenting with them. What are you seeing in regard to those kinds of situations? I don’t know if public facing is the right word, but the stuff that’s more external to the world right now.

RD: It is certainly the most hyped aspect of AI right now.

AP: Exactly.

RD: … where you can have a natural language conversation with your computer and it will come back with information about the topic you’re looking for. And I think that it has some great applications for things like extracting or summarizing text. It’s a little risky, though. For example, I have a financial document, a 10-K form from IBM. Buried in that document is a list of executive officers and a statement of revenue. And I ask ChatGPT, “Given this PDF file, give me a list of executive officers.” And interestingly enough, it does come back with a list of executive officers, but it’s not the same list that appears in the file. It’s a list that it found somewhere else in its training data. When I say please summarize the table on page 31, it does come back with a table, but the information that appears in it is not what is on that page of the PDF at all. In the artificial intelligence world, this is called a hallucination. Basically the AI is coming back with a false statement. It thinks it’s correct, or it’s trying to convince you it’s correct, but it’s not.
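One crude guard against this class of hallucination is to verify that every item a model extracts literally appears in the source document. This sketch is hypothetical, not something from the conversation; the names are purely illustrative:

```python
def unsupported_items(extracted, source_text):
    """Return extracted items that never appear in the source document,
    a crude grounding check that flags likely hallucinations."""
    normalized = source_text.lower()
    return [item for item in extracted if item.lower() not in normalized]

source = "Officers: Jane Alvarez, Robert Chen."
# Suppose a model returned one officer from the file and one it pulled
# from its training data instead:
answer = ["Jane Alvarez", "Thomas Whitfield"]
print(unsupported_items(answer, source))  # flags the name not in the file
```

Exact substring matching is deliberately strict; a production check would also have to handle paraphrase and formatting differences, which is where it gets hard.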

AP: Yep.

RD: So that is very concerning to me, because obviously we want things to be as accurate as possible when you’re doing document conversions. And if that doesn’t occur all the time, if it comes back with accurate results most of the time, but for, let’s say, 2 or 5% of the files that I throw at it, it comes back with fiction, that’s not an acceptable thing, because it’ll be very hard to detect. It looked really good until I went back and said, oh wait a minute, where did it get that from?

AP: We have done some experiments, and I’m sure a lot of people listening have too. Like, I asked for a bio on myself and it told me that I worked at places where I have never worked. So yeah, it’s not reliable. And I think there’s another element here too that scares me beyond the reliability. A lot of these models are training on content that doesn’t really belong to the person who put together the engine that’s doing this. It’s compiling copyrighted content that doesn’t belong to them. I think there are a lot of legal concerns in this regard. I was talking with someone on social media about how you can maybe use AI to break writer’s block. Neil Tennant, the songwriter and vocalist of the group the Pet Shop Boys, recently said, I have a song that I tried to write 20 years ago, and I put it away in a drawer because I couldn’t finish the lyrics.

I wonder if AI could look at the song and the kind of work I’ve done and help me figure out how to finish some of these verses. Now I may turn around and rewrite them and change them, but it might be a way to break writer’s block. And I see that being a useful thing even for corporations. Put basically all of your information into your own private large language model that doesn’t leak out to the internet. It’s internal. Then you can do some of the scut work, like writing short summaries of things, seeing connections maybe that you haven’t seen. But the minute you get other people’s content, their belongings, other people’s art involved, it becomes very squishy. And I’m sure there are liability lawyers just going crazy right now thinking about all this kind of stuff.

RD: Well, you certainly see a lot of that in the stable diffusion space, the art space.

AP: Yes.

RD: Where AI is being trained on outside artists’ work and are very easily able to mimic those artists often without their permission. I do think you touch on a very important point there, actually two. One, the fact that anything you type into OpenAI by default is being shared with,

AP: Right.

RD: … OpenAI. And as a matter of fact, the company Samsung just banned OpenAI’s tools for all of its employees for that very reason, because employees had taken to using them for summarizing meeting notes and things like that, and they discovered very quickly that trade secrets were leaking because of that.

AP: Intellectual property. Not a problem, let’s just share it with the world! Yeah.

RD: Yeah. So actually what Samsung is doing is exactly what you said. They’re making an in-house large language model for their employees to continue to be able to do that type of work using that. The other aspect of what you touched on, which is where I think the real sweet spot is right now, using these tools as a way of augmenting your ability.

AP: Yes.

RD: Especially as a developer, just because that’s my space.

AP: Sure.

RD: As a developer, most developers have Stack Overflow or Google open when they’re trying to research how to attack a problem properly. “What’s the best way of solving the problem?” Now you have your pair programming buddy ChatGPT, and you can say, “Hey, I need to update the active directory with this, and how do I do that?” And ChatGPT will spit out working code. Or even better, I can throw code that is obfuscated, whether intentionally or not …

AP: Right.

RD: … at ChatGPT, and it will produce a reasonable summary of what that code is attempting to accomplish. And that is fantastic. And you see tools like Microsoft Copilot, which they’re doing in conjunction with GitHub. Google also has a suite of Bard tools for helping you do that, and that type of thing is starting to leak into other spaces. So Microsoft Copilot, for example, is now being integrated into Office 365. So it will help you while you’re writing your memo, while you’re working on your Excel spreadsheet, while you’re working on your PowerPoint: rephrase things, come up with a better approach. In Excel, it’s great because it’ll tell you, well, this is the best way of approaching this macro, for example, or this formula, and that type of thing is, I think, fantastic.

AP: Sure. And I’m more on the content side of things and we’re seeing some of the similar things that you’re talking about. For example, the Oxygen XML Editor has created an add-on that will hook into ChatGTP, PT. Look at me getting that wrong. I do it all the time. FTP, GPT, sorry.

RD: Too many acronyms.

AP: Too many acronyms floating around here. So basically it will, for example, look at the content you have in a file and write a short summary for you so you don’t have to do it yourself. That could be a very valuable thing, but again, do you want people in the world seeing or getting input from your content? Probably not. So if you could create your own private large language model and then feed everything into that, I see there’s a lot of value, because it will help, for example, a lot of people who are writing in XML; it can help clean up their code like you were talking about. Or you could take some unstructured content, and it could do probably quite a passable job of cleaning it up, adding structure to what was unstructured content. So I do see some very realistic uses there that could be very helpful. And do I see these things taking away someone’s job? Not right this second in this regard, but I see it basically taking something that’s not so much fun off their plate so they can focus on more important things.
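To make “adding structure to unstructured content” concrete, here is a minimal sketch of the target shape: wrapping plain paragraphs in a DITA-style topic skeleton. The element names follow DITA conventions, but a real conversion, LLM-assisted or not, also has to infer sections, lists, tables, and semantic markup:

```python
import xml.etree.ElementTree as ET

def wrap_as_topic(topic_id, title, paragraphs):
    """Wrap plain paragraphs in a minimal DITA-style topic skeleton.
    This illustrates only the *target* of unstructured-to-structured
    conversion; the hard part is inferring the structure itself."""
    topic = ET.Element("topic", id=topic_id)
    ET.SubElement(topic, "title").text = title
    body = ET.SubElement(topic, "body")
    for text in paragraphs:
        ET.SubElement(body, "p").text = text
    return ET.tostring(topic, encoding="unicode")

print(wrap_as_topic("t1", "Install the widget",
                    ["Unpack the box.", "Plug it in."]))
```

The deterministic wrapping shown here is trivial; the value an LLM might add is in deciding which sentences form a step list, which are notes, and so on, and that is exactly where the accuracy concerns above apply.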

RD: Absolutely. The most recent phrasing I saw for that is that it replaces that junior programmer most groups have who gets the scut work. This is the person who’s going to do the eight days of data entry to convert everybody over to a new system or that type of thing. That type of work nobody wants to do, but that’s what junior developers get stuck with.

AP: And that is very true, and there is a writer strike going on right now, and part of the concern with that strike is content that may be created by AI. Now, is AI going to write a really good script right now? Probably not. Could it write something that is the starting point, the kernel that someone can then take and do something bigger with, clean it up? Yes. And that may eliminate junior writer positions. So there is some concern, very similar to what you’re talking about. There is this situation where we have to think about how are people going to get into an industry when AI has taken away the entry level jobs. That’s going to be something very difficult to tackle, I think.

RD: I suspect you’re right. But on the other hand, you end up in this collaborative space where if you do have that writer’s block, like you said earlier.

AP: Sure.

RD: This gives you somebody to bounce ideas off of and have a conversation with about the subject, about the program, about the article, or about whatever, the song you’re trying to write, which is fantastic. Now at DCL we have had some success. We are doing some work where we’re using a large language model to extract and associate authors and institutions, for example, from documents. And we have great success in that. Usually we can programmatically determine it, but on those fuzzy edge cases, and I think that’s where ChatGPT and large language models fit in, when it’s a really fuzzy edge case that’s difficult to accommodate programmatically. We’re actually using it and having good success at matching authors and affiliations on a consistent basis and double-checking the work that we’re attaining programmatically.

AP: That’s great.

RD: For having your own ChatGPT clone, there is a lot of work out there. There’s GPT4All, there’s Mosaic, there’s a bunch of things where you can download a large language model to your local machine and run it, and the performance is not as great as this massive monolith that OpenAI has going, but it’s not bad depending on what you’re trying to do with it. It’s not quite as advanced as GPT-4. But the nice thing about the open source community and their approach to this is you’re starting to see people iterating constantly. So Facebook was working on their own large language model, and, intentionally or not, there’s some debate about that, it was leaked out to the internet, and it became this iterative community in the machine learning space where people were constantly iterating on this model, expanding the model, growing it.

You can access it now through Mosaic, you can access it through Alpaca, and you can access it through GPT4All, and you can actually have those conversations running completely local without ever leaving your PC. So for those types of things, I think it’s great. Now, is it perfect? No. For example, a very easy test. There’s actually a YouTuber named Matthew Berman who tracks a lot of this, and he has a spreadsheet of about 20 tests he gives any new large language model, and a very simple example is most large language models still fail the transitive test. In other words, if A is greater than B and B is greater than C, is A greater than C? Or if John is faster than Fred and Fred is faster than Sarah, is John faster than Sarah? A lot of them fail that test. They just come back with an erroneous answer. The other issue you see is a lot of the AI models are not being updated constantly. So they’ll still think it’s 2021, for example.
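A checklist like the one described can be sketched as a tiny evaluation harness. Everything here is hypothetical: `stub_model` stands in for a real LLM call, and the two questions mirror the transitivity examples above:

```python
# A toy version of the kind of reasoning checklist described above:
# each transitivity question has a known answer, and a model is scored
# on the fraction it gets right.
TESTS = [
    ("If A > B and B > C, is A > C? Answer yes or no.", "yes"),
    ("If John is faster than Fred and Fred is faster than Sarah, "
     "is John faster than Sarah? Answer yes or no.", "yes"),
]

def stub_model(prompt):
    """Stand-in for a real LLM call; this one happens to always pass."""
    return "yes"

def score(model):
    """Fraction of checklist questions the model answers correctly."""
    correct = sum(model(q).strip().lower() == a for q, a in TESTS)
    return correct / len(TESTS)

print(score(stub_model))  # 1.0 for the stub
```

Swapping `stub_model` for an actual API or local-model call turns this into the kind of regression checklist that makes “a lot of them fail that test” a measurable claim rather than an anecdote.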

AP: Right. And what you just said kind of reminds me of something. All this somewhat overblown talk that AI’s going to take over the world. Well, AI’s not going to take over the world if the content that it’s basically scraping, and I know that’s really simplifying things a whole lot, if that content is not good, it’s not updated, humans aren’t putting intelligence in it, it’s not going to be that useful. We still have to provide the underpinnings for a lot of the intelligence in these systems. So are our brains going to be replaced today? Probably not.

RD: No. But the bar is getting lower and lower as time moves on.

AP: Fair. That is fair.

RD: It’s definitely getting better. For example, OpenAI has updated ChatGPT so it’ll now actually go out to the net and get more up-to-date information. It may not have internalized that information, but it will actually perform a web search, extract information that way now, and come back with it. And that was released recently. You now have work going into how quickly you can train a model, which is a huge thing. GPT-4 has been trained on 100 trillion parameters, which took weeks and weeks of time to train, and to do a new one using that methodology would continue that curve. It would take months to train a new one. But there’s now work being done on, okay, if I have a pre-trained model, how do I quickly iterate on that model so that it doesn’t take me weeks? It may just be a question of ingesting new information on a daily basis, a little bit of news feeds or that type of thing.

AP: Sure. Let’s talk about risk to wrap up here. I brought up the copyright angle. What do you see as a big concern here, your biggest concerns?

RD: So, there’s a couple of things that are big concerns of mine. One, I feel like people anthropomorphize AI a lot.

AP: Yes.

RD: They’re having a conversation with their program and they assume that the program has needs and wants and desires that it’s trying to fulfill, or even worse, that it has your best interest at heart when really, what’s going on behind the scenes is that it’s just a statistical model that’s large enough that people don’t really understand what’s going on. It’s a model of weights and it’s emitting what it thinks you want to the best of its ability. It has no desires or needs or agency of its own.
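“A model of weights emitting what it thinks you want” can be illustrated at toy scale with a bigram model: count which word followed which, then emit the statistically most likely continuation. Real LLMs are enormously larger, but the point stands: it is prediction from learned statistics, with no agency. The corpus here is invented:

```python
from collections import Counter, defaultdict

# Train: count, for each word, which words followed it and how often.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Emit the statistically most common word seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat", seen twice after "the", versus "mat" once
```

There are no goals or intentions anywhere in `follows`; it is counts all the way down, which is the anthropomorphism point in miniature.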

AP: Yeah, I want to make t-shirts: “Large language models are not people.” So yeah.

RD: The other thing is, and there’s some press about this, we’re starting to talk about bias.

AP: Yes.

RD: A good example of that, or a not-so-good example of that, is when you have an AI model that hasn’t been trained on anything but Western culture. It’s inherently biased toward American values, American positions on the world. What the AI will spit out may not be culturally acceptable in other places, and vice versa. I mean, an AI trained in China is probably not going to give you the same response for things that you care about in America.

AP: Yeah.

RD: Also, a lot of these companies have inherent rules, and there’s actually a game going on. Microsoft’s AI started as a program code-named Sydney, and there’s an ongoing game where people doing prompt hacking or prompt engineering try to discover all the rules inside Sydney. It’s things like, well, Sydney will never call itself Sydney, and things like that. And it almost starts devolving to the point where you’re dealing with Isaac Asimov’s three laws of robotics or RoboCop’s prime directives, where you have a list of instructions that are overriding the basic approaches the AI can take. This is probably getting too philosophical for a content transformation podcast, but these types of things will color responses. So if you are asking an AI, when it’s ingesting a program, to emit certain key characteristics, those key characteristics may be shaded by these rules, may be shaded by this training.

AP: And that training came from a person who inherently is going to have biases, right?

RD: Exactly.

AP: Yeah. Yeah.

RD: So that type of thing is a problem.

AP: Yeah. I mean, AI in a lot of ways is a reflection of us.

RD: Yeah.

AP: Because it’s, a lot of times, parsing us and our content, our images, and whatever else. This has been a great conversation. It went some places I didn’t even expect, and that is not a criticism, trust me. So thank you very much, Rich. I very much enjoyed this, and it’s good to have a more balanced kind of realistic conversation about what’s going on here. I appreciate it a whole lot.

RD: Okay. It was very nice talking to you.

AP: Thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post AI: Rewards and risks with Rich Dominelli (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 21:58
Balancing CMS and CCMS implementation (podcast, part 2) https://www.scriptorium.com/2023/05/balancing-cms-and-ccms-implementation-part-2/ Mon, 08 May 2023 11:45:57 +0000 https://www.scriptorium.com/?p=21908 https://www.scriptorium.com/2023/05/balancing-cms-and-ccms-implementation-part-2/#respond https://www.scriptorium.com/2023/05/balancing-cms-and-ccms-implementation-part-2/feed/ 0 In episode 143 of The Content Strategy Experts Podcast, Gretyl Kinsey and Christine Cuellar are back discussing the common tripping points companies stumble over while implementing their content management system (CMS) and their component content management system (CCMS). This is part two of a two-part podcast.

“If you’ve got people working in a web CMS and you’ve got people working in a CCMS, and they’ve always worked separately, and then suddenly you ask them to come together and collaborate and maybe have one group or the other choose a new tool so that they can share content, but they’ve never had that process of working together, there’s going to have to be not just a tool solution to get them working together, but a people solution and a whole different mindset in the way that they work together.”

— Gretyl Kinsey

Related links:

LinkedIn:

Transcript:

Christine Cuellar: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. Hi, I’m Christine Cuellar. In this episode, Gretyl Kinsey and I are back continuing our discussion about implementing your CMS and your CCMS. And today, we’re specifically talking about the tripping points that your company should watch out for and other tools to consider as you’re going about this implementation. This is part two of a two-part podcast. Thanks, Gretyl, for coming back on the show.

Gretyl Kinsey: Absolutely.

CC: So what are some more things that can trip organizations up when they’re implementing their web CMS and their CCMS?

GK: Yeah, one big one is kind of what we were talking about with those competing priorities, right? So we talked about having the competing priorities between the creative side, more of the marketing, customer-facing side, versus people who need more structure in their content because of legal and regulatory requirements. And what this often looks like at an organization is that you’ve got your web CMS people and your DITA CCMS people and those competing priorities. And one thing that we see a lot of times as a tripping point, when they have to look at aligning on tool selection or getting new systems working together, is figuring out how to strike that balance we talked about so that they’re not competing priorities, but instead they’re aligning their priorities. So we do see a lot of common areas where they struggle to come into alignment.

A few examples of things where I’ve seen this go wrong are where each group is choosing its own tools without communicating about it. That happens a lot of times, especially if there isn’t really proper involvement from management. People have just been told, this group, this department, pick a tool that is going to improve what you’re doing. And of course you have a whole other department somewhere else that’s being told the same thing. They’re not talking to each other about it at all. And then eventually down the road, they’ve picked their tools, they’re all established, and then something comes up where they realize that they needed those tools to be compatible for sharing content or connecting to each other, and they can’t. Because when they were choosing the tools, they didn’t think about that. They didn’t talk to each other. So then they’re stuck in a really expensive and painful mess to fix if they need to get past that problem.

So that’s something that we have been called in as consultants to help fix several times, and that we’ve seen organizations take that path without really stopping and thinking, before we evaluate and choose a tool, we’ve got to get all the different groups who might have a use for that tool or need to integrate tools talking to each other. So that’s one big thing that can go wrong. Another one is related to how the upper management at an organization does or does not prioritize content. So one issue we see a lot is where, let’s say one type of content gets prioritized over another, and we’ve seen some examples where they have a very heavy emphasis on training content. Let’s say this organization has an educational focus. It’s all about learning. It’s about the training materials. So maybe they focus on something like a learning management system, but they don’t realize that they also have to deliver some legal documentation.

They also are going to be marketing their services, and they don’t think about aligning all the different tools that these groups are going to be working on. And once again, it’s too late, right? And so what happens when the management is really prioritizing one type of content over all the others, is that when these different groups have those competing priorities, management’s decision makes one group the winner and everybody else the losers when it comes to their priorities.

CC: Oh, yeah.

GK: And so that can make things really tricky if the groups need to work together, but there’s clearly one group being favored and given all the budget and all the resources while the others’ needs are being ignored. And then of course, the even worse situation is when upper management does not care about content at all. They don’t really think about content as a priority for the business. And so that’s bad for any or all groups who produce content. So if you’ve got the situation of, let’s say, people in a web CMS and people in a CCMS, for example, and those groups both need to be aligning and improving the work that they’re doing, but management doesn’t care about content at all, then that just leaves the groups having to fend for themselves, and it can kind of turn into a free-for-all of competing priorities because they don’t have any guidance from management.

So I think that’s a really important thing when you’re looking at content from more of the bird’s eye view where we come in as consultants, is we look at not just the content creators, but we look at the different levels of management and particularly the highest level and how much do they prioritize content, and how does that affect their decisions, because that obviously has an effect on the groups producing the content and really can make or break the work they’re doing.

CC: Yeah. And it also affects the business because content is a really big asset in your business, to really bring value to your customers to make your operations flow very smoothly. So I would say that the business is also losing out when you don’t prioritize content. So a lot of times, that resource does go, I guess, untapped.

GK: Absolutely. And then we also see groups struggling to align on their priorities and their tool selection because they’ve always been siloed. So that gets back to sort of what we talked about earlier where you want to avoid those silos just because this is something that can happen. If you’ve got people working in a web CMS and you’ve got people working in a CCMS, and they’ve always worked separately, and then suddenly you ask them to come together and collaborate and maybe have one group or the other choose a new tool so that they can share content, but they’ve never had that process of working together, then there’s going to have to be not just a tool solution to get them working together, but a people solution and a whole different mindset in the way that they work together.

So that can really be challenging for tool selection as well. Because if these people have never even talked to each other, and then you’re asking them to come together and evaluate some new software for one or both groups, then it’s going to make that process, I think, a lot trickier than if they had been working together all along.

CC: Okay. So we’ve seen how upper management not prioritizing content causes a lot of issues. How would you recommend upper management start to be active so that the content departments, all of them, can really feel supported, and they can get the most out of their content?

GK: Yeah. So I think it comes down to a lot of what you said, actually realizing that content is an asset for your business and making it a priority. And then within that, upper management should be taking an active role in helping these groups to choose the tools that are going to work for everyone and benefit the entire organization, and not just leave it up to an individual department to say, “Hey, make a decision.” If you are going to invest in a new system for your organization, then I think it really behooves you as a manager, or especially even at the C level, to make sure that you have a hand in that evaluation and that the tools that you’re selecting are going to benefit the entire company. And then another thing is realizing all the different things that content can do for the business and continuing to invest resources in it.

And that’s not just tools, but also people, making sure that your content creators are going to be maximizing the value and the potential of your content. And the more that you put into that content, the more you’re going to get out of it. So making it that priority. And then of course, taking a leadership role in fostering communication between groups that might have those competing priorities or those competing needs. So this is an area, where I think in particular, we’ve seen it be helpful to bring in an outside voice like a consultant, just because even if you are in upper management and you’ve got sort of that bird’s eye view of your organization, you still are not going to necessarily have the objectivity of an outsider. And so…

CC: Yeah.

GK: … it might help a lot if you’re struggling to get groups who have been, let’s say working in silos, or who are going to have to choose maybe a CMS over here and a CCMS over here. Getting them into alignment, it might just help to get a consultant in to really hone in on what some of the communication issues are, and then help move past it so that you can actually make that selection.

CC: Yeah, absolutely. Getting an outside perspective, I just feel like that always helps because they can see things that you’re not seeing or thinking of and be that third party unbiased voice that really guides you in the right direction. So what are some other tools that might need to be connected to a CCMS as well? I know we’ve talked about… I mean, the big one we’ve been talking about is a CMS and a CCMS. Are there other tools that need to be connected to a CCMS or even to the CMS?

GK: Absolutely. So one example, which I mentioned a little bit earlier, is an LMS, or learning management system. And again, if you are an organization that has a lot of training content, a lot of educational content, a lot of learning material, whether for in-person training, e-learning, or any other kind of non-classroom training, then a learning management system might be really beneficial for the process of storing and creating and managing that particular type of content. Another example would be a TMS, or translation management system, and then lots of other related translation tools. This is something we see really commonly if you have to deliver translated or localized content, and it becomes more and more important to focus on those particular tools the more languages you have to translate into, because this is really an area where cost can be an issue, but also where you have to get it right, because a lot of times there are legal and regulatory requirements around delivering content in certain locations, in certain languages.

And so that’s something that you really want to make sure that you’re doing correctly so that nobody’s going to get into any trouble. And then another example of a tool you might need to connect is a DAM, or digital asset management system. And this is for storing and managing things like images, videos, and other digital assets that are used in or delivered with your content. A lot of times when you look at something like a CMS or a CCMS, those usually have the capability of storing digital assets, but where we see organizations leaning toward using a DAM is if your content is very heavy on digital assets and not just text, or if there’s a lot of sharing of digital assets that has to happen across groups. I know in particular, and we’ve seen this, where for example, if you’ve got heavy machinery and you have a lot of diagrams of not just the machinery, but all the little pieces and parts that go into it that you might be in charge of selling or doing maintenance on, that’s the kind of organization that might have a DAM.

Or if material has to have a lot of screenshots and illustrations and things like that, where if you look through any documentation, you would see just as many images as words, if not more, then that would be an example of an organization where having a DAM might work. And with all of these kinds of tools, it’s sort of what we talked about with the connectivity, that you can have either the level one connectivity where they’re actually integrated, or more of the level two where they’re disconnected but can still share content. And this is where it becomes really important to think about a content tool chain or content ecosystem rather than just a disconnected set of tools, right? Thinking about how you’re going to make all of these different tools that you need for different parts of your content processes actually work together as a single working ecosystem.

So if you do need a CMS and a CCMS, and then maybe an LMS, a TMS, a DAM, or any of these other things, then it’s important to think about how you can get them all working together efficiently so that you can get the best value possible out of your overall content production.

CC: Absolutely. And as you’re listening, if you’re in a similar situation trying to make these decisions or figure out what to do with all of these tools that we’ve been talking about, if you ever get stuck, there’s someone who can help, and it’s us. So if you ever have questions, feel free to contact our team. We’d love to help support and get you the information that you need.

GK: Absolutely.

CC: Gretyl, is there anything else you can think of that you want our listeners to be thinking about or understand about balancing their CMS and CCMS implementation that we haven’t already covered?

GK: I think the one last piece of advice I will leave everyone with is to take the time to plan, take the time to really think about and evaluate your priorities, and don’t rush into any purchasing decisions when it comes to these kinds of tools. Like I mentioned, these implementations are major undertakings. They are major investments. They shouldn’t be taken lightly. And if you really want to get the most out of having these different kinds of connected tools or connected systems, then it is imperative to take that time upfront and really do a proper evaluation so that you don’t get stuck with a really expensive purchasing decision that was not the best one for you.

CC: Awesome. Thanks. That’s great feedback. Well, thank you so much, Gretyl, for taking the time today to talk about this — twice!

GK: Absolutely. Thank you.

CC: And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Balancing CMS and CCMS implementation (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:36
Balancing CMS and CCMS implementation (podcast, part 1) https://www.scriptorium.com/2023/04/balancing-cms-and-ccms-implementation-podcast-part-1/ Mon, 24 Apr 2023 11:45:45 +0000 https://www.scriptorium.com/?p=21885 https://www.scriptorium.com/2023/04/balancing-cms-and-ccms-implementation-podcast-part-1/#respond https://www.scriptorium.com/2023/04/balancing-cms-and-ccms-implementation-podcast-part-1/feed/ 0 In episode 142 of The Content Strategy Experts Podcast, Gretyl Kinsey and Christine Cuellar discuss balancing the implementation of a content management system (CMS) and a component content management system (CCMS). This is part one of a two-part podcast.

“When you have two types of content produced by your organization and different groups in charge of that, and maybe they’re in two different systems, that it’s really important to get those groups working together so that they can understand that those priorities don’t need to be competing, they just need to be balanced.”

— Gretyl Kinsey

Related links: 

LinkedIn:

Transcript:

Christine Cuellar: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. Hi, I’m Christine Cuellar, and in this episode we’re going to talk about how to balance the implementation of both your CMS, which is your content management system, and your CCMS, which is the component content management system. This is part one of a two-part podcast. I’m here with Gretyl Kinsey. Hi, Gretyl!

Gretyl Kinsey: Hi, Christine. How are you?

CC: I’m doing great. Thanks for being on the show. So, Gretyl, before we get started, I just want to kick off with a real basic question, and I know that we have a lot of content on this that we’ll link in the show notes. What’s the difference between a CMS and a CCMS?

GK: Sure. So a CMS or a content management system is generally a broader term, and that’s for a tool or a system that allows your organization to store and manage content. And this could cover a lot of different types of content storage and management and the operations around that. A lot of the common ones that we see are things like storing print-based documents such as PDF files or updating and publishing your web pages. So this is really more of an umbrella term that you see for content management.

And then in a narrower scale, a CCMS or a component content management system is a specific type of CMS, and that’s used for creating, storing and distributing structured topic-based content. So, for example, we see this a lot with XML and more specifically DITA content. And the component portion of that name is talking about the fact that you have content in individual topics or chunks, and those are called components, and those are assembled into the deliverables that you send out to your customers.
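The topic-based components and assembly that Gretyl describes look roughly like this in DITA. This is a minimal sketch; the id, filenames, and content are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<!-- One component: a small, self-contained concept topic -->
<concept id="battery-safety">
  <title>Battery safety</title>
  <conbody>
    <p>Store batteries away from direct heat.</p>
  </conbody>
</concept>
```

A DITA map then assembles components like this one into a deliverable:

```xml
<!-- A map references topics by file, in delivery order -->
<map>
  <title>User guide</title>
  <topicref href="battery-safety.dita"/>
</map>
```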

CC: Gotcha. Okay. So why do a CMS and CCMS need to connect? What kind of integration are we talking about here that we need to be balancing?

GK: Sure. And I want to talk a little bit here about what exactly we mean by connect first, because there are two different angles to this that we see a lot. So one level of connection is when you have actual integration or connectivity between the systems where they hook in and talk to each other. And some systems are actually built with this in mind. So they’re designed to connect out of the box. So you might have a tool that has a web CMS and a CCMS under the same brand, and they’re designed to hook together and communicate. And then other times you could have CMSs and CCMSs that have the ability to connect with each other, but it’s not built that way out of the box. So it would be some kind of custom connector, like an API, that’s built to allow them to have that integration.

And then the second level of connection that we talk about is where you have the ability to send content back and forth between two disconnected systems. So rather than that direct connection or integration, this requires a compatible content format and a process for getting that updated content from one system to the other. And this could be a one-way or a two-way connection, but it’s sort of more of a bridge rather than a direct integration where the systems are not actually connected, but they can still share content.

CC: Gotcha.

GK: And so when we’re talking about either of these levels of connectivity, either these types of connectivity, the ultimate goal is to prevent the CMS and the CCMS from becoming disconnected silos, because that is something we do see in a lot of organizations and it can have some real consequences for your content development. So one big one is inconsistent information coming out of each of those two systems. So if you’ve got all of your content in a CMS and then you’ve got a separate CCMS silo and they can’t connect or share content at all, you might have completely different processes for checking that content, making sure the messaging is the same, and if it’s inconsistent, then that looks bad to your customers at best, and then could get your organization into legal trouble at worst. So that’s one really important reason why we want to avoid those kinds of silos.

Another reason is that there can be difficulties with brand consistency and messaging. So this is not just the consistency of your content itself, but how it looks and feels to your customers. And, of course, this can be a really big headache if you ever need to go through a rebranding.

CC: Yeah, the marketer in me is just cringing right now, as you mention it.

GK: Oh, yes. And this is actually a reason that some of the organizations have come to Scriptorium for help: they needed to go through a company rebranding and they had their content in a bunch of different silos and couldn’t figure out a quick or efficient way to make that rebranding happen. And then of course, that problem can and does get magnified if the rebranding is due to a merger or an acquisition, because if you’ve got two or more companies coming together and they’ve all been working in silos, then suddenly how do you get everything rebranded under one name as quickly as possible and as painlessly as possible? If you didn’t have those silos to start with, that could happen a lot more effectively and with a lot less hassle and headache for everyone.

And then of course, another reason to avoid silos is that you waste a lot of time and resources creating and publishing the same content potentially in two different places. If you don’t have a way to share the content, then there may be times when a marketing group that’s working in a CMS needs the same information. So things like technical specifications, if you’re selling a product that people need to know that information about, but then also that same information would obviously be in your tech docs and if you have two disconnected silos like a CMS and a CCMS that can’t integrate or share content, then people would be writing that information twice. And that just wastes a lot of time.

CC: So when it comes to a timeline of the, I guess what we typically see when we’re implementing a CMS and a CCMS, do they get implemented at the same time? Does one of the systems typically come first? What does that standard timeline, I guess for lack of a better word, look like?

GK: Yeah, and I don’t know that there really is a standard per se. I can say that unfortunately they are almost never implemented at the same time.

CC: Oh, gotcha.

GK: If you do have that opportunity to do a complete overhaul and get a CMS and a CCMS at the same time, I would say definitely take advantage because that is pretty rare. What we see more often is having one system that’s already been chosen and established, and then you have to choose another one that will be compatible with it. So whichever one your organization has put in place first, that sort of gives you your parameters and your requirements for the other. From our perspective, we do see more organizations that already have an existing web CMS, because that is a little bit broader. It might manage more parts of the content lifecycle than a more structured environment like a CCMS would. And so then what will happen is they’ll realize they have a need for structure, realize they need a CCMS to manage that content, and then need to choose a CCMS that will align and be compatible with the existing web CMS.

CC: Okay. So what are the pros and cons of each of those: implementing together versus separately, that kind of thing?

GK: Yeah, sure. And one thing I also want to point out about that is that there’s a big “it depends” kind of factor, which I know is the thing you hear from every consultant. But I know that one thing we always look at before we even get into the pros and cons is the limitations that come into play. And so, one of the big ones we see at almost every organization is the budget. So how much budget do you have? Who controls that money? Are there timeframes in which you have to use it? All of that can really make a lot of your decisions for you about implementing, whether it is one system or more than one system at the same time.

And then of course, you have deadlines and timeframes that are set by your organization around their production schedule and other goals. And so that can also be a really big limitation for implementing a new system. And then of course, it’s important to think about what business needs are actually driving the decision to implement a new system or maybe more than one new system in the first place. So all of those are the big considerations that we think about first.

And then when we think about the pros and cons, like I said, if you are implementing a CMS and a CCMS at the same time, that’s rare, so you want to take advantage of that opportunity, because you can evaluate both systems at the same time instead of already being locked into one tool and then having to make another tool fit with it. The major advantage is that you have more freedom to look at your options and pick something that’s going to be a really good fit for you without those limitations or parameters.

But of course, that being said, sometimes those parameters can be good. So if you already have, let’s say the typical scenario, you already have a CMS in place, maybe if you didn’t have that in place, you would be looking at five or six CMSs and then five or six CCMSs as well. And you have a lot more tools to evaluate in the first place. You have a lot more areas of compatibility to assess. And so that timeframe is going to take longer to make that decision. And you can get bogged down by indecision-

CC: That’s true.

GK: By saying, maybe we have two or three options that would all be good fits for different reasons. But if you already have, let’s say your CMS in place and then you’re just looking for a CCMS that can play nicely with it, maybe you’re only narrowed down to two or three options. And it takes a lot less time to really find out what the right decision is. So there are pros and cons in that way.

CC: That’s true.

GK: Another thing to think about also is just risk. Because implementing any system is a huge undertaking. It takes a long time. You have to go all the way from the evaluation to making the selection, to getting everything stood up and ready to go. And then there’s always a little bit of experimentation and churn as you actually start getting content into that system and getting your publishing lifecycle going. And so if you’re doing that for more than one system at the same time, there is a lot more risk of something possibly going wrong, not going according to plan. And then of course, the investment that you have to make into an implementation is quite large as well.

So there’s definitely, I think, less risk in only implementing one system as opposed to trying to do two at the same time, even if you do have the advantage of, we got to choose these together so we know they’re going to work well together. So yeah, there definitely are pros and cons for whichever way you end up doing it. A lot of times it won’t be your choice. It’s going to be limited by all the various circumstances I talked about at your organization, but things to think about just in case you’re ever in that situation.

CC: And so something that comes to mind is, I know that when you’re implementing systems, whether it’s the systems we’re talking about here, or just systems in general, a lot of times organizations can get stuck when both systems have competing priorities and that can cause a lot of problems in how things are implemented and in the timeline of how things are implemented, all this kind of stuff. So are there competing priorities for a CMS and a CCMS?

GK: Sure. And one big one that we see a lot is that when you’re talking about the people who are actually developing your content, your authors, your subject matter experts, contributors, a lot of them tend to see creative freedom in how they create the content versus consistency as competing priorities. The less structure you have for your content, the more creative freedom it gives you, but then it also introduces a lot more room for inconsistency and human error. And so there’s always that balance to strike. And if you have groups at your organization where, let’s say, one group needs that creative freedom, so maybe your marketing team, they need the ability to have full freedom of their design and what information they’re putting where, but then you’ve got another group that they need the rigidity that comes with topic-based authoring and with having information delivered in a specific way for legal and regulatory requirements, then obviously something like structured authoring is going to benefit them.

I think it’s important that when you have both of those types of content produced by your organization and you have two groups that are sort of in charge of that, and maybe they’re in two different systems, that it’s really important to get those groups working together so that they can understand that those priorities don’t need to be competing, they just need to be balanced. That’s always the challenge when it comes to those priorities is, yes, they seem like they are competing, but really it’s more about striking that balance and making sure that each group understands the importance of the other group’s needs and how they can still work together and share information that needs to be shared, but also still have the ability to work in the way that they need to work to get the content out the door.

There are tools that can help you strike that balance. So, for example, a web CMS can give your marketing team the creative freedom that they need, but also so can some types of CCMSs. So there are ones that use topic-based authoring and those smaller components we talked about, but not an XML structure like DITA. So that might be an option to look into. And then of course, an XML or a DITA-based CCMS can give other groups, like your technical team or your training team, that structure and the components that they need to create that more heavily regulated technical or legal content. So it’s really worth having these different groups explore the options that are out there and help turn what seems like competing priorities into those more balanced or coordinating priorities.

CC: Gotcha.

GK: I think it’s also worth noting that just because your content is structured, so topic-based XML, DITA XML, that doesn’t mean that it cannot be made to look beautiful when it’s published. There are a lot of things that we can do with PDF output, HTML output, all other kinds of output formats to make things look really nice. So you don’t always have to have that unstructured nature to give you the creative freedom for a really nice look and feel. And then also, it can be delivered in creative ways. So because it is componentized, because it’s in little topic-based chunks, that actually lends itself really well to having flexible delivery, to delivering personalized content to different segments of your customer base and to having a lot of different formats that they can receive it.

So yeah, I think we see a lot these days where people can log into a portal and get content served up to them according to parameters they’ve put in about what they’ve bought. We see structured, componentized content used to serve chatbots, all kinds of other things. So there is a certain degree of creative freedom in structured content as well that I think a lot of people don’t always realize from the outset just because there is that structure.

CC: And I’m going to jump in on that because I think when it comes to marketing content, you have more freedom to be creative when a lot of the mundane technical tasks are taken off of your workload, and that is something that structured content allows you to do. So that’s something, me standing on my little soapbox, I get excited about when we’re looking at structuring content and streamlining content operations: yes, you may feel like your creative freedom is a little bit restricted, or maybe it’s a little bit more complicated for you to learn how to get the kind of creativity and design that you want from your published content, but the benefits of having your workload reduced because you’re not focusing on things that you don’t need to be focusing on anymore are really massive. And in the long run, I think that frees you up a lot. I get excited about stuff like that.

GK: Oh, yeah. And I absolutely agree. I do think from the side of people working in structured content, they realize how much more freedom they have when they’re not doing a lot of manual design tasks anymore, when they are free to just write the content they want to write and realize that-

CC: Exactly.

GK: … it can be delivered and mixed and matched and put out to their customers in a lot of different ways. And so it really does, I think, take a little bit more practice in doing things to realize how much more freedom that you can get when you work in structure.

CC: Exactly. Yeah. All right. So I think that’s a good place to wrap up our conversation, but we will be continuing this discussion in the next podcast episode. So thank you so much, Gretyl. I really appreciate you talking about this today.

GK: Absolutely. Thank you.

CC: And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Balancing CMS and CCMS implementation (podcast, part 1) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:31
What is LearningDITA? (podcast) https://www.scriptorium.com/2023/04/what-is-learningdita-podcast/ Mon, 10 Apr 2023 11:30:45 +0000 https://www.scriptorium.com/?p=21869 https://www.scriptorium.com/2023/04/what-is-learningdita-podcast/#respond https://www.scriptorium.com/2023/04/what-is-learningdita-podcast/feed/ 0 March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

In episode 141 of The Content Strategy Experts Podcast, Alan Pringle and Christine Cuellar discuss the story behind LearningDITA, the free DITA training created by the Scriptorium team.

“What we are trying to do with this site is give people a resource where they can go and, at their own pace, learn about what DITA is and how it can apply to their content and their content processes. It’s a way to take some of the technical mystique out of it, to bring it down and help you learn what it is and how it works.”

— Alan Pringle

Related links:

LinkedIn:

Transcript:

Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about LearningDITA, the free DITA training created by the Scriptorium team. 

Hi, I’m Christine Cuellar.

Alan Pringle: And, I’m Alan Pringle.

CC: Alan, welcome to the show. Thank you so much for talking with me today. We’ve just received a lot of great feedback about LearningDITA on LinkedIn. A lot of people are thanking us for the course and talking about how it was a great experience for them, so we thought this would be a great resource to dive into.

AP: Sure.

CC: My first question for you is, what is LearningDITA, for those listeners that have no idea what we’re talking about?

AP: LearningDITA is a free online resource where people can go and take several courses to learn about DITA. DITA is an open source standard that gives you a way to describe your content in a modular fashion. It’s really good for helping you build intelligence into your content, so you can then filter it, sort it, and do that stuff with it.

CC: Got you. Okay. LearningDITA is the free training that the Scriptorium team created many years ago. When was LearningDITA created?

AP: We may have started somewhere in 2014, going into 2015. I think the first course probably came out right around 2015.

CC: Okay. It’s been around for a while. Was there anything like it at the time? Why did you feel the need to create this resource?

AP: Well, I mean, you just heard me describe DITA and you hear things like –

CC: (Laughs) Yes, it’s a lot of words.

AP: You hear “Darwin Information Typing Architecture,” and you may hear from someone you work with, “We may need to use this,” and you’re like, “What is this? This sounds like some scary sh*t. I’m not doing this.” What we are trying to do with this site is give people a free resource where they can go and, at their leisure, at their own pace, learn about what DITA is and how it can apply to their content and their content processes. It’s a way to take some of the, I guess, technical mystique out of it, to bring it down and help you learn what it is and how it works.

CC: That’s amazing. Yeah, that’s a great resource. Who are the experts behind the LearningDITA courses? Who created them? I know you mentioned the Scriptorium team, so who was involved in that?

AP: Well, a lot of the people that you have heard on this podcast have contributed: Gretyl Kinsey, myself, and several other team members. We have written a lot of that content. It’s not completely Scriptorium, I will be very clear on that. We’ve had some other people who have contributed some content, and we appreciate it. We have set this up so that the actual source content for learningdita.com, which is DITA XML files, is freely available in GitHub. You can download it and look at the source. You can treat it or view it as a proof of concept.

This is how DITA works. The source files are DITA, and I don’t want to go too deep into the weeds, but we basically transformed that DITA XML into a WordPress-friendly markup language and sucked it into WordPress, where we use a learning management system that sits on top of WordPress. You’re going to see courses where you go through exercises. There are assessments in addition to reading about things. They’re linked to reference information. There’s all kinds of ways to absorb and understand DITA through LearningDITA. Again, it’s free, and we tried to make it, shall we say, less threatening and very accessible.

CC: Yeah. Yeah. I’m actually taking it right now. I’m going through the courses and my whole career has been in marketing. I know nothing about technical writing. DITA was a whole new word to me when I started this position. If I can do it, anyone can do it, basically. It really has made the concept very down-to-earth for me.

AP: Don’t sell yourself short. That’s one point, and I’m glad you brought this up. People really may assume that DITA is strictly for product and technical content, and that is no longer the case. I think it’s fair to say early on, it was created by IBM specifically for technical content, product content, but it has expanded its reach. The fact that when you’re taking the class, you are using an LMS to consume DITA content that is training content shows you right there that this is not just about user manuals anymore, not by a long shot. The proof is in the pudding. There it is: you’re using learningdita.com and, believe it or not, you’re consuming DITA content. You may not know it, but it’s there under the covers.

CC: Yeah. It’s been really helpful. I’ve always been really passionate about processes, optimizing processes to make everybody’s jobs easier, to make your workflow easier so you can do more, better and easier. Just work smarter, not harder, I guess is a better way to say that. The whole approach to structured content and DITA, it was scary at first to be looking into, but that’s the core concept is, let’s structure things in a way so that we’re flexible, we’re scalable, we’re not making our team repeat things over and over, we’re doing things better in a way that’s more accurate, and I really love it. I still have a while to go, I haven’t completed the course yet, but I love that heartbeat behind what DITA is and what Learning DITA is.

AP: Right, and it’s really trying to bring something that may seem very scary and technical down to earth. A lot of people hear “XML,” that is, “extensible markup language,” and they think they’re going to have to type computer code.

CC: Yeah, that’s what I thought.

AP: Right, that is not necessarily the case. Sure, if you are comfortable typing code, you can type code, but there are a lot of authoring tools and experiences that can sit on top of DITA to hide all that, so you feel more like you’re just using a word processor. But the bonus is, under the covers of that authoring experience, the DITA structure is basically managing your content. Like enforcing a template, it is forcing you to write to a particular structure and to include intelligence about what you’re writing like, who’s the audience? What product is this for? Is this for a teacher or is this for a student? All of those kinds of things, and when you build that kind of intelligence into your content, it makes it much easier to mix and match and assemble and filter and create all kinds of versions and alternatives based on the audience who is consuming your content.
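The kind of audience intelligence Alan describes is carried as attributes on DITA elements, which the publishing pipeline can then filter with a DITAVAL file. This is a minimal sketch; the attribute values and filenames are illustrative:

```xml
<!-- In a topic: the same list serves two audiences -->
<ul>
  <li audience="teacher">Review the answer key before class.</li>
  <li audience="student">Complete the exercises before the next session.</li>
</ul>
```

```xml
<!-- A DITAVAL file for a student build: drop teacher-only content -->
<val>
  <prop att="audience" val="teacher" action="exclude"/>
</val>
```

Passing a DITAVAL file like this to a publishing tool such as DITA Open Toolkit produces output filtered for that audience, without maintaining separate copies of the content.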

CC: That’s great. Like you said, not just product and technical content. Every aspect of content needs to be thinking about that. I know in marketing content, that’s a big thing. Who are we writing to? What’s the purpose of this? Having a structure that forces you to keep that in mind is a no-brainer. It feels like it’s great.

AP: Right, exactly. If you feel that you are in a situation where you find yourself doing a lot of manual work, a lot of copying and pasting, that may be the biggest clue. If you find yourself in a content development process where you’re making multiple versions of the same file, making a change here and a change there, and then forgetting that you’ve got versions 14 and 15 over here that also need that change, that’s the kind of thing that DITA can help with.

If you have any kind of inkling that you might need a better way to make versions of content or to reuse content, take a little visit to learningdita.com, learn a little bit about DITA, and see if it might solve some of your problems. I am not going to sit here and tell you that DITA is a fit for every organization, it is not, but it does address a lot of the common pain points that anybody who creates content in a professional way has to deal with, the kinds of things that a lot of times make their work life just downright unpleasant.

CC: We’ll include a link to Learning DITA in the show notes. Something also to mention, not only is Learning DITA free, but it’s a flexible course, so you can take it at your own pace. You can do a lot of it and then stop, whatever you need to do. It’s not scheduled or anything, it’s as flexible, free, low risk as possible.

AP: Yeah, there are multiple courses, and it starts with the basics and builds upward. Are you going to take all of the courses? No, you may not need to. And I’m going to have to do a refresher here, I’m going to cheat and look at how many courses we actually have, because I don’t remember, let’s see. I think we have 9 or 10 courses right now, so there’s a lot there. Like you said, you take it at your own pace. You can start with the introduction, get your feet a little wet, and then start diving more deeply into the structures that make up the DITA standard.

CC: We talked about this a little bit. Who is Learning DITA for? I know you mentioned that the most common scenario is a manager saying, “Okay, we’re going to introduce DITA, this is what we’re going to start working with,” to an employee who may be thinking, “I have no idea what you’re talking about,” and panicking. So first, is that the only scenario for learning DITA? And second, who is Learning DITA for?

AP: Learning DITA is for anybody who wants to know more about the DITA standard and how it could apply to their professional world, or even outside it. If you have any interest in improving content processes and content operations, even if you’re more of a manager who doesn’t actually create the content but still wants to understand what’s going on with DITA and how it might help your organization, it’s for you. It’s for anybody who wants to understand better content processes and how DITA could provide fixes for problems in their content operations.

CC: How many people have registered for Learning DITA or taken or completed the courses?

AP: Well, we did start in 2015, so there are quite a few. As of this moment, I think somewhere near 15,000 people have signed up for the courses, so yeah, it’s a lot. It makes me feel good to see something that we put together being embraced by the content community, people getting their hands a little dirty, figuring out how this DITA thing works, and doing it at their own speed and sometimes on their own time. My hat’s off to them for digging in and learning these things.

CC: Yeah, I love that it’s such a community-oriented resource. It feels like it’s been so helpful for people. It sounds like people also contribute or give feedback or have asked for other courses.

AP: They have. We have a lot of resources listed on the site and within the courses, and a lot of those point to things that other people in the DITA community and the content community have created. Again, it’s not just about us at Scriptorium. This is about the content world and how you can really improve your content operations by breaking your stuff into the more modular, structured content that DITA supports.

CC: When someone finishes the Learning DITA courses, but then they need more training, they realize, “I’m going to be getting more into this,” where would you point them? What should they do next?

AP: Once you’ve gone through those courses, there’s a good chance you may be in an organization that is looking at implementing DITA. If you need help doing that, talk to somebody who can help you; we do this at Scriptorium, and there are other consultants who do it too. They can help you, for example, set up your workflow and your database workflow, and help you figure out how to map your content to the DITA model. Then, beyond that upfront legwork, the assessment stuff, you may also need help actually standing up and configuring your DITA system and then training people to use that system. We do all of that at Scriptorium. If you need help beyond what we offer for free, we will be more than happy to oblige you and provide consulting and training services to get you set up and running in DITA.

CC: Absolutely. Well, Alan, is there anything else that you want to be sure we communicate about Learning DITA or anything else that’s coming to mind that you really want people to know or understand about the resource?

AP: We appreciate people contacting us. If you see something that’s not quite right or that you don’t understand, we appreciate that being pointed out, and we will do our best to correct it. It’s also a community resource, I can’t stress that enough. We’re trying to demystify DITA, make it less scary, and that’s the point. If you have an idea in your head of how you can contribute and do something along those lines, please do it. I will note that other people have taken our Learning DITA source content and created versions of Learning DITA in German and French, and I believe even Chinese.

CC: That’s amazing.

AP: There are other people who have taken that stuff and then translated it and then used our process to create the same thing in other languages to make it even more accessible and reachable to other people.

CC: Yeah, that’s great. That’s really great. Well, I’m just really impressed with the whole Scriptorium team for coming up with this resource. Since I’ve started, I’ve seen nothing but really positive feedback about it. I love how, as we’ve already talked about, it’s community-oriented; it’s a free resource that helps people really understand. I love the word you used, demystify, because I think that can happen a lot in our jobs: we get overwhelmed by what we don’t know, especially when there’s the expectation that we’re going to do this now, or we need to know this now. It’s great that the team saw that need and fulfilled it with this resource. It’s really great.

AP: Yeah, and it’s always a problem when you’re dealing with technology. There’s always this fear of the unknown involved. If you can cut that fear out, you’re going to have a much better time when it comes time for you to implement a DITA workflow.

CC: Yeah, absolutely. Well, thanks so much for talking about this, Alan, and thanks for being here today.

AP: You’re welcome.

CC: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post What is LearningDITA? (podcast) appeared first on Scriptorium.

Éric Bergeron explains the MadCap acquisition of IXIASOFT (podcast) https://www.scriptorium.com/2023/03/eric-bergeron-explains-madcap-acquisition-of-ixiasoft/ Mon, 27 Mar 2023 12:05:57 +0000

In episode 140 of The Content Strategy Experts Podcast, Sarah O’Keefe and Éric Bergeron, president and CEO of IXIASOFT, share the story behind the MadCap acquisition of IXIASOFT.

“The question that everybody is asking, and we really want the answer to, is this seems like a very sensible combination, but MadCap as an organization has done a really excellent job with their marketing, and much of their marketing has been based on the concept that DITA is not something that you need. Flare is happy and easy and safe and wonderful, and DITA is none of those things. So, when you say this is a bit of an odd combination, I think everybody’s looking at, ‘Well, wait a minute, there’s been a lot of DITA bashing over the past 10 years or so.’ What do you do with that?”

—Sarah O’Keefe

Related links:

LinkedIn:

Transcript:

Sarah O’Keefe: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

Hi everyone, I’m Sarah O’Keefe. In this episode we’re talking about MadCap and IXIASOFT with Éric Bergeron, president and CEO of IXIASOFT. 

Éric, welcome to our podcast.

Éric Bergeron: Thank you very much. I’m very happy to be here today.

SO: Well, and we’re excited to talk to you, since I think the entire industry has been talking about nothing but this merger for the past couple of weeks since the news broke. So I wanted to ask you a couple of questions about what’s happening here, where it’s going, and what it means for those of us who live in the DITA XML world. And I guess I’ll lead with the obvious question, which is: why sell IXIASOFT to MadCap?

ÉB: Yeah, very good question. Unfortunately, I will have to give you some background before answering that question, and I will try to do that very quickly. Six years ago, IXIASOFT was a very traditional software publisher. We were selling perpetual licenses with a yearly maintenance plan. We were installing the system on-prem, customer side. And the product was a desktop application connecting with the backend server. So very traditional.

And six years ago, we decided to change the business model and provide to our customers a SaaS solution. So we had to change the business model to provide subscriptions. We had to change the product to move from a desktop application to a web-based application. We also had to put in place a new team to manage the hosting and the management of the solution. And we knew that it would take approximately five years to do all that work. And we were near the end of that five year period.

So the timing was good for us to ask, “Okay, what’s next for IXIA? What will be the next growth phase? What should we do to continue to grow?” And at that time, MadCap arrived with Battery and they contacted me. And they had a plan. We listened to their plan, and we discussed it with them. And finally, we realized that the timing was perfect. I think the story and the plan, the project, is great, and that’s why we decided to sell. And also because I will turn 60 very soon and I was starting to think about my retirement. It’s true. But really, the driver was the plan, the project. I think they had something interesting to propose, and that’s why.

SO: So what can you tell us about that plan or that vision? What is the vision for the combined company that you can share?

ÉB: And again, I was a teacher in the past, so I need to explain things. For me, there’s a spectrum of solutions on the market. Some solutions provide the ability to manage documents, other systems provide the ability to manage components, and some systems manage components with structure. And I think with the combination of MadCap and IXIA, we will be able to provide those last two to the market. We will be able to provide a component system to create unstructured components with MadCap Flare and Central. And with the IXIASOFT CCMS, we will be able to provide the tool that lets our customers manage components, very structured components.

So that’s the goal, I think: to have a broader offer and propose to the market a solution that will let customers move from unstructured, non-component systems like Word and FrameMaker to Flare and Central. And eventually, if they need more structure, they will be able to move to the CCMS. And I think that’s a great project.

And the other reason why I was interested to proceed with that transaction is also because MadCap had some big customers that outgrew their solution. They were looking for a more structured system, and IXIA will be the place they go. So that will make the IXIA customer base grow. And that was a guarantee for us that we will have more customers, that they will keep the product, that they will continue to improve the product, and that will also increase the customer base. So that’s also an answer to your first question, but that was the other reason why I was interested in that transaction. And I think for the market it’s great to have those two products together in the same organization.

SO: So I know that both IXIA and MadCap have said in the short run, “Nothing is changing. Do not panic. Remain calm.” But looking at this a little more long-term, what kinds of changes should IXIA customers, or for that matter Flare customers, expect in the midterm? Six months, a year, five years, what does that look like?

ÉB: Yeah. For the next six months nothing will change, really. It’ll continue to be the same. However, IXIA for example, we will have a user conference at the end of May in Munich. This year the user conference will be in Europe. And we will have MadCap customers that will come to the IXIASOFT user conference. Because some of the MadCap customers are interested to learn more about DITA and maybe use that eventually. And we will provide to them a path from Flare Central to IXIA CCMS. So those are small changes, but we will start to see MadCap customers maybe more in the IXIASOFT CCMS community. But internally nothing will really change.

Over the next year or two, what we want to do is propose to the market some tools to make content move more fluently from Flare and Central to the CCMS. So we’ll have an importer, for example, to import Flare content into the CCMS. That will arrive probably after the first six months, but it will be there. And that will clarify the path for customers moving from Word to Flare, and eventually from Flare to the CCMS, to DITA. So that we will see in the future.

And more midterm, long term, I can say that Battery, you mentioned Battery previously, we talked about that, they decided to invest in MadCap and IXIA, but they want to continue to make that combination grow. Maybe eventually there will be other acquisitions to continue to complete the offering and to propose to the market a broader offer for people that want to create and publish content. So that will probably happen eventually.

SO: So what do you think this looks like five years down the road? My track record on five years is not very good, I don’t know about yours. But what do you see as the big-picture vision in that longer timeframe?

ÉB: I agree with you, five years in technology is very long. And I’m not the best at visionary things. One thing I really believe is that technical documentation, but documentation in general, will change a lot. We are definitely moving from books to components. In the past we were providing documentation with books and manuals. Now, for me, documentation is more and more a knowledge base. And there will be more and more modern tools to publish that information. Chatbots, for example. A chatbot will ask users questions, and with the answers it will find the relevant content and push that to the end user. We will have tools like Fluid Topics, Zoom and Congility that will be used more and more. So we need to create content that is compliant or compatible with those tools. And I think component systems are very good systems for creating content that can be leveraged by those modern tools.
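As a rough illustration of why component-based content suits that kind of delivery: small, self-contained topics can be scored against a user’s question and served individually, which is much harder with book-sized files. The component store and the word-overlap scoring below are invented for this sketch; they are not any vendor’s API.

```python
# Toy retrieval over a component store: score each component by word
# overlap with the question and return the best-matching component key.
import re

components = {
    "reset-password": "To reset your password, open Settings and choose Reset.",
    "update-firmware": "Download the firmware file and run the updater.",
    "contact-support": "Email support with your serial number.",
}

def words(text):
    # Lowercase word set, ignoring punctuation.
    return set(re.findall(r"[a-z]+", text.lower()))

def best_component(question):
    q = words(question)
    return max(components, key=lambda k: len(q & words(components[k])))

print(best_component("How do I reset my password?"))  # reset-password
```

A real system would use semantic search rather than word overlap, but the point stands: modular components give the delivery tool something small and coherent to return.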

The other thing is, for sure, in the past there was a lot of text, pictures, and diagrams. I’m pretty sure we’ll have more and more video, audio, augmented reality, and virtual reality objects too. So that’s the future of documentation. And our tools will have to provide the functionality to create that content, but also to publish it. So that’s the future of our world, I think. I don’t know exactly how we will navigate that evolution, but I’m sure it’s going in that direction.

SO: So I guess the question that everybody is asking and we really want the answer to is this seems like a very sensible combination, but MadCap as an organization has done a really, really excellent job with their marketing. And much of their marketing has been based on the concept that DITA is not something that you need. That Flare is happy and easy and safe and wonderful, and DITA is none of those things, right? And you don’t need it and it’s just generally not great. So when you say this is a bit of an odd combination, I mean, I think that’s what everybody’s looking at is that, well, wait a minute, there’s been a lot of DITA bashing over the past 10 years or so. So what do you do with that?

ÉB: Yeah, it’s funny that you mention that, because after my first call with Battery and MadCap, I went to the MadCap website. And I looked at it saying, “Oh, how can we work together? We’re so different.” But when you are selling a product, you are doing the best marketing pitch to sell it, and not having a DITA tool, they had to do that. So I fully understand. But we talked about it, and you probably noticed that all that information was removed from their website after the transaction. They had to do that to promote their product, but they don’t need it anymore. And it’s the opposite now: they need to embrace DITA and put DITA in the right place. And it’s true, I still believe that not everybody needs DITA. Some organizations don’t need that highly structured content. And so it’s okay to produce content that is not very structured. If it answers your needs, it’s fine.

Maybe eventually they will need more structure, and the good news now they have a solution for that. We can propose to the market the path to move to higher structured content. And what we want to do is provide tools that will let you move from unstructured components to structured components. So yeah, it was funny to see that on their website, it’s funny to see that disappear now. And now we will put on our website content that will explain the new reality. But I fully understand it was… And you’re right, we were a little bit like an odd couple, but we’re learning to live together now and I really believe that it’ll work very well.

SO: I have some questions about who’s the neat one and who’s the not so neat one, but I think we’ll set that aside. Is there anything else that people should know? Things that I haven’t asked you about, but information that you want to make sure is out there about this merger transition?

ÉB: Maybe one thing I would like to share with you is the fact that, for me, it was my first experience really selling my company, and I was really happy to do it with MadCap. Especially because Anthony, the CEO of MadCap, and I share a lot of the same values. And when you look at Anthony’s history, he founded MadCap 17 years ago with friends. He was working before at eHelp, and they worked together for a long time. They grew organically all those years. And it’s the same for IXIA. If you look at the IXIA team, we have all been working together for a long time, a very, very long time. Twenty years, 25 years, some of them. And we have a little bit of the same experience.

So I think this transaction, this merger, was interesting and went very well, because when Anthony and I were talking, we were in the same place. We were able to understand each other. And I believe that the merger will work because of that, and because people working in both organizations share the same values. And for me it was really, really important. And that’s another reason why I agreed to enter into that transaction: I wanted to make sure that my team and my customers, and I say my, but IXIA is not a one-man show, it was really the IXIA team, the IXIA customer base, I’m sure they will be respected in that process and they will be happy in the future. So that’s just another thing I wanted to say.

SO: Well, and that’s an interesting point because we always talk about how… I mean, the work that we do and everything else, it’s about people, right? It looks like a technology problem, but it’s always about the people. And I guess here again, we’ve fallen or I’ve at least fallen into that trap of saying, tell us about the technology, tell us about the integration. And you’re saying, well actually, as always, it is about the people. So yeah, that’s a great point. I think I’ll leave it there. So Éric, thank you for being here and sharing this background and this information.

ÉB: I was really happy and thank you for the invitation.

SO: And congratulations to you and the whole team and to the MadCap team and Anthony and all the rest of them.

ÉB: Thank you.

SO: And with that, thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Éric Bergeron explains the MadCap acquisition of IXIASOFT (podcast) appeared first on Scriptorium.

Unpacking structured content, DITA, and UX content with Keith Anderson https://www.scriptorium.com/2023/03/structured-content-dita-and-ux-content/ Mon, 20 Mar 2023 11:30:37 +0000

In episode 139 of The Content Strategy Experts Podcast, Sarah O’Keefe and special guest Keith Anderson dive into their experiences with structured content, DITA, and user context.

“My definition of context is anything that affects the cognitive processing of information. […] So, whether you’re consuming information by reading or listening, there are so many factors that affect how you process the context of the content.”

Related links:

LinkedIn:

References: 

  • Floridi, Luciano. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. New York: Oxford University Press, 2014.
  • Duranti, Alessandro, and Charles Goodwin. Rethinking Context: Language as an Interactive Phenomenon. Cambridge: Cambridge University Press, 1992.
  • Stein, Howard F. Euphemism, Spin, and the Crisis in Organizational Life. Westport, CT: Quorum Books, 1998.
  • Stein, Howard F. Nothing Personal, Just Business: A Guided Journey into Organizational Darkness. Westport, CT: Quorum Books, 2001.

Transcript:

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about structured content, DITA, and user context. Hi, I’m Sarah O’Keefe, and I’m here with a special guest, Keith Anderson. Keith is a longtime friend and one of the very few people I think in the world who understands both DITA’s structured content and the world of UX content. So Keith, welcome aboard.

Keith Anderson: Hi. It’s good to be here.

SO: Thanks for coming.

KA: Of course.

SO: So first, give us a bit of a background on structured content and DITA and what your sort of experience is in that space.

KA: Oh, okay. So I go back to SGML days when I was working at a telecom company and we were doing structured content back then, and it was mainly in DocBook, but structured content lent itself really well to being repurposed or single sourced, like we used to call it. There was a point where we were actually single sourcing out the instruction sets for online help, for printed documentation, for instructional design, and we also used them for test scripting. So that’s kind of how I understood the power of structured content.

SO: I just want to note that we are still wrestling with single sourcing and learning content and technical content. So having somebody tell us we did this back in the day, pre-DITA is pretty encouraging.

KA: Yeah.

SO: So then digital transformation comes along, and I think you’ve said that you can’t really apply DITA directly, but you came up with a way of making it work. What does that look like?

KA: Okay, so out of what I would call the mainstream content management systems, only Adobe Experience Manager natively supports DITA. And Adobe has DITAWORLD every year. But when you look at content management systems like SharePoint and Sitecore, they don’t support it. So I was brought on board to do a project a few years ago. It was an online help system, and when I did the content audit, it was like two and a half billion words, and they had been maintaining it in some old tool and then just porting it over to the online help system. But it was taking a lot of time. They were trying to move everything into Sitecore. And a few things that I noticed: one was they weren’t using some of the best Sitecore features, which are inheritance and repurposing content. That’s just built in.

The other thing that they weren’t doing was planning out content to be repurposed. So I got the bright idea that I would use DITA, because when we did our design thinking sessions, we kept coming back to the fact that this was an online help system, and DITA lends itself really well to that. So I came in and I ended up with my own little server and … Let me back up just a second. Sitecore, all it is, is a fancy interface for a bunch of XML schemas. And so I thought, well, theoretically it’s possible to enforce DITA on Sitecore, but DITA broke everything else. And I started doing research, and I talked to a guy in The Netherlands who told me that the surest way to hell was to try to put DITA in Sitecore.

So what I did to circumvent this was content modeling, and I came up with the idea of using DITA as a platform-independent model, meaning that we use it for terminology and we use it for reference, but we can’t technically implement it. So the platform is not dependent on any of the schemas in DITA. And we did that, and it actually helped quite a bit because it did provide us with structure. And then we were able to set up search hierarchies and things like that on the Solr server. Solr is the search engine that ships with Sitecore most of the time, and it worked out pretty well that way.

SO: So you’re saying that essentially you used the concept of DITA reuse, or something like that, but you implemented it without using the standard DITA [inaudible 00:04:39]?

KA: Right. But it was a really good place to refer to. So we used the DITA vocabulary, we used the idea of how DITA content is separated out into topics, and then I introduced topic-based writing to these authors who had been doing very verbose writing on things that didn’t need to be verbose. So we were able to cut out two thirds of the content just by going through and doing that.

SO: So that’s really interesting because it’s one of the big issues that our clients struggle with is this question of, okay, we have web content and we have DITA content, and how do we put the two together? Or how do we integrate them in some way? So in your work, in addition to looking at these sort of structured concepts and putting them in, even if you’re not strictly speaking using DITA or I guess even if you’re not using DITA period, you focused very much on context and the relationship of content and context. So I guess we have to start with the basics, which is what is context or what is your definition of context?

KA: My definition of context is anything that affects the cognitive processing of information. It’s an idea that context is three-dimensional and that, well, the author Luciano Floridi, he created a term called infosphere, and he essentially says that in today’s world, we are living in an infosphere. And it makes a lot of sense because if you imagine context is all around you. So whether you’re consuming information by reading it or you’re listening to it or whatever, there’s so many factors that affect how you process the context of the content. So for example, when I lived in the Chicago area and I took the train downtown every day, I was constantly reading, but was interrupted a lot just by train stops or noise or whatever until I learned to put on headphones just so I could read and focus on that instead of what was happening around me.

So context is very situational. Some things affect you, some things don’t. There are many, many examples of when having more context completely alters the way that you see something. One example I can think of is controversial, but it’s Bill Cosby. With all the controversy that’s happened with him, does that, for you as an individual, affect how you see his life’s work, which was comedy? So there are factors where context utterly changes things over time. And some things you can control, some things you can’t. I think companies like Comcast, who are notoriously hated by most consumers, have trust issues regardless of the intent of content writers in the company. And that’s a context those writers cannot control.

SO: So they have no goodwill and that’s their context.

KA: Yeah. And the flip side of it is the context of creation. And back in the day when we were doing online help, you remember how we would talk about can you write good online help for bad software? I mean listen, we had late-night drunken discussions about this at STC conferences, but I think the modern dilemma for content strategy is can you write good content for a bad corporation or for a bad organization? I think it’s a philosophical issue. How do you build trust? How do you be authentic without engineering authenticity? All of those things are contextual and people pick up on it. It’s like magic. You can tell if somebody has written something under pressure versus they’ve taken their time and they’ve crafted prose. Readers know this and they know it intuitively just because of the way our brains are wired.

SO: So I guess this is really interesting because the canonical example of context is always location. If you’re at this location, you get different kinds of information. If you look up weather, you get weather that corresponds to your current location, and if there’s a tornado warning or something like that, it will give you a very different experience than if your phone knows where you are but you’re looking up a tornado warning hundreds of miles away. The distant one is just, “Hey, by the way, there’s a tornado warning, maybe some traffic,” but if it’s right on top of you, it’s going to give you a different kind of experience because the context matters. Obviously I’m concerned about the tornado no matter what, but if it’s on top of me, I’ve got an immediate “I need to stay alive” problem as opposed to a more academic, distant interest. So what does it look like to have DITA, or generally what you were describing, DITA-like structured content, and context? How does that work?

KA: Well, there are a couple of things that I’ve noticed with it. So context can end up being synonymous with metadata, and that works out really well because then you can have contextual cues built into the metadata for people who want to dig deeper. But when you’re chunking, putting things in structure, and writing agnostic content, that content usually gets assembled almost like a stack of Jenga pieces. And so if you repurpose my instructions, and you repurpose a concept topic like in DITA, and you put concepts and procedures together, they could be written by two different authors, so the style of the prose and all of that needs to be under really strict editorial control for consistency purposes. But with some of the projects that I’ve seen lately, what Microsoft is doing with Microsoft Viva, another good example is Notion. I don’t know if you’re familiar with Notion, but you have these building blocks and you build things on top of each other, and you can have different contributors all building onto the same thing.

All of that stuff taken as a whole is how readers actually take in the information. So inconsistencies in those building blocks will be evident. One way to handle that is definitely having strict editorial guidelines and following them. But the other thing is to have metadata, and to have enough content to orient the users to the whole piece of what they’re about to read: the “every page is page one” idea of producing content.
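One way to picture the Jenga-pieces assembly Keith describes: chunks written by different authors get selected by metadata and stacked into a page, and the same metadata can generate an orienting lead-in so the reader knows what they are looking at. The `Chunk` data model and `assemble` function below are invented for illustration, not the schema of any particular CMS.

```python
# Toy assembly of independently authored chunks: metadata selects the
# chunks for an audience and supplies an orientation line up front.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    body: str
    meta: dict = field(default_factory=dict)

def assemble(chunks, audience):
    # Keep chunks that are unconditioned or that match the audience.
    picked = [c for c in chunks
              if c.meta.get("audience", audience) == audience]
    lead_in = f"For {audience}s. {len(picked)} section(s) follow."
    return "\n".join([lead_in] + [c.body for c in picked])

chunks = [
    Chunk("What a roster is.", {"type": "concept"}),
    Chunk("How to create a roster.", {"type": "task", "audience": "teacher"}),
    Chunk("How to join a class.", {"type": "task", "audience": "student"}),
]
print(assemble(chunks, "teacher"))
```

The generated lead-in is the machine-level version of the “every page is page one” orientation; the editorial consistency of the chunk bodies themselves still has to come from style guidelines, as Keith notes.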

The other thing that I’ve noticed is that when you take agnostic content and you don’t really give it a lot of thought, sentence construction starts to fail. In good writing, you write one sentence after the other, and each one builds anticipation of what the reader is looking for; then you reward the reader so that they keep reading. That’s very hard to do when you have chunks and different people working on those chunks.
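The assembly process Keith describes, where independently authored chunks get stacked into one deliverable at publication time, can be sketched in a few lines of code. This is a hypothetical illustration only, not anything from the conversation; the topic types and metadata fields are invented for the example.

```python
# Hypothetical sketch: assembling independently authored topics
# (a concept plus a task, as in DITA) into one deliverable.
# Topic types and metadata fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Topic:
    topic_type: str   # e.g. "concept" or "task"
    title: str
    body: str
    metadata: dict = field(default_factory=dict)

def assemble(topics):
    """Stack topics in order, like the Jenga pieces in the conversation.
    Inconsistencies between chunks written by different authors
    surface here, at assembly time."""
    return "\n\n".join(f"== {t.title} ==\n{t.body}" for t in topics)

concept = Topic("concept", "About widgets", "A widget is ...",
                {"author": "writer-a"})
task = Topic("task", "Installing a widget", "1. Open the panel ...",
             {"author": "writer-b"})

print(assemble([concept, task]))
```

Because the two topics come from different authors (tracked here in the metadata), any difference in voice or style becomes visible only once they are stacked, which is why the editorial control Keith mentions has to apply across chunks, not just within them.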

SO: Yeah, it’s interesting because I don’t think I’ve ever thought about the … We think about the emotional state of our readers, but I don’t know if we’ve connected that to the idea of context. But certainly in technical communication, the generalized assumption is that somebody who is looking something up in the docs or for that matter in the knowledge base is annoyed or frustrated or angry because they’re blocked. The only reason they’re looking in the docs is because they’re trying to do a thing and they can’t do the thing and they need help. So they are somewhere on the continuum from annoyed to having a tantrum. And it makes for a very difficult writing challenge because as you said, they’re not going to give you the benefit of the doubt. So here we are. So what does that look like? I mean, what does it look like to integrate the ideas around context into your overall content strategy?

KA: Well, what I’ve been working on, on the side, is developing a universal context model that should be conjoined with standards like DocBook and DITA. And the context model would help drive, or maybe not drive, but guide authors as they’re writing as to what should happen next. I’ll think of a completely non-technical example, but something that everybody probably understands: with all of the police shootings that have happened in recent years, I don’t know if you’ve ever seen where the police reports get changed, and then they get released again, and then they’ll update them again. A lot of this has to do with officer trauma, it has to do with different witnesses, everybody’s on an adrenaline rush when they’re trying to get the paperwork started, and then people remember things later. The problem is that a lot of police reports are free-form narratives. They’re not scripted.

So in some ways, the old school green screens, like call centers used to use with scripting, worked a lot better because they guided somebody down to where they needed to be to get something done. So having a context model that underlies the content and helps drive form fields and things like that, I think that’s critical for the content of the future, because as artificial intelligence is growing and large language models are expanding, they still need guidance and they need human interaction. And I almost think that it’s better that the machine learning happens within a more closed system, as opposed to what’s happening with ChatGPT, where the chatbots are learning from all of the internet, ever. I don’t think that’s doing anybody any good. I’ve seen all kinds of horror stories about it already, and Microsoft released their demos for Bing just a few weeks ago. I see a new horror story in the news just about every day.

SO: So what kind of challenges do you see lying ahead? What are you trying to achieve with connecting context into content strategy? And what does that look like? What kind of interesting challenges do you foresee coming?

KA: I think it’s a way of building trust. So let’s take journalism. If you look at really good reporting, you realize that there is institutional knowledge behind it. Larger publications like the New York Times and The Washington Post all have that institutional knowledge, and we make assumptions based on their reputation that they have an editorial process. But because politics have become so divisive, a lot of the articles get picked apart.

A context model on the other hand, might have reporter notes, might have direct quotes from anonymous sources, and then you might have editors who sign off on it and it’s all part of the metadata that maybe you want to know more about the story that you just read, you could actually access. And I don’t think there’s anything wrong with even tying a context model to blockchain for trust purposes. This editor who works for this organization, has this many years in, and it’s almost like having a reputation server to help provide trust. So that way you’re able, as a reader, to weigh how much you trust the news source based on the metadata rather than just taking the article at face value.

SO: Well, you’ve given us a lot to think about. I suspect we could go on for another 20 minutes, or much, much longer, but I think we’ll leave it there for now. Keith, thank you. This was really, really interesting.

KA: I’m glad to be here.

SO: Yeah, a whole bunch of new ideas and we’ll leave some additional resources in the show notes, including I believe Keith’s website and some other bits and bobs that should be useful to people listening to this podcast. And with that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Unpacking structured content, DITA, and UX content with Keith Anderson appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:50
Why information architecture matters (podcast) https://www.scriptorium.com/2023/03/information-architecture/ Mon, 06 Mar 2023 13:00:36 +0000 https://www.scriptorium.com/?p=21807 https://www.scriptorium.com/2023/03/information-architecture/#respond https://www.scriptorium.com/2023/03/information-architecture/feed/ 0 In episode 138 of The Content Strategy Experts Podcast, Gretyl Kinsey and Christine Cuellar talk about a common content strategy trap: what happens when information architecture (IA) is missing, and why you need IA.

“Without IA, you can’t get the most value out of your content. When we think about things like the time it takes to create your content, or getting benefits out of it like reuse, saving money on your translation costs, saving time to market on your translation, all of these things really make your content work for your organization. If you don’t have solid IA in place, it’s going to be really hard to do those things and truly get that value out of your content.”


Transcript:

Christine Cuellar: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

In this episode, we talk about a common content strategy trap, what happens when IA is missing and how you can avoid it. Hi, I’m Christine Cuellar, and today I’m joined by Gretyl Kinsey. Hey, Gretyl!

Gretyl Kinsey: Hello everyone. How are you?

CC: Good, how are you doing?

GK: Doing well.

CC: Thanks so much for joining the podcast. So in a previous podcast, you and Bill were talking about some common content strategy pitfalls, and you briefly touched on this topic, but we wanted to unpack it a little bit more today because it seems to be something that’s commonly resurfacing. But before we dive in, I’m going to pull the newbie card. Gretyl, can you tell me a little bit more about who you are, your role here at Scriptorium and some of the experiences that you’ve had?

GK: Sure. So I have been a technical consultant at Scriptorium for more than a decade now. I started as an intern in 2011 and I’m still here, still learning all kinds of new things with all the different projects that we do. I’m mostly on the content strategy and information architecture side, so I think it’s perfect that we are talking about IA today; that’s a lot of the work I do. I’ve seen all kinds of things, from really, really ideal IA projects all the way to ones that needed a lot more help and a lot more guidance, so I have a wealth of experience to draw on at this point.

CC: That’s great. So can you tell us what is IA for maybe our listeners that don’t know what that is?

GK: Sure. So if you’re unfamiliar, IA stands for information architecture and it is sort of a subcategory under the overall umbrella of content strategy. And IA specifically focuses on things like your content model, metadata, reuse, linking, basically how you plan to organize and structure your content and what decisions need to go into the process of doing so.

CC: Got you. Okay. So why does IA commonly get skipped or overlooked?

GK: There are actually several reasons that we see this happening. One of the big ones is just a lack of resources. So depending on the size of your company, how much budget you have, how much time you have to dedicate to content, how much content expertise that you have on board, you may or may not have the resources that you need to actually plan and create a good IA. So that’s a big reason why it might get skipped. Another one is just not prioritizing content until it’s too late. So maybe putting the resources that you do have into other areas and really thinking about content as more of a last minute or a last resort kind of thing.

And another one is a culture of disconnect around content. So in some organizations we will see a lot of collaboration around content and that can tend to lead to maybe a better thought out IA, but then in other organizations there may be content silos where you have different departments or different groups working on different kinds of content or different pieces of content, and we can see in organizations like that a general lack of collaboration.

And sometimes even if you’re not in silos and you are more interconnected with your technology, there may still be on the people-side, a lack of collaboration. So if there is that culture of disconnect around your content, then you’re probably less likely to have a good IA or to skip it or overlook it. And then another one is mergers and acquisitions. And this is just because when one company acquires another or when multiple companies come together, that’s going to give you a mix of the IA and content processes that each group may or may not have had before and maybe no clear winner. And depending on what other things are happening in that merger, then IA might fall by the wayside if, again, it’s kind of not a big priority.

CC: That totally makes sense. Okay. And why is it a problem not to have IA?

GK: Well, without an IA, you can’t get the most value out of your content. So when we think about things like the time it takes to create your content, getting some benefits out of it, like reuse and saving money on your translation costs, saving time to market on your translation, all of these things that really make your content work for your organization, if you don’t have a solid IA in place, it’s going to be really hard to do those things and truly get that value out of your content. Another reason why it’s a problem not to have an IA is because it makes it hard to deliver content as effectively as you could otherwise. And especially if you have a really heavy customer demand for things like content delivered in digital formats rather than print only, or if you’ve got a lot of demand for highly personalized content, those are the kinds of things that really require a solid information architecture.

It’s also really difficult to convert content from one format to another if you have a need to do that. We see this a lot with, for example, going from unstructured content, such as Microsoft Word or unstructured FrameMaker, to structured content like DITA XML. If you don’t have a good information architecture for what you’re converting your content into, that conversion is not going to go very successfully, because there’s not going to be the kind of consistency and structure and organization in your content that you need to make that work well.

And then of course, one of the biggest issues is that without a good IA, it’s very hard to scale up your content development processes. A lot of times content production can work really well on a small scale, even if you haven’t done a lot of planning and organization and thought about how your content is put together. But then as soon as your business starts to grow, you realize that you have to get a lot more content out the door a lot more quickly, and maybe have it personalized for different segments of your customer base. Maybe you’re starting to translate for the first time, and you just have this need to scale up. If you don’t have a solid IA in place, that scalability is also going to be really painful, if not impossible, to achieve.

CC: Yeah, that makes sense. I feel like growth is always such a good indicator of gaps in processes, and it’s such a good time to take a look at things and see where you can change. So scalability is always something we come back to on our podcast and our blog posts. What are some of the examples from your work where these issues have come up?

GK: There are actually all kinds of challenges that we have faced here at Scriptorium with IA. One of them touches on what I mentioned in the last question about taking your content from unstructured to structured. We see a lot of clients who are looking to do digital transformation, going from more of a print-oriented life cycle to a digital-oriented life cycle for more flexible delivery. A lot of times that involves a move from unstructured content into structured content, and of course that means a major change is required in your IA. It’s not an easy one-to-one match if you are starting in something desktop-oriented like Microsoft Word, and then you are going from print only to print and digital, some kind of a hybrid, maybe involving some personalized delivery. You’re not going to have a one-to-one match of what you had before in your Microsoft Word, your unstructured FrameMaker, your InDesign, to what you have now that is going to put that digital delivery on the table.

So that’s a really big IA challenge: to think about what the implied structure is in the content we have right now that is more desktop publishing oriented, and then what the structure needs to be for something that’s going to allow us to have a more digital-oriented life cycle. That’s always really difficult. It’s a long and oftentimes painful process, but it’s a necessary one. And it’s where I think we, as consultants, can really come in and help if an organization is struggling with that. Another challenge that we’ve faced is helping content creators deal with the learning curve that comes with a new IA. Just like I mentioned on that last point about digital transformation projects, that’s where we tend to see this happen the most: you’ve got a lot of people who are very experienced writers, experienced at that aspect of content creation, but they don’t have the experience of working with a more digital-focused content life cycle and the IA required to support that.

So for example, if they’re going into something like DITA XML that would support a new digital life cycle, then they’re going to require a lot of knowledge transfer, a lot of training, and a lot of support all throughout that process, because that learning curve is pretty steep. Another challenge that we see a lot is conflicting ideas around how the IA should be designed and built. And this is true whether you have an IA that you’re already working with and you’re looking to improve it, or whether you’ve never thought about it before and you are just now realizing that you need to solidify an IA for your content. There can be differences of opinion among different groups who are working on content. Like I mentioned earlier, if you’ve got those content silos and people who don’t work collaboratively, then they might have really, really different ideas of how the IA should be done going forward.

You can also have an issue where, if an organization isn’t getting adequate feedback from their customer base, they don’t have in mind how that feedback should inform decisions around how the IA should be built. And all of this is really where it can help a lot to get some outside perspective from a consultant. So when we come in and we see these conflicting ideas happening, we’re able to give them that perspective and say, “Here’s what we’ve seen at a lot of other organizations that might help you to learn from that experience. Here’s what we typically see as industry best practice.” And that can help resolve those conflicts and guide them through to getting an IA that’s actually going to serve their organization best.

CC: That’s great. It’s just like a tiebreaker, a third party to come in and be able to be that unbiased voice to give support for what’s going to be best.

GK: Sure, absolutely. And then another challenge that we’ve faced is trying to work around aggressive or sometimes even unrealistic implementation schedules. This happens a lot because the schedules are often set by non-content creators. It might be people in upper management, people at the C-level, who aren’t really in the weeds and don’t fully understand all the ins and outs of what’s required to create content, convert it from one format or structure to another, and develop an IA that’s going to work for you going forward. And so, if there’s that tension with the schedule saying, “We have to meet this deadline because that’s going to affect our scalability, our other goals,” that can sometimes result in a project being pushed forward without adequate time to plan for your IA.

And then what that eventually causes is some messy situations where because you did not put an IA in place properly or didn’t think about all of the different things your IA might need, then you try to produce content and it’s not going to serve you in the way that you thought it would. So even though a schedule might be really aggressive, even though you might have deadlines, it’s still important to prioritize the IA and not let that be something that falls by the wayside in favor of meeting a deadline.

CC: Got you. So I’m curious to know a little bit more about pilot projects or proofs of concept. I know they were mentioned in a previous podcast and we’ve talked about them a little bit in some other places. Can you unpack what those are and how they can help when you’re developing a new IA?

GK: Absolutely. So pilot projects and proofs of concept are a really good way to mitigate risk when you are developing a new IA or changing an existing one or really doing any kind of change to your content processes. So specifically when we’re talking about IA, you could use a pilot project to try out a new IA that you are planning and thinking about on a small subset of your content and that can let you see what works and what doesn’t in real-time, give you that practical example, and that way you can make adjustments to the plan for your IA before you roll it out across your entire body of content. And then if you’re still trying to convince management that a new IA is a good idea, you’re trying to get the budget required to roll that out across the organization, then having a successful pilot project can actually help you do that. It can really convince people, “Here’s the return on investment that we’re going to get if we put this IA in place and here’s the proof that it’s going to work.”

CC: That’s great. Yeah, that’s really helpful.

GK: I also wanted to note that IA development does require a lot of flexibility. You are almost guaranteed to have to go through multiple iterations, you’re never going to get it perfect on the first try. And that’s why we do recommend a pilot project or a proof of concept because it lets you start small and it allows you to build in the room for that flexibility all throughout your project rather than being under that deadline pressure that I talked about. If you have that pressure to get it right and you know that that’s not going to work, then you’re setting yourself up to fail. So putting a pilot project in place, doing a proof of concept really just helps get rid of a lot of that risk.

CC: Yeah, absolutely. I’m sure it puts everyone’s minds at ease. So I’m curious if someone wanted to start a proof of concept or organization wanted to invest in these first, how do they do that?

GK: That’s always really interesting. It varies from one organization to another, but where we often see it originate is with a writer or a manager of a group of writers: one person who really sees an opportunity but isn’t at a level where they have the pull in the organization, the budget, or the resources. They do have the knowledge, though: “Here’s an idea that might work.” And so, a lot of times these proofs of concept originate from the ground up, from people who are actually working on the content, and that’s what allows them to grow their IA and their overall content strategy for the larger organization.

CC: Got you. Yeah. So they’re the ones that are really recognizing the need, and probably the ones hitting the pain points, unfortunately, to say something needs to change. So that’s interesting. Circling back to your earlier response, you mentioned that sometimes content isn’t a priority until it’s too late. Could you unpack what “too late” looks like? Either signs that it has been too late and you need to focus on content, or some pain points that might be coming up, to help you avoid getting stuck in “too late”?

GK: Sure. So one of the red flags that we see a lot that says, “Either it’s too late, you should have started planning an IA earlier or now is the time to start,” is that if you have a lot of inconsistencies in your content getting in the way of being able to take advantage of all it can do for you, that’s definitely a sign that you need a lot better IA planning. So if you are trying to do reuse for example, and you’re unable to do so because of how your content is structured, if you realize that you need to start translating into other languages, or maybe you already are, but you need to translate into a lot more languages and that’s costing you a lot of money because you can’t do reuse, if you are running into issues with publishing, so if you’ve got people requesting custom content or personalized content and you just are not set up to deliver that, all of those things because your content is written inconsistently, it’s structured inconsistently, that’s definitely a sign that you need an IA.

Another one is the inability to search your content or filter your content due to a lack of sufficient metadata. So metadata is a really important piece of your overall IA puzzle. And a lot of organizations don’t really think about how it’s going to be used both internally by content creators and externally by your audience, by your customer base. And so, if you haven’t thought about all the ways that people might need to search the content and find information they need, that they might need to filter the content down to delivering specific pieces to specific people or even filtering your search results, all these different ways that you can find the right information within your set of content, a lot of that is driven by having the right metadata in place.

So if you find that people are unable to do that, then that’s another one of those signs or pain points that says, “Okay, we need to rethink our IA and make sure that metadata is a big part of that and that we have considered that.” And then, just like we’ve talked about several times throughout this discussion, there are challenges with scaling. So if you have issues with meeting your goals for growth and scaling your content up to meet that demand, then that tells you, “Hey, let’s go back to the ground up and think about the IA that we should have had in place all along. And then that will allow us to do what we need to do to scale up our content development processes.”
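The metadata-driven filtering Gretyl describes, where each piece of content carries facets that search and delivery can match against, can be sketched very simply. This is a hypothetical illustration, not from the conversation; the facet names ("product", "audience", "language") are invented for the example.

```python
# Hypothetical sketch: filtering content by metadata facets,
# the kind of search/filter capability IA metadata makes possible.
# Facet names ("product", "audience", "language") are invented.

topics = [
    {"title": "Install guide", "product": "widget-x",
     "audience": "admin", "language": "en"},
    {"title": "Quick start", "product": "widget-x",
     "audience": "end-user", "language": "en"},
    {"title": "Guide d'installation", "product": "widget-x",
     "audience": "admin", "language": "fr"},
]

def filter_topics(topics, **facets):
    """Return only the topics whose metadata matches every requested facet."""
    return [t for t in topics
            if all(t.get(key) == value for key, value in facets.items())]

# Deliver only the English-language content for administrators.
for t in filter_topics(topics, audience="admin", language="en"):
    print(t["title"])   # prints: Install guide
```

Without the metadata, the only way to narrow this list is full-text search over the bodies; with it, the same records support faceted search internally, personalized delivery externally, and targeted translation, which is why missing metadata shows up as one of the pain points Gretyl lists.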

CC: Yeah. So if any of those pain points sound uncomfortably familiar, that is definitely something that we can help with here at Scriptorium. So we’ll have a link in our show notes where you can contact us to get a conversation started. Gretyl, is there anything else you can think of that you want to share with our listeners about IA or anything else we’ve talked about today?

GK: I think the biggest thing is just don’t overlook it and don’t leave it out.

CC: Yeah, absolutely. Well, thank you so much. I really appreciate you being part of the podcast.

GK: Absolutely. Thank you.

CC: Yeah. Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Why information architecture matters (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 19:00
Content fragmentation with special guest Larry Swanson (podcast) https://www.scriptorium.com/2023/02/content-fragmentation-with-larry-swanson/ Mon, 20 Feb 2023 12:32:37 +0000 https://www.scriptorium.com/?p=21739 https://www.scriptorium.com/2023/02/content-fragmentation-with-larry-swanson/#respond https://www.scriptorium.com/2023/02/content-fragmentation-with-larry-swanson/feed/ 0 In episode 137 of The Content Strategy Experts Podcast, Sarah O’Keefe and guest Larry Swanson talk about the fragmentation of content over the past 30 years, from the delivery of books to UX writing.

“What are the changes that this fragmentation has introduced from a business or an economic point of view? One is the notion that we’re all publishers now. This is where the whole field of content marketing comes from — this notion that it’s a better way to promote yourself if you demonstrate expertise around what you’re doing.”


Transcript:

Sarah O’Keefe: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

In this episode, we talk about the fragmentation of content over the past 30 years, from delivery of books to UX writing, where you publish content inside software. Our guest today is Larry Swanson, an independent content designer and content architect who also has his own podcast called Content Strategy Insights. Hi, everyone. I’m Sarah O’Keefe, and, Larry, hello!

Larry Swanson: It’s great to be here, Sarah. I love your podcast and I’m delighted to finally be on it.

SK: Well, likewise. So we’re going to have, I think, dueling podcasts and it’ll be fun. Tell us a little bit about what you do. Tell us a little bit about who you are and where you’re coming from and what your life looks like in business.

LS: Yeah, it’s inevitable that I ended up where I am. I’m a word nerd from birth. My mom was an editor and my dad was an engineer, so I’ve had this weird combination of technical and grammatical stuff throughout my life. I went to journalism school, which seemed like a good idea at the time, then immediately abandoned journalism for book publishing when I got out of college. I went to a course called the Radcliffe Publishing Procedures Course. Because of that journey, the way I got into journalism school and the fact that I went right into book publishing out of there, I’ve always been more interested in the process of how this happens than in the writing itself. I’ve done some writing and can communicate well, but I’ve never been a writer. I’ve always been the publisher, the editor, the marketing guy, the meta-practitioner.

SK: Yeah, which is interesting because I think that that’s quite similar to how we describe ourselves here, that if you’re looking for domain knowledge, that’s something that you should have inside the organization. We’re the people that come in from the outside looking at your publishing systems and what that looks like.

So you and I were talking a couple of weeks ago, which is actually where this podcast topic came from, because we were looking at this question or this idea that many of us actually came from traditional book publishing, and now we’re doing things like publishing software strings inside software, doing UX writing or UX design, but we came from this traditional book world. And that got us started on the concept of fragmentation and what that means and what the implications are. I wanted to start with content challenges. What kinds of content challenges do you see in this long term, I guess, but 30 years is really the blink of an eye, but in this transition from a book-based publishing world to UX design, UX writing content embedded in software?

LS: Yeah. Well, it’s funny. The first thing I reflected on when we talked about this is what’s the same. And I’m surrounded by awesome word nerds and good collaborators, so that’s been the same throughout. But it manifests entirely differently now. Whereas we used to have these long, convoluted, literally years-long processes to develop a manuscript, put it into that sausage factory of everything from developmental editing to the composition, and then all the distribution and the physical manufacturing of the books and all that, a quick turnaround would be nine months. Sometimes if you got a really hot topic, you could turn it around that quickly. Now we’re dealing in milliseconds for some of this stuff. I think of that Oreo commercial during the Super Bowl. Remember when the lights went out? And they created a whole advertising campaign in 10 minutes about “you can dunk in the dark.”

So we’ve kind of gone from years to seconds in the whole cycle. Around the start of this transformation, a guy named Nader Darehshori, who was at the time the CEO of the publisher Houghton Mifflin, said that publishing is just the business of the discovery, development, and dissemination of ideas. There’s a lot going on in there, and it used to take a long time for it to happen, but that ad that Oreo did during the Super Bowl, that whole thing happened in literally 10 minutes or something like that. So that compression of time has led to, I think, the need to be super hyper-attentive to the procedures, how you do stuff, and the stuff you have in place to facilitate that sharing of ideas more quickly.

When I went to a publishing course right after college, one of the very first books we read was One Book/Five Ways, where they took the same manuscript, gave it to five different university presses, and got five different treatments of it. There were sort of best practices exemplified in that, but everybody did it their own way, so I’ve always been really conscious of that. And that’s another thing that’s only magnified: there used to be something much more like best practices, and now it’s like everything is bespoke. And yet, you have to have a way to do it so that you can be bespoke. That’s where we’ve gone from these long tunnels of production and distribution to these more fragmented, increasingly decoupled modular architectures that permit taking content, mostly words in text form, but also recordings of various kinds and even 3D stuff in the metaverse, and being able to do stuff with it more quickly.

That’s been the biggest change, is it’s still people sharing ideas with other people in a media format. I think a person from Mars who could magically look in at us and just look at… They would think, oh, it’s just the same thing. And it’s like, yeah, it kind of is, but there’s a lot more going on now to make it all happen.

SK: So I understand the concept that a lot of the increase in velocity is that we got rid of physical distribution. We don’t have this process of printing books and binding books and shipping them to bookstores, which does take a significant amount of time. But backing up from there, you mentioned developmental editing, and where are the developmental editors in our fragmented content chain? Is that concept just gone?

LS: No, I think it’s still here, but it manifests differently. I think that happens in the craft. Content strategy is a discipline. You might call it fragmenting, but I think I’d equally call it specializing. I’m still a generalist and I’m kind of weird that way, but for the most part, content practitioners now are either a content creator or a strategist or a designer or an engineer or a content operations person managing it all, and there are many other specializations happening. I think that’s part of how it’s happening: it’s the craft that permits the acceleration, that things are developed differently, and we’ve figured out… We are still figuring out, I should say, because the articulation of the field of content design is really only… I mean, people have been doing it a long time, but people have only been calling it that for 3, 4, 5 years, something like that. But look at how quickly it’s taken off. And Kristina Halvorson is shutting down Confab to focus on content design at Button.

So it’s a very fast-evolving thing, but it’s the collection of crafts that develop the ideas now rather than this kind of sausage factory, linear progression of things. Does that make sense?

SK: I think so. So looking at… You’ve mentioned the sausage factory a couple of times, which is, I think, an apt metaphor, unfortunately. What does this look like from a tech point of view? What are the changes in the processes, systems, and I guess especially software that we use to produce content?

LS: Yeah. I think what’s funny is… I remember being exposed to SGML, the predecessor to HTML, 30 years ago. I knew that there were ways you could deal with words separately from their presentation. But I think that’s been the main thing: the disarticulation of the content, the meaning, the words, the pictures, the images, all that stuff that makes up the content, from the physical… We used to have these physical artifacts where we shared the information, and now it’s all digital. And within that sharing, the tech that makes it happen is instrumental to the whole thing. Where tech used to be the facilitator that created the object, now the tech is the object. It’s an interface at the end of a thing rather than a physical artifact.

And it’s more of a people challenge, I think. It’s not hard to get into all that technical stuff and figure out, oh, I can make these words appear here with this technology. Piece of cake. Getting people to abandon WYSIWYG mentalities around graphical user interfaces and author content in new ways for more abstracted out and then reassembled experiences, I think it’s… The technology has kind of made it, to my mind, a logical evolution, and it’s like, oh, cool, we can do all this. We can make our little Lego kits however we want and put them together however we want. But I think there’s still this legacy thinking that a lot of us have that I still struggle with every day of that linear process that creates physical artifacts that we still have.

People still talk about creating web pages. It’s like, really? Is that what you’re doing? I don’t think so. I mean, maybe it manifests as a page in that one moment, but the elements on that page are increasingly customized or maybe even personalized for a unique experience. They’re responsive to the device that they’re on and the screen resolution and the accessibility needs of the end user. There are all these different things that go into it that are technically easy enough to implement; it’s helping everybody along the way understand this different way of doing stuff that’s hard. On my podcast, it almost always comes back to, you know, this is mostly about people, and the technology stuff, yeah, it’s mostly about people.

SK: Yeah. Well, and it’s interesting. I mean, you talk about WYSIWYG and people acting as though WYSIWYG is their birthright, something that has been around forever, and it hasn’t. I mean, you don’t have to go very far back in book publishing to find that people would type a manuscript on a typewriter, and it bore no actual resemblance to the final book. It was a typed manuscript with no formatting, I mean, paragraphs and maybe some chapter headings, but it had to be actually composed into a book, and woe be unto you if you had figures and tables. Those were nearly always included in an appendix at the end of your manuscript, right? Here’s Figure 1, inserted on typed page 75. So this concept of WYSIWYG, putting it all together and getting a visual preview for the author, is relatively new. We’re talking about, what, 1987 or thereabouts.

LS: Yeah. When was it? PageMaker and then Quark, and I think that’s where that came from.

SK: PageMaker was… Yeah, roughly. I think the first time I saw it was about 1988, so somewhere in the ’80s. Yeah.

LS: Yeah, that’s right. You know, it’s funny the way you said that, like it’s our birthright to be able to see what we’re doing. It’s like, nah, it’s just a little blip in publishing history.

SK: Right, and, well, of course if we go far enough back, then we will discover that people used to actually compose their pages as they went, and they were totally WYSIWYG because…

LS: Right. No, and as we were talking about before we went on the air, like Gutenberg, the implications of that were more about replicability, and the scribes before him knew what they… You saw exactly what you were publishing. 

SK: What you see is what you get.

LS: Exactly.

SK: No podcast of ours is complete without a mention of Gutenberg, so we’ll check that one off the list.

So the tech, it swings back and forth, and sometimes you’re WYSIWYG and sometimes you’re a cog in the machine. And people seem to prefer largely not being a cog, right? They like to exert that at least perceived control over what they’re doing. So then turning our attention to the business of publishing and the business of content, what do you see there? I mean, what are the changes that this fragmentation has introduced from a business or an economic point of view?

LS: At least two big things. One is that notion that we’re all publishers now, that this is where the whole field of content marketing comes from, that this notion that it’s a better way to promote yourself if you demonstrate expertise around what you’re doing. We both do that with our podcasts. This is why people know we’re so awesome at our content practices. It’s because we have podcasts. And there’s a million other ways that you can do publishing-y kinds of things. But the business intent of those, rather than selling podcast episodes for money, we’re using it as marketing.

There’s that notion that everyone is now a publisher, but there’s also the notion that the business of publishing itself has changed. The fact that we’re all now publishers just made that whole world a lot bigger, but there’s still publishing happening within it. You think about media like Netflix and the New York Times and game publishers, everything from consoles to the new 3D stuff. So publishing is still happening, but there are a lot of other business things that happen with the same technology, which I don’t think was true before. I mean, it was kind of true with old-style publishing. You would use the printing press to create an internal newsletter or something like that.

But it was not as ubiquitous as it is now, because everybody has access to this stuff. No matter what line of business you’re in, you’re using those technologies to do slightly different stuff, which I think is where the whole field… That’s one way to contextualize the rise of user experience design, because you’re serving like, okay, I just need to sell some stuff. I’m a merchant, and so I have this e-commerce world of stuff that I can do with these ostensibly publishing technologies, because they’re about just sharing information. But you’re sharing information in service to getting somebody to place an order. Or if you’re a marketer, you’re sharing information in service of getting a lead. Or if you’re a publisher, you’re sharing information to get paid for that thing you just published, whether it’s an advertisement or a subscription. And if you’re…

So publishers, merchants, and marketers have always been, to my mind, the three main buckets in the world of digital business. And their websites all kind of look similar now, but there are different business prerogatives that underlie them that lead the whole… I’m working at a big travel company right now, and there’s this business logic that underlies the whole thing. It just looks like any other website that lists stuff about travel products, but that’s way different from a big affiliate site that was just selling links back to Expedia. A big travel company like Expedia is doing all the business stuff that travel agents and airlines and hotel chains used to do. So I think it’s broken down a lot of barriers that make new kinds of businesses possible.

So I think that’s the biggest level of it, and they’re all using the same technology. They all have to abide by the same practices: if you want to be found, you’d better have a responsive website, so you’d better abide by responsive web design principles and be using CDNs and whatever the latest technology is to improve the end… and it’s always about user experience. The reason that’s important is that users don’t have the patience to wait for a slow-loading webpage. I can’t articulate it as well as we hoped I might when we talked about this interview, but there’s something going on there where it’s much more about the end user and meeting their needs. So I think you can trace almost all these developments back to the need to improve that, at the places that are doing it well, anyway, to help people find the right information at the right time.

Google does that pretty well. Helping people get the movie that they really want to chill to that night? Netflix does that really well. And it’s all about satisfying user needs. And that, to me, is this technology that we first saw as a way to accelerate and increase the velocity of publishing activities. It’s like, oh, I can sell stuff with that too. Oh, I can deliver media that’s customized to a person’s interests. Yeah, it would’ve been nice if Blockbuster could have sent somebody to your house and interviewed you about what video you wanted to watch, but that’s not very scalable. So anyhow, that notion that it’s the technology that permits the scalability, that’s the foundation of most of these business models: the ability to take a good practice and just, boom, do it for millions of people at once.

SK: Yeah, I think scalability is a really, really good point. And velocity, velocity of publishing is sort of related to that. They’re not exactly the same thing, but can you scale up and produce more and more and more content and can you do it fast or instantaneously by… because our old distribution, put it on a truck and send it to a bookstore, has been replaced by push this button.

LS: That’s right.

SK: And sometimes not even that.

LS: There’s something in there about… I think one of the other really important things that we just don’t think about consciously enough, but we’re all doing all the time, is automation. We’re automating tasks that used to take… I think that’s really coming to the fore now with the generative AI stuff, ChatGPT and those kinds of things. Like, oh, I don’t have to outline this. I’ll just have ChatGPT do this for me. That kind of task automation underlies a lot of this. I can’t articulate exactly how that’s going on, but I think that’s an important part of it as well.

SK: Well, and I guess with a call-out that AI is up next, and we’re not really sure what that’s going to do for us, that seems like an excellent place to close this. So Larry, thank you so much for coming in and sharing your thoughts and giving people something to think about and be scared of.

LS: I hope I didn’t scare anyone. And thanks so much, Sarah. I really enjoyed the opportunity to chat with you, and I hope that rambling stuff made some sense.

SK: Well, I think so. We’ll see what our audience thinks. So thank you, Larry, and thanks to you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Content fragmentation with special guest Larry Swanson (podcast) appeared first on Scriptorium.

Nightmare on ContentOps Street (podcast)
https://www.scriptorium.com/2023/02/nightmare-on-content-ops-st/
Mon, 06 Feb 2023

In episode 136 of The Content Strategy Experts Podcast, Alan Pringle unveils horror stories of content ops gone horribly wrong.

Transcript:

Christine Cuellar: Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we share some content operations horror stories. Today I have our COO, Alan Pringle, with me. Hey, Alan. How’s it going?

Alan Pringle: Hey there, I’m doing well.

CC: Are you ready to talk about some horror stories?

AP: The question is, are you and is our audience ready for this? Because I’m not sure that they are.

CC: Well, I hope we all like horror because we’re diving deep into some stories. So, Alan, why don’t you kick us off?

AP: Well, I do appreciate the horror genre, I have for a very long time, and I’ve noticed that my favorites tend to have very short titles. Like last year there was Barbarian, which I really liked. Then there was the 1978 movie Halloween, the original. I’m not talking about the newer ones. I don’t like those as much. And then the Evil Dead and the Conjuring, they’ve got these short, snappy titles. So I thought we could kind of play with that whole idea and label some of the things that I have seen along with the other Scriptorium folks over the years.

CC: Absolutely, love that idea.

AP: So let’s talk about the first horror story. So everybody, let’s gather around our digital campfire and we can exchange scary tales.

CC: Grab our marshmallows.

AP: Yes, that. Yes. Let’s call the first one, The Update.

CC: Dun, dun, duh.

AP: Exactly.

CC: Always chaos and carnage with an update.

AP: In this case, it was ugly and gory indeed. We had a client who was changing their name, changing their branding. They had hundreds of desktop publishing files, and unfortunately, these files were not templatized, which means that to do an update, to change the company tagline, to change the company logo, they were going to have to go through and touch every single one of these files. Yeah.

CC: Talk about horror.

AP: Absolutely awful. The good thing is there is a happy ending here that is not all blood and guts. Because there was so much of this content involved, it made more business sense to convert all of these desktop publishing files to structured content. And by doing that, we set up an automated publishing workflow. So instead of going through and touching all of these files, we did the conversion and then we set up transformations of that structured content into, for example, PDF files, and the automated publishing process automatically put in the new logo, put in the new tagline.

So people didn’t manually have to do it. We ran that structured content through this transformation process, and voilà, we had the PDF files that had everything in it, and people didn’t have to physically touch them. So instead of making a huge one-off investment in all this manual work, the company did something really smart and invested in better content ops. So since then, if they have had to update their logo or their tagline, all they would have to do is go in and touch their transformation process, fix that there, and just rerun everything and then the process will handle it for them.

CC: So much better.

AP: Yeah, it’s magical.

CC: Yeah, so much better. I love that it’s not only easier for the team, but it’s also better quality. It’s easier to produce better systems rather than leaving things open to mistakes. I just love that about content ops.

AP: No. No, you’re exactly right. This created a repeatable automated process, and those are two huge wins. So that has a happy ending.

CC: Unlike most horror movies, there is a happy ending here.

AP: Well, you got to have the sequels.

CC: Yes, that’s true. That’s true. The 500 sequels.

AP: Exactly.

CC: All of which pale in comparison to the original, but yes, correct.

AP: Usually. Correct. You’re a hundred percent correct. Let’s go with the next story, which I call Cut and Paste, and this is not limited to just one client, and I am sure our listeners have been through this very thing before where you have one piece of content that pops up in multiple places in your documents. Unfortunately, that content has been cut and pasted manually a zillion times, so you have a bunch of different versions of that and a bunch of different files. And this is where your sequel comes in. Somebody will go in there and slightly change one of those, which is supposed to be the same wording, change a word or two in there, and now you have the sequel, Cut and Paste: The Mutation. That is never, never good, and it just compounds headache after headache. And then for part three, which probably should be in 3D, a three-dimensional movie.

CC: Yeah.

AP: It would be… yeah, Cut and Paste Three, Localization. Yeah, that’ll have ‘em running out of the theaters, because every time that you translate something like this and you’ve got all this copy and paste in your source content, and then you translate it, what are you doing? You are basically replicating the same horror that you had in how many different languages? It’s incredibly inefficient, it’s incredibly expensive, and it’s a headache for everybody involved.

CC: Yeah.

AP: This is why you need better content operations, and you basically need to figure out reuse scenarios. You don’t necessarily have to do XML or structured authoring or use X tool or Y tool to do this. A lot of tools have the ability to set up mechanisms for reuse. Even Microsoft Word, at a low level, has some of these features. So what you need to do is templatize this content. You need to set it up so you are referencing things that are going to be repeated often. Then when you do have to make an update, you change it one time and it automatically fixes itself across your body of content. That is the ideal thing to do, especially before you start localizing your content into other languages.
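As a sketch of the reuse mechanism described here: in a structured authoring format such as DITA, a frequently repeated element lives in one source file, and every topic that needs it pulls it in by reference instead of pasting a copy. The file name and IDs below are hypothetical.

```xml
<!-- warehouse.dita: the single source for reused content (hypothetical file and IDs) -->
<topic id="warehouse">
  <title>Reusable content</title>
  <body>
    <p id="safety-notice">Disconnect power before servicing the unit.</p>
  </body>
</topic>

<!-- In any other topic, reference the paragraph instead of copying it -->
<p conref="warehouse.dita#warehouse/safety-notice"/>
```

Change the paragraph once in the warehouse file, and every topic that references it picks up the update the next time the content is published, which is exactly the “change it one time” behavior described above.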

CC: Absolutely. Yeah. And that’s something we do touch on more in our blog posts that we published about how Scriptorium optimizes your content. We’ll go ahead and link that in the show notes as well, so you can check that out.

AP: Yeah, no, and that’s a very good point. Localization is often one of the drivers that has people talking to us and realizing our content operations, they’re broken. So yes, localization is one of these things that can really make or break you when it comes to your content.

CC: And from my understanding, a lot of times companies are reaching out because they’re missing out on a localization opportunity, is that correct? That’s the pain point that they’re experiencing is they’re missing out on something they either are being told they need to do or something they want to do, but there’s no way that they can go ahead and meet those requirements or step into that new opportunity with their current operations process.

AP: No, it’s true. In some cases, there are regulations that say you will provide this content in the languages of the locales where you’re shipping the product, locale-specific content. So are you going to end up having your products sitting on a dock somewhere while you scramble to get these documents in place? It sounds absolutely bonkers, but it has happened. Same thing for services, too. I mean, in this global, international environment, if people don’t have that content in their language, they’re not going to use your product. And that goes for the interface. Is it in their language? Is the content that explains how to use it in their language? So yeah, you can lose out on an income stream because you are not ready to localize and to do it efficiently.

CC: Absolutely. All right, now onto our next horror movie. It’s The Spreadsheet From Heck. And I’m very curious about this one because this one really messes with me.

AP: Yeah. The more R-rated version is spreadsheet from (beep). I know I will be bleeped for that, but that’s more accurate. So yeah, Christine’s going to have to get out her buzzer and bleep me on that.

CC: I will, yeah.

AP: Spreadsheet From Heck. First of all, if I saw a trailer for a movie that had the word spreadsheet come up when I was in the theater, I’d just get up and leave. Yeah, because I get enough of that during the workday. I do not need to see it when I’m trying to have fun, thank you very much.

CC: Trying to escape reality here by going to the movies.

AP: Exactly. I don’t need it reinforced in my face for an hour and a half. But someone has made the observation, it was not me, and I want to be very clear it was not me, someone has made the observation that the most common content management system is probably an Excel file.

CC: That’s horrible.

AP: It is horrible, but there is a degree of truth to this. There’s a kernel of truth there. A lot of people will plan out their workflows. “Here are all our files. This is the schedule. Here’s when it needs to be reviewed. Here is when it needs to be approved,” all that stuff. There is some degree of automation, yes, that you can do in a spreadsheet, but that only goes so far. And there’s some really critical things that you need to keep track of when you’re trying to manage just gobs and gobs of content.

I cannot imagine trying to do all of that in a spreadsheet, yet some people valiantly try, and they may be successful for a while, but I am nearly certain there has got to be a tipping point where you cannot do this anymore. And that’s true of almost everything we’re talking about in this episode. These things can work one off, or if you’ve got a very small body of content, the minute your requirements change and require you to do more, the stuff doesn’t scale. And this is a perfect example of where scale is going to inflict a great deal of harm on you. Maintaining that sort of stuff in a spreadsheet, that is a no-go from my point of view.

CC: No, I can’t even imagine, from a content marketing perspective, because I specialize in content marketing and I’m not producing content on the scale that a lot of our clients and even our staff are. I can’t imagine organizing all of that in a spreadsheet and having tasks remind me of when to follow up, when to do what, when to update what, all that kind of stuff. I truly can’t imagine managing that. I think it would just… It would be a horror movie.

AP: Exactly. Like I said, it’s not ideal, but it happens more than it probably should.

CC: Speaking of something that happens more than it probably should, let’s move on to the next movie, The Email Chain.

AP: And people are going to think this may be some throwback to some lower tech era, and the sad truth is yes, today in the 21st century, there are still people, still companies who do content reviews by sending either PDF files or bits and pieces of information in an email. And they go back and forth making changes and getting approval. That to me, I mean, please just set me on fire. It’s deeply, deeply inefficient, yet it still happens today. And I’m sure there’s some people out there saying, “Surely not.” Surely yes, it does still happen, believe it or not. Very painful.

CC: Yeah, less efficient and more overwhelming, like you said. So things are going to get lost. Little updates, revisions, that kind of thing, that’s definitely going to get lost. So it’s more work to produce a lower quality piece of content versus moving it over to a streamlined content operations system.

AP: Yeah. And if we think the spreadsheet is bad, I think the email chain may be even more horrific than the spreadsheet from whatever word you want to say there.

CC: Yeah.

AP: Those both point to using technology that’s really not the right fit, but it’s ubiquitous, you’ve got it handy, so you’re going to rely on it. Not the best business decision. I can understand why you would do it, but in the bigger picture, it’s not where you should be going.

CC: And speaking of the bigger picture, one thing that stood out to me when we were… on our previous podcast with Sarah is companies often reach out to us when they’re hitting some pretty significant pain points and when they’ve definitely recognized that it’s time, something’s got to change, we’ve got to become more scalable. But you don’t have to wait until then to optimize your content operations. I mean, I would recommend doing it now before you hit those pain points. Don’t wait until you’re missing an opportunity. Don’t wait until things are… you’re stuck in a never ending horror movie and you can’t get out. Now’s the time.

AP: Yeah. Yeah. Basically nip it in the bud. And what’s popped into my head is the movie, and there have been multiple versions of this, Invasion of the Body Snatchers. In our case, I think we might want to call it Invasion of the Time Snatchers, because if you let this stuff compound, compound and compound, all that’s going to do is just basically completely drain your organization of any resources to even try and make incremental improvements in how you create your content.

As you improve how you create your content, it is going to make it easier for you to create better content. The content itself will improve, but if you’re stuck in this mire where all of these inefficient processes are eating up all of your time and they are not something that you can repeat, they are not scalable, it’s like the Groundhog Day of horror. It just repeats and it loops back on itself over and over again. And that is a sad reality for a lot of people. But as you suggested, the minute you start having an inkling that’s happening, that’s when it’s time to realize it’s time to take action. Let’s fix this.

CC: Yeah. And anybody that produces content has content operations. So there’s always the opportunity to optimize. There’s always the opportunity to see where you can automate and make things better.

AP: And it can be baby steps. Absolutely. And I think that’s a very good point to make. All these things that we have mentioned that are not super efficient and don’t sound even remotely fun, they are all content ops. They’re just really bad content operations. So it’s not a matter of, “I don’t have ops, I need them.” It’s a matter of improving, and these things can be taken in baby steps. You can be incremental. For example, even trying to templatize things to give you some degree of consistency, that is a small step you can take if you’re working in word processing or desktop publishing. Templatize things so you have a very standard way that you create content and the formatting is standardized. Because if the content creators don’t have to spend time fiddling with that stuff, that is time they can invest in writing better content to help the people who are reading it.

CC: I like that mindset shift that you brought up: When you have bad content ops or things aren’t working well, those problems compound on each other. But in the same way, when you have good content ops, the benefits of that compound on each other. So you have more time to be able to make content better, and to even revisit your processes and revisit them over and over to see, “Now that we’ve optimized, how can we get to the next level? How can we get to the next level?”

AP: Absolutely. There is always room for improvement, and it’s a good idea not to rest on your laurels and do a check every once in a while, because you never know what creature might be hiding in your closet.

CC: Yeah. Might be Jason.

AP: Michael Myers, Freddy Krueger, the Babadook, you name it. Yeah.

CC: Not that this is about content ops, but the sequels in Halloween cracked me up, how they continuously repeated, “Evil dies tonight.” And evil never did die tonight, so.

AP: No, it didn’t, and I wish that it had. This could be a whole other podcast. Those later reboots did not please me, but we’ll talk about that some other time amongst ourselves.

CC: Yeah. Good idea. Well thank you all for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check out the show notes for the links we talked about today. And thanks, Alan.

AP: Thank you.

The post Nightmare on ContentOps Street (podcast) appeared first on Scriptorium.

Who is Scriptorium? (podcast)
https://www.scriptorium.com/2023/01/who-is-scriptorium-podcast/
Mon, 23 Jan 2023

In episode 135 of The Content Strategy Experts Podcast, Sarah O’Keefe and new team member Christine Cuellar talk about who Scriptorium is and how we use content to optimize your business.

Transcript:

Sarah O’Keefe: Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’re talking about who we are and how we use content to optimize your business. And I’m joined by our newest team member, Christine Cuellar. Hi everyone. I’m Sarah O’Keefe. Our host today is Christine, who has just started as our new Marketing Coordinator. So of course we put her to work right away. Welcome to the team Christine, and guess what? You’re in charge of this podcast.

Christine Cuellar: Sounds good. Hello everyone. I’m excited to be here. Since I’m new, I’m going to be asking all the newbie questions. So I thought this would also be an interesting podcast for our new podcast listeners. Sarah, I’ll go ahead and kick it off with a basic intro question. What started Scriptorium?

SO: Well, the canonical answer is that I was annoyed by a layoff. A million years ago, I worked for a software company that did the canonical hockey-stick growth from zero to 60, from 80 people when I started to 500 people 18 months later. And then that company, along with about four others, got acquired by a larger company. In the process of sorting out all those companies the mothership parent company had acquired, in the process of that assimilation, many of us got laid off, and we were all a little cranky about it. And I decided that if executives were going to make dumb decisions, I could be the executive making dumb decisions. So basically I was angry, and here we are 26 years later.

CC: That sounds great. And you have an article that goes more in depth on that story as well. We’ll go ahead and link that article in the show notes. So what does Scriptorium do?

SO: That is…

CC: A big question.

SO: I know, you'd think I'd have it figured out by now. But we are interested in content and technology and publishing, and specifically we're interested in product and technical content. How do we take that information that is so often overlooked and under-invested in, and manage it, produce it, and do things with it in such a way that we can maximize its value for our customers? So we are interested in taking interesting technology and applying it to content so that we can perhaps automate content development or content delivery, improve the publishing process, or do multi-channel publishing. All those buzzwords that you hear these days come down to: how do we make the most out of the investment that you have to make in technical content?

CC: And what’s the scope of our work?

SO: We start typically with an assessment or an analysis: where are you now? What's working? What isn't working? Where are the pain points? And then we work our way from that to your business needs. What are you trying to achieve? Do you have problems with people calling tech support with basic questions that are, or should be, in the documentation, but instead they're making an expensive phone call? Do you have problems with translation and localization, with people who would prefer your content in their local language, but you haven't made that investment because it's so expensive to translate or localize? So part A is: what are your business needs, what are the problems you've identified? Part B is: how do we develop a solution that leverages your content to fix that? And then I guess part C is: we actually build the solution.

CC: Wow.

SO: So some really, really common things here are people saying, "All our stuff is in Word and it's not working. We can't scale it; we have a problem." Some huge percentage of our work is actually structured content, so that's definitely a point of emphasis right now. That's not where we started, because 25, 26, 27 years ago structured content had less market share than it does today. But a common thing we hear is that people have identified that structured content will address some of their requirements, and they need us to help them get there from whatever system they're in right now. And then I think a key point is that we do not have software. We are a pure services company. In addition to not having software in the sense of a product that we sell, we also don't resell software, and we don't accept referral fees from the software vendors in the space.

CC: Great. Thank you. And thanks for those examples too. Those are really helpful. So what kind of implementation work? You mentioned that briefly. Can you expand on that a little bit?

SO: A typical project for us is somebody who has decided that they need to improve the maturity of their content development processes: move out of a word processor or an unstructured, sort of flailing-at-it approach of just throwing bodies at the problem to make more and more content, and instead design and then build out a system that is more efficient, that leverages reuse, formatting automation, and all the other cool stuff that we can do. So when we say implementation work, what we're talking about is that we've been through, or you've been through as the customer, and have said: this is the problem, here's the proposed solution.

And now we need to do things like pick a content management system, convert all the content from whatever format it's in now into the future-state content format, get it moved in, stand up the system, configure the system, get all the people (the authors, the content contributors, the reviewers, the approvers) moved into the system, move the content itself, and then go to production. So when we say implementation work, we're talking about the process of actually building out or configuring the system that's going to support all of these things for you.

CC: Great. Thanks. And you mentioned earlier on that one of the first parts of the process is to just identify some pain points, figure out what’s going on in the organization that needs to be addressed. What are the most common pain points?

SO: I think the number one issue that we see is scalability, as in: we can't scale. We are being asked to do more and more formats. We are being asked to do more and more languages. We are being asked to do more and more content variants, because our products are complex or because we provide customer-specific information, where a given customer gets a custom version of the product. Scalability, especially scalability in localization, I think is the number one issue that we run into. So that looks like somebody saying, "We know our process isn't great, but it works okay because we only have five languages. But now we've been told we're expanding into the European Union and we're going to need 30. We simply cannot take the current five-language inefficiencies and multiply by six to get to 30 languages."

There is no way. We have to automate, we have to refactor, we have to reuse, because if we don't do those things, our costs are just going to skyrocket, and maybe more importantly, so will our time to market. We can't get to market on time in all these languages with our current process; it just piles delay upon delay upon delay. So scalability is a big one. Now, related to that, we see things like multiple incompatible content creation systems that don't talk to each other but need to share information in some way. This is really, really common after a merger, because company A had system A and company B had system B; you put them together, they can't talk to each other, but they need to because the customers are now joint customers. From my point of view as a customer, I don't care that you were companies A and B; you're now the merged company, company C.

And I demand that when I go to your website, it looks like a single company, and you can't get there because these two content creation systems are just not talking to each other. Now, having said that, when I say companies A and B and two content creation systems, what's actually far more common is that it's more like five to eight. It was two companies, maybe it was three companies.

CC: Wow.

SO: But five to eight systems that simply do not talk to each other in any way, shape, or form. Happens all the time because old company A, they had a different merger five years ago and they never did the work. And so they’ve never pruned and it just piles up.

CC: So it just piles up.

SO: And you get this technical debt, to a certain extent. It's content rot. You can call it whatever you want, but it's a mess. Inside of that (and this doesn't require multiple systems) is duplication and redundancy of content. Content is expensive to develop, expensive to manage, and expensive to translate well. So it's not good to have multiple copies of the same thing, and it's especially not good to have multiple copies of the same thing that say two slightly different things for no reason. Happens all the time. So scalability; systems incompatibility, which then blocks the things you need to do with your content flowing back and forth; and duplication and redundancy. Those are three things where it's actually pretty easy to get a hard return on investment.

Some really solid numbers that show: if we clean this up, things will be better. In addition to that, we're seeing a lot of demand now for content integration. The e-learning or training group is sourcing content from tech docs. They can't do it well because their learning management system and the tech docs content management system refuse to talk to each other, but they really do need to integrate that, and then they need to flow it over and link it to marketing content. And it can't be done because all these systems hate each other. That's becoming a big issue, and there are some really interesting solutions coming on that, but that's where we are with this. So if you're looking at those issues, formatting automation, scalability, content duplication, we actually have a calculator for that on our website that lets you get at least a first cut at what this is going to look like.

CC: Great. And we’ll go ahead and have that in the show notes as well. So why content strategy?

SO: It is of course an overloaded term. I look at it as thinking about how you manage information across the lifecycle within an organization. How do you create? How do you edit, review, and approve? What about governance, which I know is a dirty word? When you create content, typically you have to delete it at some point; for the most part, it doesn't live forever. Some content does live forever and explicitly needs to. But how do you do that? How do you first make sure you have the right information in a given piece of content, and then how do you get it where it needs to go, and manage it and update it and translate it and foster it throughout the entire lifecycle? So content strategy to me is the overarching plan, and it's the people, the processes, and the technology that you use within that plan to do the things you need to do. So you've got your business needs and requirements, and then the content strategy provides the solution, or the plan, that gets you to meeting those requirements.

CC: So when and why should someone, any of our listeners, contact us? How do they know it's time?

SO: Well, everybody should totally call us immediately. Everyone. No. I would say it this way. If you are in a situation where it is pretty obvious to you sitting inside your organization that the current approach is not sustainable, you can’t hire the people, you can’t scale up because you just have to keep adding people because you have all these terrible processes that take up too much time. You have too many manual workarounds, too much copy it over here and then paste it over here and then spend hours and hours reformatting it to get it into wherever, that kind of thing. If you’re frantically running in place just to keep up or even not able to keep up, it might be time to take a look at whether better systems, better, more mature content life cycle, better, more mature content strategy can get you to where you need to go.

And I would say that in the big picture, people call us when they reach the point where they look at this idea of doing some sort of transformation on their content, and it's going to be painful. I'm not going to tell you otherwise: it's going to be painful. But the fear of doing that has become less than the pain of staying where you are. We've done a lot of these projects and we've done all sorts of fun, successful things, but ultimately people almost always stay with what they have too long, because it's comfortable. We all do this. I'm not pointing fingers at anybody other than maybe myself. We're like, "No, this is good enough." And then at some point you realize that you passed okay and good enough two years ago, and it is time. And that is the point at which you should probably reach out to us.

CC: That’s great. Well, thank you so much Sarah, and thank you all for listening to the Content Strategy Expert Podcast brought to you by Scriptorium. If you want more information, visit scriptorium.com or check out the show notes for relevant links. And we’ll see you next time.

SO: Thank you, Christine. Welcome aboard.

The post Who is Scriptorium? (podcast) appeared first on Scriptorium.

Content strategy in UX teams (podcast) https://www.scriptorium.com/2023/01/content-strategy-in-ux-teams-podcast/ Mon, 09 Jan 2023 13:15:06 +0000 https://www.scriptorium.com/?p=21614 https://www.scriptorium.com/2023/01/content-strategy-in-ux-teams-podcast/#respond https://www.scriptorium.com/2023/01/content-strategy-in-ux-teams-podcast/feed/ 0 In episode 134 of The Content Strategy Experts Podcast, Sarah O’Keefe and guest Jodi Shimp talk about the role of content strategy in UX teams.

Related links:

LinkedIn:

Transcript:

Sarah O’Keefe:                 Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way.

In this episode, we talk about content strategy as a part of UX teams with special guest Jodi Shimp.

Hi everyone, I’m Sarah O’Keefe. Welcome to the Content Strategy Experts Podcast, Jodi.

Jodi Shimp:                       Hi. Thanks for having me.

SO:                                     I am so glad to see you and/or hear from you. For those of you who don't know, Jodi and I worked together for many, many years on a lengthy project, and at one point she introduced me in a big meeting as her content therapist. I guess this is my revenge. So today we're here to talk about content strategy and what the role of content strategy is in UX teams. And Jodi, what brought you into this? What's your interest in this topic?

JS:                                       Yeah, so I spent those many years working on that project with you, but also a lot of years developing and leading content strategy from the ground up for a large manufacturing business, even as part of product interfaces. Then I switched over to join Wayfair as part of their customer-facing content-strategy team. That team was responsible for the UX content for all five Wayfair brands in all locales.

So although we worked with branding and merchandising teams and content ops and a lot of different groups, we were really primarily part of the experience design team, along with product designers and user researchers. It was a real change from content strategy work when we’re talking about all the levels of structure and meta, and all of the different things that you think of with content strategy, and it was a big departure from working on physical products. So there was a really fast learning curve necessary for that.

SO:                                     So what was the biggest difference? You came out of producing physical products, which by the way is pretty hard to say, and moved over to a digital product company. And from a content strategy point of view, from that lens into the organization, what happened? How was it different?

JS:                                       Yeah. Coming from the physical product side, where I had also built content strategy as a function from the ground up and worked with a product that requires a longer lead time and a longer development time, and then moving over to digital, where things move very, very quickly: I was working with a lot of very intelligent people who had created things in a very fast-moving environment that changes super quickly. It was a lot harder to put the wheels on that vehicle as it's moving forward at a really quick pace than it was in the physical product world I had been used to before. Even though that world had also been accelerating, because of more and more content on the product itself in the form of digital displays, it was nowhere near the speed at which technology companies move.

SO:                                     So you come into a digital product moving at the speed of, I guess, electrons, and into an experience-design team. So I guess foundationally, I keep asking this question, how is UX different from content strategy? And also, please tell me the difference between content strategy and content design.

JS:                                       Yeah, so I think that’s a blurry thing that maybe nobody knows the answer to quite yet, but I can frame it up in how I’ve been seeing things develop specifically at Wayfair, and talking with different content strategists that are at companies like Amazon and Spotify and Shopify and all of those.

So content strategy as a term is really starting to become something that people in the technology world associate with marketing. They're thinking of those marketing teams, because of marketing content strategy and whatnot. And then there's UX content strategy. We're starting to hear "content design" more often for that, and I think that's because those content strategists in UX design teams are often part of experience-design teams. They sit with the product designers and the user researchers and work side by side with them in these user-experience-design teams.

So that’s starting to be called more commonly content design. But I’ve also seen it still called content strategy, as it was Wayfair. And some places it’s content strategy, some places it’s content design, and sometimes I think they’re really being seen interchangeably still. But I’m starting to see a little more definition between the two.

And then I still think of content strategy as an overall discipline, more about that structure: adding the metadata and making smart content that can really be reused in different places, that identifies what it is and how it should be used, and all of those things. So I hate to say that the lines are still really blurry, but I think they are.

And I think the other common thread that I'm still hearing, whether it's at conferences like LavaCon or just talking to peers in other companies or on LinkedIn posts, is that content strategists, content designers, and UX writers all feel like they spend a lot of their time explaining what they do to other people and trying to help people understand why they're there and why it matters. Just because you can write in a particular language doesn't mean that you really understand how to get a user from point A to point B in the most concise way, and in a lot of situations the most delightful way, so that they enjoy the experience and can effectively accomplish the task they're trying to achieve.

SO:                                     And it’s like you said. I mean, in a scenario where you have to say, “Oh, I’m a content strategist, but no, not that content strategist, I’m the other kind. No, not that kind, the other, other kind.” I mean, it’s just-

JS:                                       Right.

SO:                                     And we're not even going to deal today, because we don't have enough time, with the question of content engineering and content operations. We're just going to put that aside and move on. I mean, we're supposed to be content people, and we are super terrible at self-description; we've argued about these terms for years.

JS:                                       Yes. Yes. One of my most enjoyable things in joining that technology team was giving a presentation to the entire 180-person design team about what content strategy is and why it matters to product design, and going through all of the pieces: here's why it matters, here's what we do, here's how we approach content. We're not just wordsmiths who come in at the end and make the words pretty, just like you product designers aren't there just to make the interface pretty. It serves a function and a purpose to help a user achieve their end goal. And I got, surprisingly to me, a lot of feedback on that particular presentation from product designers and user researchers about how they now understood it.

And we also followed that up with content workshops, or content studios, for people who were interested, where we took different product managers, user researchers, and others through the process of how we would think about content, how we would structure the content, why that matters, and what it means in the long run for the content. So that was another effective way to help those teams understand the purpose of content strategy in design teams, and we had a lot of success with that over a longer period of time.

SO:                                     So you’ve mentioned content designers, content strategists, user researchers and product designers, I think. And so what did that look like? So you’d have, I guess, an experience-design team that had those four contributors, and then what?

JS:                                       In a lot of technology companies, the experience-design team works as part of atomic teams, sometimes called the four-in-a-box model. That four-in-a-box model includes user-experience design, which covers the groups or individuals that you talked about. It also includes product owners or product managers, back-end engineers, and front-end software developers.

So the goal is that they’re working in an agile environment on specific features or digital products to, from start to finish, create new or revise existing product features or products together. And the goal is that they’re there from start to finish so that everybody’s working in lockstep and having different review points throughout that development so that what is designed by the user-experience-design team is actually what’s created at the end and tested and then published, released, for the end user, whomever that end user is.

And sometimes, that is super effective. Most of the time, it’s super effective. But there are a few drawbacks that I noticed, being in those technology teams. One of those is, because you have each of these individual atomic teams working on features, it can be really difficult for those teams to connect with other atomic teams.

And so as content strategists, we’re often really concerned with, “Okay, how does somebody get to this feature? Where are they going after they leave this feature?” Because a user might experience multiple features over the course of accomplishing one task: deciding what they want to buy, being inspired, looking through choices, all the way through that end checkout, and then maybe coming back. “Where’s my box? Where is the thing that I ordered?” three weeks later when it still hasn’t shown up, or things like that.

So as content strategists, we want to connect all those different groups together, but the atomic team wants to move fast and quickly, and sometimes that makes them separate from the other groups so that each can move independently and quickly. So there’s positive things to that model, and then there’s some drawbacks too for content strategists, and I’m sure the other teams as well, but especially for content strategists.

SO:                                     You talked about the speed, the velocity at which you’re working in an organization like this. We haven’t talked about whether that’s a positive or a negative, but we’ll just say it was faster. But was there anything that you really missed coming from physical product that was different that wasn’t there, that you’re sitting there thinking, “Ugh, we used to have this and I don’t have it any more”? Was there anything like that? I’m curious about the difference.

JS:                                       So I'll start with the reverse of that, actually. The thing that I did really like about digital products is that you have a lot more opportunity to iterate on ideas and introduce gradual improvements, whereas with a physical product, once you've released it, apart from the actual user interface, it can be really difficult and expensive to make incremental improvements.

So the thing I actually enjoyed was that opportunity to go from a true MVP product that can be released and then incrementally improved. When you put an MVP physical product out there, there's more risk in that, I think, and you can't go back and say, "Hey, I'm going to install this new feature on your car because we think it's cool, so we're going to add a new button," after someone's already purchased it, whereas with digital products you can.

But the negative was not having that hands-on piece through development, where you've seen a 3D-printed model or the mock-ups and compared them hand in hand, one beside the other. You've got A/B testing in digital and other user-testing opportunities where you can mock up the different versions in a digital environment, but to me the physical progression felt very different from the digital progression.

SO:                                     That’s interesting. What about localization? I know that you had a heavy emphasis on localization in your former life, and what did that look like in this digital product world?

JS:                                       Yeah, so localization is my pet favorite thing. I don't know why, but it really is. I look at every product, whether it's physical or digital, through that lens of localization, and I'm constantly asking myself, whether it's something I have anything to do with producing or not, "How would that work if you put it in a different environment?"

And there are some things that work great if you put them in a different environment. My blow dryer is one that doesn't. I've probably burned up more than one hair dryer trying to use them in the wrong environment. But those are the sorts of things where that's always my lens.

So in the technology teams, in the UX design teams, working for a company that did not do a ton of localization beforehand, that was probably… And originally the reason that I joined Wayfair was to work on localization and really help guide that map and create playbooks: how do we do this better?

So there was a lot of education, and one of the biggest positives I took away from Wayfair was a really cool look at what software engineers can do with localization. Having software-localization engineers on those teams was great, because I had always relied on outside vendors for any of that before. Now you have software engineers creating APIs right in the workstream for translating content.

So they're building and testing and playing with machine translation engines. They're building a platform, an interface, that takes whatever we feed it from whichever software within Wayfair, and then feeds it out to the localization providers or a CAT tool or whatever it needs to be. So that was really cool, working with those localization teams.

But I did find the same similarities that I've seen in other industries, where a lot of the localization problems get blamed on the translators; they get blamed on the localization team. And they're really problems of the source content, whether because it wasn't written succinctly and clearly, or because nobody thought about the fact that expansion was necessary and the same words won't fit in the same space when you translate from English to German. So there was lots of education back to those source-content teams and the source software-engineering teams, to learn how to handle localization libraries for metric units and things like that. Translation problems often start at the source, and I found that to be the same whether the product was digital or physical.
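[Editor's note: the expansion problem Jodi mentions, German text running noticeably longer than English, can be sketched as a simple pre-flight check on a UI string catalog. The 35% factor and the minimum floor below are rough rules of thumb assumed for illustration, not an industry standard.]

```python
# Rough rule of thumb (an assumption for illustration, not a standard):
# budget ~35% expansion for English-to-German UI strings, with a floor
# so very short labels still get room to grow.
def expansion_budget(source: str, factor: float = 1.35, floor: int = 8) -> int:
    """Return the character budget a translation of `source` should fit in."""
    return max(floor, round(len(source) * factor))

def over_budget(source: str, translation: str) -> bool:
    """Flag translations likely to overflow a UI element sized for the source."""
    return len(translation) > expansion_budget(source)
```

Run over a string catalog before handoff, a check like this flags labels whose containers were sized for English.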

SO:                                     I just remember that incident, which I think you know about as well, where we were looking at a particular translation and the feedback came back, "Ugh, the Spanish translation is terrible. They used five different terms for brake pedal," or some word or phrase that should be a single term. And they were like, "This is a terrible vendor. They used five different terms for this word, and we need a new vendor, and localization is bad," et cetera. And then we went back and looked at the original English source, and what we discovered was that the English source used eight different terms for brake pedal.

JS:                                       Yes.

SO:                                     And the localization team, or the linguist, had actually gotten it down to five, which was a big improvement.

JS:                                       So much.

SO:                                     And they were still getting yelled at for being bad and not getting it down to one. It is true that it should be one; it’s just that it wasn’t one in the source, which is where the problem originated, and they were getting blamed for not magically inferring that these eight different terms were actually a single thing, which seemed a bit unfair.

JS:                                       Yes, it is unfair, and that's why at both of the big companies where I've led this type of work, a terminology-management program is one of the most critical elements. And as machine translation becomes a greater part of localization (really, it's going to be for everyone), I think 50% of translations for customer-facing products are coming from machine translation at this point. And that's pure machine translation; it doesn't even include machine translation with post-editing. Whether a company makes physical products or digital products doesn't matter: managing terminology well is critical to having well-translated products, as with all the other content.
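[Editor's note: the brake-pedal story is the kind of source-side inconsistency that is easy to catch mechanically. A minimal sketch of such a terminology-consistency check follows; the variant list and sample text are invented for illustration.]

```python
import re
from collections import Counter

# Hypothetical variant list: every phrase in a group should collapse
# to one canonical term in well-managed source content.
TERM_GROUPS = {
    "brake pedal": ["brake pedal", "braking pedal", "foot brake", "brake lever"],
}

def find_term_variants(text, groups=TERM_GROUPS):
    """Report managed terms where more than one variant appears in the source."""
    report = {}
    lowered = text.lower()
    for canonical, variants in groups.items():
        hits = Counter()
        for variant in variants:
            n = len(re.findall(r"\b" + re.escape(variant) + r"\b", lowered))
            if n:
                hits[variant] = n
        if len(hits) > 1:  # more than one variant in use means inconsistent source
            report[canonical] = dict(hits)
    return report
```

Feeding source documents through a check like this before translation flags the terms to consolidate, so the linguists never see eight names for one part.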

SO:                                     And presumably it’s only going to rise.

JS:                                       Exactly, because I'm already hearing a lot more company leaders saying, "We can translate, so we should," whereas in the past, when it was much more expensive and much slower and machine translation wasn't an option, you were very, very specific about which content got translated. Now the expectation is becoming that everything should be translated, so how are we going to do that effectively?

And a lot of localization aligns with accessibility. If the source content is written well and well structured, and the metadata is there, then meeting accessibility requirements serves everyone better, whether you need the specific accessibility adjustments or not. And the localization requirements drive the same thing: simple language, consistent terminology, consistent structure. That makes localization smoother too.

SO:                                     Well, that seems like a perfect place to close this out: with a call to make your source content better and more accessible so that people can use it better, or we can translate it better, and/or it can be more effective out there in the world. So Jodi, thank you for coming in and sharing some of your hard-earned wisdom with us.

JS:                                       Well, thank you for having me. It’s been a pleasure, as always.

SO:                                     Yeah, it’s great to see you. And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Content strategy in UX teams (podcast) appeared first on Scriptorium.

What is a headless CMS? (podcast) https://www.scriptorium.com/2022/12/what-is-a-headless-cms-podcast/ Mon, 05 Dec 2022 13:10:08 +0000 https://www.scriptorium.com/?p=21586 https://www.scriptorium.com/2022/12/what-is-a-headless-cms-podcast/#comments https://www.scriptorium.com/2022/12/what-is-a-headless-cms-podcast/feed/ 1 In episode 133 of The Content Strategy Experts Podcast, Sarah O’Keefe and guest Carrie Hane of Sanity talk about headless CMSs.

If your organization isn't already going down this route, it will probably go there soon, whenever it's time to get a new CMS or change hosts. The switch is usually triggered on the IT side, but like I said, developers love the flexibility and ease of this decoupled tool. Yeah, it's really technology driven, but it's a real opportunity for everyone in an organization to rethink how they're creating and using content.

—Carrie Hane

Related links:

LinkedIn:

Transcript:

Sarah O’Keefe:                 Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about headless CMSs with Carrie Hane. Hi everyone, I’m Sarah O’Keefe. I’m here with Carrie Hane from Sanity. Carrie, welcome.

Carrie Hane:                     Hey, Sarah, good to see you.

SO:                                     You too. Tell us a little bit about your background and what you’re doing these days at Sanity.

CH:                                     Yeah, well, my background. For longer than I would like to admit, I’ve been working in-

SO:                                     I know what you’re talking about.

CH:                                     … web and content strategy and helping organizations use technology to better serve the people they’re serving. Obviously the web exploded in the late nineties, and that’s where I started. And so I’ve been able to learn from really smart people, lots of mistakes, and finally get to a point where I guess I’m considered an expert. Five years ago, I co-authored the book Designing Connected Content, which laid out a framework for developing future-friendly digital products, including websites but not exclusively websites. And then last year I started at Sanity, a headless content platform, as Principal Evangelist. Now my work involves helping people understand structured content, what value it has, how to use it, and how they can make their lives easier by using technology to support their work, no matter who they are.

SO:                                     Tell us a little bit about headless CMSs. What is a headless CMS specifically?

CH:                                     Technically, it’s a content management system that separates where the content is stored, which is the body, from where it’s presented, which is the head. You can store your content in a headless CMS and then send it to any display anywhere. Yes, it’s a website. It could also be an app, it could also be voice assistants. It’s Google; everybody sends their information to Google whether they know it or not. It’s all of those things. It’s a future-friendly way to plan, store, and create your content for whatever comes next.
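Carrie’s definition can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor’s actual API: the content lives once as presentation-neutral data, and each “head” (a website, a voice assistant) renders the same record in its own way.

```python
# A minimal sketch of the headless idea: one content record, many heads.
# All names here are invented for illustration, not a real CMS API.

article = {
    "title": "Getting started",
    "summary": "Install the app and create an account.",
    "steps": ["Download the installer", "Run setup", "Sign in"],
}

def web_head(content: dict) -> str:
    """Render the record as an HTML fragment for a website."""
    items = "".join(f"<li>{s}</li>" for s in content["steps"])
    return f"<h1>{content['title']}</h1><p>{content['summary']}</p><ol>{items}</ol>"

def voice_head(content: dict) -> str:
    """Render the same record as spoken-style text for a voice assistant."""
    steps = ". ".join(f"Step {i}: {s}" for i, s in enumerate(content["steps"], 1))
    return f"{content['title']}. {content['summary']} {steps}."
```

Because neither head owns the content, redesigning the website or adding a new channel means writing one more renderer, not rewriting the content.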

SO:                                     When you differentiate between a headless CMS and I guess a head on CMS, but I suppose generically, we’re talking about web CMS versus headless is kind of how this breaks down. Although I guess technically headless CMSs are a subset of web CMSs or something like that. But what makes the headless CMS special? What’s the main point of differentiation between, we’ll call it traditional web CMS and headless?

CH:                                     Well, a few things. For content creators, it allows us to really embrace the create once, publish everywhere (COPE) framework of working. Whereas in a traditional, monolithic web CMS, we could only ever create content for one website and that website only. We would have to create another instance if we wanted that same content to go to an app or to go somewhere else, so those different heads. It lowers the amount of content that we need to create and maintain, and kind of future-proofs our content because it’s not tied to any specific presentation. Then even if we are only using it for one website, we can reorganize the content because it’s not tied to a certain site map. Or we can redesign the website without having to redo all the content. That is, if your content is good in the first place, which is a wholly separate thing, which we will have to have another podcast about.

In that sense, it kind of lives up to a promise I think a lot of us have been expecting for a long time. On the technical side, it helps technologists create more componentized ecosystems, so that no matter what the latest trend is in front end frameworks or processing or hosts or whatever, I don’t even know all the terms for all of the things that IT needs to be thinking about now, but that tech stack is no longer all tied into one product or one suite. It can now use the best in breed of whatever is needed, so it’s future friendly in that way as well.

SO:                                     Who’s the target audience for this? Who’s adopting headless CMSs, and what are some of the justifications for that? You’ve touched on a few things already, I think.

CH:                                     Yeah. Well, honestly, organizations of all types and sizes are adopting headless CMS. I just saw this week we were talking about it among my colleagues, that the size of the market of headless CMSs is expected to more than double by 2030, which is only seven years away, by the way.

SO:                                     That’s not helpful.

CH:                                     If your organization isn’t already going down this route, it will probably go there soon. Whenever it’s time to get a new CMS or change hosts or, I don’t know what else. There’s a lot of things, it’s usually triggered on the IT side to switch to it. But like I said, the developers love the flexibility and ease of this decoupled tool. Yeah, it’s really technology driven, but it’s a real opportunity for everyone in an organization to rethink how they’re creating and using content.

SO:                                     What does it look like to implement, to make that transition over to a headless CMS, assuming that you’ve started in, I hesitate to say, a traditional web CMS, because that’s ridiculous, but here we are.

CH:                                     It looks different for every organization. I think one of the things that can happen when you make this switch is a complete digital transformation. Organizations who are committed to going through digital transformation, are really completely changing how they’re approaching their digital experience. Other groups are like, “We need a new CMS, a new something yesterday,” so they literally just recreate what they have in whatever tool that they buy, just reconfigure the connections, but all the content goes over in whatever way it was.

The design looks the same, they might even have the same underlying CSS in frameworks, so it really varies from that. Going from exactly what you have now to a new tech stack or completely changing everything. And then obviously lots of things in between. But yeah, it’s an interesting time to be watching all of this because it is accelerating. I remember first hearing about headless, maybe 10 years ago, and now I don’t know how you can work in the content management world and not hear about it and not be thinking about it.

SO:                                     I know a lot of the people listening to this podcast, and certainly my side of the world, is sitting largely in the XML, DITA, and technical and product content world. What you’re describing, to a certain extent, when you talk about multichannel publishing and separating content and formatting, kind of sounds like XML-based publishing and kind of sounds like DITA specific… Well, DITA’s obviously an implementation of that. I guess then the question I have to ask is, is a DITA component content management system actually a headless CMS?

CH:                                     I suppose technically, yes, because it’s a body that’s separate from the head. I don’t really have any experience with DITA CCMSs, so I don’t know more than that. What I associate them with is technical communications, which is, in my mind, only one use case for any of these systems. I don’t know, have you seen other use cases? What are your thoughts and what are you seeing?

SO:                                     Well, there are other use cases, and we have some customers that are using XML structured content, and specifically DITA, outside the core tech pubs, techcomm world. But ultimately, when I look at these two, I would say the DITA XML world is optimized for a certain kind of content type. And what you’re describing with headless is a lot of the same principles, but it’s not specifically optimized or built around a framework that is designed for technical content specifically. It’s almost like the DITA CCMS world is the specialized… Sorry, people, that was not really intended to be a terrible pun. But it’s sort of the solution that’s intended for a specific industry or a specific use case, we’ll say. Whereas the headless approach, or when we talk about headless CMSs, we’re talking about something that is intended as more of a general-purpose solution. I guess it’s a subset. Is that fair?

CH:                                     Yeah, I think-

SO:                                     Sorry, headless is the superset and the DITA CCMS would be the subset. And I guess the other important note is that although it’s not required, DITA and XML are based on a sort of a tree view of a document, similar to HTML. And headless CMSs, as a general rule, are built on knowledge graphs, which are less of a tree and more of a multidimensional thing that’s hard to conceptualize.

CH:                                     Yeah, a graph.
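The tree-versus-graph distinction they’re describing can be sketched roughly like this (the node names and relationships below are made up for the example):

```python
# A tree, like an XML or HTML document: every node has exactly one parent.
tree = {
    "guide": {
        "installing": {"steps": ["download", "run setup"]},
        "troubleshooting": {"steps": ["check logs"]},
    }
}

# A knowledge graph: any node can relate to any other, so one piece of
# content can participate in many relationships at once.
edges = [
    ("installing", "requires", "system-requirements"),
    ("troubleshooting", "refers-to", "installing"),
    ("system-requirements", "applies-to", "troubleshooting"),
]

def related(node: str) -> set:
    """All nodes one hop away from `node`, in either direction."""
    out = set()
    for src, _rel, dst in edges:
        if src == node:
            out.add(dst)
        if dst == node:
            out.add(src)
    return out
```

In the tree, "installing" can only live under "guide"; in the graph, it is simultaneously a prerequisite and a cross-reference target, which is the multidimensional quality that makes graphs harder to picture than trees.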

SO:                                     Yeah. The knowledge graph. And the really sad thing about knowledge graphs is that I saw those for the first time about 20 or 25 years ago when we had things like information models and entity relationship diagrams in some of the software that I was supporting. What do you see as some of the biggest challenges, as we talk about this concept of moving websites or web content or content outside of tech com might be the fairest way of saying it, into this headless approach. What are the biggest challenges that you see there?

CH:                                     I think the biggest challenge is the content creation world. The content strategy world is not ready for that. And not because people don’t get it, it’s because they don’t even know what they don’t know. Most organizations are not mature enough in their content operations to really take advantage of a headless CMS. And so the danger becomes that IT moves them there because they need it for their tech ecosystem, and then they’re given the keys. I’ve heard some people say they’re given the keys to a Lamborghini and they don’t even have their driver’s license yet.

I hear a lot of people say, “I don’t like headless. I don’t like that,” because they’re disoriented. It’s not what they’re used to. It’s set up completely differently. And so then they blame this technology for a problem that’s not the technology’s fault. And then what will happen? Will we go backwards? Probably not. But it’s going to take a lot for the whole… I think it’s even bigger than a market, the whole world really. We’ve all moved, or are moving, to digital-first publishing, and what isn’t digital these days? And we’re still in this old mindset of print analogies, print whatever, and haven’t thought of new ways of approaching how we can create and publish information. And I think headless is a big opportunity, and it’s potentially a jumping-off point for a new era in publishing, but we’re not going there fast.

SO:                                     What you’re describing sounds exactly like the pain we went through in trying to move people from a word processor, style based mindset, to a structured content, separation of content and formatting mindset. And I’m not saying we’re done and it’s been super painful and the change management issues have been extensive. If it’s going to be pain and there’s going to be all this change and change resistance and all the rest of it, what are some of the opportunities? What makes it a worthwhile change?

CH:                                     It opens the door to doing more fun stuff because it can reduce the amount of content you’re creating and maintaining. People who create content can get out of the business of constantly reacting and putting out fires and move to being more proactive and creative, thinking about new ways to reach their audience and connect with their audience, instead of constantly trying to keep up. People who work within or with organizations are so far behind actual people, the consumers out there who want new things and new ways of interacting with things. And I think we’re on the cusp of that. I’m not saying headless is the end goal, but I think it’s a good jumping-off point for trying out new things and getting our houses in order enough so that we can then move forward, instead of being on a treadmill, reaching for a goal that just keeps staying the same distance away.

SO:                                     That seems like a good place to leave it. We are going to attempt to get off the treadmill and onto the, I don’t know, the ski slope, maybe a little bunny slope.

CH:                                     The trail.

SO:                                     The trail. That metaphor did not work at all, but we will hop off the treadmill onto an undisclosed other means of transportation that is actually going to advance us forward. And Carrie, thank you for coming in and talking about this, because I think this is, at this point, a topic that’s not well understood and we need more people out there to explain it and explain where this is going.

CH:                                     Yeah. Well thanks for having me. It’s always fun to chat.

SO:                                     Yeah. And we’ll do some more of that in 2023. That’s a truly terrifying thought. And with that, thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post What is a headless CMS? (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:25
Misconceptions about structured content (podcast) https://www.scriptorium.com/2022/11/misconceptions-about-structured-content-podcast/ Mon, 21 Nov 2022 13:20:04 +0000 https://www.scriptorium.com/?p=21567 https://www.scriptorium.com/2022/11/misconceptions-about-structured-content-podcast/#respond https://www.scriptorium.com/2022/11/misconceptions-about-structured-content-podcast/feed/ 0 In episode 132 of The Content Strategy Experts Podcast, Alan Pringle and guest Jo Lam of Paligo dispel misconceptions and myths about structured content.

“Science and history show us that structured content, structured authoring, is actually very intuitive. And if I may rewind back to, say, the paleolithic era, where we first started using a lot of symbols, and then eventually converting them into what we now know as letters: this is understanding patterns on an extremely micro level, and that’s how we actually learn to read and write.”

—Jo Lam


Related links:

LinkedIn:

Transcript:

Alan Pringle:                     Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we discuss misconceptions about structured content with special guest Jo Lam of Paligo.

Hi everyone, I’m Alan Pringle, and we have a guest for this episode. It’s Jo Lam of Paligo. So Jo, introduce yourself.

Jo Lam:                              Hello, my name is Jo. I work at Paligo. That’s intentional, for all of the rhyme. I’m a solutions engineer, and I generally work as someone who helps figure out the best approach and the solutions, if you will, for people moving into a structured authoring environment. And as suppose I should tell you what Paligo is…

AP:                                     Yes.

JL:                                       Since I just mentioned that’s where I work. So we at Paligo, we are a CCMS, a content… Sorry, whoops. A component content management system, where we use the DocBook standard as our base. And really what we strive to do is provide the perfect entry point for people moving from unstructured to structured content by being as user friendly as possible and making the entire process very intuitive for them.

AP:                                     Well, that is actually perfect for what you and I are about to talk about, because we’re going to talk about the misconceptions people have about structured content. And having worked with structured content for decades myself, I can guarantee you there’s lots of apprehension and misconceptions about it, and I am sure you’ve run up against them as well. So I’m going to throw the first misconception out there about structured content. And by the way, I will post in the show notes a link to a white paper about structured content and structured authoring for those who want a little more background.

So let’s go and talk about the first misconception, that structure is hard. What do you have to say about that, Jo?

JL:                                       Well, science and history shows us that structured content, structured authoring, is actually very intuitive. And if I may rewind back to, say, the paleolithic era.

AP:                                     Okay.

JL:                                       Where we first started using a lot of symbols, and then eventually converting them into what we now know as letters. What this is, actually, is understanding patterns on an extremely micro level, and that’s how we actually learn to read and write: through systematic training of our brains. Because our brains didn’t actually evolve to read and write naturally. Language and speaking it, yes, but reading and writing is not natural for our brains. So through this whole process of learning how to read and write, we have actually employed the basics of structured authoring.

So to give you an example, if I have, on my desk, maybe a very far distance from you, on the other side of the room, two sheets of paper. One is a resume and one is a cover letter, and all you can see from a very far distance is the blocks of the ink, but you don’t know what letters they are. But you can already tell which one is the cover letter and which one’s the resume. And that’s because there’s structured authoring employed in there and you naturally know those structures are associated with those particular types of documents.

AP:                                     Yeah, that makes a great deal of sense. It’s like intuitive, almost built in for us. That makes a lot of sense. And also too, beyond intuitive nature, any time someone is doing any kind of process change, and that includes moving from unstructured content to structured content, if you do not put in basic change management practices, of course it’s going to be hard. You can’t just say, I’m going to do structured authoring, or this company, this department’s going to do structured authoring, and then not consider all of the business requirements that drive that decision. And then buying the tools such as Paligo for that, training people on how to use them and keeping those lines of communication open.

So merely just saying, I’m going to do structure and not thinking about what that entails, yeah, structure will be hard, as any process change would be. So yeah, I’m trumpeting the change management mantra again, and I’m sure people listening are tired of hearing about it if they listen to any other episodes, but it’s a huge component here. It is not just about structured content, it is not just about the tools, it is about change management and people, as much as all those things.

JL:                                       100%.

AP:                                     Yep. So here’s another misconception. I’m going to have to write code, I’m going to have to type pointy brackets and slashes. Don’t make me do this.

JL:                                       There’s a lot of fear behind that there.

AP:                                     Yeah.

JL:                                       Well nowadays there are a lot of different interfaces that hide all that. But let’s say you do have to write with the pointy brackets. I really like that, the pointy brackets. Let’s say you do have to use the XML tags. Well, I would like to think of it as using identification labels for what we are writing. I mean, we tag everything as it is. We all use social media, we tag exactly what that thing is about. And in a lot of senses, it’s kind of the same thing. If I have a list and I tag it, hey, this is a list, I’ll tag it within those brackets, now we don’t have to think about how that looks like. So you’re not thinking about, oh, well it is indented plus a dot in the front of it, and you don’t have to think about all of those formatting things, you know immediately it’s a list.

And typically in structured authoring, you have all the look and feel of the document handled somewhere else.

AP:                                     Exactly.

JL:                                       So you really don’t have to do anything beyond, say, it is identified as a list or identified as a table, and I know exactly what that’s going to be when it gets pushed at the other end.

AP:                                     Yeah, I think that’s an important distinction to make. When you’re talking about structured content, structured authoring, you basically have a predefined organization hierarchy for your content, and then you tag things to follow that hierarchy, that organization. And what you’re talking about is, there is no thought about formatting. The structure content itself is, shall we say, formatting agnostic. What it cares more about, like you said, is this is a list item and an unordered list, or this is a paragraph and a note. When you build in that kind of intelligence with the tagging, then wherever you publish to, and these days we all know print pdf, everything, eBooks, E-learning, websites, I mean, you name it, it is anything.

And things can look slightly different, even though you’ve got an unordered list in all of these things, they may not be formatted exactly the same. That is not the concern of the author. All that author needs to do is just be sure and say, this is an ordered list, this is an unordered list. And then the processes later take on all that formatting that you’re talking about.
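The “tag what it is, not how it looks” idea can be sketched like this. The element names below are invented for the example, not actual DITA or DocBook markup; the point is that one tagged source feeds multiple renderers:

```python
import xml.etree.ElementTree as ET

# Structured source: the author only says *what* each piece is.
source = """
<topic>
  <title>Safety check</title>
  <steps>
    <step>Unplug the device</step>
    <step>Remove the cover</step>
  </steps>
</topic>
"""

def to_html(xml_text: str) -> str:
    """One downstream process decides how a list of steps looks on the web."""
    root = ET.fromstring(xml_text)
    items = "".join(f"<li>{s.text}</li>" for s in root.iter("step"))
    return f"<h1>{root.findtext('title')}</h1><ol>{items}</ol>"

def to_plain(xml_text: str) -> str:
    """Another process formats the same source for plain-text output."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("title").upper()]
    lines += [f"{i}. {s.text}" for i, s in enumerate(root.iter("step"), 1)]
    return "\n".join(lines)
```

The author never touches numbering, indentation, or fonts; each output pipeline applies its own formatting to the same tagged steps.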

So let’s go to the next misconception. There’s a lot of content here and we’re going to have to convert it to structure. I don’t have time to do this.

JL:                                       Yeah, so the time thing is a huge concern for large organizations, especially if you have a massive amount of documentation, maybe spanning back the last 50 or so years. That’s actually a nightmare for any technical writer. That’s horrifying. I don’t want to dream about that. So the good news here is that most of the tools now have a lot of import capability already built in, and if not, there are a lot of import tools outside of CCMSs that can help you with that and integrate with a CCMS.

But generally you’re going to be hard-pressed to find one without one built in already. And the great thing is, generally, whatever you’re working in is likely something that spawned out of the original SGML. So SGML went on to evolve into XML. Well, not evolved, but we derived it from SGML.

AP:                                     Yes.

JL:                                       And then from that we derived a lot of other things such as, well, everybody knows HTML, and a lot of other [inaudible 00:08:47] formats are derived from that. Which means the conversion is actually relatively simple, and you don’t have to do it yourself, because so many tools out there already know that and will bring it in for you.

AP:                                     Exactly. And your tool’s one of them that will do that. And even if the tool doesn’t, there are third party vendors, that is all they do. They write scripts and automate that stuff, and it means less dirty work, really, for the authors.

And one thing that we have learned at Scriptorium, and we really advise people not to do this: don’t let conversion be your content creators’ first exposure to XML, or structure, or whatever, because they may end up resenting it because of the amount of work that they have to do upfront, converting. They should be putting their focus on creating content as efficiently as possible. So anything you can do with an import tool like you mentioned, or with a third-party vendor who can automate that for you, I highly recommend it. It is money well spent, and it will keep your content creators far happier than they would be otherwise.
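As a rough sketch of what such an automated conversion does (hypothetical target tags, not any particular vendor’s import tool): because HTML and XML share SGML ancestry, mapping presentational tags onto structural ones is largely mechanical.

```python
from html.parser import HTMLParser

# Toy converter: turns an HTML bulleted list into structural <step> tags.
# The target element names are invented for this example.
class ListToSteps(HTMLParser):
    def __init__(self):
        super().__init__()
        self.steps = []
        self._in_li = False

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_li = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_li = False

    def handle_data(self, data):
        if self._in_li and data.strip():
            self.steps.append(data.strip())

def convert(html: str) -> str:
    parser = ListToSteps()
    parser.feed(html)
    inner = "".join(f"<step>{s}</step>" for s in parser.steps)
    return f"<steps>{inner}</steps>"

legacy = "<ul><li>Unpack the box</li><li>Charge the battery</li></ul>"
print(convert(legacy))
```

Real legacy content is messier than this, which is exactly why letting a script or a vendor absorb that work, rather than the authors, pays off.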

JL:                                       You know what, that exposure to conversion there, I thought my example was a nightmare, that is a true nightmare, right there.

AP:                                     It is.

JL:                                       I would not wish that upon anybody.

AP:                                     No. We’re on the same wavelength there. It is not a good thing to do. And it is something, if you’re going to move to structure, be sure to budget time, money to do this, but use tech to do it. Don’t make people manually do it if you can avoid it.

JL:                                       Absolutely.

AP:                                     Yeah. So let’s go to number four of our misconceptions. And that is structure is just for technical communication and other technical content.

JL:                                       That it is very much not. I mean, earlier I talked about the resume versus a cover letter. How about we think about what we usually use on a regular basis. And I love food, I am actually very hungry right now. And so I will think about recipes.

AP:                                     Sure.

JL:                                       And we all know what recipes look like.

AP:                                     Yeah.

JL:                                       There’s always going to be, near the top of that recipe, an ingredients list followed by procedures. Nowadays, we’ll also usually see yield times or how many servings, and then you can toggle that back and forth. Now every single part of that is identified. So the procedure, well that’s a procedural element. And the ingredients list, well that’s a list element. And that’s all actually structured authoring right there. And that tells us the difference between, well this will tell me how to make a dish versus, oh, that page with the five paragraphs that’s telling me concepts I should understand about the culinary world, or something like that.

AP:                                     Yeah.

JL:                                       So those distinctions between that structure is in our everyday stuff, even your social media post. We know it’s a social media post, it’s only two lines long. We know that’s an update, so that’s like a reference topic, per se. And we know what we get from that is just a, oh, you should know this, not, oh, I have to do something about that. Right? So very different kinds of information in very different structures every single day, in every aspect of our lives.

AP:                                     Sure. And I know from working with many clients that people are now applying structure to marketing content. They are applying it to learning and training content. It is not just about technical information anymore, especially considering we’re seeing these trends where these lines between different kinds of content are blurring. So it would make sense that structure would start to kind of seep out and work for all different kinds of content. So let’s talk about our last misconception. Readers don’t care how we author this content.

JL:                                       I think all the people working in tech support, with customers coming to them after not understanding the documentation, would disagree.

AP:                                     Yes, they do. And often.

JL:                                       Very often, yeah. Readers do care, even if they don’t know it, they don’t know it’s structured authoring. But again, it’s all about intuition. If someone wants to know how do I do something, they’re going to look automatically for numbered steps, procedures. And if you give them a paragraph, yeah, they’re going to be pretty angry.

Or even just on a more casual level, let’s go back to my resume versus cover letter. In the resume, you can derive what the skills are, and you can look exactly for that, because we have these filters in our brains; the patterned thinking actually helps us with applying these filters. So you’re using contrast, repetition, alignment, and proximity, these principles, to really figure out what’s on a page before even seeing the very first letter. And that’s going to tell you, oh, I want skills, I’m going to look on the resume for a list. And you’re going to ingest that differently than saying, I want to learn more about this person’s personality, and therefore I’m looking at the cover letter for the biggest paragraph, and you switch gears in your brain to absorb it very differently. So if you imagine writing that paragraph in point form, how would you process that? How would you prepare your brain to actually start reading it? You’re going to just be very confused and get frustrated and start all over again.

AP:                                     Yep, yep. Exactly. I think this is a great place to end this conversation. I think you’ve given some really good examples and kind of dispelled these myths about structured content. So I want to thank you for this, this has been a great conversation.

JL:                                       It’s been fantastic and a lot of fun. Thank you very much, Alan.

AP:                                     Absolutely. And we’ll include a link to Paligo in the show notes. Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Misconceptions about structured content (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:01
Jobs in techcomm (podcast) https://www.scriptorium.com/2022/11/jobs-in-techcomm-podcast/ Mon, 07 Nov 2022 13:02:28 +0000 https://www.scriptorium.com/?p=21563 https://www.scriptorium.com/2022/11/jobs-in-techcomm-podcast/#comments https://www.scriptorium.com/2022/11/jobs-in-techcomm-podcast/feed/ 2 In episode 131 of The Content Strategy Experts podcast, Sarah O’Keefe and guest Keith Schengili-Roberts discuss the techcomm job market.

Most of the jobs I see are industry experience … is helpful. Medical device is very helpful. PS, we’d love it if you had these tools. It’s common not to require the tools. It’s common to require domain knowledge and then say tools are a nice-to-have or a strongly preferred, but not an absolute requirement.

—Keith Schengili-Roberts

Related links:

Twitter handles:

Transcript:

Sarah O’Keefe:                 Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’ll talk about challenges in the technical communication marketplace. Hi everyone, I’m Sarah O’Keefe, and I’m delighted to welcome Keith Schengili-Roberts to our podcast. Hey Keith.

Keith Schengili-Roberts:       Hello there. Hello everyone.

SO:                                     How are you doing over there?

KSR:                                   I’m doing fine. Doing great.

SO:                                     Awesome. You and I, of course, go way back, very, very far back. But for those people listening who aren’t familiar, tell us a little bit about yourself, and who you are, and what you’re up to these days.

KSR:                                   Back when we both first started in the industry, we were writing on clay tablets and hacking away with chisels into stone. But much more recently, what I do, and have been doing now for quite a number of years, is independent research on a website called DITAwriter.com. Most of my work-related activity that doesn’t have to do with my full-time job is actually on the DITAwriter.com website.

I started doing this in part because I was growing frustrated at various claims that I was seeing made by others, saying that such and such is happening, this is a trend, and such and such is happening here, it’s a different trend. Sometimes these things were contradictory, and sometimes people would say that … One of the first things that made me think about starting to do actual research in this area was the claim, thrown about very widely at the very beginning of DITA being disseminated throughout technical writing culture, that it was the fastest-growing XML standard out there.

While I didn’t necessarily disagree with that, it was kind of like, yeah, but how do we know that? So I wanted to go and do some digging to see if I could actually find some evidence to ascertain whether that was in fact the case or not. Arguably, the answer to that is yes. But that’s from long ago. Now there’s a new and interesting twist on things. But essentially, my methodology is basically going to indeed.com, which is the largest job aggregator website in the US and has been for well over a decade now.

What I have done since August of 2011 is that once a month I go in and do a search looking for technical writing jobs and other keywords that go with that, then do fairly basic statistical analyses on the results that come out of it. Well, quite recently there was the 10th anniversary of this, and I’ve started writing some blog pieces talking about some definite job posting trends relating to technical writing, and structured and unstructured content, within the industry.
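Keith’s methodology, monthly keyword searches against a job aggregator followed by basic percentage calculations, can be sketched roughly like this. Everything below (the counts, the month labels, the `share` helper) is invented for illustration and is not his actual dataset:

```python
# Hypothetical monthly tallies: total technical-writer postings and how
# many mention each keyword. These figures are invented for illustration.
monthly = {
    "2022-05": {"total": 4400, "DITA": 110, "XML": 220, "Markdown": 130},
    "2022-10": {"total": 3300, "DITA": 66, "XML": 180, "Markdown": 120},
}

def share(month: str, term: str) -> float:
    """Percentage of that month's postings mentioning the keyword."""
    counts = monthly[month]
    return round(100 * counts[term] / counts["total"], 2)

for term in ("DITA", "XML", "Markdown"):
    print(f"{term}: {share('2022-05', term)}% -> {share('2022-10', term)}%")
```

The point of tracking percentages rather than raw counts is that a term’s share can hold steady, or even rise, while absolute postings fall with the overall market.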

SO:                                     So we’ll include a link to DITA Writer in the show notes, or I guess I should say it should be there if you’re listening to this. So tell us about the marketplace for technical writers. What’s going on? Is it a good market, a bad market, a buyer’s market, a writer’s market? A writer/buyer, that sounds bad.

KSR:                                   I’d call it a writer’s market? Yes. Let’s just call it a writer’s market. Well, let’s say that the good times have come and gone rather quickly. As of May of this year, there were over 4,400 technical writing job posts, and that is a 10-year peak. In fact, for the most part, technical writing jobs within the States have kind of hovered somewhere between the 2,000 and 3,000 mark. That’s talking about all of the jobs across the States that have the words technical writer in either the title or the content of the actual job posting.

It shouldn’t be a surprise to anybody that the market took a significant hit back in the beginning days of COVID, where essentially there were not only fewer hires, but things seemed to be plummeting. But then, I don’t want to say post-COVID, because in many ways I’m not sure we’re really out of the COVID era yet, so to speak. But around February of 2021, things began to pick up significantly. As of, I’m just going to check this … yeah, that’s what I was quoting earlier. So as of May of this year, we had over 4,400 job postings looking for technical writers, which is great.

I believe that a lot of that has to do with the fact that, essentially, the economic pressures meant that companies were starting to produce things again that actually had a requirement for people to document exactly what was going on. Now since then, and keep in mind that this is six months ago, here we are; I did the latest stats for October. So again, May of this year, 4,400 technical writer job posts. Right now, 3,300 job posts. So that’s a drop of 1,000 in a rather precipitously short amount of time. I’m not sure if that’s the biggest drop I’ve ever seen in the past 10 years.

I’d have to have a closer look at it, but it’s going to be pretty darn close to that. So again, what does this say in terms of … This doesn’t exactly bode well for acquiring jobs at this very moment. Having said that, it’s funny: at this point we’re still higher than at most points within the past 10 years. So there is a chance that if the economy does do a turnaround, we may in fact find another climb in the stats. But to be honest, with all the doom and gloom that’s out there at the moment, I highly doubt that’s the case. Whereas back in May I was basically saying, “Hey, this is a great time to put yourself on the market as a technical writer,” I really can’t say that anymore.

SO:                                     Interesting. Although it sounds as though the 30 … I’m going to take the glass-half-full view and say that 4,400 is crazy, but 3,300 sounds as though it’s still a good number. It’s not a terrible number.

KSR:                                   Yes. If you consider that the average has been around 2,000, again, that’s still fairly high. But the trend is most definitely going down, and quickly. That’s the thing that I’m remarking on, really.

SO:                                     So you’re looking at this rollercoaster and wondering if it’s hit the bottom and it’s going to start back up, or if it’s just, we’re all going to … Nevermind. We won’t make a rollercoaster analogy.

KSR:                                   Keep screaming while you can sort of thing. Because I’m sure we haven’t hit the bottom yet.

SO:                                     So within these limited job postings, what are you seeing in there? I mean, you’ve been looking at DITA and DITA Writing trends, so what kinds of structured authoring trends are in there or not, as the case may be?

KSR:                                   Fair enough. Yeah, well, so I’ve been keeping an eye on all of the major XML-based standards. So I keep track of how many of these job postings, again, specifically for technical writers, not programmers or anything else, mention things like, say, S1000D, SGML, DITA, and of course also just plain old XML. There does seem to be one fundamental change that’s occurred over the past 10 years. It appears as though jobs that are looking for people with experience with XML have in fact gone up. What’s more, if you add up all of the job posts looking for just DITA, SGML, S1000D, and DocBook, the number of job posts looking for just XML experience is still significantly higher than that total.

Now, there are some trends; the ups and downs of the overall job market sort of define the overall numbers of these things. But it’s interesting to see that XML as a core topic comes up in a lot more technical writing jobs than any of the individual standards in and of themselves. Now there’s another story here though, and, as someone with a domain name like DITA Writer, this is a wee bit concerning. This has nothing to do, it seems, with the recent dip in the number of technical writer job postings in general. It appears as though DITA is waning in terms of, again, overall job postings.

So at its peak, to be fair, DITA was mentioned in something like four and a half percent of all job postings. They tended to be the same people, the same crowd. So more often than not, it was the larger companies that have a real advantage by using DITA in terms of cost savings. If you do any sort of translation to multiple languages, it’s hard to beat DITA, really, to be honest. But what I’m seeing now is that it’s been getting to be a little bit more in the doldrums. It’s actually at roughly half that, at about 2% of all job postings across the States.

Now, having said that, that’s still a substantial enough number. But it’s dropped, and I’m sort of racking my brains as to what has changed to make that happen. I have some ideas, but again, I can’t say that I’m 100% sure that’s the case. But here, I’ll present my ideas to you, and please, I would love it very much if you could just let me know what you think.

SO:                                     I’ll shoot it down. You’ll be fine.

KSR:                                   Okay. So as I said earlier, my experience is that DITA tends to be used by the largest companies, companies that typically have 5,000 or 10,000 people or more working with it. The reason for that is that there are essentially economies of scale when working with DITA. I mentioned the localization aspect of things before. But if you have a lot of products and you can share content between those products in the technical manuals or engineering manuals, again, the efficiencies that you can get from using structured authoring like DITA, looking at this purely from a business perspective rather than from a writer’s perspective, make a whole lot of sense.

But having said that, another interesting trend that I notice that goes along with the drop in DITA is, to me, the rise of FrameMaker. Not too long ago, I think a couple of years ago, I was seeing that the trend for FrameMaker was steadily going down. I’m pretty sure I did a post saying that, I mean, not that FrameMaker was dead, but that it wasn’t very healthy at this point. My thinking at the time was that, well, maybe DITA is coming to the fore and structured authoring tools, such as, say, Oxygen, are coming more to the fore.

But the recent resurgence in FrameMaker and the decline in DITA makes me think, and again, here’s my theory, that what we’re seeing is more of a push for technical writing jobs at the moment within smaller companies, for which standalone FrameMaker licenses running on the desktop make sense, as opposed to, say, working within Adobe Experience Manager. That’s a whole other subject; I keep track of that as well. I suspect that what’s going on is that we’re seeing a lot of hires of individual writers within smaller firms, rather than larger firms who are looking for people with experience with DITA. So that’s my thinking. Any thoughts on that, Sarah?

SO:                                     Well, of course, you have the data and I don’t, so unfair advantage. But a couple of thoughts. One is that I would be interested, I don’t know, I don’t think this is going to be in your data. But it would be interesting to speculate around the question of whether the turnover is actually higher in, let’s say, FrameMaker based jobs than DITA based jobs, which would then account for more FrameMaker jobs. The obvious anecdotal speculation would be that the FrameMaker people are retiring and need to be replaced.

KSR:                                   Oh, that’s an angle I hadn’t thought of. Yes.

SO:                                     Well, and again, I have zero evidence, so you make of that whatever you want. But it might be interesting to go back and look at the raw numbers instead of percentages. So 10 years ago it was 100 DITA jobs a month, and now it’s still 100 DITA jobs a month, but there are more jobs overall. There’s FrameMaker jobs, and Markdown jobs, and this, that, and the other thing. I’m also very curious as to the percentage of jobs that don’t specify tools, that just say we need someone who can write these kinds of things.

In most of the jobs I see, industry experience is helpful. Medical device is very helpful. PS, we’d love it if you had these tools, but usually … Not usually. It’s common not to require the tools. It’s common to require domain knowledge and then say tools are a nice-to-have or strongly preferred, but not an absolute requirement. But yeah, I mean obviously you’ve got this data that shows things are going up, and down, and sideways. But in terms of FrameMaker specifically, I do wonder if that’s a case of these groups having been chugging along quite happily and now losing people to extra attrition.

KSR:                                   Yeah, I can also say, and again I’m talking from personal experience here, that what I have seen is that during COVID there were some people, either that I worked with or that I knew in other companies, who were essentially planning to retire when COVID hit. Then they were asked, essentially, “Look, could you just stay on a little bit longer until we get through this?” As you say, maybe that’s part of what’s going on there.

SO:                                     Maybe, yeah, I mean-

KSR:                                   Though, I don’t know why that would necessarily hit FrameMaker job posting specifically, because you think that would be across the board, but yeah. It’s interesting.

SO:                                     Yeah, I don’t know, but I’d be interested to find out more. What about some of the other tools that are out there? I mean, of course your focus is on DITA specifically. But if we’re going to talk about not-DITA, then Markdown, Flare, Paligo: what do those look like?

KSR:                                   Yeah. Now the interesting thing, and this sort of echoes what you were saying earlier, is that … Again, I’ve been doing this not for the full 10 years, but for probably something like half of it. I’ve been throwing in the names of major tools. So that would include things like, say, Oxygen, or WebWorks, or back in the day, Dreamweaver, and so forth. More specifically looking at things like Astoria, SDL, which of course is now …

SO:                                     [inaudible 00:17:16].

KSR:                                   Oh actually, what is it? Sorry, what is SDL?

SO:                                     [inaudible 00:17:21]. Yep.

KSR:                                   I should know this offhand, but I don’t. Anyway. Now also very recently, things like Paligo, and [inaudible 00:17:29], and Vasont. But the interesting thing is that, much as you were saying earlier, job postings very rarely talk about or specifically mention tools. So those numbers are typically ones you can count on one, occasionally two, hands, and very rarely higher.

What does come up much more often is job postings that are looking for experience with, and I’m going to say a CMS as opposed to a CCMS. The numerical difference between the two is significant. So some sort of CCMS mean … sorry, a CCMS meaning some sort of a structured authoring tool. But again, the numbers are not huge. So I don’t want to claim that a significant percentage of all tech writing jobs require or are asking for CCMS experience. Again, it’s in the low single-digit percentages, but it is there.

But when it comes to the other vendors, yeah, almost … Not almost never, but it rarely comes up. However, as you were saying earlier, if you do have experience with particular tools, please mention them in your CV; as a former and still current hiring manager where I work full-time, those things matter. But from a job posting perspective, we have to cast the net wider, so to speak. So on the whole, you don’t see a whole lot of those things. Of interest, just very recently I am seeing Paligo coming up in the CCMS mentions. So clearly they’re beginning to make a dent, much more so than long-established players in the market, for example. So I just find that interesting.

Of course, Sarah and I, we both know that Paligo works with its … Oh-

SO:                                     DocBook.

KSR:                                   DocBook. The thing is that if I look at the number of times DocBook is mentioned, I pretty famously declared that DocBook is dead, from a hiring perspective only, because nobody is looking for people with that experience. Yet here comes Paligo, and they have done such a good job with the interface for their particular CCMS that I suspect many people using it don’t think about what’s underneath. I’m sure it almost never gets to the HR people who are cobbling together the job postings to ask whether you actually have experience with DocBook, because in a way it’s more the CCMS that they’re interested in in this particular case than the standard.

SO:                                     All right. Well, that’s a really interesting point, right? Because you’ve been looking at, are the job posts asking for DITA? So to a certain extent you would then say, “Okay, well what about DocBook?” Except no, because it’s been, I don’t think rebranded is exactly fair. But let’s say subsumed in the sense that that is what underlies Paligo, but it’s not really a topic of conversation. Are there any other conclusions that … I mean we always want to know what the CCMS market share is. That’s like the first question anybody asks. Are there any inferences you can draw from what you’re seeing about … So it sounds like Paligo is doing well. Is there anybody else out there that’s doing well or not so well from a posting point of view?

KSR:                                   No, not really in terms … Yeah, no, the numbers are just not high enough to really come up with any sort of strong conclusion as to what the marketplace is when it comes to job postings. Now for that sort of thing, I have done some research, but not recently. But I have done research using LinkedIn information, and people saying what sort of CCMS they’re using, and compiled the list of companies that are using DITA, which is also on the DITA Writer website based on that information.

But I’ll admit, I’ll be the first to admit, that that information is beginning to get a little bit out of date, because I was paid to do that type of research before, and now that I have a full-time gig, that’s something that takes a lot of time that I simply don’t have anymore. So I have no additional insights into the CCMS market at this point in time, at least not from this data.

SO:                                     So what about Markdown?

KSR:                                   Ah, now that’s also interesting. Now I think one of the ironies that comes up is people saying, “Well, Markdown is the new shiny thing,” so to speak, which is funny because it’s actually been around longer than DITA. Of course, it’s also an unstructured format. So there’s the continual dynamic of structured content: it requires more upfront effort to make it work, but then the payoff is being able to do some really interesting things with it. There are all sorts of interesting buzzwords that come up around structured content that you simply don’t get with unstructured content, such as Markdown.

Having said that, there has been a real interest, again in technical writer job posts, in Markdown experience. Quite recently, in fact, I think within about the past year or so, the requests for people having Markdown experience now exceed those for DITA. I think at least in part that has to do with the fact that the DITA numbers are, at least at present, going down. So yes, in the graph there is that moment where they intersect, and then Markdown continues to grow. Now, I say that given that the entirety of the job market has fallen rather precipitously in the past couple of months. The Markdown numbers as absolute numbers go down, but the percentage certainly continues to increase.

SO:                                     So I guess, in kind of wrapping this up, the question that probably everybody wants answered is: if I’m out there in the job market, or I’m trying to get into technical writing, if I’m looking for a tech writing job … You mentioned making sure to include any CCMS-type experience on your resume, because you just never know. But what would you advise people if and when they are looking? What’s that thing where you would say, “Look, if you can prove that you have this, your job search will go well”? Are there a couple of things that people can and should focus on?

KSR:                                   I would say that one of the best tools out there is definitely going to be LinkedIn. More specifically, I mentioned earlier the list of companies that are using DITA; that’s on the DITA Writer website. So if in fact you do have DITA experience, or you want to get experience, go through that list and see if there are any companies listed there that are in your area. That would be a good one to start with. Then once you’ve done that, use LinkedIn’s search capabilities to see if you can narrow things down further.

Find out if there are technical writers working at a location that’s local to you, and then try to figure out what tools they’re actually using. More often than not, they will say what they’re using. Then from that you can construct a resume that specifically targets the tools and/or standards that they might be using. Similarly, it also helps to be a member of technical authoring organizations, which can help you with things like networking. But those are a couple of what I hope are practical tips on how to go about that.

SO:                                     Awesome. Well, Keith, I really appreciate this. I think we could keep going for a very long time.

KSR:                                   Part two.

SO:                                     Part two. But I’m going to, I’m afraid, cut us off there. We’ll come back next year and see where your numbers are.

KSR:                                   Sure thing. Yeah.

SO:                                     So thank you, and thank you for your time.

KSR:                                   Thank you.

SO:                                     Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Jobs in techcomm (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 26:57
The challenges of replatforming content (podcast) https://www.scriptorium.com/2022/10/the-challenges-of-replatforming-content-podcast/ Mon, 24 Oct 2022 12:20:59 +0000 https://www.scriptorium.com/?p=21555 https://www.scriptorium.com/2022/10/the-challenges-of-replatforming-content-podcast/#comments https://www.scriptorium.com/2022/10/the-challenges-of-replatforming-content-podcast/feed/ 1 In episode 130 of The Content Strategy Experts podcast, Bill Swallow and Sarah O’Keefe talk about the challenges of replatforming content from one system to another.

Links are always a problem, especially cross-document links. Reusable content tends to be handled differently in different systems, or almost the same, but not quite, which is almost worse.

—Sarah O’Keefe

Bill Swallow:                     Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about replatforming: the process of moving content from one system to another. Hi, I’m Bill Swallow.

Sarah O’Keefe:                 And I’m Sarah O’Keefe.

BS:                                      And I guess the best place to start here, Sarah: what is replatforming?

SO:                                     My definition of replatforming is that it is when you decide to move your content assets from one system to another. You may or may not change the file format, but when we talk about replatforming, the focus is on the idea that both systems support the same format, whether it’s DITA or HTML or anything else, and we’re changing the authoring and the publishing systems. So we are changing the platform that the content resides on, but not necessarily the content itself.

BS:                                      Okay. So, given that the content’s not changing, it’s a pretty straightforward process, right?

SO:                                     Sure. It’s great.

BS:                                      I’ll start with a loaded question.

SO:                                     Right. It is easier than the alternative. So if we’re comparing replatforming to something like taking completely unstructured, ad-hoc-formatted Word files or InDesign files and moving them over into structured content in some rigorous approach for the very first time ever, then yes, replatforming is easy. But with that said, you still run into some really annoying and costly complications when you do replatform.

BS:                                      What types of things?

SO:                                     Well, the usual suspects, right? Links are always a problem, especially cross-document links. Reusable content tends to be handled differently in different systems, or almost the same, but not quite, which is almost worse. Variables, conditionals, metadata…

So for example, if you take metadata, if you have your existing structured content in some sort of a component content management system, a CCMS, then there are at least two different places where you can store your metadata. There are probably more, but we’ll start with two. And the two typically are inside your content at the element level, so maybe a paragraph or a section or a topic; you could apply metadata to that element, to that paragraph element or section element or something like that. Also, in most CCMS, you can apply metadata at the CCMS level so the CCMS itself has a way to manage, govern, and apply metadata at the CCMS object level.

Now, the object level in the CCMS could be a lot of things. We maybe could assume it’s a topic, but if you’re talking about a reusable object, it might just be a little paragraph or it could be a variable, et cetera. And what you run into is that your old system, your legacy system where all your content currently is, has a certain way that is optimal to apply metadata and to manage metadata. And your new System B also has an optimal way of doing things, and the two are not exactly identical. Oh, and there’s a non-zero possibility that when you implemented System A eight or ten years ago, you maybe did some things that weren’t optimal.

So your metadata is stashed using a certain approach. Well, actually we hope there’s a certain approach and a certain system, because of course, option C is that it’s all over the place. But even if you did everything exactly right in the old system, there’s a decent chance that in the new system you’re going to have to make some changes.

BS:                                      Right, and mapping metadata from one system to the other, they may not even handle the same types of metadata fields that System A had versus what you need in System B. So you may end up having to either create a lot of metadata in System B from scratch, or have to somehow port the metadata over and make it fit where it may not be optimal to use, which I would not recommend. But there’s some degree of legacy carryover that you need to maintain when you’re switching these systems.
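The field-mapping exercise Bill describes can be sketched as a simple translation table. The field names below (including the Dublin Core-style targets) and the “unmapped” bucket are hypothetical; a real migration depends entirely on the two systems’ metadata schemas:

```python
# Hypothetical mapping from System A's metadata fields to System B's.
# Custom fields with no equivalent are flagged for human review rather
# than silently dropped or force-fit.
FIELD_MAP = {
    "author": "dc:creator",      # administrative metadata maps cleanly
    "modified": "dc:modified",
    "product_line": "audience",  # custom metadata rarely maps one-to-one
}

def migrate_metadata(record: dict) -> tuple[dict, dict]:
    """Translate one object's metadata; collect fields with no target."""
    mapped, unmapped = {}, {}
    for field, value in record.items():
        if field in FIELD_MAP:
            mapped[FIELD_MAP[field]] = value
        else:
            unmapped[field] = value  # needs a human decision in System B
    return mapped, unmapped

mapped, unmapped = migrate_metadata(
    {"author": "jsmith", "modified": "2022-10-01", "region_code": "EMEA"}
)
```

The design point is the unmapped bucket: it makes the gaps between the two schemas visible before the migration rather than after.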

SO:                                     And I think it’s worth noting that the baseline metadata: who is the author? When was it last updated? All the administrative stuff. That will carry over well enough. The problems you’re going to run into have to do with places where you’ve done customized metadata, which almost certainly is the metadata that has the most business value for you.

BS:                                      Right, right. Even things like profiling metadata may be handled completely differently in a new system.

SO:                                     Yeah, I mean, in theory it’s DITA, and so it should just carry over, but, well, here we are.

BS:                                      So given some of these gotchas, what are some of the best practices that you recommend for replatforming?

SO:                                     Well, of course you should plan. Plan the project and don’t just jump in. At the same time, and in total contradiction to “you should plan”, it’s also worth trying it, doing a little proof of concept, pulling some files over just to see what happens. But expect that things will go sideways, and then you’ll need to plan some more to figure out how you’re going to do this.

BS:                                      Right. And there’s a difference between the various types of assets that you might be moving over. So you might have different types of content files that would be handled differently. And then you have other things like images, videos, and so forth that may have a completely different approach to being stored and managed.

SO:                                     Right. So definitely figure out what’s going to change, and what does it look like to move this stuff over, and how are you going to do it? In many cases, our clients are taking a replatforming as an opportunity to also do content-modeling updates. So let’s say that 10 years ago you put in place a system based on DITA 1.0, and now you’re moving into something new and at this point you look at, well, we could move it up to DITA 1.3, take advantage of keys and some other fun things that we can do in that system. I do think it’s useful to separate the replatforming, the systems work, from the content-modeling updates. The content-modeling updates, in theory, you could do in the old system, assuming it supports the latest and greatest, but there’s value in doing the content modeling, even if you’re not replatforming. And the challenges you have in replatforming are different from the challenges you have in doing content-modeling, information-architecture updates.
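The keys mechanism Sarah mentions lets a DITA map bind a name to a target once, so topics reference the key instead of hard-coding a path. A rough sketch of that indirection, with invented key names and targets:

```python
# Hypothetical key space: the map binds each key to a target once;
# topics use keyrefs instead of direct hrefs. Swapping deliverables
# means swapping this map, not editing every topic.
KEY_MAP = {
    "product-name": "ProWidget 3000",
    "install-guide": "topics/installing.dita",
}

def resolve_keyref(key: str) -> str:
    """Resolve a keyref against the active map; fail loudly if unbound."""
    if key not in KEY_MAP:
        raise ValueError(f"unresolved keyref: {key}")
    return KEY_MAP[key]
```

In real DITA 1.3, key resolution is scoped and handled by the processor; this sketch only illustrates the indirection that makes reuse and variant publishing cheaper.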

BS:                                      When we’re replatforming, it really is an ideal time to do that level of content modeling and restructuring. Okay. So aside from setting up your project and planning properly, giving it a few test goes and taking the opportunity for content modeling, are there any other recommendations for replatforming?

SO:                                     I would look for places where you can get wins, especially if they’re easy wins. Find the things about the current system… Or sorry, you don’t need to find them. Put out a message to the team that works in the current system, and ask them what annoys them about the current system.

BS:                                      Oh, boy.

SO:                                     Yeah. And then after the deluge, read through them all and figure out whether and how you can address those issues. Can you make the new system better? Can you improve on the things that are in the old system that are just long-standing annoyances that people are unhappy about and fix them? Because if you can go through and fix that stuff and get some wins for your team on the new system, that’s going to go a long way to helping to make the transition go smoothly. People are going to be much happier about switching systems if there’s a win: “I’ve always been super annoyed by how this particular thing works or doesn’t work, and oh, it is so much better in the new system.”

BS:                                      And it’s not just the authors in that case. There are a lot of other groups and people who are using the system in one way or another.

SO:                                     And there may be new groups. It’s common, for example, that with old systems people are still doing review outside the CCMS: PDF-based, email-based review, that kind of thing. With most of the new systems, you’re going to be doing reviews in some sort of review workflow that is built in. So there’s a big transition there, and you may be talking about bringing in dozens or hundreds of new review users that were not there previously.

BS:                                      So there’s likely also a good deal of planning for not just change management among those who are using the system for authoring and managing content, but a wider scope as well.

SO:                                     Right, exactly. And this is maybe the most critical one. One of the biggest… Sorry, folks. I’m going to say mistakes. One of the biggest mistakes that we’re seeing in making these transitions is what we call the burning platform problem. So this is jargon for, “I have to get off of Platform X because it’s burning.”

Now, software typically does not burn. So what we mean here is: my software, the contract expires December 31st. If we’re not off it by December 31st, we have to pay maintenance for another year or another quarter, or the software is going into some sort of end-of-life status. It is rare that you reach the point where the software is actually being turned off, in the sense of on December 31st, this particular vendor will cease to exist and our software will cease to work. That’s not usually the case, but we have seen numerous, numerous projects where clients are telling us, “I need you to finish this by December 31st, because that’s when our maintenance contract expires.”

That is a cart-before-the-horse approach, and of course, these conversations are always happening on September 17th. “We have to get this done by December 31st.” “Well, but it’s a six month project.” And they’re like, “I don’t care. Make it happen.” Well, no, we can’t, and neither can you as the customer. And when the project is driven by an external deadline that really doesn’t have anything to do with the actual project… We have a project that’s going to take six months and you’re telling me to do it in two and a half, or three and a half.

Well, that’s going to increase cost. It’s going to increase risk. There are going to be mistakes. There are going to be problems where somebody who needs to sign off on something is out on PTO or vacation or maternity leave or a trip around the world, or they’re at an onsite and can’t be reached. And so there’s delay. And what we’re doing is we’re putting pressure on the project in order to meet an artificial deadline that is not really critical.

And so I think that it makes a lot more sense to plan ahead, pay the maintenance for an extra quarter or maybe even an extra year, and get it right. You may end up running in parallel for a bit, running both the old system and the new system so that you can validate that everything’s working and we didn’t miss any edge cases and that type of thing. But going live on a system that isn’t ready because of some, I’m going to say artificially imposed deadline that did not allow for a proper project cadence is risky.

BS:                                      It’s asking for trouble, exactly. So not only is it risky to have a ticking time bomb on your legacy system, but it also invites opportunities to implement things in ways you ideally shouldn’t in the new system. We could probably talk for hours about this kind of schedule squeeze, but I think this is probably a good place to wrap this episode up. Sarah, thank you very much.

SO:                                     Thank you.

BS:                                      And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The challenges of replatforming content (podcast) appeared first on Scriptorium.

Prerequisites for efficient content operations (podcast) https://www.scriptorium.com/2022/10/prerequisites-for-efficient-content-operations-podcast/ Mon, 10 Oct 2022 13:00:18 +0000 https://www.scriptorium.com/?p=21537 https://www.scriptorium.com/2022/10/prerequisites-for-efficient-content-operations-podcast/#respond https://www.scriptorium.com/2022/10/prerequisites-for-efficient-content-operations-podcast/feed/ 0 In episode 129 of The Content Strategy Experts podcast, Sarah O’Keefe and Bill Swallow discuss the prerequisites for efficient content operations and the pitfalls from not following them.

Mayhem, chaos, cost overruns, work, rework, delays. I mean, these things, they’re expensive. And they’re not just expensive, they’re soul sucking for everybody involved in the project. And it doesn’t have to be that way if this thing is planned and executed at the right level.

—Sarah O’Keefe

Transcript:

Bill Swallow:

Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the prerequisites for efficient content operations. Hey, everybody, I’m Bill Swallow.

Sarah O’Keefe:

And I’m Sarah O’Keefe.

Bill:

And I think, before we jump in, we should probably explain to everybody what we mean when we talk about content operations.

Sarah:

Content operations is the people, the processes, and the technology that allow you to make content happen. And some people will say that content operations only counts if you’re actually working efficiently. So it’s like a best practice. But I would argue that content operations is all the things. So, in a world where you’re writing things in Notepad and converting them into WordStar, and from there, through WordPerfect into some ancient version of PostScript for print, that is in fact content operations. It sounds like pain, but it is content operations. So what we’re looking to do in general in all of our projects is produce better, more efficient content operations.

Bill:

So within content operations, you generally have three areas that we tend to look at to see how best we can optimize things. One would be requirements. One would be having a roadmap. Another one is actually planning based on the roadmap and the people involved.

Sarah:

Right. And it gets very tricky very quickly because content ops sits in between the publishing content production world and IT. And so, the temptation is to say that, “Well, content is just a weird type of data,” which, well, that’s a whole other conversation. It’s a whole other podcast. So we’ll just set that aside for the moment. But the major point here is that, when you start looking at content ops, when you’re looking at content at scale, huge volumes of content in lots of different languages, globalization requirements, you have to think about delivery platforms. You have to think about video streaming, audio issues, transcripts, accessibility. And the volume of content that passes through a content ops environment can be, I think, surprising to a traditional IT group. If this is your first experience with content and content ops, the amount and the complexity of information that we’re dealing with tends to come as a surprise to people that are not specialized in the space already.

Bill:

Right. There’s so many different facets to content in so many different ways that those facets can get leveraged and need to be leveraged. It’s not just a raw data store, even though many would argue that XML is just a raw data store.

Sarah:

When you start looking at content operations, what you’re going to find is that there are a number of components to your content ops that are unique to a content ops environment. Yes, you’re familiar with content management systems, but in particular, are you familiar with component content management systems, headless CMSs? Are you familiar with localization issues, what it looks like to do Unicode across 40 or 50 different languages? When you look at XML, XML for content and XML for data are in fact not at all the same thing. So you need people that understand this tech stack from a content perspective. And since 80 or 90% of the work that we’re doing is actually DITA, the Darwin Information Typing Architecture, keep that in mind. A lot of tech people struggle with understanding DITA. And I mean, to be fair, a lot of content people struggle with DITA initially. So there’s a lot there, and it’s complicated.

Bill:

There’s usually an assumption that is made that, “Oh, well, DITA is just XML. XML is just data. So we know how to handle data, so we know how to handle DITA.” And the two just couldn’t be more different.
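
As a minimal sketch of that difference (the task elements come from the DITA 1.3 standard; the flat-record example is hypothetical), compare XML used as a data store with DITA’s semantic content markup:

```xml
<!-- "XML as data": a hypothetical flat record -->
<part>
  <id>8871</id>
  <price>14.99</price>
</part>

<!-- DITA: semantic content markup (DITA 1.3 task elements) -->
<task id="replace-filter">
  <title>Replacing the filter</title>
  <taskbody>
    <prereq>Power off the unit.</prereq>
    <steps>
      <step><cmd>Remove the access panel.</cmd></step>
      <step><cmd>Slide the old filter out and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```

The record only holds values; the DITA topic encodes what each piece of information is for (a prerequisite, a command in a step), which is what reuse, filtering, and publishing pipelines depend on.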

Sarah:

Yeah. And requirements. When we start talking about requirements, what are we talking about here? Not so much the tech stack, right? I mean, there’s a tech stack requirement, but what’s the baseline that you start from when you start building requirements?

Bill:

Right. And that baseline really does come down to the business drivers for why you are doing the things that you’re doing. And fundamentally, if all of your tech requirements do not meet those business goals, you’ve just wasted a ton of money.

Sarah:

I mean, I’ve told this story before, but a long time ago, I was working on a project and we were busy trying to justify the build, the system, which was going to be pretty expensive. We were trying to justify it because we were going to have more efficient formatting. We were going to save money on formatting, formatting automation, get away from a very manual process, at the time InDesign and, I think, unstructured FrameMaker, both of which were time-consuming. And I’ve never forgotten. I went into a meeting with this VP and we were explaining some of the cost savings, which were really very much efficiency-driven. And he stopped us and he said, “Look, we’re a company. We’re doing,” whatever it was, “$10 billion in revenue per year.” And of that 10 billion, at least half is international revenue. So $5 billion a year in international revenue.

And he said, “Right now, we have a six month delay on localization in getting any… And so, we can’t get any international money.” You can’t get international revenue when you can’t ship your product with content in German, or French, or Italian, or Spanish, or Thai or whatever it was they needed. He said, “Can you promise me that you can chop one month off of our six month delay in localization? Because if you can, we can easily justify this whole thing and you don’t need to talk to me about this other complex stuff.”

Now, we knew that it’s actually pretty easy to get from six months down to, say, two months without really trying very hard. Now, getting from two months to two days, that’s hard. But all he wanted from us was-

Bill:

Very hard.

Sarah:

Six months to five months, which we said, “Well, yeah, I mean, that’s easy. We can do that.” And he said, “Great. Where do I sign?” So, ultimately, the requirement in that particular case, because all of their growth was coming from international revenue, non-US, non-English customers, so they wanted to focus on that. They wanted to deliver better and more efficiently and faster so that they could get that revenue more quickly. And my misunderstanding of the business case could have gotten us in real trouble had it not been for this person in a meeting who said, “Let me stop you right there because you’re focused on the wrong thing. Tell me about this thing,” which was easy.

Bill:

Actually, that was a good example of having the right people making the right decisions, and talking to the right people in order to inform the right decision.

Sarah:

Right.

Bill:

Yeah. From your perspective, you were doing the right thing because you needed to build efficiency and all this other fun stuff. Meanwhile, you talked to another person who’s looking at the right thing as, “We need to expedite our sales in a foreign market. How can you help us do that?”

Sarah:

Right. And we were able to do that, but we had to get refocused on that correct baseline fundamental requirement for what they were trying to do. So I guess then the question becomes, what happens when you don’t have the right people asking the right questions?

Bill:

Right. Because that really is one of the linchpins here. First of all, you have a huge learning curve for anyone who is not the right person doing the right type of work. They’re starting from ground zero, and they need to basically escalate their knowledge and build their proficiency in the work that they need to perform out of the gate. And generally, you don’t have that kind of a runway when you’re doing any kind of an implementation project of any kind.

Sarah:

I mean, it’s common to come into these projects where there’s not… And for us, we’re consultants. We get brought in when that knowledge internally is missing. So it’s really, really common for us to come in and have to build out a knowledge base and a skill set inside the organization so that the stakeholders inside the organization can make good decisions and can carry this thing forward. Related to that, once we’ve done that, once we’ve built a group within the organization that has this knowledge and these competencies, we want to hang onto them. There are very, very few things worse than losing the people that have that knowledge because they move on to something bigger and better and more exciting. And we have to start over with a new group. And again, build all that foundational knowledge to make sure that they know what they need to know in order to make good decisions because when you come into a new area of practice, whatever it may be, you don’t know what you don’t know, and so, you make bad assumptions. And if you make bad assumptions, you make bad decisions. And bad decisions are expensive and time-consuming.

Bill:

Very much.

Sarah:

So I think if I’m a director or a VP looking at launching one of these content ops digital transformation kinds of efforts, look around your organization. Do you have the right people in place with the right skillsets? If not, do you have people that can learn this stuff, that you can, I don’t know, not dedicate to, but assign to the project for the long term, a couple of years, to build that community of practice, that knowledge inside your organization? That’s something that we spend a lot of time on, we spend a lot of time focusing on, but there will have to be that core group eventually. Unless you’re planning to black-box outsource this stuff, which is very, very rare, you need a group internally that keeps track of this stuff and manages it for the long haul.

Bill:

Building that into your team, it really is critical. Like you said, whether you bring someone in initially to get the team up and running and have them learn, or you have people with those core competencies already in house, if you’re missing those people, that your project is going to very likely run over budget, run over time, and generally just be absolute chaos.

Sarah:

Mayhem, chaos, cost overruns, work, rework, delays. I mean, these things, they’re expensive. And they’re not just expensive, they’re soul sucking for everybody involved in the project. And it doesn’t have to be that way if this thing is planned and executed at the right level. And I will say that, typically, the people who get blamed for this are the people on the ground who are doing their best to try and do this stuff. But ultimately, folks, I blame you, the senior leadership. It’s your job to plan this thing, to give people what they need to make sure that they have the right skill sets. And if they don’t have them, that you support them in acquiring those skill sets, that you support them with outside experts who come in and can deliver on those skill sets, contribute to your project, and do all of the things. The lack of planning, magical thinking is the thing that kills these projects. And then, the people on the ground get blamed for it. “Oh, why didn’t my tech writers do this better?” Well, because you put them in an impossible to succeed position.

Bill:

Right. And to say that it’s senior leadership’s job to plan everything, that’s a little misleading, I think. But it’s their job to make sure that the right people are involved at the right points in the project to make the decisions and help plan the effort because they are the ones who have the leverage to bring the right people in and make things happen.

Sarah:

Yeah. It’s enablement. And we don’t like “enabling” as a verb because it sounds terrible, but that’s really a leadership job: to make it possible.

Bill:

Clear the runway, get the right people in.

Sarah:

Yeah. Apparently, I have some feelings about process and wrong processes. The most common thing that happens here from our experience is that people pick the software, the technology stack first or too early, and then let that drive all the other decisions. Now, there are legitimate reasons why the tech stack might be a constraint in the sense of we’re in group B over here and groups A, C, D, E, and F are all using the same tech stack and we need to fit into that. That I get. But what’s actually a lot more common is, “Ooh, I like this. I used it in a previous job, so let’s just go with it.” And that’s really not a good reason to pick anything specific. So what happens?

Bill:

Congratulations, you pick the box that you can’t work outside of. And not every box contains every single solution to every single business need. So if your business drivers require very specific things and the box doesn’t have it, you’re never going to get there.

Sarah:

Yeah. Okay. Hopefully, you picked the correct box, or actually you did your requirements properly, and then you said, “Hey, this looks like the right kind of system for what we’re trying to do.” What are other things in the process as you move along in one of these builds that cause problems?

Bill:

A lot of it comes down to pretty much the same type of focus. Whether it’s a big box or a little box, you’re still picking the wrong one. For example, if you are combining content sets from multiple different groups into a brand new system and you spent a very long time choosing the right system that meets the right business needs, but you don’t do any upfront content modeling to see how all of these different groups’ content will fit together in this bright, shiny, perfect box, you’re going to find a lot of missing pieces along the way. You’re going to find a lot of edge cases. You’re going to have to do a ton of rework just to get this content to all interact.

Sarah:

Did you just tell our audience that they have to think inside the box?

Bill:

If they have a box, they should think inside it. Yes. If you don’t have a box, then think outside of it until you find the box that fits wherever you ended up going.

Sarah:

Yeah, the content model, I mean, it’s such a point of contention because if it’s too strict, it won’t work and people will do weird workarounds. And if it’s too loose, it doesn’t really help you because it doesn’t constrain things in any useful way. And if you build it out and then later you find edge cases that you weren’t thinking about, you have to stick a bolt on the side of the box, and it’s just bad. So there’s the content model, then you convert your content into the new content model, at which point you find all the things you missed.

Bill:

Exactly. That is really the aha moment. When you start converting content, you go, “Oh, wait a minute. We didn’t account for this thing that this group is doing over here. And they say it’s really important.”

Sarah:

Yeah. I mean, that’s a tough balance because you want to build out a content model, start doing some prototype proof of concept conversion, refine it as you go, do the rework that’s necessary. I mean, no matter how much upfront planning and analysis you do, you will find edge cases. The problem is, the later you find them, the more expensive it is to either rework the content model or, my particular favorite, to just hack around them.

Bill:

Yeah. The number of times I’ve seen output class used as a means to an end, it’s [inaudible 00:17:49].
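
A hedged illustration of that kind of workaround (the warning example is hypothetical): outputclass is meant as a styling hook, so using it to carry meaning hides semantics that DITA already has an element for:

```xml
<!-- Workaround: meaning smuggled into a styling attribute -->
<p outputclass="warning">Do not mix these cleaners.</p>

<!-- What the content model already provides -->
<note type="warning">Do not mix these cleaners.</note>
```

The second form survives restyling, filtering, and replatforming; the first breaks as soon as the stylesheet that interprets the outputclass value goes away.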

Sarah:

I mean, we’ve spent a really long time talking about all the terrible things that happen, but how do you do this right? How do you make it such that your content ops project is as painless as possible? What are the best practices?

Bill:

Well, we talked about getting the right people involved at the right stages in the right project, but I think that’s something that needs to happen regardless of what you’re doing. But as far as content operations is concerned, first and foremost, you need to have your requirements nailed down. And we’re not talking about your requirements like building out an agile framework or something like that to build things out and progress iteratively, but what are the high-level requirements that are driving this entire initiative?

Sarah:

So we need to go from a six-month delay in localization to a five-month delay, or less would be better, but a one-month improvement. Our system… We talk about language support. We need to be able to localize into 40 or 50 or 75 languages. I’ll add to that, that one unusual requirement that will rule out a number of tech systems is multilingual authoring. So we’ve seen a few cases where the content is being created in… Most everything we see is being sourced in English. But English and also French, or German, or Chinese, or Korean. And you have to then have a system that will support authors working in those languages as they are creating content. It turns out that a number of CMS systems make the assumption that you have a single source language and many downstream target languages that you localize into. So it’s a one-to-many relationship. If what in fact you have is a many-to-many or a few-to-many relationship, you need to really pay attention to that.

Bill:

Yes, absolutely. So, another thing that you really should do is start looking at your publishing requirements as well. So it’s not just the authoring side, but it’s where you’re going. We talked about being able to publish out to 40, 50 languages, but what about seven, eight, nine, 10 different types of output? Are you able to get there easily? Is there a limitation in the tech that you chose that prevents you from developing a critical delivery point?

Sarah:

Yeah. So, multichannel publishing, integration with some sort of an omnichannel world. Incremental publishing is becoming important. I have a library of 40,000 or 50,000 or 100,000 chunks of content, but what I actually want to be able to do is update one and publish it, and not have to push the entire system or the entire document that that one chunk lives inside of. Integration with other systems is becoming increasingly important. The ability to take a chunk of content, push it to Salesforce, push it to the main website where we’re working in perhaps a TechComm world, or push it to an eCommerce system so that it can be reused there.

Bill:

And not only iterative publishing, but iterative translation as well, because some systems are really great about letting you gate very, very small chunks of content or very, very discrete files for localization at any point in time. There’s a separate workflow for each individual file. Other systems gate things by publication. So if you have 90% of your content hardened for a particular publication, you still can’t start the localization workflow for that content until the last 10% is completed. And if we’re talking about getting from that two-month to the two-week point in the translation turnaround, you’re not going to get there if your system is gating at the publication level.

Sarah:

I think, overall, I’ve seen a lot of lists of requirements. What you want to do is focus on the ones that are unique. “We need version control” is not interesting to me. Everybody needs version control. And “we want to be able to reuse content” is a little bit interesting, but not really. And “we need variables” and “we need localization support,” those are all basically prerequisites to the requirements. They sound like requirements, but not really. What I’m looking for is, what are the unique requirements in your organization?

For example, we make medical devices and we need traceability because, if we don’t have that, we get in trouble with the regulators. We have a very complex content structure. There’s a reason it’s set up that way, and we need to reflect that in our operations. We need personalization. We need high velocity, really high velocity. Those are the things that you want to find that make your content unique within the landscape of generalized content operations. And once you’ve identified that keystone, that keystone requirement, that if we can point to this and make that successful, then we’re good, that’s what is going to help you drive the entire project and always look at that fundamental foundational requirement and make sure that you’re focused on it and meeting it.

Bill:

All those little bells and whistles can be added later. They can be configured later. But yeah, if you’re not meeting those high level requirements out of the gate, you’re doing the wrong thing.

Sarah:

The summary of this very lengthy podcast is you should plan. Planning is good. Planning is your friend. And if you don’t plan, some very, very bad things are going to happen.

Bill:

Very much so. And while you’re planning, make sure you have the right people doing it.

Sarah:

Plan well.

Bill:

Well, I think that will be a wrap for this one. Thank you, Sarah.

Sarah:

Thank you.

Bill:

And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Prerequisites for efficient content operations (podcast) appeared first on Scriptorium.

Replatforming your structured content into a new CCMS (podcast) https://www.scriptorium.com/2022/09/replatforming-your-structured-content-into-a-new-ccms-podcast/ Mon, 26 Sep 2022 12:10:11 +0000 https://www.scriptorium.com/?p=21527 https://www.scriptorium.com/2022/09/replatforming-your-structured-content-into-a-new-ccms-podcast/#comments https://www.scriptorium.com/2022/09/replatforming-your-structured-content-into-a-new-ccms-podcast/feed/ 1 In episode 128 of The Content Strategy Experts podcast, Sarah O’Keefe talks with guest Chip Gettinger of RWS about why companies are replatforming structured content by moving it into a new component content management system (CCMS).

I find there’s some business change that’s happened to spark this replatforming. One is mergers and acquisitions, where two companies get together, there are two CCMSs, and one basically is chosen.

—Chip Gettinger, RWS

Transcript:

Sarah O’Keefe:                 Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about replatforming with special guest Chip Gettinger of RWS. Hi, I’m Sarah O’Keefe. Hey Chip, welcome back to the podcast.

Chip Gettinger:                Hi Sarah, it’s great to see you.

SO:                                     Yeah, you too. Chip, tell us a little bit about yourself and who you are, and what you do at RWS.

CG:                                     Sure. I manage our Global Solutions Consulting team here at RWS, and our product is Tridion Docs. It’s a DITA component content management system. So I work with customers and partners on technical and business requirements for their CCMSs.

SO:                                     So you are in many ways the in-house edition of what we do over here at Scriptorium, on the outside, looking in.

CG:                                     Yeah, I’m in the sales side of things, but we have very, very detailed solutions that I get to work with some really wonderful customers.

SO:                                     Yeah, and so for the audience, Chip and I go way back, we’ve known each other for a long time. And so if this degenerates, that’ll be why, and we apologize in advance. I wanted to focus on replatforming today. We’ve had a lot of projects recently that involve this, I think both of us. And I guess I need to start with a definition of what replatforming is. So in my world, I define replatforming as moving from one component content management system, from one CCMS, to another. And I suppose technically, if you start with a collection of Word files out in space and you move to a database CCMS, that would be replatforming. But really, that’s a new platform and building out structured content. So when we talk about replatforming projects, typically we’re talking about a situation where a client already has structured content, and they’re moving it from system A into a new system, into system B. Does that match how you handle it?

CG:                                     Absolutely agree, Sarah. I have seen new Tridion customers coming from other CCMSs, and typically I find there’s some business change that’s happened to spark this replatforming. One is mergers and acquisitions, where two companies get together, there are two CCMSs, and one basically is chosen. So the other group will move their content over into a CCMS like Tridion Docs. The interesting part is, I also see people who are upgrading from really old systems. We have some customers who’ve been on a system for 12, 14 years, and we had one customer still using the IBM DITA, if you remember that from the early days. And really, that was a real replatforming into DITA 1.3 and other new aspects that they had no exposure to.

SO:                                     So what’s the breakdown that you’re seeing, in terms of replatforming in your potential client base? When people come and talk to you, is it mostly replatforming, or is it mostly going into DITA for the first time, or is it kind of a 50/50? What does that look like?

CG:                                     It’s a great question. And I would say it’s 50/50. And my team very much gets involved in evaluations and workshops where companies come in and want to try out Tridion Docs before they move. And what I’ve found, Sarah, especially over the last five or six years, is we have more DITA-educated customers and users coming in. They understand it. But perhaps one trend I’ve noticed is that when they set up the original CCMS, let’s say, 10 years ago, they really didn’t think about a reuse strategy, they didn’t centralize libraries, they didn’t set up [inaudible 00:04:22], and all the things that your team at Scriptorium does a great job with. I’ll tell you the worst-case scenario. They took their FrameMaker files and they used a composite DITA topic, and guess what they did, they made it one big, big, huge topic. That’s the worst case. But most people are doing generic topic typing, or the composite DITA topic. They didn’t really think about reuse. And now here they are many years later and they have a new group coming in, or something like that, that’s causing this change.
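
A rough sketch of that worst case next to the replatforming goal (topic ids and filenames are hypothetical):

```xml
<!-- Worst case: a whole FrameMaker chapter dumped into one generic topic -->
<topic id="chapter-3">
  <title>Chapter 3: Installation, operation, and maintenance</title>
  <body>
    <!-- hundreds of paragraphs; nothing is separately reusable -->
  </body>
</topic>

<!-- Replatforming goal: small typed topics assembled by a map -->
<map>
  <title>User guide</title>
  <topicref href="install.dita"/>   <!-- a <task> -->
  <topicref href="operate.dita"/>   <!-- a <task> -->
  <topicref href="maintain.dita"/>  <!-- a <concept> with <task> children -->
</map>
```

Breaking the monolith into typed topics is what makes reuse, conditional publishing, and incremental translation possible later.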

SO:                                     Yeah, that sounds like what we’re seeing. Additionally, we’re seeing a lot of companies that aren’t using keys because, for example, when they built out their initial system 10 years ago, keys didn’t exist.
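
A minimal sketch of what keys add (key names and filenames are hypothetical): instead of hard-coding a target in every reference, a DITA 1.2+ map binds a key once, and topics reference the key:

```xml
<!-- Without keys: the target path is hard-coded in every topic -->
<xref href="setup-guide.dita"/>

<!-- With keys: the map defines the binding once... -->
<map>
  <keydef keys="setup" href="setup-guide.dita"/>
</map>

<!-- ...and topics point at the key, so retargeting is a one-line map change -->
<xref keyref="setup"/>
```

Systems built before DITA 1.2 couldn’t use this indirection, which is why replatforming is often the moment teams adopt it.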

CG:                                     Exactly.

SO:                                     It wasn’t a mistake, it was just that now we have some additional features, and we’re also seeing a lot of, well, we specialize to cover these kinds of use cases, which are now part of the newer DITA, DITA 1.3. And so we look at, do we keep that or do we despecialize down and get them into the standard DITA element that’s now available for what they’re trying to do? So you’re right. I mean, it’s an opportunity to revisit the content modeling decisions that were made.

CG:                                     Exactly.

SO:                                     And I think we can make some improvements there. That’s not really part of the replatforming, it’s just, we’re going to replatform anyway, so let’s do some cleanup.

CG:                                     And do some cleanup and alignment. And yeah, getting back to the replatforming, when, let’s say, we’re converging two groups together, they’ve got different metadata and attribute models, and they probably have different topic models and bookmaps versus DITA maps. And it’s a great time to make alignments when you’re going to be cleaning up and trying to reuse this across these different systems. One customer I worked with, there were three or four different mergers of different companies, and eventually they chose to centralize on Tridion Docs. But they decided to maintain their existing content models because marketing wasn’t really recombining the products into new ones, and so forth; they were still kind of siloed with their products, but they were able to have their own publishing DITA Open Toolkit chains and so forth. And it worked okay, but I wouldn’t want to try to reuse across that content. But the interesting part was, just two years ago, we redid our content importer application and rebuilt it. And it’s been quite popular with our customers who are replatforming and moving content around, and so forth.

SO:                                     So yeah, that’s really the question for me. What’s the biggest challenge? What are the biggest problems that you run into in these replatforming projects?

CG:                                     I think it’s gaining acceptance on alignment. Governance is hard enough to agree to, and then you come along and you’re going to change it, especially if you have a company that’s being acquired by a larger company. So the typical governance and other issues we have are problems. I think the other challenges that I see are more technical, where people are really running old versions of software out there, they’re really outdated. For example, I mentioned the DITA Open Toolkit; there are versions of Java that aren’t even officially supported anymore that people are still running with. And then, as your team at Scriptorium knows, sometimes you have to rebuild scripts and publishing, and so forth. And sometimes companies don’t really take that into account, they just think, “Oh, DITA’s DITA, I’m going to move it around.” And generally, the DITA content does move around. But there are supporting things that go into it that do have some costs associated with that replatforming.

SO:                                     So you mentioned mergers, and I mean, that makes a lot of sense to me, that if you have two or three or five companies that merge, that have two or three or five different CCMSs, that just broadly from a total cost of ownership point of view, even if they’re not sharing content, it makes sense to consolidate. What are some of the other things that you’re seeing that push people into replatforming? You mentioned old systems too.

CG:                                     Right, right. Well, sadly I’ve seen that when you’re on old systems, you’re also more vulnerable to security issues. And you just look at what’s happened the last several years as far as vulnerability of data. And if you’re running on a Microsoft platform that Microsoft doesn’t support anymore, you can get into trouble. We had one customer in a regulated industry, and they were technically out of compliance with their own internal regulatory groups. The team had never upgraded their system; they had just gone along for 10 years. So compliance can very much be an opportune time. The second area I see is the move to cloud. It’s amazing, Sarah. I mean, I would say a majority of our business now is Software as a Service, cloud. And we have many customers that are on premise, and their IT group goes, “Well, we’ve got to move to the cloud because all those server guys, they’re not here anymore. We’re just going to be outsourcing services.”

So you suddenly can’t just go along with the way the system had been set up. And moving to cloud is actually a program that we brought into some of our customers, and we’ve been pretty successful in planning it.

SO:                                     Yeah, that’s an interesting one. And I would say also, more broadly, we’re seeing a lot of environments where the system, the CCMS, was essentially customized and purpose-built for a particular use case.

CG:                                     Yes.

SO:                                     And then that customer, either their use case changes or the external situation, something changes. And they’re faced with this thing that they’ve customized to a point where they can’t get out, they can’t change it, they can’t fix it, they can’t modify it. The person who wrote the code is long gone. And that has been very, very difficult. You get this sort of, “But that’s how we’ve always done it.”

CG:                                     Well, Sarah, you brought it up. One of our customers spoke at the recent ConVEx conference around replatforming, and that’s exactly what happened. The person who had written a lot of the customizations left the company, and the team that was left with these customizations did not know how to support them. There was a lot of analysis, and she talked really well about how they took it as an opportunity to modernize the infrastructure. And sometimes I see modernization in the content modeling too, because think of what you and your team taught 5 or 10 years ago versus today. We’re doing chatbots and all these other applications that really didn’t exist years ago. Voice-activated interfaces, that’s another one. So replatforming can also be a time to take on new digital strategies that your company wants you to support, instead of putting those huge PDFs out on the website.

SO:                                     Oh, everybody loves huge PDFs. So on that note, when we talk about terrible PDFs, what makes a replatforming successful? What are some of the factors that lead to success? And for that matter, what are some of the red flags that you see, where somebody comes in and says, “We want to accomplish this”? Or what is it that they say that gives you concern?

CG:                                     Right, right. I think a real success factor to me is that, just like when you originally purchased your system, now that you’re replatforming, you have clear business goals and objectives and you set timelines. And a real success factor is that you meet those goals. Because you’re spending money, that money has a budget, and your managers and executives want to know how you’re doing. So a real success factor to me is that you’ve met your goals, and many of those goals should have included performance improvements. For example, we’ve seen customers whose PDF publishing times have dropped 50 to 70% compared to older hardware, older Windows, older systems. And we also rebuilt our publishing platform a few years ago. So suddenly you’re saying that what used to take 20 minutes is now taking 4 or 5 minutes. Your users really gain benefits from it. And then I think the other thing we were talking about earlier is taking advantage of new DITA topic types; the troubleshooting topic has been very popular. MathML formulas, it’s a lot easier to do that.

And being able to take advantage of new content types for new groups and so forth that are coming in. So that’s the cool thing. Now you brought up the downside. I think the downside, especially when two groups merge, is they think they’re going to be able to do content reuse, but they just did a hack job of information architecture to get the DITA content into the same CCMS. As you well know, we’ve talked earlier about attributes and other things that you need to align. It doesn’t need to be perfect, but some of the mistakes people make are the assumptions that, “Oh, DITA is so portable. I can do this and that and this.” Well, 10 years ago, we all made mistakes and did some things in DITA that made it more, let’s say, proprietary or unique. Those things surface in not-great ways when you’re trying to merge different groups together. So you have publishing failures and things like that, that just don’t seem to work. Those tend to be rare, generally, but I think you see where I’m going.

SO:                                     Yeah. I mean, you mentioned governance earlier, and I think there’s a really interesting balancing act in that alignment, because on the one hand, a given company, an organization, has some unique DNA and unique features, and they need to preserve those things to make sure that the content that they’re producing is compatible with who they are and what they’re trying to accomplish. At the same time, when you replatform from anything to anything else, it really doesn’t matter. Any given piece of software that you look at is going to be good at some things and not so good at other things. And it has a certain way that it’s designed, and you have to work within that design. If you try and do things the old way in the new system, very, very bad things will happen. And yet, so you have to figure out what makes my content unique and special and interesting, and how do I preserve that going forward?

But separate that from what design decisions did we make because old software worked a certain way, and how do we address that or mitigate that, or transition out of that, and take advantage of the things that new software gives us, without, again, losing our, whether it’s DNA or culture or whatever you want to call it, but that overall feel of your content?

CG:                                     Yeah. Sarah, that’s a really great point. And just last week I was having a conversation with one of our professional services experts who’s done replatforming, and he reminded me that DITA originally was built on a file system. And there are still habits from the file system days that you see when you look in a CCMS. People did things 10 years ago that today you don’t have to do that way, because the CCMS automates so much more of it: things like versioning, commenting, notes, metadata, and so forth.

So in one of the replatforms we did last year, the customer had pretty good DITA content. We were also able to move over a lot of their CCMS metadata into our system. Sometimes people replatform for other reasons; let’s say it was a good platform, but they outgrew it or something. So replatforming can also incorporate things that are outside of your DITA content, like the metadata, and even things like who the author was who made that change in June of 2019. And it was pretty impressive to see that history preserved inside of our CCMS. Because again, they were a regulated organization.

SO:                                     Actually, that brings up another interesting thing that we’ve run into, which is the question of what do we keep and what do we not keep?

CG:                                     Right.

SO:                                     At what point do you just say, “You know what? Anything pre-1980, we have a PDF, we’re good. We’re not preserving it as editable content.” Now, depending on who you are and what kind of products you produce, you might very well need that 1980 content to be still editable because it’s still being maintained. But most of the time, you can pick a cutoff point somewhere, but you’re never going to be perfect. There will always be that outlier.

CG:                                     Yep.

SO:                                     Nothing past 1995, except for this one product.

CG:                                     Yep, yep. Yeah.

SO:                                     Yeah, no, go ahead.

CG:                                     No, that’s a great point, that it’s really a time, a great time to take inventory of what do you really need to move forward, and what’s legacy. And perhaps you just archive it and leave it around just in case it ever is needed. But yeah, it’s that 80:20 rule of what content is the most active that you want to work on and continue on and updating and so forth.

SO:                                     So if I’m someone who’s thinking about replatforming, what would you tell me? I mean, what is the number one thing that people need to do to increase their odds of success as they move into this pretty complex project?

CG:                                     It is, Sarah. And my number one advice would be to clearly define your business objectives and goals. You’re going to be making an investment, and you’re going to have to ask for budget and funding to make this happen. So you have to have clear business goals to be able to achieve this replatforming. An example might be to take advantage of new digital initiatives in your organization, because maybe your approach today is more of a document or even a [inaudible 00:20:18] viewer kind of approach. So you’re going to replatform so you’ll be able to take advantage of new systems that you’re integrating with. Another example we see is people moving to JSON quite a bit for interactive applications on mobile devices. And we have some really great JSON outputs that are being driven from DITA content that was written years ago. So you can create some new output types that might fit into a more modern infrastructure, instead of just publishing out some 15-year-old chunked files or HTML and things like that.

SO:                                     Yeah. And I mean, I think that’s really good advice to look at the business objectives and figure out what you have. And then from a technical point of view, I think it’s worth thinking really carefully about what is a platform deficiency that you can address, and what is, I guess, a choice that’s been made that may or may not be able to be unwound. There are all these issues you run into around culture. And you touched on this earlier: if the culture is a certain way, then swimming against that is just pain.

CG:                                     It is. I’ve seen hardware related companies merge with software companies, and just different development methodologies, waterfall versus agile. And you have to realize your business could be different too, and when you’re trying to combine or replatform.

SO:                                     So if you combine waterfall and agile, you get wagile.

CG:                                     I like that.

SO:                                     Which is the worst of both worlds, it’s not good at all. But yeah, it’s an interesting process, though, of really understanding: when we replatform, what can we fix? What will we just get because our new software does these neato things that our old software didn’t do? What things are kind of baked in? And what kinds of decisions do we have to make to make it all work?

CG:                                     Yeah. One last bit of advice I would offer is that you can also learn as you’re going through this replatforming. So learn from the experts, learn from consultants like your team, learn from the vendor. You’re going to go in with certain assumptions and so forth. So if you’re going to come up with a new governance model, pay attention to some of the experts. And finally, attend conferences and so forth to see what is going on. I love this term replatforming, and at ConVEx I saw some people talking about their next-generation CCMS. So these are pretty cool trends.

SO:                                     Yeah. It’s fun that we’ve lasted long enough to see not just platforming, but replatforming.

CG:                                     Yeah, it’s great. It’s great, Sarah. Kudos to you and your team for keeping up all this work.

SO:                                     Well, yeah, and I mean, same to you because I think there’s a small group of us. We’re small but mighty, and we’re going to make it happen.

CG:                                     Yeah. And I’m constantly amazed at the executive alignment that we see in many of our organizations. Again, 10 years into it, their executives are still great. The things that we promised, automation, translation, integration, reuse, all those things have blossomed well within organizations. And it’s great to see it continue to grow.

SO:                                     Yeah. And I think that’s a great place to leave it on that optimistic note, since we spent most of our time talking about challenges and problems. So with that, Chip, thank you so much, it’s great to see you.

CG:                                     You too, Sarah, thank you.

SO:                                     Thank you. And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Replatforming your structured content into a new CCMS (podcast) appeared first on Scriptorium.

The challenges of structured learning content (podcast)
https://www.scriptorium.com/2022/09/the-challenges-of-structured-learning-content-podcast/
Mon, 12 Sep 2022

In episode 127 of The Content Strategy Experts podcast, Gretyl Kinsey and Alan Pringle talk about the challenges of aligning learning content with structured content workflows.

We’ve seen a little bit of a trend where we think about learning content and structure almost as mortal enemies, and we see some degree of resistance to wanting to use structured content for learning and training materials. And we want to dig into a little bit of why that might be.

—Gretyl Kinsey

 

Gretyl Kinsey:                  Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the challenges involved with structured learning content. Hello, I’m Gretyl Kinsey.

Alan Pringle:                     And I’m Alan Pringle.

GK:                                     And we’re going to be talking about our experiences with learning content and structured content, how they come together, and sometimes how they don’t. So where I want to start is with how they don’t always come together.

We’ve seen a little bit of a trend where we think about learning content and structure almost as mortal enemies, and we see some degree of resistance to wanting to use structured content for learning and training materials. And we want to dig into a little bit of why that might be.

AP:                                     Well, I want to be clear here, it’s not necessarily because of the trainers and the instructional designers. In a lot of cases, I think it’s because the tools that are targeted for those kinds of content creators are frankly not the best in the world. I think objectively we can say PowerPoint may be the worst tool for content creation of any kind.

It just encourages all sorts of bad behavior. The way you work is, “Okay, I need to add text here. I need to add an image here.” It’s like, how many frames can you possibly draw on a slide? Well, a lot, based on some things that I’ve seen. So I mean, really, let’s take a look at the tools.

PowerPoint is not a great tool. It’s very freewheeling, gives you possibly too much latitude in how you put things together. So I think it’s fair to point that out. And then you look for example, at some of the learning management systems, and some of those systems are really good, but they have very narrow capabilities that are focused specifically on training.

So they’re very closed systems. And for example, it may not be super easy to import content into the system, export it out. So they’re closed and there’s just not a lot of interaction between systems. And then when you don’t have that level of interaction or those capabilities for systems to play together, that’s when you start seeing things like copying and pasting and other things that are, shall we say, not the most productive or free of error.

GK:                                     Absolutely. And I think it’s really interesting too that you brought up PowerPoint, because there are all those limitations that you talked about. There is really no structure to it whatsoever; you can do anything you want on any slide, and it is really hard to keep that templatized and consistent.

And yet PowerPoint is a very popular tool for learning content because when you think about how training is often delivered with a presentation style, that really lends itself to being one of the optimal ways to do so.

And then there’s what you said about instructional designers really not having a lot of control over the fact that structured content and learning content are sort of mortal enemies, or don’t mesh very well.

I think a lot of that is because PowerPoint is something that they need to use because it does lend itself well to what they’re trying to do. And the same I think is true with learning management systems, they need that to be able to deliver their e-learning content for digital learning platforms.

And whenever you’ve got tools like this, with these sorts of limitations that are really not conducive to structure and that go against what structure does, it really puts them in a difficult position where, even if they wanted more structured learning content, they really can’t have it, because the types of tools that are best suited to delivering training materials are just not well suited to structure.

AP:                                     Sure. And then when you’re dealing with really ridiculous, aggressive schedules, and you’re having to do constant course updates, you are not in the frame of mind to be thinking about, “Oh, how could structure make my life better?” It’s completely understandable to me. But there are compelling use cases for having structured content for your training content.

GK:                                     Absolutely. And I think there’s a growing demand for that. As we move more and more into a digital world, we’ve seen that over time. I would say definitely in the last decade plus that I’ve been at Scriptorium, and especially in the last couple of years with the pandemic, there’s been more of a necessity and a demand for digital learning environments and e-learning. And I think that’s where structure really can come in and help with things.

AP:                                     Sure. And if you think about content creators in general and all the different places that they are, structure has moved very well into the product content, TechComm area. Those folks have been using structure for quite a while now. We have also seen a shift in MarCom.

I have seen marketing content that is now driven by structure. So if you think about the flow of information, it kind of makes sense that training content, learning content, may be the next logical extension of where structured content can go.

GK:                                     I agree with that completely. And I know that even with some of the clients we’ve worked with, we’ve seen that as a use case that a lot of these companies have where they have a need to share content, like you were mentioning, get things out of closed systems and have the ability to reuse and share across systems and across departments. Right?

So you’re talking about TechComm and MarCom, I’ve seen several companies where there’s a need to have some common core information that’s shared across TechComm, MarCom and training and then maybe some other departments as well. And structured content is the obvious go-to way to do that.

AP:                                     Exactly.

GK:                                     And there are a lot of benefits too, when it comes to having your learning content in a structured environment. One is that it’s easier to build in intelligence. When we think about e-learning environments and we want to have something like automatic grading and scoring that shows up as you’re going through and doing the activities or taking quizzes, that’s something that’s possible when your learning content sources are structured.

AP:                                     Yeah, very much. And a lot of the reasons and compelling use cases you have in TechComm and MarCom and elsewhere they apply here too. Consistency is greatly improved when you are working with structure.

And another big one is reuse. Everybody, regardless of what kind of content you are writing, is going to have some kind of reuse scenario where having this modular structured content that you can refer to and plug in wherever you need it reduces having to write the same thing over and over again, and it reduces the number of variations of that same content.

For example, in training, one thing that immediately comes to my mind is the kind of housekeeping stuff you do before a course, whether it’s in person or you do it online. There’s certain things you want the students to know. This is where your sample files are to do these exercises, this is when we’re going to have a break, this is how long this is going to take.

All that kind of stuff that you’d want to communicate upfront. In a lot of cases, that is very, very structured, templatized, whatever word you want to use. And it’s a matter of sometimes just picking and choosing certain bits of that housekeeping content and putting it together to explain all those things you want explained upfront. That way you don’t have to write them 400 times and have a zillion variations of them stored away somewhere.
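
The housekeeping scenario Alan describes is a natural fit for DITA content references (conref): the boilerplate lives in one shared topic, and each course pulls in just the pieces it needs. A rough sketch (file and id names are made up for illustration):

```xml
<!-- housekeeping.dita: single source for course boilerplate -->
<topic id="housekeeping">
  <title>Course housekeeping</title>
  <body>
    <p id="sample-files">Sample files for the exercises are in the course package.</p>
    <p id="breaks">We take a ten-minute break at the top of each hour.</p>
  </body>
</topic>

<!-- In each course topic, reuse a paragraph by reference:
     <p conref="housekeeping.dita#housekeeping/sample-files"/> -->
```

Edit the warehouse topic once, and every course that references it picks up the change, instead of touching hundreds of slide decks.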

GK:                                     Absolutely. And I’ve seen just that, because I don’t think I’ve ever seen learning materials that don’t start with that housekeeping information, but I have seen some cases where the learning materials were in PowerPoint, for example. And so they each had a copy of that same slide and sometimes the wording would be different or sometimes there would be some different images, when it really should have just been one consistent reusable piece of housekeeping information.

AP:                                     Yeah. And it’s got to be just so frustrating to have to go through and touch a bunch of files just to change a word or two in a paragraph that appears in the housekeeping information. I mean, to me, this is absolute low-hanging fruit on why structure can really help you out in a training environment. And like I said, that’s the low-hanging fruit. There are so many other things that go well beyond that, that are probably worth discussing as well.

GK:                                     Yeah, I know one example that comes to my mind is thinking about the ability to have student versus teacher versions of the same course.

AP:                                     Yep.

GK:                                     From the same set of source materials so that you don’t have to have, for example, a copied and pasted version of an entire test just so that you can have an answer key.

If you are in a digital and structured learning environment, you want the ability to switch that key with the answers on and off so that you can have one version that’s for the teachers and one that’s for the students. And that’s something that structure allows you to do.
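
One common way to implement that on/off switch is conditional profiling: the answer key is flagged with a profiling attribute, and a DITAVAL filter excludes it from the student build. A sketch under those assumptions (the attribute value and file name are invented):

```xml
<!-- In the assessment topic, mark answers as teacher-only -->
<p audience="teacher">Answer key: question 1 is B; question 2 is true.</p>

<!-- student.ditaval: applied to the student build to strip answers -->
<val>
  <prop att="audience" val="teacher" action="exclude"/>
</val>
```

The teacher build simply publishes without this filter, so both versions come from the same source files.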

AP:                                     Yeah. And I think we also have to note, there are ways to do what you just described that are not based on structure. There are ways to do them, they’re just a lot more painful. I think we have to be careful and not say absolutely structure is the only way to do these things, it’s not. But it streamlines and makes doing these sorts of things a lot easier and it takes the burden off of the instructional designers and the content creators.

GK:                                     Definitely. So I want to talk about one possible solution for getting your learning content structured in a way that we’ve seen with some of our clients, and that is the DITA Learning and Training Specialization.

And this is basically a set of DITA tags that is designed for learning content. And so it comes with a really robust and flexible set of tags. You have different map types that you can use for gathering your course materials and organizing things into different modules and lessons and entire courses.

You have different topic types that are designed for learning material including things like test questions and assessments. And there are a lot of different options that are included in that set of DITA Learning and Training Specialization tags. And you can also customize those further if you need to.

AP:                                     And let’s back up just a little bit and explain what a map file is because a lot of people may not know.

GK:                                     Sure. So if you think about published content, a map is essentially the equivalent of your table of contents; it’s the backbone or the overall hierarchy of a publication.

So if you have a course for your learning and training materials, then the map would say, here are all of the different lessons and modules and materials in that course in the hierarchical order and structure in which they appear.
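
Concretely, a small course map using the Learning and Training map elements might look something like this (titles and file names are invented; element names follow the DITA 1.3 learning map domain):

```xml
<learningGroupMap>
  <learningGroup>
    <topicmeta><navtitle>Module 1: Getting started</navtitle></topicmeta>
    <learningObject>
      <learningOverviewRef href="lesson-1-overview.dita"/>
      <learningContentRef href="lesson-1-content.dita"/>
      <learningAssessmentRef href="lesson-1-quiz.dita"/>
    </learningObject>
  </learningGroup>
</learningGroupMap>
```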

AP:                                     Yep. And I think it’s worth noting what you said about the Learning and Training Specialization having a tremendous amount of options. I think that that particular specialization, just based on my experience with it, is so wide open, you’re going to have a hard time not finding something that’s going to help you out in it.

It is an enormous set of elements and it is very robust like you mentioned. And it is so robust, in some cases you may actually want to constrain down the number of elements and basically say, “You know what? Our instructional designers don’t use this particular set of elements.

They really don’t write these kinds of topics. So we’re going to hide those so they don’t show up when people are authoring.” There are ways to take the DITA standard and shrink it down just to the tags that you use to streamline your authoring and course design efforts.

GK:                                     Yeah. And we’ve seen that a lot of times where, for example, if you are delivering training in a school classroom, maybe the only things that you need are your actual learning material, so that would just be learning content, and then some assessment questions for tests.

And maybe let’s say all you need is multiple choice and true/false, and you don’t even need the other types of test questions. You could narrow that specialization down to where you only have your learning content for the courses themselves and then those two types of test questions, and that’s all you need.
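
As an illustration of those question types, a true/false interaction in a learning assessment topic looks roughly like this (the question wording is invented; element names come from the DITA learning interactions domain):

```xml
<lcTrueFalse id="q1">
  <lcQuestion>A DITA map defines the hierarchy of a publication.</lcQuestion>
  <lcAnswerOptionGroup>
    <lcAnswerOption>
      <lcAnswerContent>True</lcAnswerContent>
      <lcCorrectResponse/>
    </lcAnswerOption>
    <lcAnswerOption>
      <lcAnswerContent>False</lcAnswerContent>
    </lcAnswerOption>
  </lcAnswerOptionGroup>
</lcTrueFalse>
```

Because the correct response is marked up rather than baked into prose, a rendering pipeline can score answers automatically or suppress them in a student edition.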

And then on the opposite side of the spectrum, if you need more than what comes with the DITA Learning and Training Specialization, which I think is pretty rare because it does come with so much, but if you do, for example, need a type of test question that does not exist, then that can also be specialized.

And I know that Scriptorium has done that before where we had someone who needed a couple of different types of questions besides what was already available. So we created I think two or three new ones. And yeah, that level of flexibility is 100% possible with the Learning and Training Specialization.

And it really can help you get your learning content into that type of structure that you might need for an e-learning environment or for doing something like a split where if you’ve got in person and e-learning and you need all of that delivered from one set of sources, DITA Learning and Training is a really ideal way to do that.

AP:                                     Yeah. And I think it’s worth noting too, even if you use the specialized set of elements that are specifically for learning content, people in other departments who are creating content, for example, people in your product content department or people creating the marketing content, may also be using DITA and creating topics that have very good explanations of concepts around your product or service, or that have the specifications for your product. You can take those and refer to them, borrow them.

Pull them into your content and reference them so you don’t have to recreate those specifications that another group has created in DITA. So there’s a lot of cross-pollination and sharing that can go on to, again, benefit not just your department but your entire company. All content creators will have a much more consistent voice and will be sharing things out to the public, to the customers, to the clients, with much more consistent messaging and information.
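The borrowing described here is DITA’s conref mechanism: an empty element points at a populated element in another topic, and the processor copies the content in at build time. As a hedged sketch only (a real DITA processor handles far more cases, and the topics here are invented), a toy conref resolver might look like this:

```python
import xml.etree.ElementTree as ET

# Invented product-content topic owned by another department.
product_topic = """<topic id="product_specs">
  <title>Widget specifications</title>
  <body><p id="power">Input power: 100-240 V AC, 50/60 Hz.</p></body>
</topic>"""

# Invented training topic that reuses the specification via conref.
training_topic = """<topic id="widget_course">
  <title>Widget basics</title>
  <body><p conref="product_specs.dita#product_specs/power"/></body>
</topic>"""

def resolve_conrefs(doc: str, library: dict) -> str:
    """Replace each conref placeholder with the referenced element's text."""
    root = ET.fromstring(doc)
    for el in root.iter():
        ref = el.attrib.pop("conref", None)
        if ref:
            # "file.dita#topicid/elementid" -> look up the element id
            elem_id = ref.rsplit("/", 1)[-1]
            el.text = library[elem_id].text
    return ET.tostring(root, encoding="unicode")

# Index the reusable elements by id, then resolve the training topic.
library = {p.attrib["id"]: p for p in ET.fromstring(product_topic).iter("p")}
resolved = resolve_conrefs(training_topic, library)
print("100-240 V AC" in resolved)  # True
```

When the product team updates the specification in their topic, the next build of the training material picks it up automatically, which is the update benefit discussed below.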

GK:                                     Absolutely. And it makes updates easier as well. Because if you think about a company who needs to train new employees and the products are constantly going through upgrades, it’s a lot easier to get the training materials updated to reflect those product upgrades if all of the information is structured and connected and your learning content is pulling in and reusing information from your product content that automatically gets updated alongside of those product upgrades.

AP:                                     Exactly.

GK:                                     And just one thing I wanted to point out about learning and training as well is that Scriptorium has a website called learningDITA.com, and all of the course material on there was created using the DITA Learning and Training Specialization. So if you want a real-world example, to get an idea of what course material created with that structure looks like, that is a really good place you can go, and all of it is free to access.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

AP:                                     Yeah. And it’s worth noting that site is just one way that content could have been displayed. We hooked it up to a learning management system that’s based on the WordPress platform. But you don’t have to do that. We could have taken that content, we could have created printed study guides with it.

We could have done all kinds of things. We could have ported it into a different learning management system. You are really not limited in the ways that you can transform your structured source content into whatever you need. The possibilities are nearly endless.

GK:                                     Absolutely. So we have talked about a lot of those possibilities and the benefits with using structured content for your learning material. But there are some common challenges that we see with this as well when it comes to moving to structured learning content.

And one of those goes back to what we talked about in the beginning with the tools’ limitations. It can be really difficult to find a good learning management system that is both going to meet your training team’s requirements and allow you to work with structured content like DITA. And that goes back to what you said earlier, Alan, about issues with closed systems and difficulties with import/export and connectivity.

AP:                                     Yeah. And those issues around tools, of course, affect people’s perceptions and how they’re going to view things. If they’re really overworked, and who isn’t these days, and frustrated with continually updating things, their mindset may be: I can barely handle what I’m doing now, can I manage this jump to structure?

And one way to handle that, I think, is to really take a look at your pain points, what things are making your work really hard, and list out those pain points. And then have either someone in your organization who’s familiar with structure, or a consultant like us that you hire, come in and say, “Okay. Let’s take a look at your pain points. This is how structure could help address those pain points.”

And the thing is, you don’t have to go in a hundred percent at first. Do a small proof of concept. There are ways you can bite off a small section of what you need to do and focus on that, because you can grow and build upon things.

Start with one kind of training content, start with one particular subject of your training content, whatever. Break off a manageable amount that still reflects kind of the overall structure, the overall process that you’re going through. And then use it to do basically a test to see how things can work for you. You don’t have to do it all at once upfront.

GK:                                     Yeah. And another thing that I see as a challenge when it comes to moving to structure: we talked about reuse, and about how, when you’ve got different departments like techcomm, marcomm, training, and others that need to share material, reuse can really help them out.

But reaping the benefits of that reuse is going to require collaboration across all the departments involved. And that’s definitely a challenge that we see because it’s a change in the way that people work. And to the point that you just made, when people are already overwhelmed and stressed and overworked, then getting them into a mindset of collaboration when they’ve previously been working separately can be a pretty big hurdle to cross.

AP:                                     Yeah. And this is when it can be very helpful to have someone come in who has seen this and done it, whether you hire, bring someone in who has had these experiences or you bring in a consultant. That can really help you focus and identify those pain points, figure out ways to address them, and then do your proof of concept testing.

GK:                                     Yeah. And I think that’s also going to really help show just how much time and cost can be saved if you start reusing content that you have not been reusing before.

AP:                                     Yeah. And this gets into the return on investment. This is a very important part of doing structure for any content. And it’s not even just content that I’m talking about here; this applies to any business initiative. You need to figure out what your return on investment is going to be on making these changes.

You don’t want to invest a tremendous amount of time, energy, and frankly pain into something for which there is no return on investment. It just doesn’t make sense. So again, this is part of why you want someone who has done this before to come in and help. They can help you figure out what that return on investment is going to be for reuse and other aspects of your content.

GK:                                     Yeah. And I think that gets into what I see as one of the biggest challenges for moving to structure is you have to invest to get value. And because of that, you have to know what that return on investment is going to be.

And budget and resources tend to be one of the biggest limitations when it comes to making a change like moving to structure. So that is why it’s so important to be able to demonstrate and prove that return on investment before you dive in.

AP:                                     Yeah. And this sounds ridiculous, but you need to invest before you invest. You need to invest some time and money into developing a really good strategy, and then think about how you’re going to implement that strategy. Don’t just dive into the tools; that is one of the worst mistakes you can make. And that applies to more than just training content. Trust me.

GK:                                     Absolutely. So, the overall question that we posed at the beginning about learning content and structure being mortal enemies: is it possible for them to stop being mortal enemies, and for you to get good structured learning content at your organization? And I would say yes, but it does take a lot of work, time, and money.

AP:                                     Yeah. It’s a loaded question and maybe a little unfair, but it’s a valid point. You are not just going to go into a new system, a new way of doing work, and expect it to magically work. That is not how it works. It does take some analysis, it takes money, and it takes time and patience to get these things to work.

GK:                                     But the good thing is if you do prove that return on investment is going to truly benefit your organization and it’s going to resolve your pain points, then you’re going to know that that investment will be worth it.

AP:                                     Yeah. And we do have clients who have been working in structure for years for their learning and training content. We know it can happen, they know it can happen. So it is possible. Yes, truly.

GK:                                     And I think that’s a really good place to close things out. So thank you so much, Alan.

AP:                                     Thank you, Gretyl.

GK:                                     And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The challenges of structured learning content (podcast) appeared first on Scriptorium.

Industry 4.0 (podcast) https://www.scriptorium.com/2022/08/industry-4-0-podcast/ Mon, 29 Aug 2022 12:15:20 +0000 In episode 126 of The Content Strategy Experts podcast, Sarah O’Keefe and Stefan Gentz of Adobe discuss Industry 4.0.

Transcript:

Sarah O’Keefe:                 Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’ll talk about Industry 4.0 with Stefan Gentz of Adobe. Hi, I’m Sarah O’Keefe. Stefan, welcome! Tell us a little bit about yourself and your job at Adobe.

Stefan Gentz:                   Hi Sarah. Yeah, I’m Stefan Gentz, and I’m the Senior Worldwide Evangelist for Technical Communication at Adobe. I’ve been working for Adobe for like six years, six and a half years now, almost seven years. And it has been a great journey, and it’s a great company to work for. So, I’m happy to look into topics like Industry 4.0, to drive that forward and help our teams get a better understanding of it and develop solutions that the industry actually needs.

SO:                                     The Technical Communication portfolio at Adobe includes, I’m going to list a few and then I’m going to forget some things and you’ll fill in the rest, right? But it includes FrameMaker, RoboHelp, and AEM Guides, which is the DITA CCMS AEM product, and what did I forget?

SG:                                     For the Technical Communication part, that’s mostly it. RoboHelp is a brand-new product since 2019, when we completely revamped it. And of course there’s the good old workhorse FrameMaker that you can use for structured content and XML editing and DITA authoring, et cetera. And yes, for a couple of years now we have had our own DITA CCMS, a Component Content Management System, which sits in, as you said, Adobe Experience Manager. It’s called Adobe Experience Manager Guides, formerly XML Documentation for Adobe Experience Manager, which was a very long name.

SO:                                     It was.

SG:                                     Yeah.

SO:                                     So, I wanted to ask you about Industry 4.0. It’s a term that I hear a lot in the European market, and it seems to be used more there perhaps, especially in Germany, because Germany has so much heavy industry, so much machinery, that kind of thing. But when somebody talks to you about Industry 4.0, how do you define Industry 4.0?

SG:                                     Yeah. I mean, there are these two terms around: there’s IoT, the Internet of Things, which is more a North American thing, and Industry 4.0, which is something that is rooted deeply in the German industry. And it’s actually not a new term; it goes back to, I think, even 2011, when that started. And in 2013, the Plattform Industrie 4.0 was founded by German industry associations like Bitkom, VDMA, and ZVEI. And they came together to further develop and implement the Industry 4.0 idea as part of the high-tech strategy of the German government. So, the German government in 2011 coined that term, Industry 4.0, as a future initiative to drive digitalization in the German industry, and that was picked up by these industry associations, and then companies like Deutsche Telekom, Robert Bosch, Siemens, Festo, SAP, and others joined that Plattform Industrie 4.0 and started to create a framework for it.

And that’s all happening on the Plattform Industrie 4.0, which is indeed very German and very rooted in the classic German manufacturing industry. And that is probably the reason why the term Industry 4.0 is usually heard more in Europe than in North America, where the industry talks more about the Internet of Things. In some ways it’s the same idea and roots in the same concepts, but Industry 4.0 is more about classic manufacturing industries and production processes and what we call the smart factory. It refers to intelligent networking of machines and processes in industry with the help of information and communication technology. That’s more an industry thing, while IoT is very often also about the end consumer, smart home, and things like that. And smart home and things like that are not so much the focus of Industry 4.0; there we really talk about things like smart factories, smart machines that can communicate with each other, and where content and data are used in new ways.

SO:                                     Right. So basically, it sounds as though smart home is Internet of Things, more or less, and the smart factory, the industrial equivalent, is Industry 4.0, to terribly oversimplify.

SG:                                     Yeah. Very simply said, we could put it like that. Yeah.

SO:                                     Yeah. Okay. So, we wave our hands and make a smart factory where everything starts to be interconnected and have some intelligence, in terms of what’s going on with all the machines talking to each other. What does that then mean for us? You and I both live in the content world. What are the implications of Industry 4.0, of a smart factory, for content people?

SG:                                     Well, I recently talked to someone and he said, or actually she said, data and content are the new raw material in the industry. Of course, we’ll in the future also have other raw materials to turn into products, but one important factor that is basically a deciding factor for success or failure is data and content. And when we think about the use of data: data on the production process and the condition of a machine and the product are combined and evaluated by algorithms, by software, and data analysis provides information on how a product can be manufactured more efficiently, like monitoring the production process and then trying to optimize it. But more importantly, it is also the basis for completely new business models and services. For example, let’s think of an elevator manufacturer: they can offer their customers things like predictive maintenance based on content and data.

The data is maybe produced by the machine, by the motor of the elevator, for example. But that data by itself is not useful. It needs to have some context, and that context is something that comes from engineers and, for example, technical writers. Let’s think of a classic manual for an elevator, where you have some part about maintenance. And then there’s a table of how often this elevator needs to be maintained, and maybe the software needs to be updated, whatever. And this is content that usually is created by human beings. And that is something that is changing, where in the future… For example, elevators can be equipped with sensors that continuously send data about their condition. And what to do with that information, that is something that we as human beings, as technical writers, for example, put somewhere, and classically it’s put in a user manual or in the maintenance manual.

And then there’s a disconnect between the data and the content. And the idea of Industry 4.0 is also to bring these together and have a new way of consuming and using technical content, like the content that is in a maintenance manual, so that the machine can use it. And there are already examples of that in the German industry. There’s a big company that produces big machines for wood processing. On the one end you put in a big tree, and on the other end of the machine a window frame comes out, simply said. And for how often this machine needs to be maintained, the machine is actually pulling that data live through an API from the system where the content is hosted. The maintenance data is created as XML, as data, and there’s a maintenance table and an index in one topic, and the machine can pull out the information, when is my next maintenance cycle, from the data topic that was created by a technical writer.
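The lookup the machine performs can be sketched in a few lines. In this hedged example the HTTP call to the CCMS API is replaced by an inlined XML topic, and the table markup, task names, and intervals are all invented for illustration:

```python
import xml.etree.ElementTree as ET

# In a real Industry 4.0 setup the machine would fetch this topic over HTTP
# from the CCMS API; it is inlined here to keep the sketch self-contained.
maintenance_topic = """<topic id="maintenance">
  <title>Maintenance schedule</title>
  <body>
    <table>
      <row task="lubricate-spindle" interval-hours="100"/>
      <row task="replace-blade" interval-hours="500"/>
    </table>
  </body>
</topic>"""

def tasks_due(topic_xml: str, hours_run: int) -> list:
    """Return the maintenance tasks whose interval divides the hours run."""
    root = ET.fromstring(topic_xml)
    return [row.attrib["task"] for row in root.iter("row")
            if hours_run % int(row.attrib["interval-hours"]) == 0]

print(tasks_due(maintenance_topic, 100))  # ['lubricate-spindle']
print(tasks_due(maintenance_topic, 500))  # ['lubricate-spindle', 'replace-blade']
```

The point is that the maintenance table a technical writer authors as structured data doubles as the machine’s schedule, with no separate copy to keep in sync.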

And that means we also need to think about how we create content, how we offer that content, and how we make that content accessible. If you think about it from a technical writing perspective, traditionally the idea of technical writing was to explain a complex product or a complex process to a human reader, so that the human reader understands how to use the machine or the software or the elevator or whatever, or to a service engineer who has to maintain such a machine or product. So, the target audience of content created by technical writers was usually a human being. And in that Industry 4.0 context, and also in the IoT context, we need to think about content in new ways.

We need to create content in ways that make it consumable by both human beings and machines. And that is something where we need to think not only about the words and phrasing and how we explain content, but also about attributing content and saying, “This is content that is for a human reader, one paragraph for the human reader, and one paragraph or one table of data for the machine that is going to consume that content to pull out, for example, maintenance cycle data.” And that means we need to approach technical writing in a new way in the future, and already today actually, and we also need to use technologies that make it possible to implement what we as technical writers produce into an Industry 4.0 or IoT scenario. Yeah, it’s a completely new way of thinking about the content that we produce and how we store it, et cetera.
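The attribution Stefan describes is what DITA does with the `audience` attribute and DITAVAL-style conditional filtering. A simplified Python sketch of that filtering, with invented element names and content:

```python
import xml.etree.ElementTree as ET

# One body of content, attributed for two audiences (values are invented).
topic_body = """<body>
  <p audience="technician">Open the service hatch and check belt wear.</p>
  <data audience="machine" name="maintenance-interval" value="100"/>
  <p>Shared introduction, shown to everyone.</p>
</body>"""

def for_audience(doc: str, audience: str) -> list:
    """Keep elements with no audience attribute or a matching one."""
    root = ET.fromstring(doc)
    return [el for el in root if el.attrib.get("audience", audience) == audience]

human_view = for_audience(topic_body, "technician")
machine_view = for_audience(topic_body, "machine")
print(len(human_view), len(machine_view))  # 2 2
```

Each audience gets the shared content plus its own attributed elements, which is how one source can serve both the service technician and the machine’s API consumer.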

SO:                                     I’m terrified of your wood machine, so I’m going to go back to the elevator. So what you’re saying is that you have an elevator, and in the non-Industry 4.0 world, the elevator goes up and down, and after every, let’s say, 100 hours of operation, there’s a particular maintenance procedure. You need to go in and lubricate some things or check a belt for wear or something. And by the way, I have no idea how elevators actually work, so-

SG:                                     Neither…

SO:                                     Yeah, okay. So we have an elevator, and every 100 hours there’s something that you should be doing. And so, the implication of Industry 4.0 is that the elevator itself has sensors which count operational time, right? So it would measure, hey, I’ve hit 100 hours. And at that point it knows that, or it has what amounts to a clock or a counter inside the system, inside the elevator.

Okay. So it hits 100 hours, and normally at 100 hours the dumb elevator, right? It has the sensor, but the dumb elevator just turns on a red light and says, “Hey, hi, you need to do my maintenance.” Right?

SG:                                     Exactly.

SO:                                     And then the maintenance technician shows up and says, “Oh, the red light is on. I have error code 57. Let me go see what that is. Oh, that means I need to go pour oil on this thing over here.” Fine. And presumably they looked up error 57 in the documentation, some horrible PDF that’s like hundreds of pages long, and it has error codes for days. And they go in there, and under 57 on page 685, they eventually find something about machine oil. But in Industry 4.0, it kicks off that message or that error that says, “I’ve hit 100 hours. It’s time to do some things.”

And then essentially it has access to the documentation, right? It’s like context-sensitive help. It just says, “Hey, Stefan, my friend, the mechanic, you need to do this procedure.” Right? You don’t have to look it up. You don’t have to provide that connectivity between the system, the error code or the maintenance code, and the documentation that explains what that maintenance code is. Now, somebody did the work, right? I mean, to your point, some human being created the content, and presumably some other human being built the framework that connects all those error codes to the relevant information.

And then somewhere along the way, we have to display that content in a human-readable form so that the service technician can do the procedure. Now, to your point, that doesn’t get into the question of what about automated procedures, or the machine automatically going into service mode and, I guess, servicing itself in some way. So, when I think about this and we look at what we’ve been doing for the past 15, 20 years with topic-based content, that seems like the first step in this direction, right? We have to have individualized units of content that cover these specific procedures so that when the machine says, “I need service X, Y, and Z,” we can connect that to the relevant instructions.
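The “framework that connects error codes to information” can be as simple as a lookup table from code to topic ID, which is roughly how context-sensitive help works. A hypothetical sketch; the codes and topic IDs are invented:

```python
# Hypothetical mapping from machine error/maintenance codes to
# documentation topic IDs stored in the CCMS.
CODE_TO_TOPIC = {
    57: "lubrication-procedure",
    58: "belt-wear-inspection",
}

def help_topic_for(code: int) -> str:
    """Resolve an error code to its documentation topic, with a fallback."""
    return CODE_TO_TOPIC.get(code, "general-troubleshooting")

print(help_topic_for(57))  # lubrication-procedure
print(help_topic_for(99))  # general-troubleshooting
```

With topic-based content, each topic ID corresponds to one self-contained procedure, so the machine (or the technician’s device) can fetch exactly the instructions that error 57 needs, instead of pointing at page 685 of a PDF.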

SG:                                     Yeah. And Sarah, we can think it even a little bit further. Imagine that it’s not about oil in the elevator, but about a certain part of that elevator that needs to be replaced every hundred hours of operation time. As you said, classically, a red light would light up, and someone looks at that red light and needs to call the elevator company: you need to send someone, there’s a strange red light. And then the service technician comes and sees, “Oh, that’s this red light with this error code. Okay. I need to replace that part. But I don’t have that part with me. I need to drive back to the company and order it in SAP or wherever.” After a couple of weeks, the part is delivered. He again needs to go to the elevator, replace the part, et cetera. So it’s a very time-consuming and cumbersome process, where maybe the elevator will not even be available for a certain amount of time.

And people need to walk the stairs, which could be healthy, but could also be a problem. And the idea of Industry 4.0 here is that the elevator can look up, for example by being connected through the internet with the central data center of the elevator company, what that error code actually means. And based on a list of error codes or whatever, it could send, for example, a short message or an email or something to the service technician, informing the service technician that this part will need to be replaced in 20 hours of operation time: predictive maintenance. And before the 100 hours are actually reached, the service technician can already see, “Okay, based on data, so many hours per day that the elevator is running, probably on August 15 this part will fail and needs to be replaced.”

And then he or she can already come to the elevator with the right part and replace it, and already knows how to replace that part, because that information is already there. It’s already pulled from the technical documentation, coming maybe to the iPhone or Android phone or whatever, or a smart pad: this is the part that you need to replace. It’s maybe already automatically ordered from the supplier of the replacement part, along with the information on how to replace it. So when the service technician arrives at the elevator, he or she already knows what to do and what to replace, and doesn’t need to go back and forth and then order and then wait for the part to come, et cetera. And that makes the whole process much more efficient, and it makes things much more stable for the people who use the elevator, because the elevator company can take care of such replacements and maintenance before, in German we say, before [foreign language] fallen into the-

SO:                                     Well-

SG:                                     Yeah.

SO:                                     Oh, there’s a terrible Lassie joke in there, but we’ll let that slide. So, okay. Let’s say that I work for an industrial company producing service and maintenance documentation. Now, if my organization has already started an initiative like this, then this is all not new information. But what about the people in organizations that, at this point, are still producing dumb PDFs? I mean, good maintenance instructions potentially, but just locked up in PDF, or locked up in a way that is not interconnected with the systems.

What would be your advice to those people? What are the first steps to start thinking about this? If I’m the tech writer and I know that my company is moving in this direction, doing more service management, predictive maintenance, trying to add some intelligence into my products, then what are the implications for me as a content creator? And what kinds of steps should I be taking proactively to make sure that I’m ready when eventually the director or the VP of something shows up on my doorstep and says, “Guess what? We’re doing Internet of Things, we’re doing Industry 4.0, and we need your content to be ready.” What do I do?

SG:                                     Well, you will not go very far with traditional ways of producing content in Microsoft Word, as a long Word document. What we basically need is what we call intelligent content. And intelligent content basically is structured, XML-based content, where you can add metadata, where you can attach attributes to the strings of content, machine-readable, so to say. So you could have attributes on a certain table with maintenance data, like the audience is the technician, or the audience is the machine, and then attribute the rows in the table to these two certain audiences. And this additional intelligence on top of the text strings themselves, this additional layer of intelligence that you can attach to the content, is only possible today with XML, and this is why Industry 4.0 scenarios or IoT scenarios are always based on content that is produced in XML.

And one great language to produce XML-based content is of course DITA, the Darwin Information Typing Architecture. This language makes it possible to put that additional intelligence into the content, or on top of the content, or attached to the content. This is one thing. So, structured content, there’s no discussion about that: if you want to be future-ready, if you want to be Industry 4.0-ready with your content, you need to work with XML, preferably with DITA. And then you need to also host that content somewhere so that it can be centrally managed, but also centrally consumed and centrally delivered. And that is something where you need a CCMS. You will not be able to achieve that with only RoboHelp or only FrameMaker; you need a component content management system. And that CCMS also needs to have APIs so that external content consumers can access the content through an API and pull out the information that they need.

And from that CCMS, on the other end, you can also deliver that content to what you call omnichannel. So basically the good old PDF, yeah, still very relevant, but you can also push that content into other systems, like, let’s say, Salesforce or Zendesk or your own help portal or support portal or [inaudible]-

SO:                                     Service management system. Yeah.

SG:                                     Or some other management system, or some machine system, where you can inject certain information from the CCMS side. But APIs are a crucial part there; without APIs that are accessible from the outside, you will not achieve a lot. So basically, XML plus a CCMS, and the CCMS with APIs: these are the things that you definitely need to have as a technology stack to be Industry 4.0-ready. And then of course, you need to think about how to create that content, how to write content, and how to migrate legacy content, because you don’t want to start from scratch with everything, right? You want to migrate that content and have a content ingestion engine where you can push your existing unstructured, non-XML content into a CCMS like Adobe Experience Manager Guides and get it migrated and transformed into XML, and then enrich it with the new possibilities that such a system offers, like attribution of content, metadata, taxonomy, et cetera.

SO:                                     And we talk about the world being more and more interconnected and more and more interdependent. And really, it seems like what you’re describing is a world where the content is an integral part of not just the product, but the product operations, in the sense of the maintenance tech, the service people, all these different people who are looking at both the product, our elevator, and also the maintenance, and connecting those together in ways that make the product better and safer, in that we have, as you said, predictive maintenance rather than after-the-fact maintenance, and potentially more efficient, right? Because, well, let’s just replace it at 98 hours instead of a hundred, instead of waiting for it to stop.

SG:                                     Yeah. And it’s also the personalization of that content. If you think of a car, let’s say you’re a car manufacturer and you have a model of your car, but this model is in the market in hundreds if not thousands of variants, with this xenon light or with that other light, or with all kinds of parts in the car that you can customize as a customer. But when you buy that car, traditionally you get that super big onboard manual where all possible variants are described.

And of course, not only is that reams of paper and not good for the environment, it’s also very unfriendly for the customer who wants to access information, because the customer doesn’t care about all the variants. The customer cares about that one configuration that he or she has bought. And imagine having a personalized onboard manual for your car, in two variants even: one on the screen in the car, in the human-machine interface there, and one in a maybe printed manual for security and backup reasons. And with this kind of personalization of content, you can maybe even have two different ways of representing the content: maybe in the technical manual that is shipped with the car, it’s legally approved, proper content.

And maybe on the screen, in the human machine interface in the car, in the display, it’s completely different content. Maybe it’s like, “Hey, Sarah, the oil in your car needs to be replaced,” or whatever. And this kind of personalization of content, for that you also need a CCMS. And to take that a step further, personally, I think in the future we will approach content in a new way, from delivering content to hosting content. In the past, we were always thinking about the output channels. We were always thinking about how can we create a nice-looking PDF, or how can we create nice-looking web help portals. And you also talked a little bit about that in your presentation at DITAWORLD, about content as a service.

I think in the future, companies will focus much more strongly on having a central place for their content, a CCMS, and then from there, the content is just pulled or delivered as necessary and needed in the different scenarios where their products are used. And this idea of not thinking about the output, but thinking about how to host the content, and in which format to host the content, et cetera, that is a new way of thinking about content. And I think this is where the future is going: companies are more and more focusing on centralizing that content in a platform that can be used by all kinds of content consumers.

SO:                                     And we’re already seeing some of this in the projects that we’re doing, exactly that model, and a really interesting move away from focusing on delivery endpoints and toward focusing on, I guess, enablement: making content available, but not necessarily being 100% focused on where it might be going. I mean, of course delivery is important, but it’s more the idea that we want to make sure this content is set up in such a way that it can accommodate today’s requirements, but also other requirements, the requirements we don’t know about, you know, the future stuff. That’s a really interesting, I think, challenge to the people who are listening. And so I think I’m going to wrap it up. There’s an enormous amount of, I think, food for thought in here. So, Stefan, thank you so much for coming in and sharing all this wisdom with us and all these exciting new possibilities for our content.

SG:                                     You’re welcome.

SO:                                     And with that, thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Industry 4.0 (podcast) appeared first on Scriptorium.

Structured content: the foundation for digital transformation (podcast) https://www.scriptorium.com/2022/08/structured-content-the-foundation-for-digital-transformation-podcast/ Mon, 15 Aug 2022

In episode 125 of The Content Strategy Experts podcast, Alan Pringle and Amy Williams of DCL talk about digital transformation projects and how structured content provides the foundation for those efforts.

If, as a company, you start to think and plan and build processes with the digital innovation, you really start to future-proof for yourself, because you’re going to become more agile, more flexible.

– Amy Williams

Transcript:

Alan Pringle:                     Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk with guest Amy Williams about how content structure provides the building blocks for innovation. Hey, everybody. I’m Alan Pringle. We have a special guest here today.

Amy Williams:                 Hi, Alan. This is Amy Williams. I’m here from Data Conversion Laboratory.

AP:                                     Hey there, Amy. First, let’s do some introductions so people know who you are and what your company does. So tell me a little bit about yourself and about DCL.

AW:                                   So I’ll start with DCL. We’ve been in business over 40 years.

AP:                                     Good for you.

AW:                                   And, yeah, I think we have you beat. I think we’re 1981.

AP:                                     Yeah. We’re ’97, so you have.

AW:                                   Right. Essentially, what we do is provide data and content transformation services and solutions. We use different technologies to provide those services, various AI technologies that you probably hear a lot about, like machine learning and natural language processing. And ultimately, we use those to help our customers structure their data and their content so they can use them in different technologies and on different platforms. That’s essentially what we do. I’m the Chief Operating Officer at DCL. I’ve been here for 24 years. I come from a management consulting background.

AP:                                     Wow.

AW:                                   I know, it’s a long time. I’m always shocked when I say it myself.

AP:                                     But hey, that means you know what you’re talking about. I think you’ve given us a good springboard with that introduction into what you and I want to talk about today. We’re going to talk about how structured content is the building block, the basis, whatever you want to call it, for doing these digital transformation and innovation projects. Would you give me your definition of digital transformation?

AW:                                   So really, I’d say, at its core, digital transformation is using digital technologies to create or modify business processes and your customers’ experiences. And the goal here: business needs are changing all the time, and you’re trying to meet those changing business needs and the market requirements. But I would say it’s really the re-imagining of your business in a digital age. So, I guess, if you think about it, most companies really started this transformation a long time ago. We used to have analog processes. People started to go digital. So that was sort of the first step in the digital transformation. But if you think about it, we had filing cabinets full of paper, and ledgers were built to [inaudible 00:02:47] their books. And then to digitize things, we went to word processors, and spreadsheets, and scanned hard copy.

I guess, when I’m talking about digital transformation, I’m talking about taking that next step and changing the way you’re doing your business, from your internal systems to your customer interactions. If, as a company, you start to think and plan and build processes with the digital innovation, you really start to future-proof for yourself, because you’re going to become more agile, more flexible. You’re ready to embrace these new technologies. Basically, everyone has to keep up with the times to succeed, so that’s really how I see digital transformation. That’s what it is.

AP:                                     Yeah, and that fits with ours. At Scriptorium, we of course have a very content-specific view of digital transformation, and our shorthand description, I think, can be summed up as something like using technology to enrich the delivery of information to customers. I think you hit on a lot of good points there, especially in regard to future-proofing. But let’s dial it back, go all the way back. You’ve got this big overarching idea, but, at the core of it, you’ve got to make some changes to the way that you handle information, the way that you handle content. And really, that pivot, from my point of view, and well, not just mine, but a lot of people’s point of view, is that structured content is at the core of doing this future-proofing so you can do this digital transformation. Do you want to talk about that a little bit?

AW:                                   Yeah. I totally agree. Obviously, we’re in sort of the same business here. To me, it’s the same thing: the key building block for digital transformation is structured content. I mean, there are other pieces of it, obviously. But from my perspective, and we’re both a little biased here, structured content is that key building block. I mean, I could talk a little bit about structured content if you want me to do that.

AP:                                     Yeah, we might as well. Why don’t we go ahead and define it. Like digital transformation, people have slightly different definitions of structured content, so let’s hear yours.

AW:                                   Okay. From my perspective, I mean, obviously, all companies, organizations, everyone has archives of content, and it’s different across industries. It could be historical documents, photos, industry standards, research. It just depends on the industry. The problem is it’s not all in a searchable format. I was just talking a little bit about digitizing as that first step of the transformation. But people think, “I took this, I scanned it, I’ve got a PDF. It’s a digital document.” Well, obviously, it is digital, but it’s not really, because it’s not in a truly searchable format. So that’s where the structured content comes in. We have to take that image-based PDF and take it to the next level. So you can run it through an automated OCR engine.

AP:                                     And tell people what that is.

AW:                                   Oh, so OCR is an Optical Character Recognition engine. And when you run it through the OCR, you get text behind that image. It’s not always beautiful text. It can be searched, but sometimes it doesn’t come out exactly right. I’s and ones and l’s might be mixed up. It depends on the source format and what the quality is. So it can be searched. The problem is, if you don’t know the structure of that text, because basically you just have a bunch of text behind that image, it’s not going to be a very efficient search. So that’s where the issue comes in. And really, most of the content that people are producing now, for the most part, is not structured. People are using Word and Google Docs, and that really produces unstructured content.

And what’s happening here is, when you’re writing these things, authors that are typically writing a Word document or Google Docs or something like that are really concentrating on the way the content looks, instead of what that content actually is. So for example, if you’re writing something and you have an introduction to a journal article, you say, “This introduction is going to be bold.” Well, in XML, in structured content, you would say, “This is an introduction.” You would actually say what this is. So when we talk about a searchable format, I’m really talking about XML here. That’s what we’re talking about.

AP:                                     Sure. And like you said, we’re both kind of biased. I would agree with you. XML is really the way to do structured content. And when I say structured content, what I am saying is it’s a publishing workflow that lets you define very consistently organized content in your documents, programmatically, so a human being doesn’t have to do it. So it sets up rules: you’ve got to have an introduction that has these types of elements; you’ve got to have a procedure that has this kind of structure. So all of that is programmatically enforced. And on top of that enforced structure, and this is the critical part, and I think you may agree with this too, because again, we’re both biased, you can add a layer of intelligence that is really necessary from this delivery perspective in particular, from my point of view.
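The contrast between formatting-oriented and semantic markup can be sketched in a few lines. This is an illustrative fragment, not a real schema; the element names are made up:

```python
import xml.etree.ElementTree as ET

# Presentational markup only records how the text looks.
presentational = "<p><b>This article surveys structured content.</b></p>"

# Semantic markup records what the text is; styling is applied downstream.
semantic = ("<article><introduction>"
            "This article surveys structured content."
            "</introduction></article>")

intro = ET.fromstring(semantic).find("introduction")
print(intro.tag, "->", intro.text)
```

Because the role lives in the markup itself, a search, a stylesheet, or a downstream system can target `introduction` elements directly instead of guessing from bold text.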

AW:                                   Right. And I’m assuming you’re talking about a metadata layer, right?

AP:                                     Exactly. Exactly. Yes.

AW:                                   Right. So in time that will facilitate an even more efficient search in your content management system or your website. Basically, if you can’t find your content, it’s really not usable, so that’s really the key here.

AP:                                     Exactly. And it goes both ways. For the people who are creating the content, because if you have all of these bits and pieces of structured content inside a content management system, the people who are creating the content need to be able to gather all the bits that they need. And if they can’t find them, they’re probably going to rewrite it, which is what you don’t want. Plus, on the delivery side, you may need that intelligence to personalize that content so you can send out something that is very specific to a region, or to the audience, or whatever else.

AW:                                   Right. You sort of touched on that a little bit there, because to me, one of the biggest benefits of structured content is content reuse; the structured content facilitates content reuse. So basically, instead of creating and recreating, copying and pasting content, you’re creating that XML once, that instance once. And then other people in your organization can use it or reuse the content, but you’re also going to be able to publish it everywhere. So different apps that need it, or integrated systems, can use that XML and render it for different devices, and generate PDFs for distribution, create eBooks, all of those things can happen once you have that structured content in place. And really, I guess, the opportunities are endless as far as I see it. But it all comes back to that building block of structured content.
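A minimal sketch of how metadata on reusable XML topics might drive personalized delivery. The `audience` and `region` attributes here are hypothetical, not from any real standard:

```python
import xml.etree.ElementTree as ET

topics = ET.fromstring("""
<topics>
  <topic audience="technician" region="us">Replace the filter every 90 days.</topic>
  <topic audience="consumer" region="us">Call support to schedule filter service.</topic>
  <topic audience="consumer" region="eu">Contact your local dealer for filter service.</topic>
</topics>
""")

def select(audience, region):
    # Pull only the topics tagged for this delivery scenario;
    # the content itself is written once and reused everywhere.
    return [t.text for t in topics.findall("topic")
            if t.get("audience") == audience and t.get("region") == region]

print(select("consumer", "eu"))
```

The same topic store serves every output channel; each delivery just filters on the metadata instead of copying and pasting content.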

AP:                                     Sure. I’m glad you brought up reuse, because when people hear digital transformation, they may think of big, shiny, beautiful marketing things and all the fancy technical ways that you can deliver content. But that reuse angle lets you basically give a very common voice, give your clients, your customers, the same information regardless of where they are, whether they’re in the sales cycle, or they’re using the product, or whatever else. By reusing that content, you are giving consistent messaging. And yeah, it’s not as glamorous as some flashy kind of personalized distribution scheme. But that really, I think, is super important when we’re talking about digital transformation.

AW:                                   Right. I agree. And you hit the nail on the head. It’s not super fancy. They say content is king. It’s true. It is.

AP:                                     Yeah. Absolutely. So your wheelhouse, like ours, is structured content. So why don’t you tell me what you’re seeing out there right now, as far as trends go with structure helping these digital transformation scenarios?

AW:                                   Right. So there are a few different things that we’re seeing. I wanted to talk a little bit about the pharma industry, because we’re seeing a real big uptick in the use of structured content in that area, in life sciences and pharma. And really, what’s driving that is, you can imagine, there’s a lot of documentation required to bring new drugs to market. So here in the US we have a markup language called SPL, for Structured Product Labeling, that the FDA has mandated. But what we’re seeing now is the pharma companies looking past that, and worldwide. I mean, we’re dealing with companies all over the globe right now. And they’re starting to look at where they can implement other tools and technologies that are using that structured content.

And the types of applications we’re being asked to support are really streamlining the content around product labeling in the pharma industry. And you know what the goal there is: they’re trying to improve the way that the content’s created and managed and delivered. It’s a full end-to-end. And at the end, they’re connecting that product content with the graphic templates. And they’re really putting together a fully automated workflow around labeling. I mean, it’s really amazing. It’s really a transformation of that whole internal publishing process for pharmas. It’s the same kind of thing that we’ve always done in tech docs. And really, the pharma industry is starting to come around to that end-to-end process and using structured content underneath. So it’s really, really exciting.

And the other trend we’re seeing in the pharma industry is they’re also starting to use structured content for direct end user consumption, like through mobile apps. We just recently worked on a pilot, still around labeling. You know the labels you get when you get a prescription drug, the pieces of paper that you fold up? They’re really going online and digital with those things. And they’re looking at ways for the end user, you as a consumer, to go and get the most up-to-date information about those products. So that’s really interesting also.

AP:                                     So this integration you’re talking about, it kind of is an integration in two ways. You’re integrating your processes, which really assists with this automated delivery of content. But you’re also integrating things in regard to delivery, making it much easier for people, your consumers, to get information, because it’s no longer just on a piece of paper. Not everybody wants to read a piece of paper in the 21st century anymore.

AW:                                   Right. Right. And the piece of paper may be out of date.

AP:                                     Exactly.

AW:                                   And that’s really important. It’s a liability issue. I think a big reason why this is being embraced in the pharma companies is liability and risk, minimizing risk.

AP:                                     Yeah. And I’m glad you brought that up. Again, digital transformation is not just about the shiny stuff. It can really help with regulatory compliance, a lot, and give you, basically, all of the intelligence you need to keep track of things, the archiving and whatever else, because you’ve got that really nice integrated process in the background managing all that information for you.

AW:                                   Right. And it’s interesting, Alan, the legislation, that was the other area that I wanted to hit on when you asked me about trends. We’ve worked on a few projects now where we’re harvesting this complex legal and regulatory content from public websites. And we’re seeing this trend in several industries. I’ve seen it in the financial industry. We’re seeing it in insurance and legal and accounting. And what’s going on is there’s all this information that appears only on public websites, this legal and regulatory type information. And those sites are constantly being updated with new content, modified content. It’s just so hard for people to keep track of it, for companies especially to keep track of it. And it’s extremely valuable, but there’s no standard for it or anything. And it’s a real challenge for companies that need that data so they can be in compliance.

And so what we’re seeing now is a bunch of projects where we’re developing applications that are harvesting that information on a continuous basis and then structuring it, putting it into some form of XML, and feeding that XML to their downstream systems. So it’s streamlining that compliance process, and back to avoiding the risk of non-compliance. I mean, they’re really, really important applications.

AP:                                     Yeah. And that’s certainly better than keeping 1,400 filing cabinets full of musty old paper, isn’t it?

AW:                                   Right. Right. And I don’t think they were really doing that. I mean, they have the information. It’s on websites. The problem is, how efficient is that? If you have up to 150 legislative websites that you need to keep track of and comply with different laws, it’s very difficult. You can have a whole stable of attorneys or legal aides sitting there working on this, but it’s just not efficient unless it’s in a structured format, and a consistent structured format. You can look at one website and it’s one way, and another website’s another way. And we’re talking documents here. It’s a little different than on Amazon, your product details, that type of thing; you’re talking about full legal documents.

And then you have to know what changed and what got updated and what got deleted. And you need to know that on an ongoing basis. And you need to follow those. So, I mean, it’s a lot of really valuable information that needs to come out. So we’re seeing a lot of these harvesting projects happening, with structured content being the outcome.
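One simple way to track what was added, modified, and deleted between harvests is to fingerprint each document. A sketch with made-up rule IDs and text (a real pipeline would also diff the structured XML, not just hashes):

```python
import hashlib

def fingerprint(text):
    # Hash the normalized text so trivial whitespace changes don't register.
    return hashlib.sha256(" ".join(text.split()).encode()).hexdigest()

previous = {"rule-101": fingerprint("Firms must file annually."),
            "rule-102": fingerprint("Records kept five years.")}

current = {"rule-101": fingerprint("Firms must file annually."),
           "rule-102": fingerprint("Records kept seven years."),  # modified
           "rule-103": fingerprint("New disclosure rule.")}       # added

added = current.keys() - previous.keys()
modified = {k for k in current.keys() & previous.keys()
            if current[k] != previous[k]}
deleted = previous.keys() - current.keys()
print(sorted(added), sorted(modified), sorted(deleted))
```

Running the comparison on every harvest is what lets the downstream compliance systems see only the entries that actually changed.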

AP:                                     Are there any other projects that really show some, I don’t know if surprising is the right word, but uses that you may not necessarily consider as being a digital transformation project that you want to talk about?

AW:                                   I think mostly everything we get involved in is a digital transformation project. I mean, we have some, I think, particularly interesting projects. But there’s one that we’ve been doing, we’ve been working for over 10 years now, with the US Patent and Trademark Office. And I mean, it’s another good example of digital transformation. So you can imagine the USPTO receives a massive volume of patent application materials on a daily basis. And it’s a lot of different document types. And this is a lot of information. And they did have a process to digitize the incoming material. They had a whole scanning process going on, but they were scanning to TIFF images. So it’s back to that same thing, you’ve got this information in sort of a static digital-

AP:                                     It’s a picture, essentially. [inaudible 00:17:57], yeah.

AW:                                   It’s a picture. Right, right. So it was taking the patent examiners way too long to go through the material. They had a multi-year backlog, when we started this, of reviews and approvals of patents, which obviously is not acceptable to anybody. So at DCL, we developed a fully automated system for them that transforms that high volume of scanned images to their XML schema. So they have their own XML schema. And what’s interesting about this… Well, the volume is interesting, because this is totally lights-out; no human hands are touching this process. And we’re processing about one and a half million pages a month. And the turnaround’s under 10 minutes. So it’s fully lights-out conversion. And in some months the volumes have gone to two and a half million pages in a month. And it can scale to several times that.

But the really interesting part of this is the way that we’re doing the OCR, because we talked a little bit about OCR and how you can scan something, and then what you get behind it is not so great. Sometimes the OCR doesn’t work very well with tables and things like that. So the process that we developed uses computer vision technology, and it automatically detects content that isn’t suitable for OCR. So things like math, and there are a lot of chemical equations. You can imagine in patent applications, they have a lot of those chemical… I don’t know what they’re called, equations? Or the pictures, the chemical pictures, the formulas, that’s what they are, and tables and things like that. So this process will extract those artifacts before it actually runs that OCR process.

So you’re running the OCR process just on text, so you get a better result. It’s removing those pieces that won’t OCR properly. And then we transform the content to XML and repackage the XML with those artifacts that were removed. And you do that based on the page coordinates. When we did that computer vision to figure them out, we kept the page coordinates. And then you put them back together, and then they get delivered to the USPTO.
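In outline, that extract-then-reassemble flow might look like the sketch below. The region list and the `ocr` stand-in are invented for illustration; the real system uses computer vision detection and a production OCR engine:

```python
# Regions detected on a scanned page, each with a page coordinate and a kind.
regions = [
    {"y": 0,   "kind": "text",    "data": "Claim 1: A compound of formula I."},
    {"y": 120, "kind": "formula", "data": "formula-1.tif"},
    {"y": 240, "kind": "text",    "data": "wherein R1 is alkyl."},
]

def ocr(region_text):
    # Stand-in for a real OCR engine; only text regions pass through it.
    return region_text

def convert_page(regions):
    parts = []
    # Reassemble in page order using the stored coordinates.
    for r in sorted(regions, key=lambda r: r["y"]):
        if r["kind"] == "text":
            parts.append("<p>%s</p>" % ocr(r["data"]))
        else:
            # Artifacts that won't OCR are re-inserted untouched, by position.
            parts.append('<graphic src="%s"/>' % r["data"])
    return "".join(parts)

print(convert_page(regions))
```

Keeping the coordinates is what makes the round trip lossless: the OCR-unfriendly pieces come back in exactly the place they were cut from.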

AP:                                     Okay. I’ve learned something today. That is absolutely fascinating.

AW:                                   This is really interesting.

AP:                                     That is fascinating.

AW:                                   Yeah, it’s very interesting. And the result, it was a great result. It significantly improved the patent examination efficiency and the productivity of the patent examiners. At the USPTO, they’re taking the structured content, and they have automated analytics that they use. They’re generating these claim trees. They report on different claims. There’s term and phrase identification. There are all types of things they’re doing with the structured content. And it’s really amazing. I mean, they’ve significantly reduced their backlog. I don’t think they have that multi-year backlog anymore. It’s been a really successful project.

AP:                                     And if you think about it, this is the kind of thing that you can pandemic-proof, or that can help reduce the risk of events like a pandemic, because if you have these digital automated processes, you’re not as reliant on people getting together and being together to do this kind of work.

AW:                                   Right. Yeah, it’s a pretty cool project. The other one I thought might be interesting is one for NYPL, the New York Public Library. I’m a New Yorker; everyone knows what NYPL is. So they obtained from the US Copyright Office this catalog of copyright entries. And it’s basically this huge, vast collection of copyright entries dating back to 1891. And it’s really old material. So what’s in there is just the copyright status of millions of works. And when you think about what a page would look like, I mean, they have about 450,000 pages of this stuff. But each page, they’re very dense pages. It’s three or four columns, and they’re just these little catalog entries, columns of catalog entries. So each one could have a hundred entries on one page.

When NYPL came to us, what they wanted to do was create a database so somebody can quickly get online and determine the copyright status of a specific piece of work. They’re trying to benefit the publishing and scholarly communities, so they understand what’s within copyright and what’s not. So we developed a process there also to extract the text, again using this page coordinate data, which we’re seeing a lot in these systems. In these systems that the end users are using, they want to show the page as it was scanned. So they want to show that image piece, and then they want to show the extracted text that’s fielded. So we use OCR engines that produce page coordinate data to be able to facilitate that type of display for the end user. It’s interesting.

It’s based on funding for NYPL, so we’ve done three different tranches of this work. And as they get new funding, we do more. But really what’s happening is the users are able to search across these hundreds of thousands of records with a very high degree of confidence now. And they can search by specific fields. They can identify records relevant to their search. Like I said, they can use the machine-readable text and the image record. I love this one. Actually, NYPL refers to this project as the unlocking of American creativity, which I think is great. But that’s really what it is.

AP:                                     Because if something doesn’t have a copyright, that means someone else can take it and use it as a building block, perhaps.

AW:                                   Right. They can use it. I mean, I think that in the end, eventually, if it’s a book that is no longer under copyright, maybe they’ll be able to get an ebook on demand. There are just so many different applications for this. But it is unlocking all that creative work, whether it’s music, records, books, all the different types of things that people can have free access to if it’s not under copyright anymore.

AP:                                     And again, this is metadata. At the end of the day, copyright or not, that is a piece of very important metadata.

AW:                                   Yep. So we’re back to structured content and metadata as the key to digital transformation, from our perspective.

AP:                                     Yeah. Those two case studies were really fascinating. And to wrap up, do you have any advice for companies who want to do something a little more innovative and are considering structured content?

AW:                                   Right. So, I mean, I think, like we said, structured content is one building block of that digitization strategy. I always have a hard time with that word. Digitation, there I go again. Anyway, my advice would be, I think you need to start with an overarching digitization strategy, and that needs to be well thought out before you take on a structured content project. That’s from my perspective. And I think you need to answer some larger questions here before you say, “Oh, I’m just going to create XML.”

What kind of content management system are you using? Or are you going to use a component content management system and go to DITA? What downstream systems are you going to use for your structured content? What are you doing with the content? How’s it going to be utilized? Who’s going to update it? And who’s going to use it? And how will the content be created and structured? New content, how are you going to create that in a structured format? So those are a few of the questions.

But again, my advice, because again, we’re biased, I would suggest working with consultants and partners, and not just because I’m biased. It’s because I think it’s a great way to get started on drafting that overarching strategy, because part of the advantage is you’re drawing from the experiences across different clients. Both of us have experience working with many clients and many projects, and we can draw on those experiences. So first, to me, would be: create that overarching strategy.

And then, this one’s going to be near and dear to your heart, Alan: once you decide on a structured content project, you’re going to want to develop a content model first. And that’s you. And you want to make sure it’s supporting a good representative set of content. So if you’re in the pharma industry, you want to make sure that you’re covering different drugs and products and different localities, because they’re global, and different document types. With journal content, you want to make sure you’re looking at time spans, because, just like we talked about for NYPL, something from the 1800s is going to look very different from something in the 1900s. And then, I think, the content model is key, which is where you guys come in.

And then you’re ready for your actual conversion. Once you have that detailed content model, I think you’re ready to go into a structured format and start with a pilot and some samples. And I would suggest significant testing with downstream systems before you begin a conversion of a full set of data, because if you have a large volume of content, you don’t want to have to go back and redo anything. But once again, I would suggest working with a company that does this. Not only will you be able to draw from the years of experience, which I already said, but, like I just talked about in a couple of these examples, we can apply some automation to the conversion process, which is going to produce a higher quality and more consistent data set.

AP:                                     Absolutely. And I will say one thing about conversion, why I think it’s really wise to use a vendor. If you are doing one of these big, innovative digital transformation projects, there’s going to be some change management you need to do to get people moved off the old way into the new systems. The absolute worst way you can introduce a content creator, in particular, to a new system and a new way of doing things, is to have them manually convert from the old system to the new system. You will gain so much hate and so many despondent, unhappy people, that right there is another perfect example of why you need to consider hiring professionals to do your conversion work.

AW:                                   Yeah. A lot of our work nowadays, are we still calling it re-platforming?

AP:                                     Yeah.

AW:                                   That’s really what it is.

AP:                                     [inaudible 00:28:41] the word, actually, yeah.

AW:                                   That's what we've been doing. We take your content from one platform and move it to another platform. And sometimes we're doing conversion from one XML to another XML. But we do a lot of re-platforming, and it's a big, messy job. This is what we do. I mean, Data Conversion Laboratory, that's all we do, so yeah.
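XML-to-XML conversion of the kind described here can be sketched, in its simplest form, as a tag-mapping pass over a document tree. Both vocabularies below are invented for illustration; real re-platforming projects involve far more mapping rules, validation, and QA.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from a legacy XML vocabulary to a new one.
TAG_MAP = {"chapter": "topic", "heading": "title", "para": "p"}

def convert(elem: ET.Element) -> ET.Element:
    """Recursively rebuild an element tree, renaming tags per TAG_MAP
    and preserving attributes and text."""
    new = ET.Element(TAG_MAP.get(elem.tag, elem.tag), dict(elem.attrib))
    new.text = elem.text
    new.tail = elem.tail
    for child in elem:
        new.append(convert(child))
    return new

legacy = ET.fromstring(
    "<chapter><heading>Intro</heading><para>Hello.</para></chapter>"
)
print(ET.tostring(convert(legacy), encoding="unicode"))
# → <topic><title>Intro</title><p>Hello.</p></topic>
```

Automating the mechanical renaming like this, and reserving human effort for the structural judgment calls, is what makes a converted data set more consistent than a hand-migrated one.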

AP:                                     Exactly, exactly. Amy, this has been a really interesting conversation. I cannot thank you enough.

AW:                                   You are so welcome. Thank you for having me.

AP:                                     You are most welcome. Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Structured content: the foundation for digital transformation (podcast) appeared first on Scriptorium.

Omnichannel publishing
https://www.scriptorium.com/2022/08/omnichannel-publishing/
Mon, 01 Aug 2022 12:00:39 +0000

In episode 124 of The Content Strategy Experts podcast, Sarah O’Keefe and Kevin Nichols of AvenueCX discuss omnichannel publishing.

“Omnichannel involves looking at whatever channels are necessary within the context of your customer’s experience, how your customers engage with your brand, and then figuring out how to deliver a seamless interaction.”

– Kevin Nichols


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way.

SO:                   In this episode, we talk about omnichannel publishing with special guest, Kevin Nichols of AvenueCX.

SO:                   Hi, I’m Sarah O’Keefe. Today I have Kevin Nichols joining me. Hey, Kevin.

Kevin Nichols:                   Hey, everybody.

SO:                   How are you doing over there?

KN:                   I’m doing well. I am hailing from Cape Cod, Massachusetts, so it’s nice and sunny here today.

SO:                   Excellent. Before we jump into omnichannel, tell us a little about who you are and who AvenueCX is and what you all do.

KN:                   I’m Kevin P. Nichols. I have been doing digital and content strategy now for, God, probably almost 25 years, digital and user experience content strategy, that type of work.

KN:                   I started this company with my business partner, Rebecca Schneider. We co-founded it in 2015, actually 2016. We specialize in enterprise content strategy solutions. Those can then dovetail into omnichannel content strategy personalization.

KN:                   We do a lot of taxonomy work. We do solutions like federated search or cross-channel customer journey content solutions, or integrated customer experience content solutions.

KN:                   We work with large scale global brands, in standing up their content solutions across the enterprise. All of our clients are global. They’re all large brands. They all have for the most part, very complex content issues that we are working to help them solve.

SO:                   What is this omnichannel thing? You hear about omnichannel and also multichannel and single channel. Are those all flavors of the same thing or is omnichannel something different?

KN:                   It’s completely different. Let’s start with omnichannel. The way that I define omnichannel in 2022 is from the customer experience, because it needs to start with the customer experience.

KN:                   I would define it as seamless interaction from one customer touchpoint to the next, throughout a brand’s customer experience.

KN:                   That means the customer has seamless interaction, whether he, she or they are engaging with one touchpoint or another, throughout that brand’s customer experience.

KN:                   It comes from the Latin root omni, meaning all. So, think omnivore, omnipotent, omniscient. It can also mean every, and then with channels.

KN:                   Channels here are not just digital channels. It’s not just a website or a mobile app or a mobile device. It’s also analog.

KN:                   We have, for example, in-store is a channel. Television is a channel. Radio, believe it or not, is a channel. Print is a very important channel.

KN:                   There’s a lot of research, for example, on the importance of print, because print is permanent. There’s a permanence to print. There’s all this research that has been going on now for a while.

KN:                   Sappi, if you can get your hands on it, has an incredible book or research report they put out, called The Neuroscience of Print, or The Neuroscience of Touch, rather. But print is another important one. Direct mail, for example, is a channel.

KN:                   So omnichannel looks at whatever channels are necessary within the context of your customer’s experience and how your customers engage with your brand and figuring out how to deliver them a seamless interaction, as they go from one channel to the next.

KN:                   Now, when did this concept start? The term originates around, well, actually in 2010. If you really want a good read on the history of omnichannel, Savannah Louie at NectarOM published a brief history of omnichannel in 2015 that positions its beginning at 2010.

KN:                   What happened in 2010 is you really get the proliferation of smartphones and people needing to be able to pull things up on them. You also have the technology to support smartphone functionality.

KN:                   So, people want to be able to engage with brands on their smartphones as much as they do in their desktop experiences, particularly in the West.

KN:                   This means they expect functionality on their phones similar to what they get from their websites.

KN:                   If you go back even further, we see in the early 2000s that Best Buy kind of pioneered this concept. They had a website that would offer functionality recognizing what the customer did in store and then tie that into their customer support. They wanted to rival and compete with Walmart.

KN:                   We called this assembled commerce. This is kind of where it has its origins.

KN:                   I’m even going to go back further. I’m going to go back to Martha Stewart Omnimedia. She named her company Omnimedia. From a storytelling narrative structure and product placement perspective, I call Martha Stewart the mother of omnichannel.

KN:                   She did cross-referential advertising, where she took a cookbook and referred to it in her magazine, and then did the same on her television show. Created hooks and then created cliffhangers to tie it all together, from the narrative and storytelling perspective. Did it in this omni experience, before anybody else did, and then built a whole platform around that. And then tied that into the website, once web started getting big.

KN:                   So, I really credit her with creating this branded customer experience around omni, way back when. This is where the foundations of omnichannel begin.

KN:                   Now there’s key concepts that comprise omnichannel. One of them is called single view of the customer.

KN:                   Single view of the customer means that no matter where the customer is, the brand or the organization or business has data points. They’re going to be able to know what that customer’s doing and how they’re engaging with the brand.

KN:                   So from a data perspective, you know what they’ve purchased, or you know what is in their shopping cart. So, if they’re in store and they add something to their shopping cart and then they decide to check out online, you’re able to track that.

KN:                   If they purchase something, you’re able to make recommendations for what you might be able to cross-sell or upsell. You can offer support, based on their previous purchases. If they have done one piece of support, you’re able to offer them support, based on what you previously supported, et cetera.

KN:                   There’s also integrated product inventory. They add something to shopping cart at home, they’re able to pick it up in store.

KN:                   Unified customer journey is another concept. So, regardless of where they are within their customer journey, you’re able to give them what they need, and then push them from one stage of the customer journey to the next. Those are key concepts in omnichannel.

KN:                   Then there are capabilities that we’re able to deliver on in omnichannel: self-service checkout in store, and curbside pickup, which became huge during COVID.

KN:                   So note, during COVID everybody put their supply chain management in the cloud, which created the need for content in the cloud, which in turn created the need for things like self-service, contactless types of engagement with the customer. This all accelerated omnichannel capabilities, obviously.

KN:                   The notion of BOPIS, Buy Online, Pick up In Store, contactless payment options, these are all things that existed in the omnichannel realm that have all kind of been fast-tracked because of COVID-19.

KN:                   Showing store inventory online in real time, for example, that’s all omnichannel capability and functionality. You see it getting more and more sophisticated, but I think COVID-19 definitely brought it much more to fruition.

KN:                   I’m going to go back to the definition. It’s a seamless interaction, from one customer touchpoint to the next, throughout a brand’s customer experience.

KN:                   Now, multichannel just means you’re able to deliver content to more than one channel. I can deliver content to a website. I can deliver content to a radio. It doesn’t mean I’ve integrated that customer experience to where they’re all interconnected. I forget what else you asked about.

SO:                   I mean, that’s really the interesting point here, is that when you talk about multichannel whatever, what we’re talking about is a publisher or an author-centric view of the universe. I made this content, and I can publish it to multiple channels.

SO:                   What you’re talking about is a holistic view of everything. Not just content, but customer interactions and eCommerce and all the rest of it and what that looks like across all these different potential platforms.

KN:                   Yes. The omnichannel lens takes it from the customer experience and works backwards from that. In order to execute omnichannel correctly, you need to understand the customer journey and then from that, specific customer tasks and what they’re going to need. And then you’re going to have to build a content operations model, to be able to deliver against that.

KN:                   So, you need things and it’s complicated, because it’s not just from a content perspective, but you need omnichannel order fulfillment, for example. You need warehouse management systems. You need supply chain management optimization. But there’s a lot that goes into it, and so these systems are complex.

KN:                   One thing I tell people when I’m speaking on omnichannel… I’ve been speaking about this before anybody in content strategy was talking about it. We go way back. Sapient, where I worked before I started this, was kind of delivering some of the first omnichannel experiences out there.

KN:                   I think in 2012, we did one for… Well, we were doing this for clients, but big box and big retailers.

KN:                   One of the things I say to folks is, you may not be able to do this. A lot of people that are smaller or smaller companies cannot do this, but there’s lessons that can be learned from it.

KN:                   You can all understand your customers and try to build more customer-centric experiences. But it certainly isn’t for everyone, because it does require a level of technology sophistication that not everybody’s going to be able to execute.

SO:                   Yeah. I think most of the people listening to this podcast are in the content world. So, what does that look like?

SO:                   I mean, I kind of envision a scenario where we’re talking about people that are like, “Yeah, yeah, I’ve done multichannel. I get that. But I’m being asked to take that next step and become more customer-experienced focus and start thinking about these omnichannel issues.”

SO:                   So what does it look like to establish or to plan content strategy for an omnichannel client or an omnichannel world?

KN:                   So let’s go back to… and I forget when they pioneered the concept, but it was the mid-2000s. So, NPR pioneered the concept of create once, publish everywhere. It’s kind of being able to actualize that, but take it a step further. So, not just create once, publish everywhere, but create once, publish everywhere so that it’s optimized for the customer and his, her, or their needs.

KN:                   For example, you’re able to anticipate what the customer needs in that particular channel. You don’t have to boil the ocean. You can say, okay, let’s start small. Let’s look at the customer journey, and let’s do some customer journey modeling.

KN:                   Let’s figure out what that customer needs, if she comes into the store, from a content perspective. Let’s deliver upon that. Let’s create content specifically, that’s optimized for the channel. Let’s create a publishing model that can support that.

KN:                   So let’s go beyond the small, medium, large messaging that needs to support the channels. Let’s develop some channel-specific messaging, that’s optimized for that customer need. So that’s what that looks like, if that gives you a little bit more of a flavor.
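The "create once, publish everywhere, optimized per channel" idea can be sketched in a few lines: one structured content source, rendered differently per channel. This is a hypothetical illustration (the content fields and channel names are invented), not any particular CMS's API.

```python
# One structured content source, rendered differently per channel.
# The fields and channel names are invented for this example.
content = {
    "title": "Curbside pickup now available",
    "summary": "Order online and pick up at the curb within two hours.",
    "detail": "Choose curbside at checkout, park in a marked spot, "
              "and we'll bring your order out.",
}

def render(channel: str, item: dict) -> str:
    """Return channel-optimized text from a single content source."""
    if channel == "sms":    # short and action-oriented
        return item["title"]
    if channel == "email":  # title plus summary
        return item["title"] + "\n\n" + item["summary"]
    if channel == "web":    # full detail
        return "\n\n".join([item["title"], item["summary"], item["detail"]])
    raise ValueError("unknown channel: " + channel)

print(render("sms", content))  # → Curbside pickup now available
```

The point of the design is that the channel-specific optimization lives in the rendering layer, so the source content is authored once and every channel stays in sync when it changes.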

KN:                   Maybe using personalization and advancing that and being channel-specific, based on the customer. And then doing some audiencing and figuring out how to layer persona or customer targets and customer target messaging onto it.

KN:                   Personalization can be a powerful tool to help you advance your omnichannel strategy as well. And if you have any type of personalization engine or personalization tool built into your content management system, whether that’s a headless CMS or whether it’s more of a Sitecore or Adobe Experience Manager, you can really execute some of this quite well.

SO:                   Right. I think that leads quite nicely into the next topic, which is, what about content operations? I mean, what does it look like to be worried about content ops in an omnichannel context?

KN:                   You really have to understand your customer. This is where insights becomes important, so it’s not just a… And data, so getting your data cleaned up.

KN:                   I’m hearing more and more from data folks about, it’s not just data. It’s structured data. It’s the right data, and it’s understanding how to leverage that data.

KN:                   So it’s like having a data strategy similar to a content strategy, and then understanding those insights, qualitatively and quantitatively, in order to be able to make informed decisions around who your customers are and what they need. And then being able to figure out how to model a content operations around that, to support that and stand that up.

KN:                   From a content operations perspective it’s, how do we publish in a way that’s going to be able to give our customers the content they need?

KN:                   This is what I talk about, customer-centric content operations, to be able to publish in a way… Rahel Anne Bailie has noted before, you’re not going to ever…

KN:                   I mean, she’s not the only one that does this, obviously. You can’t eradicate silos, but you can ventilate them. It’s really important to think about…

KN:                   This is really, really tough. I just talked to Tony Byrne, or was in a meeting with Tony Byrne. He talks a lot about this because he’s with Real Story Group. They evaluate CMSs, CDPs, and other types of technology solutions.

KN:                   But just the difficulty in doing that and getting all the different folks in the room and having them play nicely together, to rally around the customer experience, it is a challenge.

KN:                   But in order to do this correctly, you’ve got to get those people in place across the different organizational units, to figure out how to build an operations model that’s going to be able to support that unified customer experience.

SO:                   So who typically leads this? I know the answer’s you, but if you’re going into an organization, you have a client and let’s say it’s a… It sounds as though retail certainly has been on the leading edge of some of this, but you go in there. Who’s the executive that has the span of control to manage all of this?

KN:                   That’s why I laughed. I wasn’t laughing because we go in and help businesses do this. It’s kind of like, hmm, a good question. Here’s the reason why.

KN:                   This gets into who owns the budget. Interestingly enough, a lot of the budgets for these types of engagements have shifted out of marketing. They’ve moved to customer experience.

KN:                   We’re seeing them co-owned by CIOs, CX and CTOs. It’s really interesting how this is going to play out.

KN:                   So, you get into personalization. You get into CDPs, customer data platforms. You get into all these types of things that are facilitating the necessity of bringing all these different groups together.

KN:                   The jury’s still out. I mean, I think it’s going to be a hybrid between customer experience and technology.

KN:                   I mean, obviously, technology shouldn’t own it, and data. I mean, it’s interesting. The jury is still out. CIOs are making a play for this as well.

KN:                   Forrester and a lot of the analysts… McKinsey did a report on the advancement of CX. There was a lot of reporting that came out a couple years ago, that budgets were shifting to customer experience from marketing, as the emphasis was placed on the importance of customer experience.

KN:                   This elevated, by the way, the role of technical content, technical documentation, self-service content, help content, all this kind of stuff. Which was great for us, but it also meant that the waters got really blurred.

KN:                   When you start laddering in all of this cross-functional technology, cross-functional business requirements and needs for things like omnichannel, it becomes difficult to say, who does own these budgets?

KN:                   We do a lot of enterprise engagements. We are being asked to do more and more. So, when you get into something like governance, who owns governance across the enterprise for content? It’s a really good question.

SO:                   I mean, that sounds like one of them. But what are some of the biggest obstacles that you run into, other than apparently, let’s see, a problematic diffusion of responsibilities across executives and some questions about ownership? That sounds plenty challenging on its own.

KN:                   I was going to caveat this. In traditional retail, most of these larger… Some of them have a Chief Omni Officer.

KN:                   Okay. The ones that have been doing it for a while, like Macy’s, Nordstroms, they actually have departments that do this. They’re situated, and they stand it up well.

KN:                   But in newer ones, that have had the traditional organizational matrices, they don’t have a chief digital…

KN:                   It’s not just digital, by the way, because there is that other element, like I said. But the ones that are coming from a brick and mortar structure, that adopted omnichannel early on, they’ve kind of set it up so that it can be successful and they’re doing it well.

KN:                   It’s the ones that are the later adopters that are seeing real challenges with this, I think.

KN:                   A lot of marketing does have omnichannel departments, in a lot of bigger companies, but it is definitely a challenge.

KN:                   I think the biggest challenges are silos. Data is another huge, huge challenge. A lot of companies are moving away from data warehouses, data lakes. They’re trying to do these more integrated data solutions.

KN:                   But being able to harness data from all these different systems, report out on it, have clean data, have structured data, have a good data strategy that integrates across the platforms and then execute that, that’s a huge challenge for a lot of companies. That’s another really key challenge.

KN:                   Integrated content strategy across the platforms is a huge challenge. Personalization remains a huge challenge. To do this successfully, you need to be able to adopt strong personalization capabilities, if you want to take it to the next level, because you got to personalize content and offer that within your customer experience. So, I would say those are key challenges.

KN:                   And then there’s an infrastructure challenge, if you’re going to be really mature about it and it’s expensive.

SO:                   What do you see as the biggest opportunities? I mean, what things are you seeing that are new and different and interesting, that you’re excited about, that you’re looking forward to working on over the next bit?

KN:                   I think the biggest opportunities are the emphasis on customer experience, the emphasis on loyalty and customer retention, a shift from just customer acquisition to really helping stand customers up so they’re successful.

KN:                   Havas does the Meaningful Brands index. They’re the ones that came up with the concept. I’ve been following them for over a decade. I talk about them all the time. I think they’re up to 73% of consumers who wouldn’t care if brands were to go away tomorrow.

KN:                   They’re important because one of the things that they also do is they talk about, well, what makes brands meaningful? The buzzword of 2022 is help content, self-service content and content that’s going to help people benefit or improve their lives, but also, anything out there that helps them.

KN:                   Whether that’s self-servicing their needs or whether that’s improving, something they need to improve, help content is really important.

KN:                   So, I think this emphasis on the customer and their growing their relationship with the organization, has made organizations realize they’ve got to do more investment in how they think about their customers and their customer experience.

KN:                   This is exciting, because they are placing more of an emphasis on the customer journey and more of an emphasis on content that’s situated around positioning the customer to be successful.

KN:                   Omnichannel is getting bigger. You’re hearing it more and more. It has definitely gotten traction after the pandemic. So, businesses are taking it seriously, even ones that can’t execute all of it…

KN:                   I gave you sort of the ideal omnichannel model. If you look at Nordstrom, Walmart, Best Buy, I mean, some of these… Sephora always ranks in the top 10 for doing this really well.

KN:                   These are brands that have infrastructures in place, that are set up to do this remarkably well, but smaller companies are taking lessons learned from it and learning how to adopt that.

KN:                   Even B2Bs are taking lessons learned and developing mechanisms to build a more long-tail strategy from a business perspective and a more singular view of the customer.

KN:                   I’m excited about all that. I’m excited about the emphasis placed on the customer journey and understanding how they can use that to develop more optimized content solutions.

KN:                   I’m also really excited about the emphasis placed on content operations. When supply chain management moved into the cloud for a lot of businesses and contactless became an imperative, it elevated the need for a content operations model to support that.

KN:                   So, businesses started investing more and more in that. I’m sure you saw an uptick of that as well.

KN:                   This all means that we’re taking content more seriously, throughout the content life cycle and value chain for businesses.

SO:                   Well, on that optimistic note, I think I’m going to close us out. Kevin, thank you for being here. That was really, really fun. Thank you to our audience, for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Omnichannel publishing appeared first on Scriptorium.

Content ops stakeholders: Content authors (podcast, part 2)
https://www.scriptorium.com/2022/07/content-ops-stakeholders-content-authors-podcast-part-2/
Mon, 11 Jul 2022 12:00:39 +0000

In episode 123 of The Content Strategy Experts podcast, Alan Pringle and Gretyl Kinsey wrap up our series on content ops stakeholders and continue their discussion about content authors.

“When you are trying to get executive buy-in on something as a content creator, don’t focus on the tools and the nitty gritty of the tech. That is not the way to get the attention of executives.”

– Alan Pringle


Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. This is Part 2 of a two-part podcast. And in this episode, we will continue our discussion about content creators as stakeholders. I’m Gretyl Kinsey.

Alan Pringle:                   And I’m Alan Pringle.

GK:                   I want to talk about another common challenge that we see for content creators, and that’s the lack of decision-making power, or sometimes a lack of support at the management or executive level, when something needs to change. So if they see that there is some inefficiency in the workflow, or if they can see whenever a merger has come into play, that something is making their lives harder as a result, if they can see that some part of the localization process is broken, if they can see that cross-departmental silos are a problem, a lot of times it’s the content creators who can see that happening, and they can see the results of what it does to their workflow. But they are the ones who have the least power to make that change.

AP:                   What to me is so interesting, and possibly ironic, if that’s the right word to use here, is that the content creators are the ones who recognize the flaws. Yet sometimes, they cannot articulate the business case to get those things fixed. So they’re the ones who understand it and see it, but then they can’t communicate to the audience at the executive level about how to fix these problems, for example through some kind of return on investment argument; there are financial implications you have to figure out and explain to get the funding to fix these things. And despite these professional content creators’ ability to communicate with their end audience, sometimes they have a more difficult time communicating with the people who control the funding that flows into their department.

GK:                   Yeah, and I think that stems from a couple of different things. I think one is just that a lot of their focus is already on the content creation process. And so it does involve extra thought, extra time, extra research to prove that there is a loss of time and money in the inefficiencies that they face. And then I think the other piece comes from just the fact that if they are entrenched in one department, they don’t really have that bird’s eye view of how these inefficiencies are affecting the company as a whole. And so communicating that to a management level or to an executive level, to someone who can get them more budget, can be really, really difficult, especially if the company already doesn’t value content as much as it should.

AP:                   Yeah. And I think it’s also worth noting, a lot of these people know that when it comes time to make change, the content creators are going to be the ones who basically are on the receiving end of the brunt of the change and the pain, because they’re having to change tools, they’re having to change processes, all that stuff. This is where I think executives, on the other hand, need to be very much in tune with the importance of change management, and making sure that people just aren’t thrown into a new tool set, without the proper preparation, training and whatever else. Just merely putting a new process in is not nearly enough. You’ve got to get that cultural buy-in and an understanding of how these tools work, how they’re going to improve work life, otherwise you’re going to be probably flushing money down a toilet.

GK:                   Yeah, if you have a writer who comes to you and says, “Here is what is making our work inefficient for my department,” but then all you do is just throw a brand new tool at them and leave them alone, then that’s just going to make things worse. That’s going to increase the inefficiency for a long time because every time you change processes and change tools, there is a major learning curve. So instead, the better way to approach it is when you have someone coming to you and saying, “We’ve identified these inefficiencies, and we have figured out that here’s what would be the best way to get past that and to make our lives easier,” that you do provide all of the support that those writers are going to need to get through that change, that you provide all of the right training, the right follow-on support, the right resources, maybe a little bit of extra help.

GK:                   Because when you make those process changes, the writers are still going to have to do all of the work that they’re already responsible for, on top of putting these new systems in place, and getting up and running. So making sure that you not just give them the tools that they need, but also the guidance to get through that process change is what’s really going to help clear that inefficiency out of the way that they initially complained about, and get them to the point where they are working more efficiently and saving your company more cost and time.

AP:                   Yeah. And one last point I want to make here in regard to these business challenges: as a content creator, when you are trying to get executive buy-in on something, don’t focus on the tools and the nitty gritty of the tech. In general, that is not the way to get the attention of executives. That’s my piece of advice in this regard. You’ve got to look at the return on investment, the ROI. You’ve got to look at the business case and demonstrate how the problems with your content creation processes are in direct conflict with the business goals of the company. That’s the kind of language, that’s the kind of viewpoint you need to be bringing to those discussions. Not “this tool is inefficient because I’m copying and pasting.” That may be completely true, but it is not the way to get off on the right foot with executives when you’re having those kinds of discussions.

GK:                   Yeah, if you just start by saying, “We’re copying and pasting a lot,” that doesn’t really say much. But if you say, “We are spending X number of hours copying and pasting each time there’s an update cycle, and that is costing the company this many dollars,” then that’s going to get you a lot further in getting some type of change made. And I think it’s also worth pointing out that if it’s difficult to have that conversation, if your company doesn’t place as much value on content, or isn’t willing to listen to someone in a content creator role, then that may be a time when you would want to bring in an outsider. That could be a consultant like us, or it could even be somebody in another department who collaborates with you and could help give your argument a little more weight. That might be a way to really get through to the people who hold the purse strings at your organization.
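A back-of-the-envelope version of that pitch can be sketched in a few lines of Python. All of the figures here (hours per cycle, cycle count, hourly rate) are invented for illustration; substitute your own team’s numbers.

```python
# Hypothetical ROI calculation for pitching a content process change to
# executives. Every figure below is an assumption for the sketch.

HOURS_COPY_PASTE_PER_CYCLE = 40   # hours the team spends copying/pasting per update cycle (assumed)
CYCLES_PER_YEAR = 12              # monthly update cycles (assumed)
LOADED_HOURLY_RATE = 75           # fully loaded cost per writer-hour, in dollars (assumed)

def annual_copy_paste_cost(hours_per_cycle, cycles, rate):
    """Cost of manual rework per year: hours x cycles x hourly rate."""
    return hours_per_cycle * cycles * rate

cost = annual_copy_paste_cost(HOURS_COPY_PASTE_PER_CYCLE, CYCLES_PER_YEAR, LOADED_HOURLY_RATE)
print(f"Manual copy-and-paste costs the company ${cost:,} per year")
```

With these assumed numbers the script reports $36,000 per year, which is the kind of concrete figure that lands in a budget conversation far better than “we copy and paste a lot.”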

AP:                   Yep, exactly.

GK:                   So we’ve talked a lot throughout the series about all of the different other types of content stakeholders besides the creators at an organization. And I want to talk about how those other stakeholders might be able to support the creators.

AP:                   Sure. And we just touched on this point in the previous conversation. The collaboration and listening angle, it’s important to speak to each other in some kind of common language. And I’m not talking about English, French, Spanish, I’m talking about speaking to someone in terms they are going to understand, hence the whole conversation we just had about don’t go in there talking about all the nit-picky things that are wrong with your authoring tool. Talk more about how the process doesn’t fit the business requirements. That kind of conversation is what’s going to get you further. And that’s how you can really ramp up the collaboration and the assistance from those who really have the money.

AP:                   You’ve also got the issue of the siloed information that we talked about. There are ways to basically take, shall we say, a more format or presentation-neutral process, and then you can take that information, which is often some kind of structured content, an XML for example, and then transform that content into the different kinds of information that you need. A good example of that is if you’ve got specifications for a product, if you have all that information collected in one format-neutral place, you can then pull it and put it in your online user guide, you can put it in a marketing slick, you can put it in some training material. And it’s only been written once, and you’re giving everybody the ability to connect to that central chunk of information, and use it in a way that provides that very critical, consistent messaging to people who are reading and consuming that content.
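The “write once, publish everywhere” idea Alan describes can be sketched with a toy example. This is a minimal illustration, not any real standard: the XML element names, the product data, and the two output transforms are all invented for the sketch.

```python
# Minimal single-sourcing sketch: one format-neutral XML chunk of product
# specifications is transformed into two different deliverables.
# Element names and data are hypothetical, not real DITA/DocBook markup.
import xml.etree.ElementTree as ET

SPEC_XML = """
<specs product="Widget X">
  <spec name="Weight">1.2 kg</spec>
  <spec name="Battery life">10 hours</spec>
</specs>
"""

def to_user_guide_table(xml_text):
    """Render the shared specs as an HTML table for the online user guide."""
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<tr><td>{s.get('name')}</td><td>{s.text}</td></tr>"
        for s in root.findall("spec")
    )
    return f"<table>{rows}</table>"

def to_marketing_blurb(xml_text):
    """Render the same specs as a one-line blurb for a marketing slick."""
    root = ET.fromstring(xml_text)
    facts = ", ".join(f"{s.get('name').lower()} {s.text}" for s in root.findall("spec"))
    return f"{root.get('product')}: {facts}."

print(to_user_guide_table(SPEC_XML))
print(to_marketing_blurb(SPEC_XML))
```

In real content operations this role is played by a component content management system and a publishing pipeline, but the principle is the same: the specifications live in one place and every deliverable pulls from it.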

GK:                   Yeah, I think having not only central chunks of information, but also unified terminology, unified style, and a unified look and feel for your information is the other piece of it. The more you unify your information, once you have all that in place, the more accurate your content will be when you deliver it to your customers. You won’t have issues like I’ve seen at a lot of organizations, where people will say, “Oh, our marketing materials say this, and they use these words to describe the product. But then when the user gets the technical manuals, everything is described completely differently and they get confused.” Or maybe if we send users to our training website, the look and feel is completely different from what you get on our main site. So the more that you can unify and get everything to have one collaborative look and feel, the better that’s going to be for your organization as a whole.

AP:                   Absolutely. And part of that, looking at things as a whole, is taking a look at what are the obstacles for the different content creators, and how can you remove them, and make their work more efficient. Not just in one department, but across the organization because what works in one place may be helpful in another. So try to take a more bird’s eye view, as you said earlier, about how changes can be something that can occur in multiple writing groups, content creation groups, to really unify that efficiency across the board.

GK:                   Yeah, absolutely. And that’s what we talk about a lot at Scriptorium when we mentioned enterprise content strategy, getting one content strategy for the entire organization, instead of just having each department with its own way of doing things. And I think that cuts into another area where stakeholders can support the content creators, which is also something we’ve touched on a little bit in some of our previous discussion. But that’s understanding the value of content, what content brings to your organization, and being able to communicate that and prove it with numbers. And I think that is a really critical way that, for example, if you’re a writer and you have maybe some people in management, in your department, or even in some other content producing departments, who need to go to bat for you, that’s especially one thing that they can do, is being able to prove here is what we save by having better content, more efficient content, more accurate, and more unified content. Here is what content does to make the organization look better to our customers, to make our organization serve our customers better.

GK:                   And that information, that proof of what the content actually does for your business is going to be what gets you the resources to continue making better content.

AP:                   Exactly. Executives are much more amenable when you’ve done that legwork that you just mentioned, and get some numbers to explain lack of efficiency and so on.

GK:                   So I want to wrap up by talking about some advice that we have as consultants, as people who’ve seen a lot in this industry, for those who might want to work as content creators.

AP:                   I think we’ve talked a little bit about some of the bigger picture things. And the big one is, it’s not just about writing, keeping your head down and cranking out the content. You’ve got to understand how your content feeds the bigger picture, the business goals and requirements for your company. Those two things need to work hand in hand. So the sooner you realize that you are contributing to a bigger picture, the better off you’re going to be. That’s my primary piece of advice.

GK:                   Yeah. I think another one to really keep in mind is to always be prepared for change. And that’s something, again, that we’ve touched on throughout this conversation. We’ve talked about the world becoming more global, more digital, more connected. And I think that as technology evolves, as we keep seeing companies take advantage of that to grow and scale their operations and their processes, that that is going to have an impact on what you do as a content creator. So it ties back also to the point about being not just about the writing, but about all of the other things. If you also know that your job is not going to be the same from one year to the next, that things are going to evolve and change, then that’s going to put you in a better position to be ready for those changes so that you can roll with the punches.

AP:                   Exactly. Basically, you need to be as adaptable and nimble as the systems you put in place, because you never know what’s around the corner as far as content creation and delivery goes.

GK:                   Yeah. And I think one area where we’re really seeing a lot of change and evolution, particularly in recent years, is around having more personalized content delivery, content as a service, allowing your users to pull specific pieces of information on demand when they need them. That type of content creation and development, feeding into those types of systems, requires a different thought process than what you might have done 5 or 10 or 15 years ago if you were just producing PDF manuals.

AP:                   Exactly.

GK:                   So I think we’re going to go ahead and wrap things up there. So thank you so much, Alan.

AP:                   Thank you, that was a great conversation.

GK:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post Content ops stakeholders: Content authors (podcast, part 2) appeared first on Scriptorium.

Content ops stakeholders: Content authors (podcast, part 1)
https://www.scriptorium.com/2022/06/content-ops-stakeholders-content-authors-podcast-part-1/
Mon, 27 Jun 2022

In episode 122 of The Content Strategy Experts podcast, Alan Pringle and Gretyl Kinsey talk about content authors as content ops stakeholders.

“I think it’s really important to note here, a lot of these resources are not human people. They are systems or databases that provide information. You pull information from these multiple sources and put it together to provide a really dynamic and personalized user experience for the people who are reading your content.”

– Alan Pringle

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content stakeholders, this time focusing on content creators. This is part one of a two-part podcast. Hello, and welcome everyone. I’m Gretyl Kinsey.

Alan Pringle:                   And I am Alan Pringle.

GK:                   And we’re going to be wrapping up our series on content stakeholders by talking about the people who actually create the content. So to start out, what types of content creators might you have at an organization and what are some of their roles and responsibilities?

AP:                   Well, we’re going to start with the most obvious, and that would be your full-time professional content producer, writer, information developer, whatever you want to call it. There have been many, many titles over the years for that role. But that group in general will create the bulk of the content that a company puts out, and we’re talking about all kinds of content. We’re talking about your technical/user/product content. We’re talking about training and learning materials. We’re talking about marketing information and legal risk management type content. So it really crosses the spectrum. And a lot of those people are employed full-time to crank out that content.

GK:                   Yes. And then there’s another category, which is part-time contributors, or sometimes that will be called subject matter experts. And these are people who also create some content, but that’s not their full-time job. So maybe they are just writing some small pieces of content here and there that are specific to the areas where they have a lot of knowledge or expertise or experience around the company’s products. Maybe they are reviewing the content that’s being produced by the full-time writers and making sure everything is accurate, everything is consistent. And these are people who typically do have another primary role in the organization. They contribute the content as needed, but their primary responsibility is going to be focused on something else. And sometimes they may even be volunteers in the industry. And in the case where we have something like Content as a Service, sometimes these contributors or subject matter expert type sources are not even actual people.

AP:                   Exactly.

GK:                   It might be getting information out of an inventory database, that type of thing. So there are a lot of different ways that that really nitty gritty information that’s needed for the content can be contributed to what the full-time writers are developing.

AP:                   Yeah. And I think it’s really important to note here, as you talked about the database and inventory information, a lot of these resources are not human people. They are systems or databases that provide information, and you pull information from these multiple sources, put it together to provide a really dynamic and personalized user experience for the people who are reading your content.
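Alan’s point about non-human contributors can be sketched as a tiny aggregation step. Everything here is hypothetical: the SKU, the stand-in dictionaries, and the payload shape. A real system would query a live product database, an inventory service, and a content repository rather than in-memory dictionaries.

```python
# Hypothetical sketch of assembling a personalized answer from several
# non-human "contributors": a product database, an inventory system, and
# a chunk of authored content. The data sources are stand-in dictionaries.

PRODUCT_DB = {"widget-x": {"name": "Widget X", "battery_hours": 10}}
INVENTORY = {"widget-x": 42}
AUTHORED = {"widget-x": "Charge fully before first use."}

def assemble_answer(sku):
    """Pull from each source and combine into one personalized payload."""
    product = PRODUCT_DB[sku]
    return {
        "title": product["name"],
        "in_stock": INVENTORY[sku] > 0,
        "tip": AUTHORED[sku],
    }

print(assemble_answer("widget-x"))
```

The writer authors the tip once; the title and stock status come from systems of record, so nobody has to re-ask a subject matter expert every time the inventory changes.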

GK:                   Absolutely. And I think that’s a great evolution of the industry because it takes pressure off of some of these people who have lots of other responsibilities. If they have put information somewhere once, in a database, it can be used over and over again, and you don’t have to keep going back and bothering those subject matter experts going forward.

AP:                   Yeah, it’s a situation where a lot of these people who have “other real jobs,” they are brought in to review a very small slice of content or offer their expertise because they might be a product designer for something that’s being written about. It’s always good to keep in mind, they have other primary job responsibilities, and anything you can do to narrow that focus and get their contributions in as quickly and painlessly as possible is really a benefit to everybody.

GK:                   Absolutely. There are a couple other responsibilities that I want to talk about, and these may be something that a full-time writer or content creator would do, or it could also be something that falls on more of a part-time contributor. But one of them is reviewing and editing, and that’s usually the last holdout part of the writing process. You need someone to take a look at that content before it goes out the door, before it gets published and distributed to the end users, and make sure that everything is accurate and everything is correct. And that’s usually some type of a role in whatever content ecosystem you have, that someone will be assigned to that particular responsibility. And it’s that person’s job to do that final review and make sure that everything is ready to go.

GK:                   And then the other responsibility is depending on what types of content you produce, there may be some assets, things like images, things like video, audio, other things aside from just text that would be a part of your content. So at some organizations, if that is a large portion of your content, if it is something that’s very graphics heavy, if there is a lot of audiovisual stuff in your content, then there may also be a person or a team that is in charge of creating that information. And sometimes that’s outsourced as well.

AP:                   It is. And I think this is a very good place to really drive home to content creators the fact that people have different ways they like to absorb and take in information. So don’t always assume someone wants to read something. They may want to hear it. They may want to see it. You’ve got to give people those choices, and content creators can’t … I think it’s a really, really bad idea to take the narrow view of “you’re going to take what I give you and like it.”

AP:                   There was a time years ago when putting a PDF up on a website should have done it. In the 21st century, it does not cut it anymore, and we still see that today. So remember that the people who are your content consumers may not want the content in the format you assume is the primary one. You need to think carefully about how you’re providing that information, and not assume that because you crank it out in a particular format, people are automatically happy about it.

GK:                   Yeah. I definitely agree with that. And I also think that there’s an accessibility angle here, because if you are just providing your content in one way, that may not be accessible for your entire audience. So the more ways that you can provide that information and the more that you can make that information able for your audience to personalize and get just the pieces that they need, that’s really going to help your customers respond better to it, use your information, and be more loyal customers, be more likely to buy more of your products going forward.

AP:                   Yeah. I even mentioned this in a previous episode when we were talking about the content consumers as stakeholders. It’s important to remember that not everybody takes in information like you do. Everyone is not the same in that regard, and content creators need to keep that in their heads when they’re talking about delivery formats.

GK:                   Absolutely. So speaking of things that are not always the same and that vary greatly across the spectrum of the industry, I also want to talk about content creator team sizes, because this is an area where we see a lot of variety in our work as consultants.

AP:                   A lot. And it’s a situation where you can have a very large team, but you would also be amazed at the amount of content that a small team or even a one-person shop can crank out. So it really depends on the size of the organization, and also on how diversified those content types are. Because in general, and like I said, this is a broad generalization, the more different types of content an organization is putting out, the more likely you are to have different departments. So you’ll have a team of instructional designers creating training material. You’ll have a team of people creating your user enablement, user experience, user guide content. And then you’ll have a team creating marketing content, for example. So you may have different people creating those different types of content, and each one of them is its own department with one to however many people. So it really depends on the size of the company. And I think it’s also a nod to how serious or how well invested that company is in its content, and how much time and money it spends on it.

GK:                   Yeah. And I think it’s really interesting that you mentioned the departments, because we do see a lot of variety there as well. We might have some lack of balance. So for example, if one department gets a whole lot of the organization’s resources and budget, then that department might have a lot more people involved in creating content. And then you might have another department that also has to create content, but maybe they’re not valued as much by the organization, so they don’t really have as large of a team or as much of a budget to work with.

GK:                   We also see a lot of issues with content silos across departments. I’ve been in this industry for over 10 years and I’ve seen it the entire time I’ve been in the industry. And Alan, you’ve probably also seen it for that long, if not longer, that there is just this issue where even though we’re in an increasingly collaborative and digital world, we still have a lot of departments that work very separately, even when there is a need to collaborate across departments.

GK:                   And so we do see a huge variety where at one company there may be more of a spirit of collaboration and all of the different content producing teams might work together and they might share their content across departments. And then we’ll see other companies where they are all very much sequestered off from each other. They never communicate, and there is a lot of opportunity for content sharing and reuse that goes completely unaddressed.

AP:                   The good news is I think we are seeing more and more the blurring of some of these departmental lines, and companies are starting to realize that there’s a lot of overlap in this content, and they do make an effort to find ways to reuse content. Because at the end of the day, when you’re reusing content as a content creator, you are offering your readers, your end users, content consumers a uniform, consistent message, and that is a huge, huge win. And it is a necessity to really make it in this super competitive global world.

AP:                   You need to be telling your customers the same thing and be very consistent in how you communicate specifications, anything in regard to marketing messages. You need to be consistent in how you communicate, because that’s the key to success with your content, from my point of view.

AP:                   And the good news is, like I said, some places are already addressing it by having people collaborate more, but as we move more toward this Content as a Service model, even if you’ve got these silos and they’re super embedded and you’re going to have a hard time breaking them down, you can find systems that will pull information from all of these different sources and combine them together to create that personalized delivery to your reader, your content consumers.

GK:                   Yeah. And I think it’s worth noting that as this world becomes more digital, more global, and you’ve got more options for how to create and consume content, that if your organization is not doing everything that you can, there will come a point where your audience will notice. And we’re seeing that exact thing happen a lot of times, which is why there’s so much more of a demand for that personalized content.

AP:                   Yeah. And the whole globalization angle, too. Think about how many clients we’ve worked with and how many companies out there are multinational and have presences in multiple countries. And they are having these challenges about creating this unified message, unified content. And we’re seeing that, like I said, with our own clients. We have clients who have presences in multiple countries. And we often talk to these people in different countries about helping them. For example, if we’ve helped them with an implementation, we help them with support, and we are hearing from people in multiple places.

AP:                   So again, it’s this idea that what happens in one place can have a broader effect on people who may be working a thousand miles away from you. And I think also, with the pandemic, we’ve really seen this shift to remote teams. So what you do “locally” is really maybe not so local anymore.

GK:                   Yeah, absolutely. And if you just think of things locally instead of globally, you’re going to be limiting yourself. And I think this also plays into localization, because the more that you can share content, the fewer times you would have to translate something. Because it’s not just the companies that are more globally connected, it’s also your audience. And so I think there is more and more need for localized content to be produced and delivered, and especially when you do have a multinational company. I know there are a few examples that I can think of among our clients who have gone through mergers that ended up with not only the company having different collaborators across the world, but then the same is true with their customers. And so they have to think about the localization angle all the time.

AP:                   Exactly. And the merger point is a really, really good one. We should probably talk about that for just a moment, because that’s one place that has a huge impact on content creators. A lot of times when you’ve got a merger, you have basically two or more sets of tools that are pretty much doing the same thing. You’ve got some processes where the end game may be the same, but the way they get there is different. So you have got to combine tool sets, workflows, and company cultures to create, again, a unified message to send out to the world. And it is hard to overstate how difficult that can be, because people have a tendency to want to protect their own. I totally get that, but sometimes you’ve got to lower that defensive posturing a bit and come together to create that unified message you really need to be sending out to be successful.

GK:                   Yeah, it’s a big challenge for sure, and I see it as falling under the umbrella of change management, which is something that we see with any type of content process change. And a merger is a perfect example of that. You have to bring all of these different content creators and the people who manage them, the people who have other types of a stake in the content, you have to bring all of them together under one unified vision. And a lot of times you have to do that very quickly so that you don’t have a major disruption in the production of your products and your content. So it really is a big deal and a big challenge for content creators to face.

AP:                   It is. And what’s interesting is when you start looking at the kinds of problems content creators have, for example, inefficient workflows and processes. You’ve got a workflow that has a lot of manual work, and you’re doing a lot of copying and pasting. Or to do a revision, you basically make a duplicate of your document and then slap some changes on top of it. That very manual process. Think about that manual process being multiplied many times if you’ve got two companies coming together who really don’t have super efficient content operations, and it happens.

GK:                   Yeah. If you’ve not only got two or more companies coming together, but then translating into two or more, sometimes 20 languages as a result of that merger, it really just multiplies some of those inefficiencies that might have been present in each one coming in. And even if you’ve only got one company dealing with that, that’s still a huge issue. We see this all the time that there is some part of the workflow that’s just not doing what it should for the content life cycle. And then we have to come in, take a look and see what exactly is going wrong. But I think a lot of it does lie on what we just said. Some of these manual things, like copying and pasting, things that have to be done and then redone and redone again and checked again every time there’s an update to the information, those are the types of things that are really going to slow down production and therefore slow down localization and everything else.

AP:                   Yep. And again, this is stuff that so many content creators face, and a lot of times it feels like you’re trying to dig yourself out of this bottomless black pit, because you’re stuck doing these constant manual changes and revisions. But it is possible, slowly but surely, to put in better content operations to make that a whole lot less painful.

GK:                   I think that’s a good place to wrap up, but we will be continuing this discussion in the next podcast episode. So Alan, thank you.

AP:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content ops stakeholders: Content authors (podcast, part 1) appeared first on Scriptorium.

Content ops stakeholders: Content consumers (podcast)
https://www.scriptorium.com/2022/06/content-ops-stakeholders-content-consumers-podcast/
Mon, 13 Jun 2022

In episode 121 of The Content Strategy Experts podcast, Alan Pringle and Bill Swallow talk about content consumers as content ops stakeholders.

“If you look up a restaurant on your phone and go to view the menu, most of the time, that menu is going to be a PDF. And you are sitting there, zooming in, scrolling around, and pinching, and trying to read this menu that really should have just been a responsive HTML page.”

– Bill Swallow

Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content operations stakeholders, and talk about content consumers. Hello, everybody. I’m Alan Pringle.

Bill Swallow:                   And I’m Bill Swallow.

AP:                   So far in this content operations stakeholder series, we’ve focused on many groups. Let me think: risk management, tech support, localization, the people who manage the tech stack, executives, and the IT department. And today, we’re going to focus on content consumers. But before we get away from that list I just rattled off, I think we need to point out that those people are also content consumers.

BS:                   Right.

AP:                   And I think the one that really strikes me the most is tech support being a content consumer.

BS:                   Right. Tech support, definitely. There’s a whole podcast on that topic. But yeah, they are consuming the content and repurposing and adding to that content. Other people here that are content consumers, that we’ve talked about before, certainly those who are developing products that redistribute content or remix content, or produce new content. So, developers who are working on, let’s say a chatbot feature, certainly that chatbot is a content consumer.

AP:                   Yep, it sure is. And I think what you’re going to hear more and more in this podcast is this trend we’ve seen where a lot of content consumers aren’t necessarily humans at first, and I think we need to account for that. We’ll get more into that later in this conversation. I think the most obvious content consumers, if you work at a company that develops products and services, are your end users, your customers. That’s your most obvious base of content consumers. So I think we need to address them upfront, first.

BS:                   Right, because they’re the ones who have bought the product or service. So, they have the thing and they essentially need to know how to use and care for, and otherwise manage the thing that they bought. So, they need that content to help them along, whether it’s learning about the product, being able to know how to order replacement parts, if it’s mechanical, or if they need to send it in for repair, information about troubleshooting and so forth. And a lot of these people rely not so much on the content that comes in the box, which is fewer and fewer these days, but they go online to receive that content. And a lot of times they won’t think to use the vanity URL that you supplied on the box of the thing that they bought, they will rather go to Google or their search engine of choice and start searching for what they perhaps think the product name is.

AP:                   Right. And that vanity URL is on a box that is probably in a landfill or recycling facility. So, that’s one issue right there you got to think about. But bigger picture wise, you’ve got to be sure that the way you’re disseminating information to these, I’ll say first line content consumers, is actually getting to them. If your content is not at the top of search results for certain phrases that have to do with your products, you’re going to have your end users looking at third party content. I would say that is suboptimal at best.

BS:                   Mm-hmm. And I would also say that the format in which you’re producing this content is critical as well. The one thing that comes to my mind and is not directly related to technical content or what have you, but if you go online on your phone to look up a restaurant that you want to go eat at and you go to view the menu, about maybe 99 out of 100 times, that menu is going to be a PDF. And you are sitting there, zooming in and scrolling around, and pinching, and trying to read this menu that really should have just been a responsive HTML page.

AP:                   Exactly. And usually, if I am looking at a menu or looking for menus, I am hungry. And when I am hungry, I tend to get unhappy. So, hey, restaurant industry, think about your end users who are hungry, and hangry, and need to get the content in the format that they need when they want it. And like Bill said, if I’m on my phone, I want it in a quickly displayed HTML menu that I can scroll through really quickly to get to the part of the menu that I’m most interested in. And that very much applies to people with products and services, very much. Be sure that your content, first of all, is findable via the search engines and is in a format that’s usable so people can actually consume that information the way they want to.

BS:                   Right. And I would also add, make sure that it’s accessible, so that those who need some other means of consuming the content, whether it’s text to speech or some other format, that they have the ability to consume that content, that they are not left essentially stranded.

AP:                   Exactly. And I will say, as someone who now has to correct for reading vision wise, and I will just let people figure out why that may be. I can tell you, getting a PDF, for example, online on my phone is really suboptimal, because it’s much harder to deal with that than it is usually with a website that you can pinch and open up a lot more easily. So, you can’t assume that everyone’s just like you, as far as your content consumers go.

BS:                   Absolutely.

AP:                   There is another group of people who go out on the internet and research your products. And those are people who are shopping or trying to make a buying decision. And I think they very much come into play as part of this conversation.

BS:                   Definitely. And a lot of times now, more and more technical based content is being looked up prior to people making purchases. Whether they are buying a new device, whether they’re buying a car or what have you, people are scouring the internet because they don’t want to sit in endless sales meetings where they’re only being told what the company thinks that the buyer wants to hear. They want to suss out the specifications and make a rational decision or an informed decision.

AP:                   Absolutely. I know I have seen, and I’ve had clients do this in recent meetings, say, “Well, our competition (or a place where I used to work before I came here, which is often the competition) is doing it this way, so we need to do it this way,” in regard to how content is being distributed and consumed.

BS:                   And yeah. I had a conversation with a client that not so much pulled up the competition, but they pulled up examples of other people’s documentation, just to say, “I like this aspect about how they presented the information here.” And then they’d pull up another company’s documentation and say, “But I also like what they did here, and can we somehow marry the two?”

AP:                   Yep. Yeah. So, it’s not just shoppers and end users, it may be the people who were trying to steal your business who are looking at your stuff. So, that’s something to definitely keep in mind. One other, I think less obvious content consumer is government agencies, because there are a lot of people who are in regulated industries and the way that you distribute your content for consumption is highly regulated. And there’s certain rules about how it has to look, how the wording is supposed to be, and all that stuff. In a lot of cases, there are reviews. Your content is reviewed to be sure that it meets whatever standard that is. So, the government can be a consumer of your content, not necessarily to read it to use your product, but to be sure you can sell your product, which is probably, ultimately super important that you adhere to those regulations in the way that you talk to end users through your content.

BS:                   And in addition to those, there are also trade regulations, which means that you have to be able to show proof that you have content in the particular language for a particular country or locale where you’re distributing your product. And I cannot tell you the number of times I’ve heard stories about product being left in shipping containers at the docks, because the company was scrambling to get the localizations required to get the green light to sell product in that market.

AP:                   Yeah. And again, is this regulatory agency, are these customs people really looking to read the content to use your product? Not really, but guess what? It’s just as important as if they were, because you can’t even sell your product if these conditions are not met. So, they’re consumers, not in the traditional way, but they’re absolutely consumers of your content.

BS:                   Yeah. And actually, I’d say they’re probably even more gatekeepers, because as Alan mentioned, I mean, they don’t care about what’s written on the page, but they care that it’s there because they are trying to protect the consumers in their country and make sure that their people get what they expect from a product.

AP:                   Yep, absolutely. And I think we also now need to talk about how the different kinds of content that are out there are blurring. I think we’ve seen a real trend away from what used to be a very departmental view of things. These are the people who are creating your user guide content. These are the people who are creating your marketing content. These are the people who are creating your learning content. And there were these very firm, rigid silos in there. But with the blurring of those things, I think that also very much ties into the content consumers. Because at the end of the day, when you need to find a bit of information, I don’t think you care where it comes from within the company that’s providing it.

BS:                   Right. There’s no expectation from a person to say, “I really need to go in and I really need to read the content that their technical documentation staff has put together.” They just go in there and say, “I need to know how to configure this new phone I got in the mail. And all it came with was a slip of paper and the URL that is on it got smudged. So, I don’t know where to go.” Something like that. And there is that blurring of the line because you have actual users and you have shoppers and decision makers, and other people all searching for your content, whether you like it or not. You never know, is this person a long time customer who just misplaced their bookmark or what have you?

BS:                   Is this person a brand new customer who is interested in our product? Is this person a competitor? How much information should we supply without a login? That type of thing. So, there is that blurring of the line, but since you never really know who’s going to stumble across your content (for lack of a better term) out in the wild, you need to make sure that it does have a bit of everything in it. That it has the tone and structure that your marketing team believes works with your market, but it also needs to contain the correct technical information that people may be looking for.

AP:                   And not only does it need to be correct, it needs to be consistent. You can’t have a situation where, say, part of the website says that the specification for product X is this, yet a marketing slick somebody picked up at a trade show (and yes, they still happen, believe it or not) says something completely different. That kind of contradictory information is really a huge problem. Because first of all, you’re probably going to have that customer or potential customer call and clog up your tech support asking, which is it? And you’re also setting a really bad example. And those people may go use another company because they can’t get consistent, easily verifiable information from you.

BS:                   Right. They’re certainly not going to waste their time if they cannot find the information, that is pretty much clear. People really are making buying decisions, essentially content first. If they can’t learn about your product, if they can’t get the information that they deem important to themselves to make a buying decision, but your competitor is providing that information, it’s a no brainer. People are going to go with the people who are being transparent about their product.

AP:                   Yeah. And you’ve also got to think about how certain people like to consume different flavors or delivery formats of content. I for one would prefer to read something. I know people who prefer to see a video and will instead go to YouTube. I am not that person, I can assure you of that. And a lot of places have corporate YouTube channels for those kinds of people. So, you’re hitting all of the different delivery targets and essentially providing that same information in these different formats to meet a particular user’s, end user’s, or buyer’s personal preferences. And there are a whole lot of delivery formats out there now.

BS:                   There certainly are. I mean, just from a human content consumer point of view, there is of course the ever-present PDF online or what have you, that can also be printed. There’s the HTML, so whether it’s on a website, in a help system, in a knowledge base, or wherever you’re publishing your content, there’s the online factor. And then of course, you do have audio, so this podcast, for which we also provide a transcript. So, here we are providing two different formats. I will say that, speaking from personal experience, when I go looking for information and I need that extra help to figure something out, fixing one of my lawn tools seems to be the task this year.

BS:                   So, I look up the manual, I don’t find the information there. I search for how tos. I may get some information, but ultimately I do land on YouTube. And honestly, from my own point of view, I look for the shortest video possible that seems targeted toward my particular need, because the last thing I want to do is click through to a video that is an hour long on something, when I only need 15 seconds of information.

AP:                   Exactly. And really, I mean, to each their own. And people who deliver and share content have to remember that, that not everybody is going to want to consume information in the same way. And in some cases, as you were saying earlier, that content needs to be accessible because you cannot assume that everyone can reach that information like you can. So, you have to account for that. And at the core of this, with this exponential growth in the different kinds of ways that you can deliver things now, I still think it’s fair to say that this digital content transformation we’ve been going through the past few years is really far from over at this point. I guarantee you, there are formats for delivery we have not even thought about yet at this point. And I’m thinking more of a lot of this virtual reality stuff in particular, that people are just starting to poke at.

BS:                   And I will note that over the past few years, JSON has really grown in popularity as an output format or a delivery format for content, because that content is going into other devices. It’s being loaded into other systems and used in many different ways. So, it’s no longer just the classic PDF and HTML. I mean, it’s going out into a variety of different formats.
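As an editorial aside, Bill’s point about JSON as a delivery format can be illustrated with a minimal sketch. Everything here, the topic structure and field names alike, is hypothetical and not from any specific standard; it simply shows a structured topic serialized once so that a chatbot, knowledge base, or other downstream system can consume it:

```python
import json

# Hypothetical structured topic, authored once in the source repository.
# The field names are illustrative, not from any specific standard.
topic = {
    "id": "replace-filter-001",
    "title": "Replacing the air filter",
    "audience": ["end-user"],
    "steps": [
        "Power off the unit.",
        "Open the rear panel.",
        "Swap the filter and close the panel.",
    ],
}

# Serialize for delivery to downstream systems alongside PDF and HTML.
payload = json.dumps(topic, indent=2)
print(payload)
```

The same source topic would also feed the PDF and HTML outputs; JSON is just one more publishing target.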

AP:                   And this comes back to what we said earlier, your primary consumer sometimes at first may not be a human being. It may be another system that has to decipher and render and combine information to then provide some kind of dynamic, customized experience for your end user or shopper or buyer or whomever.

BS:                   Yeah. And as those systems become more and more robust, I think we’re going to see a lot more happening with another buzzword that’s been hitting the market lately: Content as a Service.

AP:                   Yes, exactly. I do think this is where everything is headed, and I guess we need to go ahead and define that very quickly, what Content as a Service is. We will put some links in the show notes to give you some resources on Content as a Service, or CaaS as people call it. CaaS is basically, instead of a push model where you’re pushing content out to your end user, it’s more of pulling it from multiple sources, combining it, and then serving it up to your customer in some kind of format that’s usually a little more personalized and dynamic than, say your standard webpage, for example. And Bill, you can tack onto that barely adequate definition that I just offered.

BS:                   Well, no, definitely the pull is correct. And it’s also a pull from the consumer side, because the consumer is not receiving information, they are taking it.

AP:                   Exactly. That’s a very valid point. I think this comes down to where people can, for example, specify the product that they have or want to buy, and then immediately get feedback on the particular features that they’re interested in. And a good example of this that I can think of: if you have people who want to fix whatever product they’ve bought from you, and you have numbers about where parts inventory is available across the globe, wouldn’t it be helpful to present that to a person who wants to fix something? This is how you fix it, and if you need to buy these parts, guess what? This place has these parts and this many left. So, there’s inventory there for you to go get. That’s the kind of thing that I see CaaS really trying to accomplish. And I think we’ve also got a client in particular who has done some really interesting things with CaaS, where it’s, I think, even more critically important, because we’re dealing with medical charts and cancer. Why don’t you talk about that a little bit?

BS:                   Yeah. So, there’s the American Joint Committee on Cancer; they publish a cancer staging manual, which is basically something you never want to read because it will haunt you for the rest of your days. But essentially, it is every single type of cancer and how it can manifest and what to expect and how to identify it in each and every stage. So, a lot of that information is now digitized so that they are able to inject it right into active medical charts. So, no longer are medical professionals going through a heavy tome and trying to decipher which type of cancer might be there, might be a candidate for further study. But they also are able to pull in all the specifics. Once a person gets diagnosed, they have all that information at their fingertips and can just inject it right into the chart. So, we’re not dealing with any kind of missed transcriptions and so forth.

AP:                   And outdated print editions. You’re getting the latest information, some of which is experimental or you can get that kind of cutting edge, bleeding edge information, and it’s combined altogether with a medical chart. That’s pretty important, I think.

BS:                   Definitely.
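The pull-and-combine pattern Alan described for repair instructions and parts inventory can be sketched in a few lines. All of the data sources, field names, and the product ID below are invented for illustration; a real Content as a Service layer would pull from live content and inventory APIs rather than in-memory dictionaries:

```python
# Hypothetical content source: repair topics keyed by product ID.
REPAIR_TOPICS = {
    "X100": {"title": "Replacing the drive belt", "part": "BELT-42"},
}

# Hypothetical second source: live parts inventory by location.
PARTS_INVENTORY = {
    "BELT-42": [
        {"location": "Raleigh warehouse", "in_stock": 7},
        {"location": "Lyon warehouse", "in_stock": 0},
    ],
}

def assemble_repair_response(product_id):
    """Pull the repair topic and current inventory, then combine them
    into one personalized response for the requester."""
    topic = REPAIR_TOPICS[product_id]
    in_stock = [
        loc for loc in PARTS_INVENTORY[topic["part"]] if loc["in_stock"] > 0
    ]
    return {"instructions": topic["title"], "where_to_buy": in_stock}

# The consumer pulls the content; nothing is pushed to them.
response = assemble_repair_response("X100")
print(response)
```

The key design point is the pull model: the content is assembled per request from multiple sources, rather than pre-published as one static page.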

AP:                   And I think this is probably a good place to end. So, Bill, thank you.

BS:                   Thank you.

AP:                   Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Content ops stakeholders: Content consumers (podcast) appeared first on Scriptorium.

Content ops stakeholders: Risk management (podcast) https://www.scriptorium.com/2022/06/content-ops-stakeholders-risk-management-podcast/ In episode 120 of The Content Strategy Experts podcast, Gretyl Kinsey and Sarah O’Keefe discuss content ops stakeholders in risk management.

“Your regulatory environment for a single product could actually be different depending on where you’re selling it. You have to do things a certain way in Europe. You have to do things a certain way in the US.”

– Sarah O’Keefe

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content stakeholders. This time focusing on risk management. Hello, and welcome everyone. I’m Gretyl Kinsey.

Sarah O’Keefe:                   And I’m Sarah O’Keefe. Hi.

GK:                   And we’re going to be talking about risk management as part of the content stakeholder group. So my first question to you is what is risk management and what role does it play at an organization?

SO:                   Well, I hate to say risk management is responsible for managing risk, but the risk management group typically is a legal adjacent group of some sort, and their responsibility is to figure out how to enable the company to avoid, let’s say unnecessary problems. When we’re talking about risk and risk management, usually we’re talking about products that are inherently dangerous if they are used incorrectly. So a medical device is a really good example, right? You can save a lot of lives using a medical device correctly, but if you use it incorrectly, some really bad stuff can happen. There are machines that have pinch points, or that you don’t want to stick your hand in certain places, or they use certain kinds of chemicals that are potentially dangerous. So what we’re typically talking about here is working on products that have health and safety implications, and because they have health and safety implications, there’s potentially product liability, or there are regulatory concerns, which is sort of a different aspect of the same thing.

SO:                   So if I don’t document my medical device properly, the regulatory authorities may come along and tell me I’m no longer allowed to sell that medical device, which from my point of view, as the maker is very, very bad. And or if I don’t provide the right information about the device or even design it properly, there could be people that get hurt. So there’s that risk, which obviously we don’t want people to get hurt. And additionally, from the company’s point of view, there are potentially financial implications if somebody gets hurt and then sues the company for not providing the right instructions or providing a poorly designed product.

GK:                   Yeah. So clearly risk management is one of the most, if not the most important stakeholders at your organization. And that’s why I wanted to talk about that. Because I do think that gets swept under the rug or forgotten about when it comes to content. So I wanted to talk a little bit more about how risk management relates specifically to the content. And I think you already started to go there a little bit by talking about how there are sometimes legal and regulatory requirements around what information has to be included in your product documentation, especially when it is a product that carries high risk.

SO:                   Right. So most of the people I think listening to this podcast, if you have risk management as a stakeholder, you probably know about it. It’s pretty unlikely that you’re operating in a company somewhere that has safety concerns and you’re not aware of it. If you make software, particularly if you make things like video games, then probably you have fewer concerns in risk management, but even there, if you think about video games, very often, there’s a notice at the very beginning that talks about flashing lights and the risk of seizure for people that are photosensitive. Or you might get, there’s apparently an infamous warning of some sort having to do with video controllers and people mashing it in certain ways and getting terrible blisters on their hands. So your risk is of course more limited if you’re doing software because you don’t have probably scary chemicals or you’re not dealing with medical devices that get maybe implanted in people’s bodies.

SO:                   However, so if you have a risk management concern because of your product, you probably know about it. And then it comes down to an interesting question of perhaps regulatory. So again, I’m saying medical devices a lot. Medical devices, pharmaceuticals, drugs are regulated. You have to meet certain standards for them. Those standards are different from one country or one region to the other. So that’s a concern.

SO:                   When we talk about machinery, it gets very interesting because in the US machinery for the most part is not really regulated. It’s more, you better do this properly or somebody’s going to get hurt, and then they will sue you. So the concern is legal exposure due to a product liability suit of some sort. In Europe, generally, or in the European Union, we have things like the machinery directive, which require you to do certain things with your documentation. Your regulatory environment for a single product could actually be different depending on where you’re selling it. You have to do things a certain way in Europe. You have to do things a certain way in the US. And interestingly, when you start thinking about global content strategy, very often, one of the things you want to do is try and find a way to put all of that together in a way that meets all of your regulatory requirements.

GK:                   Yeah. And one area that I’ve seen in terms of software, where there can sometimes be differences from one region or one country to another is with data security. And that’s one area that it maybe doesn’t have the same level of risk of injury or harm that you might see with medical devices or heavy equipment or chemicals. But it’s something that a lot of people have concerns about. So if you’re a software company and you are collecting people’s data as part of the way they use your software, if it may be part of the way that you’re delivering it to them, if you are delivering a lot of personalized content and they have a profile that you are managing their data, then there can be regulations around how that data has to be kept secure, how you’re allowed to use it, how you’re allowed to share it or not share it.

GK:                   And those requirements can be different depending on region as well. So if you’re a global company and you work in software, that might be one of the areas that you have to think about. Maybe it’s not so much about how you’re documenting that safety information, but it’s how you’re handling the way that people are accessing your content if there is a data security concern.

SO:                   Yeah. That’s a really good point. And so there’s that sort of internal issue of what are we capturing about our software users and what are the legal implications of that, especially in, again, Europe and California, which sort of begs the question of how do you know whether somebody’s in Europe or California in the first place. And then additionally, there’s the type of product that you’re making. If you make a software product that collects your customers’ customers’ data, then you’re going to have to provide some information about how to manage those data security issues downstream. So if I’m a software vendor and I make a CRM, a customer relationship management system, then by definition, I’m collecting lots of data about people. And you need to make sure, as the product designer and the content creator, that your best practices are conveyed, or at least that you tell your end users, the people typing into the CRM: these are the implications of all this data you’re collecting.

GK:                   Absolutely. And I want to use that to segue into something that I have seen both with software and other types of companies, which is that your risk management group may actually create their own content around those exact types of things. So I wanted to get into some examples of the types of information that they may produce alongside of just the regular product documentation. So I’ve typically seen a lot of internal facing content come out of risk management departments. You might see things like guidelines, frequently asked questions, things like that around your safety information, your legal documentation requirements, so that people who are writing your content and documenting your products know what’s required, what has to go in there and how to make sure it’s consistent.

GK:                   You might see things like instructions or training materials for the risk management team, so that as they onboard new people, everybody is aware of all of the requirements. I’ve also seen some risk management departments be the ones in charge of creating the contracts that the company uses. And if there are again security concerns there, making sure that that’s included in those contracts. So all of that internal facing content is something that might be under the responsibility of your risk management department.

SO:                   Yeah. And I think in addition to that, you see risk management very much involved, again, with products that are possibly hazardous. They will be involved with safety messages, those messages that say things like “contents are under pressure, so do these kinds of things,” or “always make sure that the second thing is set before you undo the first thing,” or “make sure that the power is turned off so that nobody gets electrocuted when they go in and do whatever it is they’re trying to do.” Another thing I’ve seen, in addition to all the scenarios you’re describing, it’s not common, but the risk management team sometimes will be responsible for actually reading and reviewing the external facing content to make sure that there’s nothing in there that is potentially problematic. So they’re reviewing maybe from a compliance point of view, maybe from a legal exposure point of view; they want to make sure that the product documentation doesn’t overpromise.

GK:                   Yeah. And I’ve definitely seen risk management departments catch things as part of that review that should not have necessarily been customer facing. So I think that’s a really important piece to include. If you are an organization that has a review process, make sure that your risk management group is a part of that so that nothing like that falls through the cracks.

SO:                   Right. And so then if we turn our attention to the content creation process and thinking about safety and legal information, which then results in risk management or risk reduction, what are some of the things that you see there? What kinds of content, techniques do we have that can help us with this?

GK:                   Well, one thing that I think it’s really important to ensure is that all of this safety information, legal information, anything that has to be there for compliance is consistent across the entire enterprise. And I think it’s really important to put some systems and tools in place that can make sure that happens, because if you have duplicated safety warnings and you have this information not being maintained in the same place, and then it gets out of date and you’ve got inaccurate and inconsistent pieces of information floating around, you’ve got some conflicting safety warnings floating around, that can lead to some real harm for the people who are using your products. And then that can get your company into legal trouble as we talked about earlier. So one thing that we always like to encourage people to do is have a single source of truth for your safety information.

GK:                   Typically, when people talk about content reuse and content single-sourcing, safety information, legal information, and regulatory requirements are the starting point for most people, because that’s the information that is most important to keep consistent. And one thing that can really help with that is getting into a structured content ecosystem. Something that can facilitate that type of reuse can allow you to write an important safety warning one time, then have it be reused across all of the documents where it needs to appear, and have it update automatically any time that safety warning needs to change. Because we’ve seen some situations where a company would have hundreds of separate copies of the same safety warning, and how do you keep up with all of that and make sure it’s accurate? You really can’t. So single-sourcing and reusing your safety information is a really big and important way to make sure that it’s accurate every time it gets published and distributed to your end users.
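As a rough illustration of the single-sourcing Gretyl describes (the reference syntax and library structure here are invented, not a specific CCMS or DITA mechanism), each warning lives in one shared library and documents only reference it by ID:

```python
# Each approved safety warning is written, reviewed, and translated once,
# here, and never copied into individual documents.
WARNING_LIBRARY = {
    "w-pressure": "WARNING: Contents under pressure. Do not puncture.",
    "w-power": "WARNING: Disconnect power before servicing.",
}

def resolve(doc_lines):
    """Replace 'ref:<id>' placeholders with the approved warning text."""
    resolved = []
    for line in doc_lines:
        if line.startswith("ref:"):
            resolved.append(WARNING_LIBRARY[line[len("ref:"):]])
        else:
            resolved.append(line)
    return resolved

# A document references the warning instead of duplicating its text.
manual = ["1. Locate the service panel.", "ref:w-power", "2. Open the panel."]
print(resolve(manual))
```

When risk management updates the approved wording in the library, every document that references it picks up the change on the next publish, instead of hundreds of copies drifting out of sync.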

SO:                   Right. And then adding to that, you want all those safety warnings to be consistent when you translate, which,

GK:                   Yes.

SO:                   You’re going to take that one that you refactored, that’s the contents-under-pressure warning, and you’re going to translate it one time, so that in your localized content, that warning will appear consistently as well. We’ve had some infamous cases where our customers looked at translations and were upset because the translations weren’t consistent: why did they use three different words here? And then you go back and look at the English, the source, and it was inconsistent in the English. So it’s not too surprising that the translated version wasn’t consistent. So if you can get that source content refactored and cleaned up and aligned, then you can turn your attention to the localization and that downstream process to make sure that stuff gets cleaned up.

GK:                   Definitely. So if your organization has limited resources for content development, how can they ensure that the risk management requirements for the content are met?

SO:                   Well, I would say that this is not the most interesting content necessarily, but it’s really important, right? Because again, if you don’t get this right, your company could face some crushing legal liabilities, or even be blocked from selling a particular product. So that is a high priority kind of item. Do this or we are out of business. So from that point of view, it tends to be a pretty easy sell into the organization. And there’s a huge payoff, because as you said, when you have hundreds of copies of a single warning, which should be consistent and mostly it is, but not quite, there’s just a huge payoff to getting that thing reviewed, approved, made consistent one time. So the risk management team goes over it and says, okay, this is the language we want you to use. Great. Now we stash it in our content warehouse and we use it everywhere.

SO:                   And I, as a content creator, don’t have to worry too much. I just have to make sure that I use that approved set of warnings and cautions in my content consistently. And I don’t have to think about it anymore. I can go think more carefully about how I’m going to write that procedure or how I’m going to write that contextual information. And the safety warnings become essentially an asset that I have available to me, right?

GK:                   Yeah, exactly. You can show your executives how much cost and time we’re going to save on people writing and maintaining these multiple copies, and translating these multiple copies. If we get it down to one, we’re going to save this much time and cost. If that’s not enough to convince your management to prioritize all of this risk management type of content, then another thing that might help is providing some data around these safety-related lawsuits: here’s how much you stand to lose if this information gets us in trouble because it’s not reusable and not consistent.

SO:                   Yeah. I mean, you visualize a warning that says, stay back two meters and then you have the same warning, except it says, stay back three meters from whatever the thing is. Now, should it be two meters or three meters? I don’t know. But the main point is that you don’t want to give them two different numbers, right, for,

GK:                   Exactly.

SO:                   The same warning. You have to be consistent because otherwise somebody’s going to stand at two meters, get injured and sue, or they’ll be too far away. And so you have to get it right. I will say within that, as you said, the structured content gives you some opportunities to automate a lot of this stuff, so that from a content creation point of view, we can just do what we need to and move on. There’s a lot of value associated with getting the risk management and safety content right, and it’s bad to get it wrong, but it’s dreary. It’s just so not interesting. So what we want to do, I think, is automate it as much as possible, because then it’s going to be more correct, which is helpful. And also I’m going to spend less time on it as a writer, which is helpful, because I don’t want to rewrite the warnings about high pressure and things getting under your skin and chemical exposure and whatever else it may be. I just want to know that they’re right.

GK:                   Exactly. So, is there any other advice that you would offer around risk management content for companies who are thinking about this for the first time, or maybe have a new regulatory requirement that they’re up against?

SO:                   Yeah, I think I would start with the question of what is your exposure, right? If you make heavy machinery, your exposure is significant. If you make video games, right, your exposure is probably less significant with the exception of some of these photosensitive issues. If you make mobile games that go on your phone, that seems pretty minimal except for don’t play while you’re driving.

SO:                   So you want to kind of look at your product and your product’s profile in terms of what the risks are, what the safety issues are. And then you want to look at where you’re selling your product because the regulatory compliance and legal issues are different in every region. So that’s something to worry about. And I think you probably have a risk management team or legal counsel somewhere in your organization, and it’s probably worth talking to them about this because they’re the experts and they’re again, a stakeholder in your content. And we want to make sure that this particular aspect is taken care of because the implications of getting it wrong can be really, really significant both to your customers in terms of them getting hurt or injured or worse and to the company.

GK:                   Well, thank you so much, Sarah, for this fantastic discussion.

SO:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

 

The post Content ops stakeholders: Risk management (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 20:04
Accessibility when authoring DITA content https://www.scriptorium.com/2022/05/accessibility-when-authoring-dita-content/ Mon, 23 May 2022 12:00:59 +0000 https://www.scriptorium.com/?p=21448 https://www.scriptorium.com/2022/05/accessibility-when-authoring-dita-content/#comments https://www.scriptorium.com/2022/05/accessibility-when-authoring-dita-content/feed/ 1 In episode 119 of The Content Strategy Experts podcast, Elizabeth Patterson and Bob Johnson of Tahzoo discuss accessibility when authoring DITA content.

“By its very nature, DITA being strongly structured facilitates more accessible content.”

– Bob Johnson

Related links:

Twitter handles: 

Transcript:

Elizabeth Patterson:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we’ll talk with Bob Johnson of Tahzoo about authoring for accessibility in DITA. Hi, I’m Elizabeth Patterson. Bob, welcome.

Bob Johnson:                    Thank you. Glad to be here.

EP:                   So I think before we dive into our podcast, if you just want to give a brief intro, little bit about who you are, that would be great.

BJ:                    Sure. Currently, I am senior content strategist at Tahzoo. We are a company that specializes in customer experience, user experience, and using structured content to facilitate that. I have been a technical writer for almost 25 years now. I’ve been working with structured content and component content since 2000. I’ve been working with DITA since about 2006. And I’ve been working with accessibility since around 2008. I did some work for Oracle on implementing accessibility in one of the acquisitions about 10 years ago, and began digging into accessibility in DITA. And I’ve presented at a number of conferences and other venues on the subject of implementing accessibility in DITA and why you should implement accessible content.

EP:                   Well, great. Well, we are really looking forward to hearing some of your perspectives today. And we’ve really broken this podcast out into some different sections. But to get things going: how can your designs, so PDF, print books, web UIs, and so on, be made more accessible?

BJ:                    Yeah, that’s a good question. The foundation for whatever your deliverable format is, is the Web Content Accessibility Guidelines, or WCAG, which are promulgated by the Web Accessibility Initiative of the World Wide Web Consortium. And WCAG outlines what you need to do to make your content accessible. The current versions are 2.0, 2.1, and 2.2, which are cumulative, so 2.1 builds on 2.0, and 2.2 builds on 2.0 and 2.1. The later versions don’t supersede the earlier ones; they simply add more information. The foundation for the Web Content Accessibility Guidelines is a set of four principles, using the acronym POUR, P-O-U-R. Content has to be perceivable. You have to be able to get it from the screen into the user’s head. It has to be operable. The user has to be able to jump around, enter data, actually use whatever content is online. It has to be understandable. The user, once it is in their head, has to be able to decipher it and make sense of it.

BJ:                    And then the content must be robust. So if there’s a failure, there’s a fallback, so that the accessible content is still perceivable, operable, understandable to the user. And this is actually not just a backwards compatibility requirement. It’s a forward compatibility requirement. So content has to be compatible with future technologies, not just with current technologies.

EP:                   Right. That makes sense. So I think I want to dive into talking a little bit about structuring DITA content for accessibility. So how does the modular nature of DITA content help make it more accessible?

BJ:                    Well, as we all know, DITA’s a very structured format. And accessibility tools or user assistance tools really rely on that structure, so a screen reader for example, reads what’s called the document object model, which represents the structure of the document, and it uses that to navigate or to help the user navigate through the content. So by its very nature, DITA being strongly structured facilitates more accessible content.

EP:                   What are some challenges for accessibility when it comes to links? How can you optimize your approach for linking and managing related content for accessibility?

BJ:                    Yeah. Links can be troublesome in a couple of ways. One of the more fundamental ways is when the link text is either not very meaningful, or it’s repetitive. So I’m sure we’ve all seen websites that say something like, “Click here for this,” and the click here is the hot text. Screen readers, for example, have the ability to navigate from link to link. And if you’re just going from click here, to click here, to click here, that’s not very meaningful. The user doesn’t know. Where’s that link going to go? So you want to be sure that your link text is meaningful. So you want to know either the title of the resource you’re going to link to, or you want a meaningful text that communicates to the reader where they’re going to go, so they understand if they activate the link where they’re going to go.

BJ:                    The other challenge that links create is if they’re inline. Now I’m sure we all have seen a lot of pages with inline links, and it seems very natural. I mean, we’ve seen inline links from the very beginning of the world wide web. But inline links can be very disruptive for users on screen readers: when the screen reader encounters the link, it stops and announces, here’s a link, and then reads out the link text and then the target for the link. For a user with a cognitive disability like ADD or an executive function disorder, those inline links can be very distracting. So when someone encounters a link and clicks on it, they may lose where they are. And it can be very easy in a browser to lose your way back to where you started.

BJ:                    So it’s good practice to pull your links out of the running text, so they’re no longer inline, and organize them in groups, usually after the text so that the user’s reading flow or narrative flow is not interrupted. And then they can go directly to the links. And this is something I’m challenged on frequently because inline links just seem very natural. And I have to admit it took me a while to come around because it seemed natural to me. And what really changed my mind more than working with other accessibility experts was my own children with their own cognitive disabilities encountering problems caused by inline links. And that was the point where it became very real to me. And so I do have an understanding of why it seems unnatural. But I also have an understanding of why you want to do it.
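In DITA, this practice maps naturally onto the `<related-links>` section, which processors render as a grouped list after the topic body rather than as interruptions in the running text. A minimal sketch; the topic ID, file name, and link text here are hypothetical:

```xml
<task id="replace-battery">
  <title>Replacing the battery</title>
  <taskbody>
    <!-- steps omitted; the running text stays free of inline links -->
  </taskbody>
  <related-links>
    <!-- linktext gives the screen reader meaningful link text -->
    <link href="battery-safety.dita">
      <linktext>Safety warnings for battery handling</linktext>
    </link>
  </related-links>
</task>
```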

EP:                   And I know in addition to links, that tables can also be difficult for accessibility sometimes. Is there a way you can structure your tables to make them easier to navigate?

BJ:                    So two things: one, you want to keep your structures standard, and you want to keep your structures regular and consistent. And what do I mean by that? You really don’t want to merge cells in your table, because it makes navigation inconsistent. When you’re navigating through the table, if you’re on a screen reader, for example, or if you’re a user with a motor disability and you need to use the keyboard to navigate rather than the mouse, when you tab into a merged cell, the browser really doesn’t remember where it came from, and so when you tab out of that cell, you can lose context. What typically happens is the browser defaults to the first row or column in that merged cell, and when you tab out, you go there and continue on in that first column or first row, which may not be where you came from.

BJ:                    You also want to be careful because table designs that look meaningful visually may make it difficult to build a mental model. It’s important to remember that people on screen readers particularly are not viewing the table; they’re building a mental model of that table. And you need a very well structured, regularly structured, consistently structured table to help them build that mental model.
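In DITA's CALS table model, merged cells come from attributes like `morerows`, `namest`, and `nameend`; leaving those out keeps the grid regular, and marking the header row with `<thead>` lets assistive technology associate each data cell with its header. A minimal sketch with hypothetical content:

```xml
<table>
  <tgroup cols="2">
    <!-- Header row: gives the screen reader column context for every cell -->
    <thead>
      <row>
        <entry>Error code</entry>
        <entry>Meaning</entry>
      </row>
    </thead>
    <!-- Body rows: one entry per column, no merged cells -->
    <tbody>
      <row>
        <entry>E01</entry>
        <entry>Low battery</entry>
      </row>
      <row>
        <entry>E02</entry>
        <entry>Sensor disconnected</entry>
      </row>
    </tbody>
  </tgroup>
</table>
```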

EP:                   Okay. That’s great information. So, going off of the tables, I want to talk a little bit about objects and resources that you include in DITA content, and how to make those accessible. So for example, what is needed to make images accessible? Are there any particular challenges around images with text, like callouts?

BJ:                    Well, let’s start with images in general because one of the first things people think about when they start thinking about accessibility is, oh, we need to add alt text to our images, and that’s very accurate, in fact. But alt text needs to be meaningful, so it’s not useful to, for example, repeat the file name as your alt text. You want to have alt text that explains what it is that the image is depicting. So this is a screenshot of the default whiffle jangle dialogue with standard configuration, so that users on a screen reader or other low vision users understand what the image actually is.

BJ:                    If you have a complex image, it is acceptable for the alt text to say, “The image is described in more detail in the running text,” and to indicate whether that running text is before or after the image. When it comes to callouts, we have to remember that low vision users probably are not able to perceive the details within the image, so for callouts you probably want to use a table that indexes the callout IDs. But even the callout numbers in the image are probably not accessible to a low vision user, so you want to be sure that your alt text clarifies that: you have a table that indexes the callout text to the index numbers in the image.
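In DITA, alt text goes in the `<alt>` element inside `<image>`. A minimal sketch, reusing the example from the discussion; the file name and wording are hypothetical:

```xml
<image href="whiffle-jangle-dialog.png">
  <!-- Meaningful alt text: describes what the image depicts and points
       to where the callouts are explained, rather than repeating the
       file name. -->
  <alt>The default whiffle jangle dialog with the standard configuration.
       Callout numbers are explained in the table that follows.</alt>
</image>
```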

EP:                   And what about audio and video?

BJ:                    So under the Twenty-First Century Communications and Video Accessibility Act, organizations over a certain size, and it’s a surprisingly small size, it’s 50 employees, are required to provide transcriptions or closed captioning for streaming audio and video. What you can do to leverage your DITA content is build that transcript from the DITA content, customizing a map that is attached to your streaming audio or streaming video and describes what is being said in the audio.

EP:                   So I want to take a minute to talk about localization considerations. Are there any interactions or connections between accessibility and localized content?

BJ:                    There are, particularly tying into that principle of being understandable. You want to be sure that the language of your text is called out, so that if you’ve got a user on a screen reader, for example, it’s read out in the correct language. So for example, if you have the string C-H-A-T, you want to specify that my language is US English, so the screen reader pronounces it as chat, as in a small conversation like we’re having right now, as opposed to if it’s in French, where it pronounces it as the word for a small feline. So make sure that your language is specified in your content. And you don’t just specify it at the topic level: if you have strings within the content that are in a different language, you want to be sure that language is specified as well, so the screen reader can read that and call out the content correctly.
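In DITA, as in any XML vocabulary, this is done with the `xml:lang` attribute, which can be set on the topic and overridden on inline elements such as `<ph>`. A minimal sketch; the topic ID and text are hypothetical:

```xml
<topic id="language-example" xml:lang="en-US">
  <title>Specifying languages</title>
  <body>
    <!-- The inline override tells a screen reader to switch to French
         pronunciation for just this phrase, then return to US English. -->
    <p>The French word <ph xml:lang="fr-FR">chat</ph> means cat.</p>
  </body>
</topic>
```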

EP:                   Great. Well, I think this has been very useful. And I think that is a great place to wrap up, so thank you so much for joining us, Bob.

BJ:                    Thank you for having me. Glad to help.

EP:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Accessibility when authoring DITA content appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:35
Content ops stakeholders: Tech support (podcast) https://www.scriptorium.com/2022/05/content-ops-stakeholders-tech-support-podcast/ Mon, 02 May 2022 12:00:59 +0000 https://www.scriptorium.com/?p=21437 https://www.scriptorium.com/2022/05/content-ops-stakeholders-tech-support-podcast/#respond https://www.scriptorium.com/2022/05/content-ops-stakeholders-tech-support-podcast/feed/ 0 In episode 118 of The Content Strategy Experts podcast, Bill Swallow and Sarah O’Keefe discuss content ops stakeholders in tech support.

“If you are delivering multi-hundred page PDFs to your tech support people, then I can assure you that your tech support people hate you. Opening a 600 page document and then having to search through it while you’re on the phone under all this pressure is not the experience that you want.”

– Sarah O’Keefe

Related links:

Twitter handles:

Transcript:

Bill Swallow:              Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content ops stakeholders, this time focusing on technical support and field service. Hey everyone, I’m Bill Swallow, and I’m here with Sarah O’Keefe.

Sarah O’Keefe:              Hey, everybody.

BS:              And this episode is part of an occasional series we’re doing on content ops stakeholders. You can find other episodes on scriptorium.com or wherever you get your podcasts. We’ve previously discussed a few different stakeholders, including IT executives and developers. This time we’re focusing on technical support. So Sarah, how does tech support fit into content ops?

SO:              Tech support is interesting because they perhaps uniquely tend to be both content contributors at times, and also content consumers. So when we say technical support, we’re talking about the frontline people that pick up the phone or answer the chat when you call with a problem and/or in a hardware world, it might be a field service technician or a field technician, people who go out and actually fix hardware, fix machinery. So on the content consumer side, the phone rings or the service tech gets a work order and they have to fix a thing. They have to do some problem solving and fix a thing.

SO:              And in that scenario, they are content consumers. The customer calls up and says, “My thing is broken. I hate you. What is going on?” And it’s the tech support person’s job to figure out as quickly as possible what the problem is and give the customer some answers, remembering that at the point when they call, they are probably upset and angry because their product isn’t working for them.

SO:              So tech support is not just a content consumer, but very much kind of a frontline emergency, kind of first responder content consumer. On the other side of things, tech support also tends to be a content contributor because I pick up the phone, I deal with some weird problem that involves an edge case of, “Oh, they have an older version of our software and they have a weird version of Linux and they have an audio driver. And those three things together contributed to some very bizarre problem bug that we’ve never seen before.” So when something like that happens, tech support will go in and document it. “I saw this configuration, I had this combination of things and we eventually figured out that if you uninstall and reinstall the audio driver, it will work.”

SO:              So they create a case or a knowledge base article that says, “Hey, if you have this configuration or this combination of circumstances, you may also run into this problem.” And so in that case, they are content contributors after, I guess, being an attempted content consumer and discovering that particular case was not yet documented in their content universe.

BS:              Right. So given that they’re both a contributor and a consumer, I’m assuming we do have and pretty much every tech support group out there has a knowledge base that they rely on. How does that kind of feed into things?

SO:              So in theory, the knowledge base is full of these weird combinations or these weird edge cases. This is something that it’s not, “Here’s how to log into the database.” But rather, “If your login fails and you have these 16 other conditions, you might see this problem or this might be the root cause, or this is how to solve the problem.” So it’s kind of like, “Here’s a problem and here’s how to solve that particular problem.” That’s related to, but not the same as the core product documentation, which tends to be more like, “Hey, hi, type in your name and type in your password. And oh, by the way, every 90 days you’ll be told to update your password.” That’s kind of the context of what you’re going to see in the docs probably.

SO:              So the knowledge base tends to be very, very specific and situational. The problem with saying that is that’s all true in theory, but in practice, a lot of the core user instructions tend to creep into the knowledge base because as you’re answering these calls and documenting what you’re finding, you’re probably going to capture information that either is all already in the user docs so you’re duplicating, you just didn’t find it, or should be in the user docs. It’s not there, but it should have been.

BS:              Right. So in that case, you have a nice blend of documentation that resonates or I should say amplifies what’s in the core documentation set and then another complete set of information that completely contradicts what was written in the first place.

SO:              Yeah, I mean, it’s really kind of a mess, because what you really want is for the knowledge base to be the quick look up, and we want to have some sort of a loop back to the user docs so that the user docs can be updated with this new information that’s being uncovered in tech support. So essentially tech support would be to a certain extent stress testing the accuracy of the docs and finding mistakes and reporting those, but they’re also finding edge cases and then you have to make a decision as to whether that edge case belongs in the core docs or not.

SO:              We have seen a number of places where the knowledge base was duplicative. It just explicitly duplicated what was in the user docs. And then it was worse than that because it actually contradicted them, typically because the tech support content was more accurate than the user docs, because it’s based on bitter experience. And so they contradict each other. And now what do you do, not to mention the fact that you have two sets of your content that both document how to log in, but do it in different ways?

BS:              Right. So they have this whole set of content that the consumer has. So this way of course they can point people to, “Oh, look at page six of this particular guide and you can see where the instruction is to do the thing you’re asking about.” How else are they really seen as content consumers?

SO:              The tech support team or the field services people are going to use what’s in the user docs to provide support. So they will look up the information that they need, or they will search for the information that they need and hope that it turns up in either the user docs or the product content. And getting those kind of into alignment can be a really big problem. Typically, if you’re, again, frontline tech support, you’re answering the phone, your number one priority is speed of search, the ability to find something very quickly, because probably there’s somebody on the phone kind of yelling at you and that’s not the most fun thing.

SO:              So they tend to push back on content formats or delivery points that are not fast. And what I mean by that is if you are user docs and you are delivering multi-hundred page PDFs to your tech support people, then I can assure you that your tech support people hate you. Opening a 600 page doc or even keeping a 600 page PDF open just on general principle, but then having to search through it while you’re on the phone under all this pressure is not the experience that you want.

SO:              So what tech support needs as a consumer is a fast search that gets them to the place they need to be as fast as possible. And then secondly, and we can argue over whether it’s more important or less important, but secondly and also critically, they need accurate information, accurate up to date information that they can get to quickly. If any of those things fail, then they basically can’t do their job or at least can’t do their job with the user docs.

BS:              Or at least not efficiently anyway, because they’re spending all their time looking up the info.

SO:              Right. And getting yelled at, which is suboptimal.

BS:              So we’ve been talking a lot about tech support for software, but what about people like field technicians or service engineers?

SO:              Right. So here we’re talking about somebody who goes out into the field, which is to say out into the world outside of their corporate environment, the product manufacturer. And they go maybe onsite in a factory to fix a machine or they go to a hospital to fix a medical device that’s not working. So the field service tech is sort of, I mean, there’s tech support, but they’re mobile tech support instead of being call them up on the phone tech support.

SO:              So as a field service tech, I show up on your doorstep, you’re my customer. And you say, “Hey, this machine is broken, fix it.” And I go look at the machine to figure out what’s going on there. Well, at that point I start plugging in the issue that I’m seeing. Maybe there’s an error code. And if I’m very, very lucky I can plug in the error code and have it tell me, “Oh, that means the battery’s low,” or, “Oh, that means you need to unplug it and plug it back in,” which I think in general is good life advice although not for medical devices. I’m not giving anybody medical device advice.

BS:              Especially don’t unplug it if you’re not supposed to.

SO:              Yeah, don’t unplug the thing. So the service tech is responsible for service and/or repairs. And so they need the same thing. They need their procedure that says what to do. How do you turn the machine off? Which part do you pull out? Which part do you replace? How do you do that? Which things do you have to unscrew and open up and disassemble to get to the piece that you need to correct, put in the new part, put it all back together, do whatever you need to do?

SO:              So service techs, I would say in general, produce less content than the technical support people. You might get annotations like, “Oh, I did step four, but it wasn’t quite right. You might want to do it this other better way,” that type of thing, but they don’t generally write extended procedures. And I think part of that is because the service techs tend to be experts. It’s like a car mechanic who knows how to fix the car. I would need a 127-step procedure. The car mechanic needs a procedure that says like, “Open the hood, remove the battery, put in the new battery, close the hood maybe.” I need a lot more than that, even for something like replace the battery.

SO:              And so for a service tech you might get a very high level procedure that’s four or 10 steps, but then maybe you can expand those steps because if I can’t quite remember how do I do this one thing that I need to be doing, I can kind of expand it and it’ll give me the more detailed version of that. But service techs in general are relying on detailed standard operating procedures, instructions, how to do this. From a content ops point of view, the service techs very often want or need integration with their dispatch system. So Bill you’re the mechanic in this scenario. I’m a hundred percent sure you’re a better mechanic than I am. And-

BS:              No, we need all the help in the world if that’s true.

SO:              I’m sure you’re better at it. So you show up for work and they hand you this work order that says, “Hey, we need you to go work on this car. It has this problem and we think this is the fix.” And so you kind of get this work order that says, “Go do this thing, but it’s already got the procedure glued into your work order essentially.” Again, the work order is, “Replace the brakes,” or, “Fix the battery,” or something that I understand. And then the procedure down below is, “Oh, well, for this model, from this year in this configuration, here’s what you actually need to do to replace the brakes.” Now, again, if it’s you or me we’re going to need all the details. If I’ve been doing mechanical work on that type of car for the past 20 years, I really just need to replace the brakes.

BS:              Mm-hmm (affirmative) And likely you’ll find places where the docs are wrong and you need to annotate it so that you don’t run into it a second time.

SO:              Yeah. And if I’ve got 25 years of experience, I’m probably not even looking at the docs or I’m only looking at it to get the work order and the high level. “Wait, what did they do? Oh, no, that’s probably not the brakes. They diagnosed this problem, but I’ve seen this before and it means something totally different.” And so there’s that level of expertise, but it’s interesting because the service techs very often are, again, looking for that integration between the service management system and the procedural content that tells them how to do particular kinds of tasks.

BS:              So stepping way back, so what are the core priorities that tech support and field service has for content ops?

SO:              From a content ops point of view, again, the field services people really need that connectivity between their work orders and their instructions. On the tech support side, we ask questions about how to connect the knowledge base, whatever that may be, and the product content delivery endpoint. So basically the user docs and the KB. How do you connect those? How do you establish a good feedback loop so that people are farming the tech support database looking for content updates and corrections.

SO:              And I would say, ultimately, we really want to think about how do we distinguish between core product content and sort of support only content. And I mean, I’ve said repeatedly knowledge base versus content delivery, but at least in theory those could be delivered in the same place even if your access points or your entry point as a content developer are different. Those are the things that I see as the key priorities here: connecting to what amounts to the dispatching system or service management, and then this question of how you align product content and tech support knowledge base content, and how you feed back into that loop to make sure that they all get updated properly.
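The connection Sarah describes between the service management system and procedural content can be sketched in a few lines. This is a hypothetical illustration, not any particular product’s API: the task names, model, year, and the `PROCEDURES` lookup are all invented for the example.

```python
# Hypothetical procedure store, keyed by (task, model, year). In a real
# deployment this lookup would be a call to the content delivery service.
PROCEDURES = {
    ("replace-brakes", "ModelX", 2020): "Procedure: jack up the car, ...",
}

def work_order_with_procedure(task, model, year):
    """Attach the matching procedure to a work order, as the integrated
    service management view described above would."""
    return {
        "task": task,
        "procedure": PROCEDURES.get((task, model, year), "No procedure on file"),
    }

order = work_order_with_procedure("replace-brakes", "ModelX", 2020)
print(order["task"], "->", order["procedure"])
```

The point of the sketch is that the work order and the procedure stay separate sources, joined at delivery time for the exact model, year, and configuration.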

BS:              And I think that’s a good place to leave it. Thank you, Sarah.

SO:              Thank you.

BS:              And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information visit scriptorium.com or check the show notes for relevant links.

The post Content ops stakeholders: Tech support (podcast) appeared first on Scriptorium.

Content as a Service (podcast, part 2) https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-2/ Mon, 18 Apr 2022 12:00:29 +0000 https://www.scriptorium.com/?p=21413 https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-2/#respond https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-2/feed/ 0 In episode 117 of The Content Strategy Experts podcast, Sarah O’Keefe and Patrick Bosek of Heretto continue their discussion about Content as a Service.

“Content as a Service is becoming a necessity to really deliver a strong customer experience from an answers and knowledge perspective.”

– Patrick Bosek


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

SO:                   I’m Sarah O’Keefe. In this episode, Patrick Bosek and I continue our discussion about Content as a Service. This is part two of a two-part podcast.

SO:                   So, looking at this from a slightly different point of view, who are the companies, or the industries, maybe, that need Content as a Service the most?

Patrick Bosek:                   So, I think it’s going to be the usual suspects. I mean, I wish I had a more interesting answer here, but it’s technology companies. And sure, you can say, okay, we’re all technology companies now, and to an extent, that’s true. But I think if you look at the people who are going to adopt this most aggressively, right out of the gate, it’s the people who are going to have the most benefit from it. And what we see is it tends to be software companies, or high tech companies where maybe they sell a thing. But realistically, the thing is just something that they can load software onto and sell you that, right?

PB:                   So there’s that really blurry line between high tech manufacturers and software companies. It’s really good for them because of the as licensed thing they run into. There’s this natural progression as a software company that I think every software company that reaches a certain scale goes through.

PB:                   You make a thing. It’s simple. People want it. It solves a simple problem. People come and buy it. You sell a bunch of it. Well, as you sell more of it, you gain more validity and bigger people want to come and buy it. But bigger organizations, they want this changed and they need this other thing, or they need to integrate with this thing. And over time you want to serve them, those bigger organizations or different niches, or there’s market demand that pushes a product in a bunch of different directions. And what happens is that your product becomes more complex so that it can access more niches, it can access larger accounts. It can have its 90% value plus 10% for a lot of different groups. And what that means is that your product is very different, based on who’s using it, which group is using it.

PB:                   So now there’s this as licensed model for your software. If you’re this group, if you’re in FinTech and you’re using our product, it’s mostly the same, but it’s got this little thing that’s different. If you’re in this group, there’s all that, right? But you don’t want to… So now your choices are, okay, we can produce one manual that covers 90% of the product, or we can produce 40 manuals that all cover a bunch of different parts of the product. And they copy 90% of the content.

PB:                   Unless you move to a Content as a Service model where it can be dynamic, it can be whoever’s accessing it gets 100% of the product. And it’s just that 10% that changes based on who they are. So Content as a Service becomes a necessity to really deliver a strong customer experience from an answers and knowledge perspective, to serve those people after the fact. And I think those are the organizations that we’re seeing adopt this most aggressively today.

SO:                   Yeah. And the as licensed thing is interesting because we’re actually seeing this in that space, but also in the as built, which is essentially the hardware equivalent of as licensed. I mean, you mentioned tractors, right? Well, it turns out in some manufacturing organizations it’s, ‘oh, I need a machine,’ a tractor or a truck or a car or something. And those are in fact getting customized per customer. So they need, ‘what did you build for customer X on this date?’ And that gets super tricky and really kind of obnoxious.

PB:                   Oh yeah, the automotive industry is full of that. I was just talking to somebody from that industry. I don’t know, it was on Coffee and Content. It was actually… His name’s Nick, he’s from Tweddle. And they were talking about VIN specific content, right? So, that’s kind of like that whole thing taken to its nth degree where the number that identifies your product, it’s like a checksum almost, is the thing that determines the content that goes into your product. And because almost all cars can display content, you have a perfectly dynamic experience that relates to the person who’s sitting in the product. How much more Content as a Service could that possibly be? And to fulfill that you have to have a really strong content operations methodology that feeds into a Content as a Service infrastructure, because Lord knows cars are largely software. And when your car updates… I mean, they are. You laugh but they are.

SO:                   Yeah they are, no they totally are and it’s depressing.

PB:                   I mean, yeah, the software’s eating the world, right? Everything is software. And there’s software in everything. So when that software updates, the content updates, your car updates, you have to be able to push that stuff out, along with all those things.
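The VIN-specific delivery Patrick describes can be sketched as a lookup from identifier to build configuration to content modules. Everything here is invented for illustration: the sample VIN, the option names, and the module rules are hypothetical, not any manufacturer’s actual scheme.

```python
# Hypothetical: the VIN resolves to the as-built configuration...
VIN_CONFIG = {
    "1HGCM82633A004352": {"trim": "touring", "options": {"sunroof", "nav"}},
}

# ...and each content module declares which configurations it applies to.
CONTENT_MODULES = {
    "base":    lambda cfg: True,
    "sunroof": lambda cfg: "sunroof" in cfg["options"],
    "nav":     lambda cfg: "nav" in cfg["options"],
    "towing":  lambda cfg: "tow-package" in cfg["options"],
}

def modules_for(vin):
    """Select the content modules the in-car display should request."""
    cfg = VIN_CONFIG[vin]
    return sorted(m for m, applies in CONTENT_MODULES.items() if applies(cfg))

print(modules_for("1HGCM82633A004352"))  # ['base', 'nav', 'sunroof']
```

When the software updates, only the module rules and content change; the VIN-to-configuration resolution stays the same, which is what makes the experience dynamic per vehicle.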

SO:                   So as we get started with this, I mean, there’s a lot of people talking about Content as a Service and there’s some stuff happening. But if we look ahead, 10 years or five years or 18 months, however far you’re comfortable in looking ahead, where do you envision this going? I mean, what do you think this is going to look like when it reaches its full potential?

PB:                   Oh boy. Small question. I think the interesting thing about Content as a Service is that… I would like… Here’s how I’d like to answer that question. And then I’m going to tell you after I answer it this way, why this is probably not realistic. I would love to say that we’re going to get to a place where Content as a Service itself has its own well defined and well understood standards. And we have interoperability in a way that we have with content storage formats today, right? So like you think about DITA, right? DITA is a structured content storage format. It’s not a good format for Content as a Service, because it’s just… It’s too semantic, it’s got a lot of information in it you don’t want directly represented, you have to transform it into HTML, all those kinds of things.

PB:                   So you don’t want to send DITA through an API. You don’t want to leave that to the last mile. You have to compile DITA and you’d lose a lot of the power of DITA if you didn’t compile it, that’s just kind of part and parcel of using DITA. But what if there was a standard for what comes out the other end of a Content as a Service, right? It was like DITA where everybody knew what they were going to consume.

PB:                   Well, you’d end up in this situation where the systems that create experiences were interoperable, right? So you’d have Heretto, and Heretto would send you whatever this open source standard was over your Content as a Service. And then maybe you’d have something like Contentful and they would send the same thing, right? So different content, but it’s a standard format. And then you could just have frameworks that were just out there that knew how to interpret this stuff.

PB:                   All the rules of the road are put together, right? It’s all… It’s the maximal implementation of cards and components and the modularized enterprise as it relates to content and Content as a Service and modular experiences and all that kind of stuff. So that’s what I want to say is going to happen. And I want to believe it’s going to happen. I do. It’s a thing that I dream about and I hope is in the future and it’s a thing that I’m going to actively pursue. Right. It’s a thing that I believe in and I’m going to push towards. But…

SO:                   However…

PB:                   Right. So why am I a little skeptical about that? I’m a little skeptical about that because I think that the industry as a whole is very privatized. And I think that there hasn’t been any real appetite for getting to something like that.

PB:                   And I think that what you’re going to see is that if you try to go down that road, you’re going to run into a lot of forces that are going to say, well, part of the beauty of where we are today in Content as a Service is that it can be so customized. It can be so one to one. You can build these models that really fit you. And there’s no really strong way to perfectly containerize that and have it be available in a broad, universally understood interchange format. I don’t know how true that is. I don’t know if that’s necessarily the thing that would kill the ability to do this, but whatever way it goes, this is the future.

PB:                   Everybody’s going to run on this. The idea that we’re going to use things like WordPress in 10 years… I mean… I don’t know… I mean, somebody is, for a blog, but no company is going to be running on WordPress in 10 years. You know DXPs, the monolith DXPs? I think those are dinosaurs too. I think anybody who’s going and implementing a DXP today is just deciding that they’re going to re-implement that on Content as a Service in four to five years.

SO:                   So I won’t ask you to name names, but DXP is digital experience platform.

PB:                   Yeah. That’s true. And some of them are very sophisticated technologies, and they do a lot of stuff. So they used to be web content management, right? And then they moved to digital experience. And when the Content as a Service industry can find a way to break down all the different pieces of functionality that you get in these big monolith DXPs, and provide them as perfectly modularized, interoperable, interchangeable services that you can clip together into what you need for your content experience platform, then you’re not going to buy DXPs anymore. It just doesn’t make any sense. I think that’s a big part of the future. So people are listening to this and they’re going, okay, well, how does this relate to me? What I would say is in terms of very specific technology, you should definitely understand the primary approaches to structured content because that’s not going anywhere.

PB:                   And if you really need more proof of that, which I don’t think you do if you’re listening to this podcast, honestly, but maybe somebody near you does. And if that person does, go and look at Schema.org, that’s Google stuff, right? The way to improve your search engine ranking is to inject more Schema.org into your content. That’s structured content, that’s metadata, right? And that’s what Google wants you to do. That’s where those quick answers come from. That’s where the FAQs on the front page of Google come from, it’s how they ensure that certain things are more relevant. It’s literally because you tell Google it’s more relevant. It’s not keyword stuffing. Google, in a lot of ways, gave up on like the AI approach to this. And they said, actually just go put metadata in your content. It’s like that’s the direction.
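The Schema.org point can be made concrete with a small sketch that wraps FAQ content in FAQPage JSON-LD, the structured-data format search engines read to produce those quick answers. The question and answer text below are placeholders.

```python
import json

def faq_jsonld(pairs):
    """Wrap (question, answer) pairs in Schema.org FAQPage JSON-LD,
    the metadata block you would embed in the page's HTML."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

snippet = faq_jsonld([("What is CaaS?", "Content delivered over a web API.")])
print(snippet)
```

The markup doesn’t change what the reader sees; it tells the search engine which parts of the content are questions and which are their accepted answers.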

PB:                   So you’re going structured in this way. And so understand the structured formats and then get to understand the primary delivery formats. So there’s really only two ways to deliver Content as a Service. And that’s a RESTful API and a GraphQL API. Yep. That’s basically it. I was trying to think if there was a third, but nope, that’s it. So understand the two of them and understand the ones that are going to be more effective for your use cases. And then kind of get a recognition of what the different models for presentation look like and how those things come together. And I think that’s a foundation of understanding content operations, which is just an aspect of content operations that’s going to work well for you, no matter where you go. And if anybody is to write a book on content operations, I would recommend you go read it. That’s my last recommendation.
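The difference between the two delivery styles Patrick names can be sketched with mock resolvers: a REST-style call returns the whole resource identified by the URL, while a GraphQL-style call lets the client name exactly the fields it wants. The article data and function names are hypothetical stand-ins for real HTTP endpoints.

```python
# Hypothetical content record served by the API.
ARTICLE = {"id": "replace-brakes", "title": "Replacing the brakes",
           "body": "Step 1: ...", "revision": 12}

def rest_get(article_id):
    """REST-style: the URL identifies the resource; the whole thing comes back."""
    return dict(ARTICLE) if article_id == ARTICLE["id"] else None

def graphql_get(article_id, fields):
    """GraphQL-style: one endpoint; the query's selection set picks the fields."""
    full = rest_get(article_id)
    return {f: full[f] for f in fields} if full else None

print(rest_get("replace-brakes"))
print(graphql_get("replace-brakes", ["title"]))  # {'title': 'Replacing the brakes'}
```

Which one fits depends on the use case: REST is simple when consumers want whole documents; GraphQL pays off when many different experiences each need a different slice of the same content.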

SO:                   All right, people. So it appears you have homework. It wasn’t me. It was Patrick, and that sounds like a good starting point for some of this research, but also, that sounds like about six months of reading work. So…

PB:                   Hey, you’re the one who asked me to come on here. This is not my fault.

SO:                   I’m going to stop it here before you give us more homework to do.

PB:                   No, that’s fair.

SO:                   But Patrick, thank you. And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content as a Service (podcast, part 2) appeared first on Scriptorium.

Content as a Service (podcast, part 1) https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-1/ Mon, 11 Apr 2022 12:00:48 +0000 https://www.scriptorium.com/?p=21409 https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-1/#respond https://www.scriptorium.com/2022/04/content-as-a-service-podcast-part-1/feed/ 0 In episode 116 of The Content Strategy Experts podcast, Sarah O’Keefe and Patrick Bosek of Heretto talk about Content as a Service.

“Do we still have places where building a static site or a static set of help materials makes a lot of sense? Totally. But there’s a natural aspect of dynamic changing content. If that content is going to be a little bit different based on who or where or when you access it, then you can’t build it statically. That’s one of the things you’ll never get from a PDF.”

– Patrick Bosek


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about Content as a Service with special guest Patrick Bosek of Heretto. This is part one of a two-part podcast. Hi, I’m Sarah O’Keefe. Patrick, welcome. We’re happy to have you here.

Patrick Bosek:                   I am happy to be here. Thank you, Sarah. I’m excited to chat with you on this very special podcast about content strategy. I love content strategy, you know that, and I also love Content as a Service, which is our topic. So, excited.

SO:                   Excellent. So tell us a bit, for the people that don’t know, tell us a bit about yourself and about Heretto.

PB:                   Yeah, sure. So I’m Patrick Bosek, as you mentioned. I’m CEO and one of the founders of Heretto. And what that means is that I get to kind of run around the digital universe and talk about how cool content is if you do it right. And then talk a lot about how to do it right in a bunch of different places. I do that with Coffee and Content and Win with Content and the Content Components podcast, if you see a theme. I like to get out and talk about content. I also write for CMSWire from time to time. I like to blog on our blog and all this comes down to talking about, mostly the technology aspect of how to get content operations set up in place, make it run effectively, and just get more efficiency, scalability, lower cost, and joy out of your content systems.

PB:                   And then in my more … in what I’m actually paid to do, which is to run Heretto a little bit, Heretto is a component content management system that runs on DITA and it is a content operations platform that you can use to scale up your content, manage the localization, collaborate with people across a range of technical levels. So you can have people who are non-technical, people who are in legal, all those kinds of things, all the way through to developers and technical authors, creating structured content in an online Software as a Service, WYSIWYG environment. And then we can put that into deployments, which can go out into the cloud and power content experiences across whatever you want to hook up to the API. And we do that using an API, which is Content as a Service, which leads very nicely into what we want to talk about today. Yeah.

SO:                   And there we go. So for first of all, we will get links to hopefully everything you just mentioned into the show notes so that people can go find all these other podcasts and resources and the Heretto site, and your CMSWire link while we’re at it and all the rest of it. But yeah, so not too long ago, I was on one of the podcasts that Patrick mentioned and we got into an active discussion about a number of things. So I thought it might only be fair to return the favor and let you give your perspective on some of these things after our little knock down drag out. So I wanted to start with the basics, which is how do you define Content as a Service or CaaS?

PB:                   Yeah, that’s actually not that hard. When it comes right down to it, if you can access your content over a web available API and you can do it in a production way, so if I can set up an application or a website or some other user interface or really anything that’s going to be able to select content using a web call, that’s Content as a Service. I’m able to make a request and it will serve me that content on the request. So it’s provided to me as a service. It’s a Content as a Service application. It’s not that complicated.
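Patrick’s definition (content selected and served on request over a web call) can be sketched as follows. An in-memory dict stands in for the repository, and `get_content` stands in for an HTTP GET against a hypothetical endpoint like `GET /api/content/{id}`; the content IDs and fields are invented for the example.

```python
import json

# Stand-in for the content repository behind the API.
CONTENT_REPOSITORY = {
    "replace-brakes": {
        "title": "Replacing the brakes",
        "body": "Step 1: ...",
        "format": "html",
    },
}

def get_content(content_id):
    """Serve one piece of content on request, as a CaaS endpoint would."""
    item = CONTENT_REPOSITORY.get(content_id)
    if item is None:
        return {"status": 404, "body": None}
    return {"status": 200, "body": json.dumps(item)}

response = get_content("replace-brakes")
print(response["status"])  # 200
```

Any consumer that can make a web call (a website, an app, an in-product help pane) can select exactly the content it needs, which is the “as a service” part.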

SO:                   Okay. So when we think about Software as a Service, it was generally this idea that you would buy software and put it on your laptop, I guess, or your computer on your local drive and run it there versus Software as a Service was kind of like, you go to a website and you get stuff. So you’re saying that content, not as a service, the old version is essentially packaged stuff, right? Like here’s a PDF or here’s a book, or even here’s a website that I have pre-built.

PB:                   Totally. That is exactly the difference. Now there’s a bunch of, I mean, I don’t know how nerdy we want to get on this podcast. I mean, this isn’t Components after all, but there are places and there’s a time and place for both of these things. Content as a Service isn’t meant to be, even though it’s the next thing, it’s the new thing. It doesn’t remove the need for some of the packaged content, just like we have apps on our phone today. That’s the old model of software. You download them and you install them just like we used to do before. They’re not software as a service, the apps that are on our phone. So that model hasn’t gone away. Software as a Service has just become a really effective model for certain types of applications. The ones that spring to mind are obviously, social media is an application that tends to be really strong through Software as a Service when you’re on a web browser. And a lot of business applications.

PB:                   Salesforce famously is the first one to really embrace it in business applications. And Content as a Service is very much kind of the same thing but for content. So do we still need PDFs that we can download and print and take with us? Yeah, sure. I use PDFs every day. Do we still have places where building a static site or a static set of help materials makes a lot of sense? Totally. But there’s a natural aspect of dynamic changing content. If that content is going to be a little bit different based on who or where or when you access it, then you can’t build it statically. That’s one of the things you’ll never get from a PDF. If you and I, based on who we are or where we are need to have a different piece of content in a paragraph, you can’t do that with a PDF efficiently or at scale. And that’s when you need Content as a Service. And that’s kind of the same thing with software or anything else that comes as a service in that way.

SO:                   So what do you see? I mean, you’re mentioning contextually aware or personalized kind of content. Where does this matter the most? What are the kinds of use cases that you’re seeing for Content as a Service where people need it and are using it appropriately?

PB:                   Yeah. So that question is so much fun because everybody wants to call it personalization and it is personalization. The problem is that when everyone thinks of personalization, they kind of go right to really dynamic stuff, which is Facebook or Amazon or stuff like that. Those types of experiences, which are really very individualized, personalized. When you’re thinking about Content as a Service, personalization, the purpose of it is to get us the things that we need, which is to say the information we want more quickly without having to wade through a bunch of other things. And those other things are going to be navigation or they’re going to be not having to read things. So when we think about where Content as a Service makes the most sense and where it’s having the biggest impact, it’s typically in business functions, where there is a necessity to either deliver less content to make it more easily digestible, more quickly digestible, get people to an answer or to a resolution faster, or content specifically that has an aspect of confidentiality or security or privilege.

PB:                   So if I have 10 different groups of people and what they can see changes. So the classic example is support distributor customer. Let’s say you sell tractors, I don’t know, and your distributors get a certain version of the manual. You want them to be able to work on everything. Support gets a different version of the manual. You want them to be able to support people really effectively, but maybe they don’t need to know how to re-time the motor or engine. And then the end customer gets another version of the manual which is some Venn diagram of those three things. That’s a really classic example. Each of those personas, based on who they are and what their function of the product is, need to have secured effectively different access to a shared pool of content.

SO:                   Yeah. And I remember, I mean, a long time ago we had a … it wasn’t the most challenging thing, but we had a situation where a customer had support content where essentially the external facing support said, “Oh, the thing is broken, try this, this and this.” And then the last line in the knowledge base article was more or less, if that doesn’t work, call corporate support. But the corporate support version of that same page said to try the first three things, which were identical. But then instead of saying call us, it said, “Okay, if a customer calls you with this problem, here are the weirdo things that you can do,” for which you need higher levels of access than the customer has or that we’re willing to give the customer. And I mean, that was doable with just a pretty simple switch, but you extend that as you said to more versions and more people and more variants, and all of a sudden it gets complicated.
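The “pretty simple switch” Sarah describes can be sketched as one article with an audience-conditional closing step, which is the same mechanism that scales (painfully) to the many-persona tractor-manual case Patrick outlined. The article text and audience names are invented for illustration.

```python
# One shared knowledge base article...
ARTICLE_STEPS = ["Try this.", "Then this.", "Then this other thing."]

# ...with a closing line that switches on the audience.
CLOSING = {
    "customer": "If that doesn't work, call corporate support.",
    "support": "If a customer calls you with this problem, here are the "
               "advanced things you can do (requires elevated access).",
}

def render_article(audience):
    """Assemble the article variant for one audience at request time."""
    return ARTICLE_STEPS + [CLOSING[audience]]

for line in render_article("support"):
    print(line)
```

With two audiences this is trivial; with ten personas across dozens of variants, you either maintain near-duplicate manuals or move the switching into the delivery layer, which is the CaaS argument.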

SO:                   I also feel like there’s an element in here of security in the sense of if you get it right from an API point of view, there’s less likelihood that the content will leak out inadvertently.

PB:                   I think there’s an aspect of that, but I would warn people against thinking that they’re going to be able to prevent somebody from removing that content and creating a copy. I wouldn’t endorse that concept, but you can certainly make it more challenging and you can make it a thing where someone has to maybe have actively malicious intentions or whatever you want to say. Something where they’re doing something that they know they shouldn’t and that is probably a really strong deterrent. But yeah, if it goes through the internet, people can hang onto it, for sure.

SO:                   If it’s digital, it’s … yeah.

SO:                   I think that’s a good stopping point, but we will continue this discussion in the next podcast episode. Patrick, thank you.

SO:                   And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content as a Service (podcast, part 1) appeared first on Scriptorium.

Content ops stakeholders: Localization (podcast) https://www.scriptorium.com/2022/03/content-ops-stakeholders-localization-podcast/ Mon, 28 Mar 2022 12:00:10 +0000 https://www.scriptorium.com/?p=21404 https://www.scriptorium.com/2022/03/content-ops-stakeholders-localization-podcast/#comments https://www.scriptorium.com/2022/03/content-ops-stakeholders-localization-podcast/feed/ 1 In episode 115 of The Content Strategy Experts podcast, Bill Swallow and Sarah O’Keefe discuss content ops stakeholders in localization.

“Using baseball examples isn’t going to work well in a country where baseball is not a thing. So you have to think about that. Does your text, does your content, do your examples work and are they appropriate in your target language and culture?”

– Sarah O’Keefe


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content ops stakeholders. And this time we’re focusing on localization. Hi everyone. I’m Sarah O’Keefe and I am here with Bill Swallow today.

Bill Swallow:                   Hi there.

SO:                   So this podcast is part of an occasional series that we’re doing on stakeholders in content ops projects. We’ve done a few different stakeholders already, and you can find links to those episodes in our show notes on scriptorium.com or wherever you get your podcasts. In this episode, we want to focus on stakeholders in localization. And I guess the first question then becomes, Bill, who are the localization stakeholders?

BS:                   Well, there are a lot more than people think. We could start with the localization project managers, so the people who are essentially running the entire localization operations for your company. Then you have your regional marketing people, so those who are promoting products and services in the target markets that you’re trying to reach with your content. Then of course you have the actual translation team or localization service providers, whether you’re using internal or external resources. Those are also stakeholders there. Another group that somewhat gets overlooked is the internationalization developers. So anyone who’s working on products or websites who has to account for any translated content. Those people have a rather large stake that is often kind of left behind in the dirt. And then of course you also have your content consumers. So those who are ultimately going to be reading, listening to, or viewing your content.

SO:                   And I know that a lot of times when I talk to these groups of stakeholders about some of the work that we do, they’re very interested and they would love to have better content, better content ops, better information flowing into the localization function. But what they typically say is, “Well, we don’t control that. The people upstream from us, the content authors, the information architects are the ones determining what this content looks like when it goes to localization.” And so certainly IA and content authors have an effect on localization.

BS:                   A big effect. Essentially, anything a content author or an information architect does impacts the localization process, whether they are conscious of it or not. It can come down to how they write, so the style that they use in developing their content. It could come down to the infrastructure that the authors use, so which tools they use and how they use them. The time at which they send content off for any kind of translation work. There are a whole bunch of different factors that come into play here. And Sarah, you’re absolutely right. A lot of times these stakeholders are left holding the stake, so to speak. They receive stuff that may not be in the best format, that may not be written well, that might be somewhat confusing to translate. And they may be given next to zero time to turn it around. So they have a lot of concerns.

SO:                   So we’ve already used at least three words to talk about this function, right? We’ve said localization several times, you mentioned the translation team, the linguists, and also internationalization. So what are those three? I mean, if you’re not somebody that lives in the space, what is the difference between translation, localization and internationalization?

BS:                   I think the easiest way to think about it is that localization is the general term for all of it. It’s the process of taking content that’s written in one source language (I’m not even going to suggest a language here) and looking at the processes and the needs for developing that content in a format and in a language that another person in another part of the world would be able to consume appropriately.

BS:                   Internationalization is kind of the backbone of the entire translation process, or I should say the entire localization chain. Internationalization is basically the things that you bake into how you develop something that account for a need to change to a different language, to a different market, switch formatting, and so forth. So it’s kind of all of the technical bells and whistles that you bake in behind the scenes that allow you to easily produce content for multiple different audiences. And then the translation process is what we’re all accustomed to when we think about developing content in a different language. It’s the act of actually rewriting the content in a target language.

SO:                   Yeah. I like when I talk about internationalization, I tend to fall back on talking about currency, because if you think about it, if you develop a product, let’s say in the US, and it is dollar based and you want to bring that into the European Union, you will almost certainly have to support euros as a currency inside your product. Well, that’s not really a translation problem per se. There’s also going to be translation, but the idea that you can’t just bake in dollars as the only currency that your product understands is important, right?

SO:                   That’s that kind of internationalization layer. Then you’ve got the linguistic layer, the translation, and then there’s a separate cultural one: using baseball examples in US content isn’t going to work well in a country where baseball is not a thing. So you have to think about that. Does your text, does your content, do your examples work, and are they appropriate in your target language and culture?

BS:                   And just like currency, there’s another really accessible example of internationalization, and that’s the use of time zones: being able to send a calendar invite from one person to another, in any region. If you’re setting it for 2:00 PM your time, it should not show up on their calendar at 2:00 PM their time. Otherwise, you’ll never connect. So there is that extra layer of internationalization behind the scenes that says, “Hey, what time zone am I in?” and then adds or subtracts hours until you get the correct time for the meeting.
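The time-zone arithmetic Bill describes is what calendaring software does under the hood. As a minimal sketch (the specific date, cities, and variable names are illustrative assumptions, not from the episode), Python’s `zoneinfo` module converts one stored instant into each participant’s local wall-clock time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A meeting set for 2:00 PM by someone in New York...
meeting = datetime(2022, 3, 14, 14, 0, tzinfo=ZoneInfo("America/New_York"))

# ...lands at the correct local hour on a Berlin calendar: the invite
# stores a single instant, and each client renders it in its own zone.
berlin_view = meeting.astimezone(ZoneInfo("Europe/Berlin"))
print(berlin_view.hour)  # 19 (2:00 PM EDT is 7:00 PM in Berlin that day)
```

Note that the offset is not a fixed “add or subtract N hours”: on this date New York is already on daylight saving time while Berlin is not yet, which is exactly why the zone database, not hard-coded arithmetic, has to do the work.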

SO:                   Yeah. I mean, there are other examples of this. I was talking to somebody a few years back, at a conference in India, and they politely said to me, “So your logo has an owl in it. And that’s interesting. And why did you choose an owl?” And I said, “Well, in the U.S., owls connote wisdom and intelligence and various positive things.” And I said, “So what does the owl say to you in Indian culture?” And he looked at me and kind of cringed because he didn’t want to give the answer. And I was like, “No, really, it’s okay. What does an owl mean in Indian culture?” He says, “Death.” And something about being silly. I mean, it was a negative thing. It’s as if we picked, I don’t know, a rat or something as our logo animal. Right? And so that was a really good example where we didn’t think too hard about the global implications of picking a particular visual or a particular animal. So we have something that, in certain other non-US cultures, doesn’t necessarily work.

BS:                   And that’s really where the style guide is important. And being able to make these decisions both visually and with authored content about how things are being represented. There are a lot of different issues that come around different imagery, whether it be using hand gestures, I suggest you don’t. Using colors a certain way. Even certain layouts can be a little problematic when going to certain markets.

SO:                   And so related to that, the most common pushback comes when we say, “You’re going to need to do translation or localization,” or “You need to really have a strategy around globalizing your product.” Right? If you want to sell your product in these other markets, you have to think about other languages. We get this less these days, but certainly in the past five or 10 years, we got a lot of, “Well, localization is expensive, so we’ll just ship English; the people who are buying our products speak English.” Which, I mean, if you’re only shipping in English, then that’s probably true.

SO:                   But you’ve just limited your market to the people who are willing to buy a product in their country that is only available in English. So it’s a bit of a chicken and egg. But I wanted to ask you a slightly different question, which is not “Is localization expensive?” because it is, but why? And what can you do about it? Is it really just expensive, or is it that you need to figure out how to best leverage it? If you’re going to spend the money, how do you make what you’re producing as valuable as possible?

BS:                   Well, if you’re going to spend the money, it’s best to spend it the right way, and that’s to look at your entire chain of how the translation process, the localization process, runs. The one thing you don’t want to do is spend a lot of time and money upfront authoring your content the way you feel it should be authored for where you are in the world. So if your company is United States based, you don’t want to be authoring just for a United States customer or audience. Taking into account the baseball references that Sarah mentioned and so forth, you don’t want to use a lot of these local idioms, anecdotes, and so forth in your content, because they make it more difficult to translate. Likewise, you don’t want to spend six months developing content and then throw it over the wall to some poor translator saying, “Hi, we need this back on Tuesday.”

BS:                   That’s going to be expensive for a couple of reasons. One, it’s going to incur a markup for a rush rate. Two, you’re not going to get their best work. So there are going to be errors, and there’s going to be a lot of cleanup. And if there isn’t cleanup, you have another expense of having to essentially deal with the damage that your content causes down the line. It could result in incorrect procedures. It could result in offending somebody. So you need to make sure that you’re doing things the right way. And you’re including all of these stakeholders in the localization process from day one of when you’re developing content.

SO:                   Yeah. And I think it’s important. And I fall into this trap as well. You know, very often we start talking about localization and what we talk about is global markets like, “You started in the US, and then you’ve decided you want to sell in Europe. And therefore you need localization.” However, there are somewhere in the vicinity of 30 to 40 million people in the United States whose primary language at home is Spanish. Well, 30 million people is a pretty good size European country. So you might think hard about whether your first language, your first localization effort is in fact not a different geography, it’s the US market, but in Spanish, because that is a big chunk of people that you are probably not going to reach with an English only approach.

BS:                   Definitely. And Spanish is just one really good example. There’s also a huge Chinese market, and others, in the United States alone, notwithstanding any regional differences as well. When it comes down to talking about specific items in everyday life, we have different terms and we talk about them differently depending on whether we’re in the Northeast, the Midwest, or on the West Coast. It’s also important to look at the expense of localization in terms of how long it takes to get localized product and localized content out to those who need it. If your process isn’t as efficient as it could be, you could see a significant delay in shipping to other countries, or even other regions or other target language markets, because you’re waiting for the localization work to finish. Whereas if you planned for it upfront, you can bring that time in. And you can, not necessarily spend less money, but realize the fruits of your labor earlier.

SO:                   So we’ve talked a little bit about how localization essentially has implications for nearly everybody in the content chain. Who’s the stakeholder for localization? I mean, it might be easier to say who’s not a stakeholder because if I write the content properly, the first time around and follow standards that will flow through all the way into the actual translation linguistic process. We infamously had a customer where the Spanish translation team got criticized because they used six different terms for the same thing in the Spanish content. So along the lines of car seat versus baby carrier versus infant seat and you need to pick one and go with it. So they got dinged when somebody reviewed the Spanish translation for using six different terms for the same thing, and this is terrible and we should fix it.

SO:                   Well, they went back and looked at the English source content. And what they discovered was that in the original English, they used eight different terms for the same thing. So the translators had actually improved things. It was still way too many terms, but the fact that they used six instead of eight was not really on the localization workflow. That problem started much, much earlier. So what does that look like? What kind of collaboration do you need across all of these different stakeholders, who are either directly involved, with a title like localization manager, or indirectly involved as a content person contributing to localization?

BS:                   I think the big thing first and foremost is making sure that everyone is aligned on the purpose of developing this content for multiple different language markets, that everyone understands what the key factors of success are for those markets, and that everyone understands the importance of having the correct vocabulary in place and using it consistently. So we’re talking about a style guide here, and language rules and writing rules, and bringing in those internationalization developers, who often get forgotten about. These are the people who are going to build in the efficiencies that you can leverage as a content author to make sure that you are doing things consistently. So using things like variable strings for commonplace terminology throughout your content set, and things like labels for notes, cautions, warnings, those types of things. If you can externalize that stuff and have it programmatically inserted, it makes it very easy to replicate across the board in any language, because you can do that customization outside of the content, and then it’s reused automatically when you’re publishing.
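Bill’s point about externalizing labels and inserting them programmatically can be sketched in a few lines. This is a hypothetical illustration (the locales, label strings, and function name are assumptions, not from the episode): label text lives outside the authored content in per-locale tables, so authors never hard-code “Warning” in any language and publishing picks the right string automatically:

```python
# Hypothetical per-locale string tables: admonition labels are kept
# outside the authored content and injected at publish time.
LABELS = {
    "en-US": {"note": "Note", "caution": "Caution", "warning": "Warning"},
    "de-DE": {"note": "Hinweis", "caution": "Vorsicht", "warning": "Warnung"},
}

def render_admonition(kind: str, text: str, locale: str = "en-US") -> str:
    # Look up the localized label, falling back to English if the
    # locale or the label key is missing.
    table = LABELS.get(locale, LABELS["en-US"])
    label = table.get(kind, LABELS["en-US"].get(kind, kind.title()))
    return f"{label}: {text}"

print(render_admonition("warning", "Hot surface.", "de-DE"))
# → Warnung: Hot surface.
```

Adding a language then means adding one table, with no changes to the content itself, which is the reuse Bill describes.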

BS:                   Another key aspect is to agree on the workflows involved, and it cannot be develop your content, throw it over a wall, and expect it to come back perfect. There have to be some checks and balances throughout the entire process of developing your content, so that the authors get the feedback they need, should they be doing something wrong, at the time when they’re doing it wrong, and not six months later when they’re grumbling about the fact that edits have come back and they thought they were done with this piece of the work. Having that timely feedback helps home in on the process, making sure that not only are things being corrected, but that things are being built into the process to ensure that those mistakes don’t happen again in the future.

SO:                   Okay. So I’m told that machine translation is going to solve all these problems. And all we have to do is shove our text in and the machine will make it into magically into all the different languages. And off we go. So why aren’t we doing that?

BS:                   I got an Amazon Echo for Christmas and it still does not understand half the things I ask it for. So I’m not putting my money on machine translation if I can’t even get my device to play the correct song that I’m looking for. Machine translation will get you part of the way, but machine translation is only as good as the database it’s referencing and as good as the content going in. Even with those two factors in good shape, you can get very close to 100% clean and appropriate, but you will still have to do some cleanup on the machine translation side after that work has been done. It really does require a person going through, proofreading, and asking the very basic question: is this clear, and does it make sense?

SO:                   Yeah, because I think we’ve all seen some amazing machine translation by which I mean amazingly plausible, but totally inaccurate.

BS:                   Totally. I used to work in translation, and on my desk at that job, I had a collection of little toys and gadgets that I’d picked up along the way. If I was shopping in a grocery store or a toy store with my kids, I’d find some bargain-bin item that was absolutely ridiculous. The copy on the box was just outrageous. The instructions on the inside were absolutely horrendous. And I’d keep those as a reminder, so that when people started complaining about quality of translation and so forth, I could pick up these examples and say, “Well, how do you think that this got out the way it did?” And that’s because no one was proofing behind the work that was being done. It was just being rushed out the door as fast as possible.

SO:                   And that’s, yeah. And I think that’s probably a good spot to leave it. Machine translation has its place, but do you really want a machine translated set of instructions on a medical procedure that people are performing on you? I am going to pass on that one.

BS:                   Yeah. It’s a hard pass.

SO:                   Hard pass. So, well I think we’ll leave it there. Thank you, Bill.

BS:                   Thank you.

SO:                   And thank you to our audience. Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Content ops stakeholders: Localization (podcast) appeared first on Scriptorium.

Content ops stakeholders: Tech stack managers (podcast)
https://www.scriptorium.com/2022/03/content-ops-stakeholders-tech-stack-managers-podcast/
Mon, 14 Mar 2022 12:00:08 +0000

In episode 114 of The Content Strategy Experts podcast, Bill Swallow and Gretyl Kinsey talk about developers and managers of the technical stack as content ops stakeholders.

“Without a gatekeeper, things can go awry very quickly. Other groups can take ownership of a particular piece of the tech stack and then you start to have some issues.”

– Bill Swallow

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our series on content ops stakeholders. This time focusing on those who develop and manage the technical stack. Hello and welcome everyone. I’m Gretyl Kinsey.

Bill Swallow:                   I’m Bill Swallow.

GK:                   And this is part of our occasional series on stakeholders and content operations projects. So on some of our previous podcast episodes, we’ve discussed a couple of different stakeholders, including IT and executives. And you can find those episodes on scriptorium.com or wherever you get your podcast. This time we’re focusing on the technical stack and the people who develop and manage that. And we want to start out by just talking a little bit about how that’s different from IT, which we’ve covered before.

BS:                   So IT is in charge of making sure all the systems involved in the content lifecycle work together and with the rest of the company’s technology. Here, we’re talking about the tech stack developers and managers who are deeper in the weeds, creating the custom plugins, templates, and what have you for that publishing system.

GK:                   So when we talk about the technical stack, some of the things that may be included are system configuration. So if you’ve got authoring tools, if you’ve got a content management system of some sort, if you have publishing engines, maybe some other connected systems like translation or learning management software, then the people managing the technical stack will be responsible for configuration on all of those kinds of things. And as Bill mentioned, there are also things like style sheet and template creation. And of course, there’s also going to be ongoing support and maintenance.

BS:                   And your technical stack developers or managers might include any of the following: an in-house person or team that you have available to develop against the technical stack and to add additional things; outside contractors or consultants who are brought in to do the technical work and then head off and come back again when they’re needed; developers who work for some of your systems vendors, who come in and develop against their own system for you; or some combination of all of the above.

GK:                   Yeah. And we’ve discovered in a lot of our projects with clients that who manages the technical stack often comes down to the industry that you’re in or the type of products you make. It could also be related to things like your company size, your location, or just your general corporate culture. So for example, if you’re the type of company that produces software, you work in high tech, then you might be more likely to have some of those in-house resources for things like developing your custom publishing templates and other parts of your technical stack. Whereas if you are in a completely different industry, you may be someone who brings that in from the outside, and in some cases we have been a part of that technical stack as the outside contractors or consultants.

BS:                   Right. And company size does often come into play here as well. If you have a content team of 50, 100 or more people, chances are you probably have the bandwidth and the knowledge in house to be able to take that responsibility on internally.

GK:                   Yeah, absolutely. So when you are developing a content strategy, what are some of the most important considerations from the perspective of those who work on the technical stack?

BS:                   So one key consideration to keep in mind regarding the tech stack is to have both short-term and long-term plans. Even with the long-term plan in place, it’s important to have the short-term ones in there as well. And this includes everything from setting your goals for the tech stack itself, all the way up through the costs and the time it takes to implement. One of the short-term plans that you might have in place is being able to develop a proof of concept within a particular system. So that would be not only selecting the software and the systems involved, but putting together your information architecture, your content model, all the way down to how you’re going to produce your outputs. So what is driving the transformations involved to get your raw content out into a published format?

GK:                   One other thing to mention on the long-term plans in particular is that a big part of that is going to be content governance. And it’s really important to think about where your tech stack fits into that. Because like we mentioned, one of the things that the people involved in the technical stack do is ongoing support and maintenance. So if you think about governing your content lifecycle, it’s not just the content development, but also, how you’re going to govern the maintenance and the changes made to all the different parts of your technical stack over time.

BS:                   And speaking of maintenance, another key consideration here is that the more you customize your systems in the tech stack, the more maintenance all of those customizations require. So you have to balance the benefits of those levels of customization against the costs associated not only with implementing them, but with maintaining them over time.

GK:                   Yeah. And I think that’s where it’s really important to think about what resources do you actually have available. So we talked about how maybe a larger company or a company that is more tech focused in its industry might be more likely to have some of those in-house technical stack people. And that if you’ve got that available, then maybe you can afford some more of that maintenance cost over time, because you’re going to have more of that continuity if you can do it in house and more of the resources available. Whereas if you don’t have that at your disposal and you are going to be relying more on contractors, then it may not be worth the cost associated with having that higher level of customization.

BS:                   And regarding those resources, you have to keep scalability in mind. As Gretyl mentioned, do you have the people available to you to do all the work? Do you need to have them stop on a particular project in order to work on the tech stack because something might have broken or something needs attention? Or are they a full-time position within your company and they are eager and ready to jump right in and get their hands dirty? Likewise, there’s a scalability of cost involved. So if you are working with outside resources, you have to keep that in mind that every time you need someone to come in and fix something, you need to be able to budget for that need.

GK:                   Yeah. And one thing I like to think about with scalability as well, which circles back to that first point we made about your short and long-term plans is that a lot of times when people are coming up with a content strategy, the scalability angle they’re thinking about is how will this grow over time with the growth of the company? So that’s one thing to think about is how does your technical stack grow and change with that? Can it grow? Can it scale? And do you have those resources available? And if you don’t right now, how can you plan for the future to make sure that you do?

BS:                   And with that in mind, I guess it’s good to ask, what type of collaboration is important between those who are managing or developing the technical stack with other stakeholders?

GK:                   Yeah. So if you are a content creator, it’s really important to talk to those involved with the technical stack about your delivery channels. So which ones do you need right now? This kind of gets to that short-term planning. Which ones do you need to add later? And that gets into your long-term. And again, how much do they need to be customized? What kinds of delivery are you looking at? What kinds of output formats? How do those need to be styled and formatted? And what is it going to take to get all of that up and running and keep it going in the future?

BS:                   Right. Because there is a significant difference in the level of effort of being able to include someone within a particular publishing run, let’s say you’re producing content for a new group within the company, it’s very easy to produce their content once it’s in the correct source format into the target delivery formats that you have available. But if they need a brand new system, even if it’s just another portion of the website where you’re potentially publishing content into a slightly different format. So if you’re doing a knowledge base for your content currently, and they want to do something that’s a little more marketing driven or is highly customized for a specific audience, it’s a completely different consideration that you need to keep in mind.

GK:                   Absolutely. And I also want to point out that when it comes to content creators and technical stack folks collaborating on maybe a new content strategy or a new direction for your content, that often involves a shift in the way that both teams have been working. So it’s really important to keep those discussions going constantly, keep them productive and make sure that you don’t have this separation or siloing off of those two groups, because it’s really, really important that they work together.

BS:                   Right. And not just work together within a one particular environment, but if they have a separate environment that also can work within your tech stack, you want to be able to keep that in mind as well. So if they are really accustomed to working in a specific tool set, if it somehow splices into the tech stack beautifully, then great. If it doesn’t, then you have a bunch of other considerations to deal with, everything from being able to secure funding for new tools, to secure training and potentially a lot of conversion of their existing content in order to get into that shared environment.

GK:                   And along those lines, I think this applies not only if you’re a content creator, but also if you’re someone in IT: it’s important for those two groups and the technical stack people to have a lot of discussions about the systems that are there to support the content lifecycle. And in particular, if you are choosing any new technologies to be part of that content lifecycle, it’s really critical for content creators, for IT people, and for your technical stack people to be heavily involved in the decision-making process of choosing whatever technology is going to be the best fit for the company, because each group is going to bring in different, but equally important, considerations about how that technology needs to work.

BS:                   And I would say on the management side and the executive side, you want to make sure that you have these conversations with those who are managing the tech stack about the real cost of support. So what types of development efforts take the most time or require the most expertise to complete? Whether outside expertise is also needed in addition to internal expertise. And whether or not you do have additional costs beyond that. So additional tools that may be required to implement a particular thing into the tech stack.

GK:                   Absolutely. So what are some other things that are important to keep in mind regarding the technical stack and the people who work on it?

BS:                   So I think first and foremost, if your technical stack developers and managers are not in-house resources, then there needs to be a transition plan and some training for any in-house people who would be taking that role on afterwards, whether it’s doing the deep development or just managing the systems and being able to keep an eye on things.

GK:                   Yeah, absolutely. In a lot of the projects that Scriptorium has worked on, one of the goals we try to get companies to is that, if they don’t have those in-house resources at the start, they train them up over time so that they do have them and can eventually take on their entire content lifecycle. And that does take months or even years, especially depending on how much expertise you do or don’t already have.

GK:                   But that is something to really keep in mind when you first start planning your new content strategy and you get into those earliest phases of getting everything stood up and in place, don’t leave out the training because that is really, really important. And sometimes the training, as Bill said, can often be kind of more of a transition or an ongoing thing. So I know that with some of the projects we’ve worked on, there will be an initial training when you get a tool stood up and then there will be some ongoing sessions, maybe once a week or once a month for the next six months to a year after that, just to make sure that everyone involved knows how to use it and you’re not just kind of turning people loose with this new tool.

BS:                   Right. And also, in that same line, making sure that there’s plenty of clear documentation about things that authors should and should not be doing with their content or doing with the tech stack specifically. It usually goes back to documentation about the content model and so forth. But there need to be some expectations set that even if the content model allows for a new approach to authoring something, it may not be supported downstream or deeper in the tech stack. And it may have unintended results in the final output.

BS:                   So being able to identify exactly what people should and should not be doing with regard to the technology and the way it’s set up, because even if you have a tool in your tech stack that can do something very specific, it may not be set up to do that out of the box. And you want to make sure that no one is injecting anything that’s going to break things down the line.

GK:                   Yeah. And we always say, “Just because you can do something doesn’t mean you should.” So it’s really important, again, to just have that collaboration that we talked about between the content creators and the people who are in charge of managing the technical stack. Making sure that there aren’t these kind of miscommunications and that you don’t have some content creator just saying, “Why don’t we try this because we actually have the technology to support it?” Maybe you do, but maybe it would involve a whole lot more cost than you realize, or maybe it would involve a whole lot more time to get that stood up.

GK:                   So again, that’s why it’s just really, really important to have those ongoing discussions, to have that documentation and to have a plan so that if you do ever need to change what you’re doing, you do maybe need to add some new thing that you can do in your content or some new delivery channel or output type that you don’t just start adding that ad hoc, but you actually have a plan for that to go through to make sure that it can be supported.

BS:                   Mm-hmm (affirmative). And likewise with doing any updates to the technology that’s in your tech stack itself. It may be that a new version of one particular tool or one particular technology comes out, and you want to upgrade to the latest and greatest. You need to step back and take a look at the entire ecosystem that you’ve put together and make sure that that one change doesn’t trickle down into multiple problems elsewhere in the tech stack. You need to understand how all of the pieces fit together and where those dependencies are, because one small change can equal a lot of change across the entire stack.

GK:                   Definitely. And I think one really important thing to keep in mind when it comes to managing all those dependencies and when it comes to the idea of content governance and content lifecycle governance that we talked about before is that it’s important to have a single point of contact who is in charge or responsible for all of this. So even if you don’t have that in house, it’s really still important to at least designate someone who can own that process, who can understand how all of the different pieces and parts of your technical stack fit together and who can be the gatekeeper, who can be the person in charge of that level of governance. So they can collect change requests. They can communicate that to all of the stakeholders internally. They can collaborate externally with any of your technical resources that you have contracted out. And they can just be the person who keeps everything running smoothly.

BS:                   Right. Regardless of whether you’re in-house or external, you do need that in-house person who can keep tabs on things. Because without that gatekeeper, things can go awry very quickly and other groups, other people can, with all good intentions, take ownership of a particular piece of the tech stack and then you start to have some issues, perhaps with some changes that are being made on one side that aren’t being reflected on another side of the tech stack.

GK:                   Yeah. And we have absolutely seen that happen with some of the companies we’ve worked with; that may even be the reason they brought us in in the first place: they’ve got this kind of disconnected technical stack and they don’t have that one single point of contact managing everything. So it really, really is critical that you don’t end up in a situation where you’re adding pieces and parts to your technical stack and then things don’t work together, don’t mesh well, and don’t serve your content lifecycle overall. And with that, I think we can go ahead and wrap up this discussion. So thank you so much, Bill.

BS:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Content ops stakeholders: Tech stack managers (podcast) appeared first on Scriptorium.

Trends for techcomm managers (podcast)
https://www.scriptorium.com/2022/03/trends-for-techcomm-managers/ (Mon, 07 Mar 2022)

In episode 113 of The Content Strategy Experts podcast, Sarah O’Keefe and Dawn Stevens of Comtech discuss trends that are of interest to techcomm managers.

“We have an aging technical communicator community. We’re not necessarily attracting the younger generation. UX designer sounds more modern and interesting.”

– Dawn Stevens

Transcript:

Sarah O’Keefe:                               Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk with Dawn Stevens of Comtech about trends that are of interest to techcomm managers. Dawn, hi, thanks so much for being a guest on the podcast today.

Dawn Stevens:                               Hi, Sarah, thanks for having me.

SO:                               Absolutely. So to get things started, for those of us who don’t know, tell us a little about yourself and Comtech and also the Center for Information Development Management.

DS:                               Sure. Again, I’m Dawn Stevens, and I have been in technical communications basically my entire career, which is now well over 30 years. I’m one of those few people who started off saying, “I think I’ll be a technical communicator.” And so I went to school for it and have been working in it my entire career. I was fortunate early on to find Comtech, and I’ve actually worked at Comtech twice. I worked for JoAnn for 10 years in the ’90s. Then I left because my children were small and I didn’t want to travel as much, and then I came back after my youngest went to college; I’ve been back since 2010. So I’ve been here at Comtech a total of well over 20 years, and I purchased it five years ago now, if you can believe that, Sarah. Five years.

SO:                               Yeah.

DS:                               So Comtech is really a competitor of Scriptorium; your introduction applies here too: “Yeah, that works for Comtech as well.” We’ve been in existence since ’78, I believe, which is when JoAnn formed it. And then about, oh, 25 years ago, she started the Center for Information Development Management, which is an organizational membership, largely for managers, to talk about concerns that have to do with managing technical communications and the challenges associated with the unique people that you manage and the people that you have to work with in terms of stakeholders and so on. And that membership organization sponsors conferences and things like that.

SO:                               Yeah. And so in my mind we’re friendly competitors.

DS:                               Yes, that’s right.

SO:                               Because every once in a while we bid on the same project and one of us gets it and the other one doesn’t, but what I think is more important is that we as business owners and all of that, we have a lot of the same issues and challenges. So I really value having access and getting to talk to you and people like you in that peer group about all of our mutual pain and suffering.

DS:                               Absolutely.

SO:                               So with that position that you have as an industry consultant and also with CIDM, I wanted to ask you about trends, like what’s happening in techcomm that’s of interest or maybe the techcomm managers are terrified about, and I guess we really have to start with hiring, right?

DS:                               Yeah, absolutely. It’s an interesting time. The last couple of years, with the pandemic and some of the changes that have been general across industries, that whole quote-unquote “Great Resignation” is really impacting us. I would say there have definitely been challenges for managers as people are leaving; people are not necessarily leaving the industry, but redistributing, which is what I’ve seen a lot within my clients: oh, there are greener pastures over here. There’s a bit more competition, I guess, for getting people. And I’ve got a lot of people who keep coming to me and saying, “What do we do to attract people?” And there have been some interesting challenges associated with: well, what are we looking for? What kinds of people should we be looking for? How do we make the industry as a whole more attractive?

SO:                               Yeah. And redistribution or resorting maybe is a really interesting point because people aren’t… Some people are leaving the industry across the board, not just ours, but a lot of people are just going from the big company to the small company, or they’re moving up in the world or they’re leaving the company that won’t let them continue to work remotely, or they’re leaving the company that isn’t going back to the office.

DS:                               Right.

SO:                               There are some people out there who think offices are fun and who want an office as opposed to working out of their house. So within that, are people moving into certain kinds of specialist positions or generalist positions? What does that look like?

DS:                               Yeah, I think that’s actually the key piece of this redistribution or resorting. There’s been, I guess, a cycle that I have seen over the years: do we have generalists, technical communicators who can basically do everything? You write, you index, or you create a taxonomy nowadays. You have to be able to deal with your formatting, in some manner you have to create your own art, all of those types of things; as a generalist you need to be able to do everything. Versus: there are specific areas within technical communication that maybe interest you more than all of these other things, areas you’re better at. If you ask me to create an illustration for a technical manual, you’re going to be very disappointed.

DS:             So people don’t have all of those various skills, and one of the things with this resorting that I’m really seeing is: do we need to specialize more? With things like structured authoring a decade ago, these questions started coming up: oh, do we need an information architect? Do we need content strategists, etc., as specific positions, or is that something that everybody should just be able to do? And what I think I’m seeing more and more is no, we’re back into that trend of needing the specialists. We need somebody who absolutely understands content strategy or information modeling or information architecture, whatever you want to call it, to really think about what our goals are, what kinds of content are going to meet the needs of our particular customers, and how we structure and design those particular things.

DS:                               And then somebody else who can write those things, and somebody else who can program those things or film those things, or whatever those things happen to be; somebody who’s an expert in SEO, in how people can find that content. So I’m seeing more and more of: I need to find people who have these specific skills. And that’s a challenge when you think about budgeting. Budgeting tends to be head count: you can have 10 people, whatever it happens to be. And the idea that, well, of my 10 people I’ve only got two who actually want to be writers, and then somebody who wants to be a strategist for this, and an expert in that; that can be a big challenge for the managers.

SO:                               Right. And now your team of 10 with your one information architect, the information architect quits, and the others are all specialized into not IA, and now what do you do?

DS:                               Right.

SO:                               So there’s a risk. Yeah, I’ve always kind of associated generalist versus specialist with small company versus big company. If you have one writer or two, or one or two technical communication people, they’re going to have to be generalists because you’re not getting an illustrator or an editor.

DS:                               Right.

SO:                               But it does seem as though some of these bigger groups are swinging in some ways towards, well, we do want some of these, we’d like to have overlapping skills and we want to have the ability to take you or me and assign me to a new project that I’m not so specialized that I can only write this one thing. Now, how do some of those newer titles fit into this? I’m hearing UX writer, I’m hearing content design. The other day I actually saw somebody that said, “What do you call like a UX writer for technical content? What would that be?”

SO:                               And I thought, “Well, that would be a technical writer.”

DS:                               Exactly. “Oh, well, that sounds so boring or unappealing in some manner.” And I’ve always laughed about job titles to a certain extent. At one point in my career I just said, “Call me the scope change goddess,” because I was doing so many projects where the projects kept changing. I’m like, “That’s my new title.” So titles seem like they shouldn’t be that important, and yet they are, and what you do is certainly associated with that title. And therein is what a lot of people in CIDM are talking about: “Do you have any good sample job descriptions for what it is that an IA does or a UX writer does, those types of things? And what distinguishes them from what we’ve always called a technical writer?”

DS:                               Are there special skills that make you a UX writer as opposed to just a technical writer? And I think there are potentially aspects of that, certainly: understanding, in a UX situation, space and how things fit together and how the eye goes through an interface, those types of things. So I think there are probably some special skills that you might call out. I don’t know that that means the technical writer didn’t have them in the first place, but in terms of what you’re emphasizing in one of those particular job descriptions, I guess there’s an emphasis more on a specific title.

SO:                               Yeah, and it is obviously short form, shortest possible form writing if you’re doing-

DS:                               Absolutely.

SO:                               Strings that go in a software application, versus long form: I’m going to explain to you everything that you need to know about relational databases. But how do you do that? I guess they’d be subspecialties, right?

DS:                               It could be. There’s an interesting thing that just occurred to me about the UX part, the user part of it. We talk a lot (you’ve talked about it, I’ve talked about it) about how the success of a technical writer comes from understanding their audience and who those users are. And yet I still see that struggle happening a lot: technical writers oftentimes in an organization are banned from talking to users. No, you aren’t allowed to talk to them; I don’t know what the fear is per se, but you don’t talk to them, only these types of people do. And oftentimes that user part of a title, the user experience, gives you maybe that permission to talk to people. Does that imply skills that are potentially different? We’ve seen a lot of presentations, certainly, about how technical writers tend to be more introverted.

DS:                               I know JoAnn, for a long, long time, gave a Myers-Briggs test to every single person she could get her hands on in the industry, and said, yeah, mostly they tend to be introverted. Maybe they don’t want to go out and talk to their users and so forth. They’re just happy sitting at their computer and writing. And there’s maybe that implication that if you’re a UX writer, there’s more of that, “Hey, you need to go out and understand what your users really need.” I don’t know that I want to draw that line, but it’s something that just occurred to me as we were talking.

SO:                               Yeah. It also feels to me, based on really no evidence or research whatsoever (we should clarify that), that UX writer is the new title. The old, old title was technical writer, and then we had technical communicator, and we’ve had information developer, and we’ve had some other things like that, API writer maybe, and the Write the Docs people will talk about documentarians. But UX writer feels like the cool new thing.

DS:                               It does, and I think that’s an important aspect. We have an aging technical communicator community; people have talked about that in the past. We’re not necessarily attracting the younger generation, and something that sounds a little more cool, like you said, being a UX designer, maybe sounds more modern, sounds more interesting. I’ve talked to a variety of young people; I’ve asked some of the people in our industry who are younger, why is it that our industry is aging? And it’s been interesting to hear them say there is that perception of, well, technical writing doesn’t have a big impact on the world, and a lot of this new generation wants to make an impact, make sure that they’re saving the world in some manner. They joke about areas of waste: “Oh, good God, if you print something.” Nobody wants to be associated with creating printed content; that’s seen as a big waste.

DS:                               And there’s even just the idea that, from a writing perspective, you create manuals. Well, does anybody really read those manuals? At least with UX design, they’re using the interface, so you’re having some kind of an impact on a person. And what I’m really hearing (and I’m not saying that I didn’t want to make an impact when I was young, either) is something I definitely seem to hear more than ever: I don’t want to write something that nobody ever looks at; I want to have an impact on people and make a difference in their lives.

SO:                               Yeah. So basically we’ve had such bad press for, I’m going to say, forever. Nobody reads the docs, et cetera. I will say the best definition of technical communication that I ever saw actually came from Tim O’Reilly, who said that the purpose of technical content, of technical communication, is to enable people to use the product successfully. And so it turns into this: it’s just like good editing; if you do it well, it’s invisible. People rarely say, “Oh, wow, that was a really fun experience, reading a five-step procedure about how to do a thing.” They just successfully get their washing machine to turn on or drain or reset, and they move on with their lives. Yeah, I don’t know what that says about us as a group, other than I know that we are super, super terrible at marketing ourselves.

DS:                               Sure, sure.

SO:                               Horrendous, the worst. And there’s a whole podcast in there about why that is. But if you ever have a chance, go to a conference that is only tech writers, immediately followed by one that is only marketing people, and reel at the difference. It is incredibly entertaining.

DS:                               Yeah. Well, I like the definition that you gave there; we could market it more that way, that we are enablers. But I think back to that UX design title and everything else: there is that ideal, which has existed since long before you and I entered the scene, that we should be creating products that document themselves. And isn’t that part of the implication of, oh, if I’m a UX designer or UX writer, I’m helping to move in that direction?

SO:                               Well, that’s fine. But then I look at the research that says that 20% of product returns are because people can’t figure out how to use the product. So yes, the products should be obvious and intuitive and self-documenting, and self-healing and all the rest of it, but they’re not.

DS:                               And that gets to one other aspect-

SO:                               It’s just not.

DS:                               I think it gets to one other aspect of hiring, or of skills, that managers really need to be thinking about: what is the relationship of the writer to the product designer? Oftentimes as a writer, you’re trying to document something and make it sound like a feature, or at least make it usable. And you can immediately see where it would be a whole lot easier to write this, or we wouldn’t even have to write it, if you could just make this one tweak to the way you’ve designed it. But we oftentimes see ourselves in this position, whether it’s the way we’ve been hired in as a technical writer, the way the corporate culture is, whatever those factors are: we’re not the expert. The people we deal with are the subject matter experts, and we’re just the writer. And I’ve spent so much time with my clients coaching them to the idea of, no, you’re not just a writer. You are the writing expert or the user expert; let’s put the word expert into our title as well.

DS:                               And yes, we’re dealing with a subject matter expert who knows what they did, what they have designed for the product, but that oftentimes needs some tweaking, and can we build that relationship between them? And from a hiring perspective, that is something to take a look at: how confident are the people that you’re hiring? Will they speak up? If they have a seat at an agile development table or that type of thing, would they say, “I think we could improve this, and that would save me 20 pages of writing”?

SO:                               I think that’s a really good place to leave this. So I’ll be curious to see what the people listening to this come up with in terms of feedback, because I think you and I have some strong opinions on where this is going and why it’s going the way it’s going. So all of you out there, speak up; we want to hear from you, let us know what you think, and we might need to do a follow-up on this one depending on what comes back. So Dawn, thank you. I’m going to wrap things up here. Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Trends for techcomm managers (podcast) appeared first on Scriptorium.

Content scalability (podcast)
https://www.scriptorium.com/2022/02/content-scalability-podcast/ (Mon, 21 Feb 2022)

In episode 112 of The Content Strategy Experts podcast, Elizabeth Patterson and Bill Swallow discuss content scalability.

“As you start approaching a greater percentage of bells and whistles in your process, the more work it takes to get each bell or whistle in place.”

– Bill Swallow

Transcript:

Elizabeth Patterson:                   Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about content scalability. Hi, I’m Elizabeth Patterson.

Bill Swallow:                   And I’m Bill Swallow.

EP:                   And we’ll go ahead and kick things off today with a question. Bill, what does it mean if your content is scalable?

BS:                   Well, scalability basically means that you can increase the volume of your content, deliver to multiple different channels and add new channels as needed, translate into more languages and extend other facets of how you are developing and using your content without really bottlenecking the entire process of content production.

EP:                   Great. In order to have scalable content, you have to remove points of friction from your content life cycle. How can you go about identifying those points of friction?

BS:                   Well, one point is whether or not you have things essentially locked down: things like templates or some kind of underlying structure that enforces rules on how the content is being developed. Beyond that, it’s making sure that you have some pretty rigid (I shouldn’t say too rigid, but rigid yet workable) processes in place, and things that are repeatable.

BS:                   So that when it comes to developing a new piece of content, you aren’t necessarily starting from scratch; you have a game plan for getting from point A to point Z without stumbling and without adding anything that goes unhandled by someone else in your content chain. Another area to look at is how reusable your content is and how smart your reuse process is. Are you copying and pasting across places, or do you have some kind of intelligent reuse via some kind of reference?

BS:                   In the former situation, where you’re copying and pasting, you have to kind of guarantee that any time you reuse that content, it is written in a way that is reusable. If you modify that language, then you suddenly have a discrepancy with where it’s used in other places. Likewise, if you have to update the information, you have to update it in every single place where you’ve reused it or copied and pasted it. If you’re using intelligent reuse, that gives you a lot more flexibility.

BS:                   You can essentially reference one piece of content exactly how it’s written and use it wherever you want it to appear. You can also do a little bit of work with conditional text, variables, and other types of things to make the content unique for where it’s being used in any one instance, but you’re still reusing a singular written piece of content across multiple places. You’re not duplicating it. Another one is to look at the publishing process and how hands-on that process is.
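The reference-based reuse Bill describes can be sketched in a few lines. This is a hypothetical illustration (invented topic IDs and documents, not a real component content management system or reuse mechanism): one edit to the shared source updates every document that references it.

```python
# Minimal sketch of reference-based reuse vs. copy-and-paste.
# A tiny shared content store: topic id -> text.
topics = {
    "warn-power": "Disconnect power before servicing.",
}

# Documents reuse content by reference instead of duplicating the text.
doc_a = [("ref", "warn-power"), ("text", "Remove the rear panel.")]
doc_b = [("ref", "warn-power"), ("text", "Open the access hatch.")]

def render(doc):
    """Resolve references against the shared store at publish time."""
    return " ".join(topics[key] if kind == "ref" else key for kind, key in doc)

# One edit to the shared source updates every document that references it.
topics["warn-power"] = "Disconnect power and wait 60 seconds before servicing."
```

With copy-and-paste, that same fix would have to be repeated by hand in every document, and any missed copy becomes a discrepancy.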

BS:                   If you are manually creating page flows, if you are making highly dynamic changes between different pages, moving images around, and so forth, your production process is likely not terribly scalable.

EP:                   Make it a little more difficult.

BS:                   Exactly. It makes it a lot more difficult. It takes a lot more time to produce. You might have something that looks extremely polished in the end, but it takes you many, many, many hours to get there.

EP:                   Right.

BS:                   On the flip side, if you have something that’s completely automated, as long as there are rules in place as to how the publishing process goes and how things are formatted as the publishing process is going, it’s a completely push button operation, in which case your content velocity for publishing has skyrocketed.

EP:                   Right. You might not have all the bells and whistles if you are taking a more hands-off approach. But if you’re doing everything yourself manually, it’s just not scalable.

BS:                   It isn’t. No. It’s not to say that you won’t have the bells and whistles, but in any kind of automated situation, the more you bake into the automation, the more work it takes. Basically, as you start approaching a greater percentage of bells and whistles in your process, the more work it takes to get each bell or whistle in place.

EP:                   Right.

BS:                   Another area where you can remove a lot of friction is in your localization process, and that really comes down to how content is translated, how content is made available for translation, and essentially how you’re baking internationalization practices into your content development. The more that you have baked in at the beginning using good internationalization practices, the easier the localization process, including translation, will be.

BS:                   This way, you are setting yourself up to use a lot of reusable pieces and to reduce the overall number of unique words and so forth that you need to translate.
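One concrete internationalization practice of the kind Bill alludes to is avoiding sentences assembled from concatenated fragments, which break under translation because word order differs across languages. A minimal sketch, with invented example strings (the German text is illustrative only):

```python
# Hard to translate: the sentence is assembled from fragments, so a
# translator never sees the whole message, and word order can't change.
fragments_style = "Click " + "Save" + " to store the file."

# Better: one complete message per language with a named placeholder,
# so each translation controls its own word order.
messages = {
    "en": "Click {button} to store the file.",
    "de": "Klicken Sie auf {button}, um die Datei zu speichern.",
}

def localize(lang, **params):
    """Look up the complete message for a language and fill placeholders."""
    return messages[lang].format(**params)
```

The complete-message form also makes strings reusable across contexts, which reduces the number of unique strings sent out for translation.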

EP:                   Right. Identifying these points of friction and removing them is going to take a little bit of time, but it is essential to give you that scalable content. I want to shift focus now a little bit to web publishing. Are there any scalability issues when it comes to web publishing?

BS:                   Well, in terms of scalability, especially when it comes to technical content, web publishing can get a little hairy. The sheer volume of content that you’re producing could pose some problems. The technical content that you’re publishing through to let’s say a web CMS is highly templatized, highly standardized usually, and it’s very massive in scale. It’s kind of like drinking from a fire hose at that point. Traditionally, when you’re publishing on the web, usually pages are crafted one at a time or in small batches.

BS:                   Let’s say you’re doing a small support site, or what have you. Those pages, they might be templatized and you may have some ways of importing content into them. But by and large, they’re created manually. But when you’re talking about publishing a massive reference, for example, some kind of an API reference or product manual or what have you, you could be talking about hundreds, if not thousands of generated pages.

BS:                   It takes a very different approach to staging that content for the web than using a traditional web development mentality. The entire web system for that particular guide, for example, is generated all at once. There’s really no way to go in and hand massage things on the fly. It’s all being generated at once and ready to go.

EP:                   Okay. When we’re talking about scalable content, how exactly does the review and editing process work?

BS:                   Review and editing happens way behind the scenes. A page-by-page review is fine, and you can certainly do that with the output that’s being generated from this collection, but you’d be looking at hundreds or perhaps thousands of pages at once to do that type of review. A lot of the review and editing really needs to happen on the source side and needs to be finished before any publishing begins. Once any fixes are implemented, the output can be regenerated.

BS:                   This is true for really all output types when you’re talking about pushing out especially to multiple different channels at once. Whether it be PDF or web or some kind of API related repository, or what have you, all of that content is generated at once. If you need to fix it, you go back to the source and do a review cycle within the source before you get to that publishing stage.

EP:                   Okay. You mentioned earlier drinking from the fire hose. I want to come back to that for a minute. How do you best prepare for the fire hose of content?

BS:                   I like how you phrased that. When I talk about the fire hose, I mean, yes, there’s a lot of content going through. It’s not really an issue as far as publishing things like PDFs, because you may have a fire hose of content going through the publishing process, but in the end you still get a PDF file. There are some big considerations for publishing to the web, though. You really have to have a framework available for publishing a massive amount of content all at once. You have to have the right targets lined up.

BS:                   Where is this content going to live? Is it going to get pushed to a staging area that then gets moved out into some kind of published area? Do you have a direct publishing pipeline, where as soon as you click the generate output button on whatever you’re using, it generates the output and you can then go online and view it on your website? You need to think about how that’s going to work and what pieces need to happen in order to get the content to the right place for that web server.

BS:                   You also have to have the right metadata in place, both in the content and in the web CMS, to make sure that as content is being received and generated, it’s being assigned the right metadata, for search, for personalization, and really any other way the content is going to be used on the site. If you have, let’s say, a customer portal and everyone has their own login, they’re probably assigned a certain user group. They’re probably assigned other metadata, such as their client name.

BS:                   You can provide easy access to the products that they have versus the products that they don’t have, so that they are free to just search your repository and pull back all the results that pertain to what they own versus what somebody else owns, and other things that facilitate how the content is going to be used on the web. You also have to make sure that where this content is being published and shown on the web, it has all the right UI elements built around it. You might have some kind of…

BS:                   Frameset is a bit of an old word, but some kind of a wrapper UI that might have a certain type of branding around the content, in addition to the content itself. Perhaps another layer of UI elements, buttons, fields, and so forth that they can use to refine a search, or even a search console that they can use to search through the content that’s being provided to them. There are a lot of things to really think about, and you need to line all of that up before you push that content out to the web.

BS:                   Otherwise, you might have a rather unruly mess of files to then go ahead and wrangle and apply each metadata piece and each personalization piece and assign other aspects of the web experience to each individual piece of content.
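The metadata-driven personalization Bill describes can be sketched as a simple filter: each generated page carries product metadata, and the portal returns only the pages matching what the logged-in customer owns. All names here are hypothetical; a real web CMS would do this through its own search and tagging facilities.

```python
# Hypothetical pages, each tagged with product metadata at generation time.
pages = [
    {"title": "Widget 100 Install Guide", "product": "widget-100"},
    {"title": "Widget 200 Install Guide", "product": "widget-200"},
    {"title": "Widget 100 API Reference", "product": "widget-100"},
]

def search_for(customer_products, pages):
    """Return only the pages whose product metadata the customer owns."""
    return [p["title"] for p in pages if p["product"] in customer_products]
```

Without that metadata assigned up front, the same filtering would have to be retrofitted onto each page by hand, which is the "unruly mess of files" scenario Bill warns about.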

EP:                   Right. Definitely take the time and make sure you have those elements in place and things set up correctly so that you’re prepared for this.

BS:                   Exactly. Measure twice. Cut once.

EP:                   I think that is a really good place to wrap up. Thank you, Bill.

BS:                   Thank you.

EP:                   And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Content scalability (podcast) appeared first on Scriptorium.

The rise of content ops (podcast)
https://www.scriptorium.com/2022/02/the-rise-of-content-ops-podcast/
Mon, 07 Feb 2022 13:00:11 +0000

In episode 111 of The Content Strategy Experts podcast, Sarah O’Keefe and Rahel Bailie of Content, Seriously discuss the rise of content ops.

“If you want a better user experience and more customer loyalty, you need accurate content.”

– Rahel Bailie


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. My name is Sarah O’Keefe. And I’m your host today. In this episode, we discuss the rise of ContentOps with Rahel Bailie of Content, Seriously. Rahel, welcome. I’m so happy to have you here on the podcast.

Rahel Bailie:                   Well, I’m delighted to be here on the podcast too. I thought you’d never ask.

SO:                   And here we are, finally. So yeah. I mean, I know who you are, but for our listeners, tell us a little bit about yourself and about Content, Seriously, and what you’re doing here.

RB:                   I wish I had done such a professional job as you did on introducing. So I’ve been doing ContentOps since probably 15 years ago. I’ve found references in my old slide decks to content operations, except nobody knew what it was. And that kind of went in one ear and out the other. So for many years, under the rubric of content strategy, I’ve been advocating for content operations to do things more efficiently. I was a consultant from 2002 until shortly after I came to the UK. And now that I’ve got my citizenship, I went back to consulting. It seems to suit me the best. And I’ve worked in all areas, from technical writing and very technical writing to guidance writing, to marketing writing over the years. And once I went into consulting, I turned my talents to diagnosing client situations and finding more efficient ways for them to produce their content.

SO:                   So how do you define ContentOps? I mean, it’s been out there as you said for a while, but I think you’ve got one of the sort of cleaner definitions of what this is. So what’s your definition of ContentOps?

RB:                   So I’ve been refining it and refining it. And right now, it’s refined to the statement that ContentOps is a set of principles. And I think that’s important. It’s principles that we use to optimize content production and to leverage content as business assets to meet business objectives. It’s all about efficiency.

SO:                   And so what are some of the basic things that would drive an organization towards ContentOps?

RB:                   So I have a theory that there are six kind of meta business drivers and everything else is a subset of that. So if you want to reach one of these business goals, you’re going to need some sort of operating model that is slick and clean and efficient to be able to do that. So out of those six, one is reduced time to market, and reducing time to market means producing content in a better, faster way. Another is expanding your reach. As soon as you go into other countries, now you have localization issues. And if you don’t want to break the bank with your language service provider on translation costs, you need to get your source content in order. Then there’s risk management: compliance, regulatory, all those things.

RB:                   If you don’t want to get sued or shut down or whatever is the case in your industry, you want to have that all together, you need a good operating model. The next one would be a better user experience. And if you want better user experience and more customer loyalty and so on, you need accurate content. So you need content that comes from the same place. So you’re not duplicating it. And then having to maintain all those duplicate copies, which comes under content operations. And there’s a couple others, but you get the idea that anything that you do that involves having a content component, you want to manage your content really well because otherwise you’re going to be lost.

SO:                   So it’s almost like maturity, right? It’s a mature content development process as opposed to this, just throw some stuff up against the wall and then copy and paste it over here and then copy and paste it again. And did I mention copy and paste?

RB:                   Anything that says copy and paste, or I track it in a spreadsheet. Exactly, right? So I’ve seen places that had over 50 spreadsheets, and the guy who was supposed to be the manager, all he did was manage spreadsheets. There was another company, a retail chain selling online, and they’re out of business now, not surprisingly. 99 spreadsheets to manage their content. It was ridiculous. So this idea of being able to do things more efficiently. Can you imagine on the code side having, I don’t know, a hundred developers sitting around, all writing their own spaghetti code, copying and pasting it all over again and forgetting to change the version number and all those things that happen?

RB:                   Well, that’s what’s still happening in content in a lot of places. And I get told by people, oh, can you go and see what cool Company A is doing for content? And I’ll say, well, I just happened to speak to someone from there last week or last month or whatever, and they’re coming to me because it’s a <bleep> show. So even though they’re out there, we have our book and we have our method and we have our whatever. It doesn’t apply to content.

SO:                   Yeah. People come to me a lot and say things like, what CMS should I buy? Or what CMS has the biggest market share? Who should we pick? And what they want me to tell them is, oh, this one is doing really well in the market. And depending on my mood of the day, my default answer when they say what CMS has the biggest market share, my default answer is actually Excel.

RB:                   Yes. Because as Jeff Eaton said in a discussion I had with him recently, technically, that’s a headless CMS because it’s a different rendering engine. If you put it into PowerPoint, PowerPoint is the CMS.

SO:                   We don’t use bad words like PowerPoint on this show.

RB:                   No. You told me about all the four-letter words I could not use, but PowerPoint has more than four letters.

SO:                   I’m sorry. I thought PowerPoint was implied. The session you’re talking about was a webcast on headless CMS that you did with Jeff Eaton. And we will add that to the show notes so that people can find it. I wanted to ask you why now? And because you’re absolutely right, you’ve been talking about ContentOps for a while, and now it seems as though this concept or this buzzword or whatever is gaining traction. So why? What changed in the market? Why is the market ready now to talk about ContentOps?

RB:                   Okay. I’m going to answer this in two parts, and the first part is very brief. If you go back to the early 2000s, who had content problems? I remember Cisco had a guy go in, and they said they had over a million pages and it was a complete mess, because everything was just pages done individually and thrown up onto the web. And then they had this million pages and they had to have someone come in and organize them and put together a taxonomy and whatever. So unless you were a huge SAP, Cisco, whatever, you didn’t have a content operations problem, really, because you had a 10-page website, maybe. Now, there’s a company called Gather Content, who said that when they were first getting into this business, they were creating this piece of software where people could kind of park their content until the website was built.

RB:                   And they built their software to handle 20 to 200 pages. And next thing, within a few years, they’re being asked to support 20,000 pages and people aren’t using it as a temporary stopgap anymore. So they had to redo their whole code base to make this more robust. So when you look at that kind of, oh, we went from 20 pages to 200 to 20,000 and 200,000, you can see how the scale has increased greatly. And the complexity too: take the iPhone. Just because there’s an iPhone, what’s the latest one, 14 or something? That doesn’t mean you can ditch all the support material for a 13, 12, 11, 10, nine, eight. You still have to have it out there. So how do you do all this multichannel publishing and omnichannel, and now we’ve got conversation design content and all sorts of content genres that didn’t exist.

RB:                   And they all have to work together. And one of the things I do with my students at the university is, we have a course called content and complex environments. And I create eight teams, three people each, eight teams, 24 students, great. They all go and produce a little piece of content towards a fictional product. And then I bring them back together and they have to coordinate everything. And they say it’s so hard. And it’s not creating the content, it’s coordinating with seven other teams. So if you take this and you multiply that out into any content environment, you get complexity and you get the need to have a tight operating model. You can’t take the operating model for software development and apply it to content. It’s not the same thing. You can’t take data ops and apply it to ContentOps. So you have to come up with your own efficient way of working.

RB:                   And that’s why it’s now because we’ve reached that, is that peak dirt, they used to say? We’ve hit that pinnacle of like, oh my gosh, my stuff is everywhere. We are breaking all the rules and whatever those rules may be in your particular industry, regulatory rules, or we racked up content debt. We don’t have the quality. We are not checking accuracy. We don’t have time. We don’t have time. We don’t have time. And now they’re saying, okay, well, we have to get more efficient than this. This copy and paste stuff has got to go.

SO:                   So looking back on this, when you look at where we are right now with ContentOps versus some of the stuff that you were looking at a while back, 10 or 15 years ago, when you look back, has anything changed? I mean, I know your definition has changed a little in that you’ve refined it or tightened it or whatever, but has ContentOps itself changed over the past 10, 15 years?

RB:                   Yes. In a couple of ways. So one is around tooling. If you look back 15 years ago, we barely had any tooling, production-grade tooling. So right now, even today, there are lots of companies where they throw Microsoft Word or Google Docs at the team and then expect them to do content production at a pace that keeps up with the agile team. Those are basic tools that are really meant for casual business use: do your best with them. And we know that that doesn’t work anymore. But now we have some tools where we can say, actually, you have this, or you have this. You’ve got a Gather Content, you have a CCMS, you’ve got a PIM. You’ve got all these different things that are out there, that are starting to come up, that you could use for a better operating model.

RB:                   And we have workflow modules that you could apply to things so that you’re not tracking things in a spreadsheet. So that’s changed. But also I think the locus of control has changed, because now we have product owners and product managers and they often have the budget. And so how you have to go about implementing is different, because you have to keep up with what they’re doing and you have to convince them that content deserves its own operating model. And that’s a hard sell. It’s a really hard sell right now.

SO:                   So what’s next? When you look forward at the next, well, I’m not going to ask for 15 years because that’s ludicrous, but how about three to five? If you look into the future in the short-term, medium-term, whatever that is, what do you think is next? What’s coming down the pipe in ContentOps that’ll be interesting and fun and exciting to work on?

RB:                   Well, that’s a loaded question. You know those curves where they show the early adopters at the bottom of the curve on the left, then there’s this line up, and then at the other end going down, it’s the late adopters? I think we’re so far at the beginning that for most organizations, nothing will change. They’re still going to be limping along. But I think what’s going to start happening soon is that there will be things that happen, and when I say things that happen, it could be that somebody got sued. Somebody missed a deadline and got fired, those kinds of things. Somebody lost their funding because something didn’t happen on time. So there will be something that tips them over the edge where they go, ugh, we should have listened.

RB:                   And then as they move around the industry, they will take their experience with them and start implementing things differently. And I say this because I had a former product manager where I used to work, and he’s off doing his own thing now. And he called me and said, “I want a guy like Chris.” Chris was the content strategist who worked for me. And he said, I want him because our product is content and we need to manage it in a different way. We have to be really good about how we manage our content; it has to be done really well. And there are lots of moving parts, and what would I call that person and where would I find one of them? Right. So here’s somebody who lived through this unsuccessful experience with me.

RB:                   But when he went into his own business, he decided he wasn’t going to make that happen again, right? He was going to do it right. So he’s looking for the right kind of person, the right shape of person, to come in and do their content operations. And I just spoke with another fellow who runs… He’s one of the co-founders of career.pm. So it’s for product managers. He got so excited and said, oh my gosh, product managers need to know about this. And so we’re trying to put together this deck on what the benefit will be for product managers if they will pay attention to ContentOps, and we came to certain conclusions, some kind of sad conclusions, which was that, for them, content is like somebody showing up with a baby and the baby’s ready to be put into the product.

RB:                   And you say, well, it takes all this time to make a baby. And it’s like, well, that’s none of our business. Once you have a baby, then we care. And so you’ve got that piece as well, where they say, well, that’s nothing to do with us in the product, that has to do with whoever’s responsible for the content team. And when you start going up the chain, there’s one of those weird matrix responsibility things, and nobody’s responsible for content. It might go up to the head of marketing or head of communications. They don’t know about ContentOps. They might know about ContentOps from a marketing perspective, which is a very different rhythm and a very different beast than product content. They don’t even know that some of these processes and tools and tensions exist. They think it’s a three-step process: you write, you copy and paste, put it in the CMS and QA it, done.

RB:                   And so when you start going into these things: I spent a long time within a government department and I did almost a time-and-motion study, but I used the concept of lean services and the seven types of waste. And we just mapped out the way they’re doing it now and the way they could do it. And we came up with a 75% savings. It was quite remarkable. And that was using conservative estimates. If it hadn’t been me, if it had been anyone else, they probably wouldn’t have gone in and gotten that same result, because the other folks that they would bring in know about the editorial side. So they would say things like, well, run everything through Hemingway before you write it, and then we know that it’s going to conform to the style guide.

RB:                   And that’s about the extent of what they know. But when you say, well, we should hook up an authoring system to a taxonomy management tool, and then, yeah, we’ll need to have some sort of digital asset management, but maybe the CMS has it. They don’t even think about those things, or the implications of what happens when you have multimedia content and you need to have transcripts and captions in multiple languages. And they’re just like, okay, too much, too much, go talk to the techies. And the techies don’t know because they’re not content people; they don’t know this stuff. That becomes the ping pong ball. And I think that some of these things will start to get understood, especially when there’s a, I hate the term, but a burning platform. When they find themselves on a burning platform, then they’re going to be looking for that vehicle to take them off the burning platform. And that may be some sort of vehicle connected to an operating model for content.

SO:                   Okay. Well, I mean, that seems like an almost hopeful note. So I think we should leave it there on the hopeful note of your software, your platform may be burning, but you will get off of it successfully.

RB:                   Well, I think I will say that there are people like you, like me, there’s a couple of handfuls of people that I can think of, not a lot of us, but go out and get the expertise, bring in somebody, hire in that expertise to help you and then listen to them.

SO:                   I really have nothing to add to that other than you should listen to Rahel. So Rahel, thank you. I’m going to wrap it there. Thank you so much for being here-

RB:                   My pleasure.

SO:                   And for participating on this and with that, thank you for listening to the content strategy experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post The rise of content ops (podcast) appeared first on Scriptorium.

Content ops stakeholders: Executives (podcast, part 2)
https://www.scriptorium.com/2022/01/content-ops-stakeholders-executives-podcast-part-2/
Mon, 24 Jan 2022 13:00:16 +0000

In episode 110 of The Content Strategy Experts podcast, Alan Pringle and Sarah O’Keefe continue their discussion about executives as important stakeholders in your content operations.

“You need to understand how decisions in your organization are made and where the real power is.”

– Sarah O’Keefe


Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. This is part two of a two-part podcast.

AP:                   I’m Alan Pringle. In this episode, Sarah O’Keefe and I continue our discussion about executives as important stakeholders in your content operations. In the previous episode we talked about the importance of business needs. In this episode, we talk about how to effectively communicate with executives. Now, we’ve talked about these business needs, business requirements and how they really affect- basically, I don’t want to put thoughts in the heads of executives but what we talked about is kind of how they think in general, basically from my experience.

AP:                   I think it’s also worth discussing how to communicate with them. For example, let’s go back to tools for a minute. We’ve already said, don’t talk about all the bells and whistles and the features of the tools, they don’t care about that. I do think one thing they would care about is that you were following the correct company process to select your tools. You are working with your procurement department, you were working with your IT group, you were working with information security folks. Those are also other stakeholders in any kind of project, and a content ops project is no exception. So you need to be sure that you are following the protocols that your company has established for assessing tools, and that you communicate that you are doing that with the executive champion of your project.

Sarah O’Keefe:                   Yeah. And that’s an interesting one because as an executive, what they are truly paid to do is to assess risk. What is the risk of taking this action? What is the risk of not taking this action? Should I spend this money? What are the implications if I don’t? And what you’re talking about in terms of the tools assessment, and I will say quite frankly, when I hear from a client, we have to go to the enterprise architecture board, that never makes me happy. Because their job, and this is legit, is to minimize the number of tools in the company.

AP:                   Exactly.

SO:                   Right? Because the more you have, the more systems you have, the more complicated everything gets and the more expensive it gets. And so the EAB is responsible for saying, “Well, we have these 17 tools already. Why are you telling us you need a specialized tool?”

SO:                   You need a super special CMS, but we already have three of them. Why can’t you use SharePoint? And then we cry. By the way, crying doesn’t work. Don’t cry. No, never cry. But the executive’s job is to test your argument that no, we are super special and we need a super special set of tools and here’s why. And then they have to make the decision that that argument that you’re making will get better content ops, which will give you all these cool business things, is worth the risk and the cost of introducing another tool or another set of tools or whatever it is that you’re asking for. So it’s not personal. They don’t hate you. They don’t hate your favorite tool, but they don’t like bringing in more complexity and nearly always, that’s what we’re arguing for. We need more stuff. We need another stack because we can’t do this in the generic business tools that you have right now.

AP:                   Yeah. And those conversations usually are not a one and done sort of thing. It usually takes a lot of time and I hate to use the word education, but I do think there is some of that going on when you’re having these discussions, because you have to explain, like you said, why this particular tool, which may seem like a match to something that already exists, why it is critical for your content ops to have this particular tool.

SO:                   I have found over the years, that it can be helpful to make the analogy to software developers or product developers if it’s hardware, especially with an engineering, whether software or hardware, manufacturing kind of executive. Essentially your software developers have a bunch of specialized tools to manage code. We are asking for the equivalent for content, right? So it’s not that we’re special and esoteric or anything like that. It’s just that there’s a certain set of tools that help us and that make us more efficient and in which we can do better work just as you have in your software development or in manufacturing, you have CAD systems and you have product lifecycle management, PLM systems, those kinds of things. So I think it’s helpful to just align this with other professional level things that are needed in order to do these jobs well. And of course we swore we wouldn’t talk about tools and here we are. As always.

AP:                   Yeah, well, let’s shift focus a little bit because politics are always part of a project. That is pretty much the rule of corporate life. At least that’s what I’ve seen in my now, whatever 25 years now, shudder, at Scriptorium. Politics are inevitable. And I think that is especially true when you have executives involved and you have to be very sensitive to them. Let’s wrap up this discussion talking about the importance of politics and why you need to pay attention to those optics.

SO:                   So two things. We talk about requirements and constraints, right? A requirement is like the system has to do X and a constraint is something like, and also it has to connect to this system, or it must not do this, or it has to run on Linux or something. But a constraint, sometimes there are personal preferences and I really wish I was making this up. We had a project where we went in. They were like, “Oh, and don’t use purple.” Okay. Well sure. But why? “Well, senior exec so and so really hates purple. If you show them anything with purple in it, they will reject the project.” Okay. Well guess what? That’s a constraint.

AP:                   Yeah.

SO:                   So two things. We talk about requirements and constraints, right? A requirement is like the system has to do X, and a constraint is something like, and also it has to connect to this system, or it must not do this, or it has to run on Linux or something. But sometimes a constraint is a personal preference, and I really wish I was making this up. We had a project where we went in. They were like, “Oh, and don’t use purple.” Okay. Well sure. But why? “Well, senior exec so and so really hates purple. If you show them anything with purple in it, they will reject the project.” Okay. Well guess what? That’s a constraint.

SO:                   So that’s one issue, who’s the actual decision maker and that may be different from who’s on the org chart or you’ve been told, “Oh, so and so is making the decision.” And then you find out that your director of XYZ has a senior technical something who they lean on. And if you can’t convince that person, you’re done.

AP:                   Yeah. You’re sunk.

SO:                   Yeah. But they were sitting in the back of the room not talking and you didn’t notice them. And you used blue, which they hate and you didn’t know about because you didn’t pay any attention to them. So that part of it’s really important. And I’m using trivial, ridiculous examples but I will tell you, I have seen these at least once.

AP:                   Oh yeah. Absolutely.

SO:                   Usually it’s something more serious than color preferences, but maybe you built a pitch and the person you’re pitching to is color blind and you didn’t think about it. And now you’ve got an ineffective presentation because, well first of all, never do that, but you weren’t paying attention.

AP:                   You really have to be sure you’re attuned to what is going on. And that really takes some, frankly, detective work and really good observational skills on your part.

SO:                   Yeah. And it’s one of the hardest challenges that we have as consultants, right? Because we don’t have all that history with the organization. So we tend to lean on the people inside the organization that we’re working with and say, “Well, what do you know about this person?”

AP:                   Exactly.

SO:                   And ask those questions. Politically, very often these projects cross organizational boundaries. So for example, if we’re trying to integrate marketing, learning, training, and technical content, then we almost certainly are dealing with two or three C-level executives, right? The marketing executive, the chief marketing officer; maybe there’s a chief learning officer, or maybe that falls under the CIO, or maybe that’s under the chief people person or HR; and technical content usually, but not always, falls under some sort of engineering function. Well, who makes the decision, right? Those three executives get in a room to talk about this project.

SO:                   Are they going to do it? Are they going to push back because they don’t like each other? Who pays for it? Who owns the project? Who gets the glory? If those three execs work together well and are a team at the C-level, then things will be great. But what’s far more common is that they all have their own area of responsibility. I’m not saying fiefdom.

AP:                   I was thinking it though.

SO:                   Yeah, sorry. So they each have their little fiefdoms, which they rule with an iron fist and a project where you are trying to introduce some sort of enterprise strategy, right? Across those three organizations or more, I mean easily more, but we’ll start with those three. It threatens them because they are giving up control. Oh, we want to introduce an enterprise level taxonomy, an enterprise level terminology. Well, are you telling me that somebody else is going to tell my people how to write? Well, actually, yes because you see, we need all three of those organizations to use the same terminology and the same metadata so that when this content goes to your website or out for delivery, the people consuming it can use it in a consistent way, right? They don’t care about your empire.

AP:                   So here we are thinking that content silos are the major problem. I think it’s more the fortified castles of each one of these groups. That’s the bigger problem.

SO:                   Okay. I swear I’m not going to reference Genghis Khan.

AP:                   We might want to wrap up now. I think we’ve worn this analogy out. Yes.

SO:                   Yeah. But it is a point, I mean in all seriousness, as a chief marketing officer, my job is marketing, right? And all the responsibilities that go with that. So improving engagement among customers and potential customers, outreach, getting new leads, new customers, new this, new that, right? If I’m techcomm, my job is to enable use of the product. So up at the C-level, we do in fact have different sets of priorities, and trying to bring those people into a project that must cross over is really, really difficult, because they reasonably are prioritizing what their people need, not always what the overall organization needs. And now I’m going to pick on the CEO, because it’s the job of the CEO to say to these C-level people, “I want you to make this work, work together, make it happen, prioritize the cross-department or cross-functional content ops, content strategy and not your individual responsibilities and priorities.”

AP:                   That’s a really good point. And I think we can end on that somewhat hopeful note. So thank you very much, Sarah.

SO:                   Thank you.

AP:                   Thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post Content ops stakeholders: Executives (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:02
Content ops stakeholders: Executives (podcast, part 1) https://www.scriptorium.com/2022/01/content-ops-stakeholders-executives-podcast-part-1/ Mon, 10 Jan 2022 13:00:01 +0000 https://www.scriptorium.com/?p=21324 https://www.scriptorium.com/2022/01/content-ops-stakeholders-executives-podcast-part-1/#respond https://www.scriptorium.com/2022/01/content-ops-stakeholders-executives-podcast-part-1/feed/ 0 In episode 109 of The Content Strategy Experts podcast, Alan Pringle and Sarah O’Keefe return to the occasional series about stakeholders and content operations projects. In this episode, they talk about executives as important stakeholders in your content operations.

“An executive wants to know how a tool is going to solve business problems and support company goals. They don’t care about the widgets and what they do. They want to know about business problems being solved.”

– Alan Pringle

Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about executives as an important stakeholder in your content operations. This is part one of a two-part podcast.

AP:                   Hey, I am Alan Pringle.

Sarah O’Keefe:                   And I’m Sarah O’Keefe.

AP:                   This podcast is part of a series about stakeholders in content operations, content ops projects. In the previous episode, we talked about the IT department being a key stakeholder. Today, we are going to shift our focus and talk about the role, or roles, really, I should probably say, that executives play in content operations.

SO:                   So execs probably don’t play a day-to-day role in content ops, with a notable exception of, if your organization is a company that produces content as a product, right? But most of the companies and clients that we work with, content is a component of the product but not the primary product. And in that case, the executives probably are not going to reach all the way down into the day-to-day content ops issues, but they have huge influence.

AP:                   Right. They are participating. It’s like an umbrella kind of over everything you’re doing that you may not notice all the time, but it’s most definitely there.

SO:                   Yeah. And first and most tremendously, obviously, executives in a business are where you get funding, right? So even if they’re not involved in the day-to-day, your probably C-level, your CIO, CTO, maybe the CMO, the chief marketing officer, that’s the person who’s going to sign off and get you funding to build out content ops, refine content ops, do what you need to do to get the investment that you need in your systems.

AP:                   Right. And it’s not just about funding. I mean, that’s a huge part of it, don’t get me wrong, because they are really the ones that are going to open up those purse strings. They also usually have a really good, big-picture view of how this slice, this content ops slice, this effort, is going to support the company’s goals. They have usually a much better handle on those short, mid, and long-term goals for the entire company, and can make sure that your efforts are going to fall in line and help with those things.

SO:                   Yeah, and that’s a really good point. And we’ve said this before. If you’re not sure how you’re going to get funding for your effort, one of the smartest things you can do is figure out what priorities or what goals does your particular funding executive have. Have they been told to grow the business? Have they been told to cut costs? Have they been told to expand into new markets? What’s on their horizon, and how can you align what you’re doing in content ops with what they are prioritizing for the year or the next couple of years?

AP:                   Exactly. You kind of need to talk their talk, more or less, or at least speak in terms that reflect what their job is, whether it’s the growth that you talked about or whatever else.

SO:                   Right. And that is, of course, highly unlikely, unfortunately for me, to be technology, right? They don’t want to talk, they don’t want to hear about tools. They don’t want to hear about shiny tools. That is not going to cut it. I am happy to talk with you, Alan, or anybody else in the world for hours, and hours, and hours about shiny tools, but that’s not how you get your executives to give you money.

AP:                   It’s the worst thing you can do, based on my experience, at least what I’ve observed.

SO:                   Yeah. It’ll work if you have a C-level exec who is also a geek, a nerd, and really wants to talk tools, maybe.

AP:                   Yeah.

SO:                   But those are actually pretty few and far between because that’s not how you get to the C-level.

AP:                   Yeah. It’s more of a situation where, yeah, we know that tools are your wheelhouse, good on you, and there’s a place for that. But that may not be the place for these particular discussions, because really, based on what we’ve seen, an executive wants to know how a tool’s going to solve business problems, support company goals, and whatever else. They don’t care about the widgets and what they do. They want to know about business problems being solved, and how a project is going to help meet their goals. And I know there’s tons of goals. We should probably lay those out right now, the ones an executive would be particularly interested in hearing about.

SO:                   Right. So having said “not the tools,” I lean really heavily on a hierarchy of business needs. I got this from Constellation Research, but there’s numerous versions of this out there. So if you think of a pyramid, and you sort of start at the bottom, the infamous Maslow pyramid with food and shelter at the bottom and self-actualization at the top, in business, the food and water layer, right, is compliance.

AP:                   Yeah.

SO:                   If you have regulatory compliance, legal requirements, that is the bottom of your pyramid, because if you don’t do that, you will be out of business. So that’s-

AP:                   You don’t exist.

SO:                   Then you don’t exist, right? So that’s the foundation. And then in order, going up, so you have compliance, cost avoidance, revenue growth, which is kind of the flip side of cost avoidance, competitive advantage, and then branding.

AP:                   Yep. And really, you don’t do one without the one that preceded it.

SO:                   Yeah.

AP:                   So yeah, that makes a great deal of sense to me. But I do want to kind of throw in here, I’m going to back up and talk about cost avoidance. It can be very easy to fall into this trap, talking about how a tool or process is going to improve efficiency. We’re going to gain 20% on this or whatever. You’ve got to be really careful if you are spinning efficiency as the primary argument for a content ops project, or really any kind of project, because are you setting yourself up for a situation where executives are going to expect those kinds of efficiency gains year after year? Because at some point, you’re going to hit a plateau where there are no more efficiency gains, really, to be had. So you’ve got to be really careful: even if it is true you’re going to have efficiency gains, you may not want to spin them as the primary reason to do a project.

SO:                   Yeah. It’s almost like, I mean, if you think about compliance at the bottom, you need to do compliance, but once you get to the point where you are compliant, or compliant enough, which sounds really bad, right? But if you’re in compliance with the regulations, you don’t then say, “Oh, we need to be super compliant, or double compliant, or keep… “. No.

SO:                   And so with cost avoidance, it’s kind of the same thing. We want to get to a point where we are operating efficiently, and we have our costs managed, and we understand what those costs are. And so for example, localization, as you globalize and add more languages, can very easily be a runaway cost problem if you don’t have an efficient content operation.

AP:                   Right.

SO:                   So if your content ops are terrible, every time you localize, all that inefficiency gets just multiplied across every language. So what we want to say is, “Look, if we do it this way, it will be efficient and scalable, and we’ll be able to do what we need to do. And then we can move forward, and do some more interesting and exciting things like the next step, which is revenue growth,” right? How can content and content ops contribute to revenue growth? And maybe the answer to that is, well, we can add more languages for less money because we’re efficient. And so therefore, you, the CTO, you, the CMO, you, the organization, can go into more markets, because more markets become feasible from an investment point of view, because we don’t have to put millions and millions of dollars into localizing, because our source or our starting point is terrible. Right?

AP:                   Right. I mean, when you have a repeatable process that you can adapt for new languages, it cuts how long it takes to get into a market. And we have even had C-level folks on some of our past projects say, “I don’t care about all these bells and whistles and whatever, what I care about is getting into X country and getting this done in a very short window of time, not a three month, not a six month lag. I want to get in there simultaneously, or just a few weeks after the primary language content was released, to get that product into these different markets as quickly as possible.”

SO:                   Right. And I mean, that’s, canonically, that’s a revenue growth argument. Because what you’re saying is, when we go to market in country one, let’s say in the US with English only, if it takes us six months to localize, well, then we can’t go into any other markets, or non-English speaking markets, for six months. If we can get all the localization done in two months instead of six, or even a few weeks, or a few days, well, then you start to get revenue from those other markets, which means you are going to get your money sooner, which is a very compelling argument and leads into competitive advantage, right?

AP:                   Right.

SO:                   Because if my product, when I release my product on day one, and on day 15 I release in non-English markets, and you release your product also on day one, but your non-English markets don’t happen until day 60-

AP:                   Right.

SO:                   Well, that’s an advantage to me, right? I’m more nimble, more flexible. I’m in Germany with German language content, which says something to my customers in Germany about how much I care about, well, it’s perceived as, “You care about us.”

AP:                   Right.

SO:                   On the inside, it may very well be, “Well, we just can’t do it. And we care very much about our German customers, but we can’t get to German language because, again, bad content ops.”

AP:                   But, and this goes to the final step in this pyramid, and that is branding. All that perception that you just mentioned goes directly into the branding angle, because if I were at a company getting products into other markets just a few weeks after the primary-language release, I would be crowing about that and making sure that my branding reflected the fact that, yeah, we’re getting out there giving you what you need as soon as possible. That’s a big deal, and marketing should probably reflect that.

SO:                   Yeah. I mean, we’re both focusing a lot on localization and on global markets, which I think is probably the most common justification for better content ops, right?

AP:                   Right.

SO:                   Because you can see how easy it is for every one of these steps in the pyramid to talk about what that means in a global company. But it’s also worth looking at this just from a single language point of view. Obviously, you have to do compliance. I mean, if you’re in the US, the number of industries where compliance is required is fairly limited, but you’ve got to do it. You don’t want to spend money that you don’t have to spend. That’s the cost avoidance piece. If your content is better, if your content is well-designed, and if it is easy to search and accessible on your website, those are all factors that contribute to people understanding how to use your product and using it successfully, which means they’re not going to return it, or they will be less likely to return it. Some enormous percentage of product returns are basically not “The product is defective or broken,” but actually, “I can’t figure out how to use it.”

AP:                   And in addition to returns, you’re going to have fewer people pinging your various support channels. And that, in turn, is going to help you with your bottom line and competitive advantage.

SO:                   Right, because support is stupidly expensive.

AP:                   Exactly.

SO:                   So you can see how you can tie the general business operations and the general business needs into, “If I do content ops well, and if I do these things with my content, then these are the business results you’re going to see. If our content looks better, sounds better, feels better than the content that our competitors are producing, then we will gain an advantage there,” right? You gain a competitive advantage, you gain a branding advantage, and all of these kinds of things. So if you’re looking at content ops and you’re trying to get investment for content ops, my advice is to take this five-step, or five-layer, hierarchy of needs. Think about where you are, right?

AP:                   Yep.

SO:                   “We’re not in compliance, and the FDA is threatening to shut us down” is a really good reason to invest in content ops.

AP:                   That’s a really good point, and I think we can end on that. So thank you very much, Sarah.

AP:                   Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post Content ops stakeholders: Executives (podcast, part 1) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:22
Content ops stakeholders: IT (podcast) https://www.scriptorium.com/2021/12/content-ops-stakeholders-it-podcast/ Mon, 13 Dec 2021 18:00:13 +0000 https://www.scriptorium.com/2021/12/content-ops-stakeholders-it-podcast/ https://www.scriptorium.com/2021/12/content-ops-stakeholders-it-podcast/#respond https://www.scriptorium.com/2021/12/content-ops-stakeholders-it-podcast/feed/ 0 In episode 108 of The Content Strategy Experts podcast, Alan Pringle and Gretyl Kinsey kick off an occasional series about stakeholders and content operations projects. In this episode, they talk about IT groups as an important stakeholder in your content operations.

“The IT department can be such a great ally on a content ops project. IT folks are generally very good at spotting redundancies and inefficiencies. They’re going to be the ones to help whittle that redundancy down.”

– Alan Pringle

Featured image: nanastudio © 123RF.com

Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about IT groups as an important stakeholder in your content operations. Hi, I’m Alan Pringle.

Gretyl Kinsey:                   And I’m Gretyl Kinsey.

AP:                   In this episode, we’re going to kick off an occasional series about stakeholders and content operations projects. And yes, even though content is the primary umbrella, the primary objective, on a content operations project, there are still many stakeholders from all across an organization who are going to be involved, so it’s not just about the people authoring content. What are some of the other ways that stakeholders come into play, Gretyl, on a project?

GK:                   Well, a lot of times there will be the most important stakeholder, which is your executive champion, who is in charge of the money and the resources to actually get your project approved and get it started, so that’s always someone that we make very definite sure to talk to when we get involved in content operations. You might also have developers or engineers who are working on the product itself. And of course, as we’re going to be talking about today, you might have IT, information technology, a department that’s in charge of managing your tools and your processes. So all of these other groups, even if they don’t directly create content themselves, they definitely have an important stake in it, and they need to be part of the decision making processes.

AP:                   I agree. It essentially takes a village to get one of these projects done, so it’s really good to have an understanding of other people’s responsibilities in the organization and to get their viewpoints and feedback to be sure that your content ops project is going to be successful.

GK:                   Absolutely. So as Alan said, we want to start this series by focusing on information technology or IT departments. And that’s because they are significant stakeholders in content projects. We’ve had a lot of times in the past with projects that Scriptorium has been involved in where it was actually the IT department who came to us first and who initiated the entire content strategy overhaul.

AP:                   Absolutely. And that’s an important thing to note. A lot of people may assume, just because it’s a content project, that we’re going to be contacted by the people authoring that content. And in multiple cases, that has not been the case at all. We have had people contact us who were much more on the tools, tool management, and information technology side of a company, people who didn’t create content at all, yet it was still part of their responsibility because they’re the ones overseeing the tool chains and some of the processes for those content-creating groups.

AP:                   And speaking of those content creating groups, I think it is fair to say in our years of doing this, Gretyl, that we have seen a lot of content folks who had some gripes about IT groups. And then the IT people had their own stories about the content creating people. So there’s a lot of that going on, and we could spend how many podcasts on that topic, but I don’t think it’s that interesting. What we want to focus on instead is really how the IT department can be such a great ally on a content ops project. And they really have some skills and viewpoints and access that are absolutely necessary to get things running and to work well.

AP:                   And one of the first things that I can think of in regard to that general skillset is that IT folks are generally very good at spotting redundancies and inefficiencies. And that kind of makes sense because if they are managing the infrastructure of tools, they’re going to be very sensitive about, for example, if you have multiple tools doing the same exact thing within an organization, especially after say a merger, where you’ve got two different companies coming together, and you’re going to have all these layers of tools doing the same thing. They’re going to be the ones to help whittle that repetition, that redundancy down.

GK:                   Yeah, of course. In one of the earliest projects I was involved in where IT came to us as the primary interested stakeholder, they had actually spotted not only these redundancies or inefficiencies in the tools they had, but also in how people were using them. They had a bit more insight into how content creators were doing a lot of manual processes with the tools in place, processes that could have been automated, and how much time was being spent on them. Just by nature of being in IT and seeing other departments handle those processes more efficiently, they knew of much better ways to handle them. So that’s definitely a good thing: they have this broader view of what tools should be in place, what the overlaps are, if there are any, and how to eliminate those so you can use the tools you have much more efficiently.

GK:                   And I think that kind of leads into another strength of IT departments, which is that they tend to have a sort of more broad, or company wide, or enterprise level view of the organization. And that’s just because of the way that they manage tools across departments. They can really kind of have that bigger picture of how they’re being used.

AP:                   Absolutely. And that perspective is sort of like when you bring in a consultant like us. We bring in a third party view because we’re not so close to it. We can give objective advice on how things are set up, how they’re running. And the IT group can do something very similar. They’re not in there day to day using the tools, authoring, or whatever. They’re a step back, so they’ve got more of a bird’s eye view, and that can be very, very helpful, like you mentioned, in spotting these redundancies.

AP:                   I think another thing worth pointing out is these folks usually have programming chops. And they have the skillsets to customize things, and they’re not scared to do so. A lot of times, to get maximum use out of your content strategy plan to be sure it’s working the best, it may require some very particular configuration, connections between tools, et cetera. And when you’ve got an IT group that’s savvy at those things, that is a huge, huge benefit to you and your content project.

GK:                   Absolutely. I really can’t think of a case that I’ve ever seen where one single tool did everything that an organization needed out of the box. And the more tools you get in your tool chain, the more likely you are going to need some sort of custom configuration. And so when you’ve got an IT department that is really involved in the overall content strategy, they can help you see exactly what customizations you’re going to need. They can be a really valuable asset to your content department in making sure that those customizations are done. And so again, that’s why it’s just really important to involve them from the outset, before you really even get into the process of choosing what those tools are going to be. They can help you make the decisions about what customization might be involved based on what you choose.

AP:                   And I think it’s worth acknowledging that making those customizations does cost money. And when you’re doing your content strategy planning for how you want your content ops to go, looking at return on investment is going to be a big part of that assessment, so getting that feedback and input for what it’s going to cost to stand up these customizations, these configurations, is very important because you need that information to figure out essentially how long it’s going to take you, with improved efficiency or whatever, to basically recoup those costs. So yes, customization is a great thing, but you need to have a clear understanding fairly early on about the cost projections to get that work done, how you’re going to get that money back, and how to make gains beyond that to make the customization worth everyone’s while.

GK:                   Absolutely.

AP:                   We’ve kind of already touched on this, but let’s take a little time to talk about the kind of functions that an IT group is going to handle, contribute to, in content operations projects, or even the content strategy planning that precedes it. What are some of the things that come to your mind, Gretyl?

GK:                   So one of the first ones that comes to mind right off the bat based on what we just talked about is enterprise architecture. And that’s because like we said before, IT has that kind of big picture, bird’s eye view of all of the tool chains across the organization. So when it comes to developing your information architecture for content, particularly at the enterprise level, thinking about not just one type of content and one department, but all of your content across an organization, IT can really be helpful in figuring out what your strategy is going to be for that enterprise level information architecture.

AP:                   And speaking of enterprise, the more organizations move into the content as a service model, where basically it’s less about delivering a PDF, or a help set, or a marketing slick, or whatever, and more about giving the end user whatever kind of content they want, in the specific format they want, when they want it. That really requires a lot of connectivity. It requires a lot of understanding of the entire tool chain and how everything is connected within the enterprise. And the more we move to that content as a service (CaaS) model, the more critical I think IT is going to become in these sorts of projects.

GK:                   I agree. I think we’re seeing a lot more demand for Content as a Service for custom, personalized delivery for on-demand content. So I agree absolutely that IT is going to be playing I think an even bigger role as more of these kinds of projects are undertaken.

AP:                   Something else that I think that really falls into their wheelhouse is to help with evaluating new tools that you’re going to need to develop, manage, and distribute your content. If you’re doing, for example, a new content management system, you’re probably going to set up some proofs of concept with a vendor or two and get that set up and running. And it would behoove you to get some input from IT about how those tools are set up and how efficient and how well they work, also to do security checks. I think a lot of tools now are more in the cloud. Less and less, we’re seeing companies deploy tools on premise on their own servers. Instead, they use cloud based tools. Even so, security is still a concern. And that is something that they need to be part of. How you stand those tools up, how good the security is, all those kinds of things that you want to look at in a proof of concept, that definitely needs the input from your IT people.

GK:                   Yes, absolutely. And as consultants, we’ve been involved in that process of the demonstrations and the questioning of the vendors with regard to choosing what tools you want. And there have been some of those times where IT was heavily involved. They helped come up with a lot of the information, a lot of the feedback, a lot of the things that were asked of those vendors during that demonstration and testing process. And then we have had other projects where sometimes tools were chosen, and then later the company came back and said, “We should’ve had IT involved and we didn’t, and that was a mistake because we’re seeing that we might’ve made a wrong choice here. We didn’t evaluate this one particular aspect that was really important.” So definitely when you are looking at tool options, especially if you’re choosing more than one tool, say a CCMS and an LMS, you would really want to have IT involved to help make those decisions and make sure that everything is going to work together as you intended.

AP:                   Right. And once again, I go back to the whole enterprise level viewpoint, the connectivity among these systems. You cannot have blinders on and pick a tool that suits just your purpose. It has to fit in the bigger ecosystem you have for tools. Otherwise, everybody’s going to have their own little tool communities, and that’s just a mess that you don’t want, and frankly, an IT department probably is not going to tolerate very well.

GK:                   Yeah, exactly. And that’s a really great point because it leads us to the next kind of thing on our list of how IT can help with a content project, which is that they are really good at making those connections among disparate departmental content and data sources. So if you have a situation where you’ve got, let’s say, technical content, training content, marketing content, and all that needs to be connected and you don’t really have the infrastructure for that, IT is going to be your number one resource to make sure that can happen.

AP:                   Right. And I think one other thing that kind of puts a bow around all this is content governance. And I know you’ve talked a lot about that, so I’m going to kind of let you take that on because that’s been a topic I know that you’ve written about and talked about on the podcast previously.

GK:                   Yeah, sure. So content governance is what happens when you need to have someone in charge of overseeing all of your content processes and the changes to those processes over time, the evolution of those processes. And again, this is a place where it’s very important to have IT involved. A lot of times when we have had companies that we’ve worked with putting a governance strategy in place, it’s either been driven by IT or they’ve made sure to have someone from IT be part of whatever team of resources is in charge of content governance. And it all comes back to what we’ve been saying: it’s because they have that viewpoint from the enterprise level. They’re the ones who are going to really know and understand how all of the parts of your content tool chain work with the content lifecycle. So when it comes to maintaining and governing and improving your processes, it’s imperative to have IT involved.

AP:                   Absolutely. And I think one of the last topics that I want to touch on before we wrap up are some final thoughts in regard to content ops projects, considerations that really feed into IT and having their participation. And the first one that I think really comes up is authoring tools. One thing that I have really learned over the past few years is when it comes to authoring content, it is very much not a one size fits all situation for the tools used to create content. There are absolutely legitimate reasons to have different types of tools for authoring content, even if they feed into the same repository or management system for the content.

AP:                   And a good example of that is sometimes you have part-time contributors on content projects, such as product engineers. Once in a while, they’ll go in and add some feedback, put in just a little bit of information. They do not want to deal with the overhead of a super duper professional strength authoring tool. They want to get in and get out very quickly with minimal overhead, minimal time spent on learning a tool. Whereas the people who are day-to-day creating content and that’s their full-time job, they’re going to want a lot more control, a lot more features, a lot more bells and whistles to get the content done and do the things they need to do that are a little more complex, for example, in regard to reuse, and get that done correctly, whereas the people who are part-time contributors probably don’t care as much about that because it’s being handled by the full-time content creators. So there is absolutely a valid reason to have different authoring tools. And it’s probably better not to force people’s hand to use just one tool because of some kind of perceived redundancy there.

GK:                   Yeah. And one thing I’ve seen IT help do with this in particular also is even if you do have the same authoring tool, there may be features that you can turn on or off for certain kinds of users. And so you could have different levels or different user roles, and IT is kind of in charge of managing which people are your power users, your ones who need all of the bells and whistles and all of the controls, which ones are maybe only in a review capacity, but not a content creation capacity, so they might need some different controls, which ones are just those kind of part-time occasional subject matter expert contributors. And that’s where it can again really be helpful to have IT involved to make sure that, whether they are using different tools altogether, or kind of different variations or access levels of the same tool, that everybody can do what they need and kind of not be forced into a bunch of features and things that they don’t need.

AP:                   Absolutely. And surprise, surprise, I think the last point we’re going to make is that a lot of times, very niche, very particular content tools may be required to get the best return on an investment for your project, like we talked about a little bit earlier. But you still have to balance the cost of those niche tools and configuring them and making any customizations against the overall cost of even just implementing and then maintaining those very specific tools down the road. So there’s got to be some ROI calculations done, and this is where IT I think will be very, very helpful in figuring out that return on investment.

GK:                   Yeah. Before you ever decide what your tools are going to be, IT can help you say, “Yes, this is going to get you everything you want, but it’s going to involve X many dollars or X much time for maintaining these customizations that are going to be involved for training people on how to use them,” really helping you think of all the different aspects that are going to be involved in using that tool. And they might be able to recommend something that gets you, let’s say only 90% of the way there instead of 100%, but you’re going to save so much cost for things like customizations and maintenance that maybe it balances that out. So it’s really helpful to have that perspective before you make your tool decisions.

AP:                   And that’s great advice and observation there, Gretyl. And I think we’re going to wrap up, so thank you very much.

GK:                   Thank you.

AP:                   Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content ops stakeholders: IT (podcast) appeared first on Scriptorium.

Content strategy pitfalls: lacking a unified content strategy (podcast) https://www.scriptorium.com/2021/11/content-strategy-pitfalls-lacking-a-unified-content-strategy-podcast/ Mon, 29 Nov 2021 18:00:44 +0000 In episode 107 of The Content Strategy Experts podcast, Bill Swallow and Gretyl Kinsey are back for another episode in our Content strategy pitfalls series. They talk about what can happen when you lack a unified content strategy.

“One way to get funding in place is to start the conversation among different groups. Get these groups together and start talking about what their ultimate goals are with their content strategy and their content operations. That way you can have multiple voices coming together and asking for a larger pool of money that can be shared.”

– Bill Swallow

Featured image: chrischips © 123RF.com

Transcript:

Bill Swallow:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we look at another content strategy pitfall: what can happen when you lack a unified content strategy? Hi, everybody. I’m Bill Swallow.

Gretyl Kinsey:                   And I’m Gretyl Kinsey.

BS:                   So, before we jump into talking about what can happen when you lack a unified content strategy, we should probably start with explaining exactly what a unified content strategy is.

GK:                   Yeah. So, if you’ve listened to any of our podcasts before, looked at any information on Scriptorium’s blog, you might have also seen us refer to this as enterprise content strategy. So, what we mean by enterprise or unified content strategy is a plan for managing all of your content processes across the organization. And a lot of times that involves bringing all of your different content producing groups into alignment with each other.

BS:                   And as you can imagine, if everyone is working against a different strategy and doing different things, a lot of bad things can happen. One thing that we see right out of the gate when an organization does not have a unified content strategy, is that there are a lot of inconsistencies throughout the entire content chain, from authoring the content all the way through to the customer experience on the final destination of that content.

GK:                   Yeah, absolutely. A lot of times this happens, because not everybody in the organization places the same amount of value on content. I know one example that I’ve seen of this might be something like, an executive sees a lot of value from something like the marketing content, because that’s directly making sales. But they don’t realize maybe the importance or the value that other kinds of content like your technical documentation, your training modules, maybe your legal materials might have. So, those groups maybe don’t get as much funding, as many resources, as much invested into them. And then you end up with this inconsistency, with this lack of cohesion among the different content producing groups, just because there wasn’t really value placed on content as a whole.

BS:                   And we also can see this within even what we consider a traditional content group. So, for technical documentation, a lot of times you will have user focused guides and user focused content, and you will also have deep technical content, perhaps API references and so forth. And oftentimes, we even see several different strategies being used for these various sub-components of what we would refer to as the umbrella of technical documentation. And even in those cases, you can start seeing a lot of dissonance between how the content is being authored, how it’s being produced, and how it’s being received.

GK:                   Yeah, no matter whether you have these subgroups that you’re talking about or your larger content producing departments. Another issue that we see is that these different groups may come up with different content strategies separately. When you’ve got all of these different ideas and these different ways of work trying to come together for the first time, you can have a lot of issues like change resistance. You can have egos coming in and you can have a lot of debate over what approach is the best approach. And so, that’s why, whenever we talk about a unified content strategy, it’s oftentimes easier to work on it from that perspective from the start, rather than trying to bring together a whole bunch of separate content strategies from different groups.

BS:                   And a lot of these inconsistencies and a lot of these mismatches that you might see as you try to combine two different strategies into one, could range from how people work when they author, it could be what tools they’re using and whether they’re even compatible together with other groups. It could be the tone and voice of the content that’s coming through and that there’s a stark difference between the two, and there’s no way to easily glue them together without it sounding completely bizarre to someone who’s reading it.

GK:                   Yeah, absolutely. And I think that’s an important point, because when you start having those inconsistencies, they reflect outward in the content to where they’re going to be affecting the way that somebody might use that content. Whether you are a customer who is trying to decide whether to buy a product, or you’ve already bought your product and are trying to figure out how to use it: if the content is not consistent, if some of these issues from the way it’s been created are spilling over into the user experience, then that’s going to have a negative impact on your company. And so, that’s why I think what we said up front about the value of content is so important, and you have to really think about that from all different angles.

BS:                   Right. And a lot of times any of these changes are going to come with a significant cost. And a lot of times we look at the dollar signs or the price tag on the tools involved in being able to swap tools and migrate content over into a new system. But sometimes that’s not even the largest cost we’re talking about. If there are workflow changes that need to happen within a company, that usually means not only changing what that process looks like, but training everybody up on using it correctly. It probably involves a completely different way of authoring in some fashion. So, whether or not they are using different tools to author, there are different ways of going about producing that content.

BS:                   If the tone and voice needs to come to alignment, a lot of stuff needs to be rewritten. If there’s localization involved, then anything that has been translated previously is no longer a leveragable asset, in which case you’re starting from scratch with retranslating everything. So, it’s really important when you are defining your content strategies, that you take a look around and make sure that you’re not operating in a silo and potentially magnifying the cost of unification later.

GK:                   Yeah, absolutely. And one thing that you mentioned that got me thinking about another pitfall when it comes to that cost was you mentioned tools and process changes. And I think one pitfall that I’ve seen a lot of companies fall into is they make decisions about their tools or their process changes and purchase new tools without consulting everyone who might be affected by that decision. So, for example, let’s say that you have got an LMS at your company and you need to upgrade, and you only consult people in training and e-learning who use the LMS. And you don’t talk to other content groups who may share content, who may need to use some of the training materials as part of technical documentation, as part of marketing content, for example. And then when you purchase your new LMS, it affects those other groups and there’s that spillover.

GK:                   And we see this happen all the time. We see it happen with component content management systems. We see it happen with localization, where these kinds of tool decisions are made without really taking into account the unified content strategy and the effects. And I think when we’ve got those kinds of content silos, that’s where it’s more likely to happen because you don’t really think outside of your particular group.

BS:                   To that point, if one of the key factors in moving toward a unified content strategy is to be able to intelligently reuse content rather than copying and pasting, or what have you, the tools are really going to make or break that particular aspect of your content strategy. Because if one group is authoring in one particular tool that has a very specific file format or some kind of binary format, it is going to be near impossible to be able to get that content out and reusable as a single chunk of content. A lot of times it will either need to be copy and pasted or re-keyed or something to get it into another system to be able to use it. And that completely defeats the purpose of reuse.

GK:                   Yeah, absolutely. And I think this really speaks to why, whenever we at Scriptorium come in and help companies with their content strategy, we often say tools should be the last thing you choose. You need to come up with all of your goals, all of your specifications, all of the reasons why you’re buying that tool in the first place before you start looking at options. Because what happens when you make those decisions in the early part of that process is you don’t think of all the different factors. And then you end up locked into using a tool that doesn’t really work for you, or, to get out of it, it’s going to be, like Bill said, really expensive. There’s a lot of cost involved with these kinds of tools. So, rather than making an expensive mistake, it’s always better to take more time upfront to work on the strategy itself and really understand what it is you’re looking to get out of those tools before you buy them.

BS:                   And with every strategy comes one particular item that is often overlooked when putting a content strategy together, and that’s content governance.

GK:                   Absolutely.

BS:                   And if everyone is doing different things with different tools in different ways, using different processes and different quality control checks, it is going to be very difficult to get any kind of overarching governance in place to make sure that everyone is working as they should be throughout this process. The governance is going to be rather wide in scope. And the more differences you have between different teams working together, the more difficult it’s going to be to govern all the aspects of content creation across the enterprise.

GK:                   Yeah. I thought it was really interesting that you mentioned how governance is something that people don’t consider enough. I also have seen it be treated as an afterthought, when really it should be one of the most important parts of your content strategy. And I think when we have a situation where there’s a unified content strategy and that’s the goal, then people tend to consider governance as a greater part of it. But you’re right, Bill, that when we’ve got a situation where there are all of these different silos, all of these different tools and they don’t fit together into one streamlined content set of processes, then governance is just going to be herding cats. It’s going to be wrangling all of this mess that you’ve got, instead of truly moving your strategy in a better direction for the whole enterprise.

BS:                   Right. I mean, the governance angle really is speaking to a lot of the other pitfalls that we talked about. If there are multiple different tools in place, it’s very difficult to govern how those tools should work and at what point in the process that tool should have a handoff and what that quality check should look like. If you have many, many, many different strategies in place, regardless of whether you’re using the same tool or not, it’s very difficult to get those quality checks and get those points defined as to where you do certain reviews, where you do certain checks and balances. It will just exacerbate the problem of not being able to produce content that looks like it came from one organization with one voice, with one intent to its audience.

GK:                   Definitely. So, I want to close out by talking about one issue that’s at the root of all of these pitfalls, which is that oftentimes when we see this lack of unified content strategy, it tends to come down to a lack of funding or resources, or maybe unequal funding across different departments. And a lot of times that’s outside of their control. So, I want to talk about what companies can do to account for that limitation and how you can avoid some of those pitfalls, even if you’re dealing with a lack of funding or resources.

BS:                   So, one way to get this funding in place is to start that conversation among different groups, to talk to different groups that may have a different content strategy that is underway or that they’re using, or that they’re thinking about. Getting these groups to come together and start talking about what their ultimate goals are with their content strategy, with their content operations. And start pulling together those ideas, and being able to look at the tools that they’re using, for example, or that they plan to use in their new content operations and start pulling that together and making sure they’re compatible, if not identical. And that way you can have multiple voices coming together and asking for a larger pool of money that can be shared, rather than individual groups getting their own little pocket of cash to work with.

GK:                   Yeah, I think that’s absolutely critical, to make sure that you have that communication across departments. Another idea that I’ll suggest that might help, if you know that you’re going to be limited on funding, you know that you’re going to be limited on budget. At least one thing that you can do for now is once you’ve done what Bill has suggested, you’ve maybe gone to some other departments, you’ve talked about your content needs, start seeking out an executive champion. So, even if you can’t get the money immediately, even if you know it’s going to take time, the sooner that you can start planting that idea in someone’s head about why content is valuable, why it’s going to help to eventually get that funding in place, what it’s going to do for the organization, then the better your chances are of actually securing that.

GK:                   And one really, really solid way to do that is by gathering some metrics. So, what information can you actually provide to the executives about how much money that you are losing right now with inefficient or inconsistent content processes and how much you’ll save by fixing those? One thing that you might even consider doing is taking what limited funding you do have and conducting some sort of a pilot project or a study to just show here is what we’re thinking with regard to content strategy. We’ve talked it over with other groups, they want to buy into this too. And here is just a little bit of proof that we think it’s going to work. And if you can show some of that evidence, then I think that really helps to prove that value of content and maybe start to have the folks at the top who have the cash take it more seriously.

BS:                   Yeah. Showing that return on investment is critical, especially to gain an executive sponsor. Another thing to look at is not necessarily the cost savings that you have by working together and doing these things in unison, but it’s also looking at opening marketing or opening market opportunities for your organization. So, if you have been hindered by the way you work from entering into new business markets, or being able to broaden an offering to an existing business market, and your thoughts of having a unified content strategy can get you there, that return on investment will be much greater than the savings you’ll get from streamlining existing processes.

GK:                   Absolutely. And don’t forget to account for time as part of your savings as well, whether that is things like time to market, whether it’s time saved in your actual content workflow. Just remember to take into account all the other factors that can go toward the idea of return on investment aside from just strictly the cost savings.

BS:                   And I think that’s a good place to close it.

GK:                   Yeah. So, thank you so much.

BS:                   Yes, thank you all. And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content strategy pitfalls: lacking a unified content strategy (podcast) appeared first on Scriptorium.

DITA and accessibility (podcast) https://www.scriptorium.com/2021/11/dita-and-accessibility-podcast/ Mon, 15 Nov 2021 18:00:20 +0000 In episode 106 of The Content Strategy Experts podcast, Gretyl Kinsey and Bob Johnson of Intuitive talk about accessibility and the Darwin Information Typing Architecture.

“If you’re doing it right, accessibility doesn’t look any different than what you’re doing day to day. You’re just adding accessibility considerations when you author your content.”

– Bob Johnson

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about accessibility and the Darwin Information Typing Architecture with special guest Bob Johnson of Intuitive. Hello and welcome everyone. I’m Gretyl Kinsey.

Bob Johnson:                    And I’m Bob Johnson.

GK:                   And I am so happy that you are a guest on our podcast today. So, would you just start off by telling us a little bit about yourself and your experience with DITA and accessibility?

BJ:                    Sure. I actually have roots in component content management that go back before DITA. I worked for a web CMS vendor whose product was component based. And we implemented Author-it, which is a component based CMS and authoring tool primarily for technical content. We eventually moved to DITA publishing, which solved some problems for us. And since then, I’ve worked with a number of companies, both on the authoring side and the publishing side. I’ve managed CCMS acquisitions, I’ve managed DITA transitions for companies in the medical device sphere, in software, and in medical reference content. The web CMS vendor is also where I got my experience with accessibility. We wanted to sell to government customers and so we needed to be able to make Section 508 compliance statements. And so, I had to study up. Later on, I worked for a company that had been acquired by Oracle. Oracle takes a rather different approach to accessibility than a lot of companies. Where other companies centralize their accessibility practice, Oracle makes each business unit responsible. And so, I took the responsibility for helping this acquisition implement accessibility in its content. When I went looking for documentation about accessibility and DITA, I didn’t find anything.

BJ:                    So, I sat down with the Web Content Accessibility Guidelines and developed a matrix to indicate which guidelines applied to techcomm, which applied to authoring, and which applied to publishing. And I built a mitigation strategy based on that. I later shared my experience at DITA North America and have been working since then to share that experience with technical communicators across various markets. You mentioned at one point in our emails, what is accessibility? And that’s a really good question. I’ve never found a legal definition, but what I usually use as a definition is: accessibility is the characteristics of a product and its content that allow users with disabilities to access the content or use that product.

GK:                   That’s great. And from your perspective, based on all of that experience you just described, what does accessibility look like when you are authoring DITA content?

BJ:                    In all honesty, if you’re doing it right, accessibility doesn’t look any different than what you’re doing day to day. You’re just adding accessibility considerations when you author your content. So, for example, when you add a graphic, you make sure that you add an alt text so that users, for example, on a screen reader can get a description of that graphic. You make sure that your table designs are simple and easily navigated. Designs that look easy to the human eye can be very tricky to navigate on a keyboard, which is what most users on a screen reader will be doing. It also looks like authoring content that’s well structured and very focused so users, for example, with cognitive disabilities, don’t encounter problems or distractions that might make it harder for them to follow the thread of the content and understand it so they can fulfill their tasks.
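In DITA markup, the alt text Bob describes lives in an alt element nested inside the image element. Here is a minimal sketch of what that can look like in a concept topic; the ID, file names, and description text are invented for illustration:

```xml
<concept id="filter-overview">
  <title>Filter overview</title>
  <conbody>
    <fig>
      <title>Filter housing location</title>
      <image href="images/filter-housing.png">
        <!-- Screen readers announce this description in place of the graphic -->
        <alt>Filter housing on the left side of the unit, below the power switch</alt>
      </image>
    </fig>
  </conbody>
</concept>
```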

GK:                   Yeah. And I know it’s really interesting what you said about images and tables in particular, because I think for a lot of our clients at Scriptorium, that’s one of the areas that when they’re authoring content in DITA, and they become concerned about accessibility or maybe they start to have new regulatory requirements for accessibility, with their content, that tends to be one of the biggest areas they have to start with. And a lot of times when they have legacy content, one of the areas where they haven’t really been addressing accessibility in the past. So, I think that’s a really good starting point that you mentioned.

GK:                   I want to talk about one other concern that we tend to see a lot, which is when we have something like DITA or structured authoring in general, where your content and your formatting exists separately, then that means there’re going to have to be two maybe different groups thinking about the way accessibility works. So, how can we be proactive in designing accessible content when you’ve got that separation between your content and your formatting?

BJ:                    The place to start is remembering that there’s more than just visual disabilities when it comes to accessibility. One of the responses I frequently hear when I talk to people about accessibility is, why do we need to do this for a small portion of our audience? And if all you think about is users who are blind, that is a relatively small portion of the audience. But the category of visual disabilities is actually larger than just blindness. It also encompasses color blindness. About 8% of North European males and about 5% of North American males are red-green color blind. That’s a substantial portion of any audience. And you have to consider that, particularly when you’re implementing your interface, to make sure that color is not the only signal that something is changing or something has meaning.

BJ:                    You need to be sure that the form of whatever it is also changes so it indicates that there’s something you need to pay attention to. Similarly, when you’re designing an interface, you need to be concerned with neurological disabilities; certain rates of flashing are known to induce seizures, and you don’t want to flash at those rates. When you’re thinking about authoring, again, you want to think about not just visual disabilities, but physical and cognitive disabilities. People with physical disabilities may be navigating by keyboard, similar to users on a screen reader. If they have, for example, carpal tunnel or epicondylitis, an inflammation of the tendon at the epicondyle in the elbow that makes it difficult to navigate by mouse, they may need to use the keyboard in that situation.

BJ:                    And you want to make sure as the author that you make a table, for example, that’s well defined to navigate. You want to make sure that your text content minimizes distractions for users with cognitive disabilities, like ADD or dyslexia. You want to make sure that it’s well organized, that there are a lot of bullets, that you keep your paragraphs short and tight, you keep your topics short and tight. And you really want to avoid using inline links, because those are distracting for both users on screen readers and users with cognitive disabilities.

GK:                   That’s a really interesting point about the inline links, because we’ve also seen that pose issues for reuse in DITA as well. But I don’t know that we’ve ever really seen it come up as an accessibility issue, but that is a really great point. And I know we’ve been encouraging a lot of companies that do heavy reuse to get their inline links into something more like a related links list at the end of a topic, rather than sprinkled all throughout. But that’s a really good point too, that it can also really have benefits on the accessibility side to do that.
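For readers who author in DITA, the pattern described here, pulling inline links out of the running text and into a list at the end of the topic, maps to the related-links section of a topic. A hypothetical example (the file names are invented):

```xml
<task id="clean-filter">
  <title>Cleaning the filter</title>
  <taskbody>
    <!-- Keep the steps free of inline links so the reading flow is uninterrupted -->
    <steps>
      <step><cmd>Remove the filter and rinse it under cold water.</cmd></step>
      <step><cmd>Let the filter dry completely before reinstalling it.</cmd></step>
    </steps>
  </taskbody>
  <related-links>
    <link href="replace-filter.dita"/>
    <link href="filter-specifications.dita"/>
  </related-links>
</task>
```

Many teams generate these lists from relationship tables in the map rather than hand-coding link elements, which keeps links out of the topics entirely and makes heavy reuse easier.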

BJ:                    Definitely. I have some personal experience with this. I have two children who both have cognitive disabilities, ADD and similar related disabilities. Watching them during the COVID pandemic, having to do their schoolwork remotely and use text content that has links embedded in it, they’ve found it easy to get distracted and lose the thread of what they’re working on. And that’s equally important for someone that is using content for a business application, or if they’re a consumer trying to, for example, place an order for a product or a service. You don’t want them to lose that thread and go off and do something else.

GK:                   Absolutely. I thought it was also really interesting what you said about making sure that your topics are short and focused because that’s another area where a lot of companies have come to us and said “we have legacy content that was written more in kind of a book like format, and we want to get it more modular.” And a lot of times, accessibility is a driving force behind that, especially as they’re going into more online forms of delivery, like Webhelp or HTML or a dynamic portal. So, that is a really interesting point too, of how they can author their topics in a different way that’s better for accessibility. So, that’s all covering the authoring side, but what about on the output transform development side, what can be done with the design and the way that you deliver that content to make it more accessible?

BJ:                    You need as the publishing designer to make sure that you implement whatever accessibility affordances that your authors design into their content. You also want to make sure you consider some of the color issues that I mentioned earlier, to make sure that those users have the correct signals for content changes, not just around color, but around form as well. You also, if you’ve got any kind of streaming content, streaming audio, streaming video, similar to this podcast, that you also make either a transcript or closed captions available so that users with auditory disabilities can follow along or even access the content. Because obviously, a user with an auditory disability is going to find it very difficult, if not impossible, to listen to this podcast and the transcript is going to make that available to them.
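On the publishing side, the standard HTML5 pattern for attaching captions to video output is a track element with kind="captions" pointing at a WebVTT file. A sketch, with invented file names:

```html
<video controls>
  <source src="install-overview.mp4" type="video/mp4" />
  <!-- The caption file gives users with auditory disabilities access to the narration -->
  <track kind="captions" src="install-overview.en.vtt" srclang="en" label="English" default />
</video>
```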

GK:                   Absolutely. One other question following on from that I wanted to ask is that one thing that we’ve seen sometimes with clients who are trying to take things from their legacy formats into something a little bit more modular is that they tend to have lots and lots of hierarchical nesting. And I wanted to get your perspective on any issues that might cause for accessibility. Because one thing we’ve seen is when you have many, many levels of headings, it can only go so deep in a visual representation before it gets really convoluted and confusing. And I think from an accessibility point of view, a lot of times our advice tends to be to try not to have your nesting and your hierarchy, whether it’s for headings or even list items to go too many levels deep. And I wanted to get your perspective on that as well.

BJ:                    That’s a good point, both for users with visual disabilities and users with cognitive disabilities. Excessively deep nesting is really problematic. For example, for a user on a screen reader, deeply nested content can be very challenging to navigate, especially when navigating by keyboard. A shallow structure is going to be much easier for that user on the screen reader to navigate. A user with ADD or executive function disorder is going to have similar problems navigating an excessively complex structure. It’s difficult for them to keep focus on their navigation of a very complicated structure.

BJ:                    So to make it easier for users with those disabilities, you really want to focus on keeping your structure relatively shallow. Three levels deep is about the deepest that I have seen accessibility experts recommend for any form of navigation. And by the way, I consider myself an advocate, not an expert. I advocate for implementing accessibility in technical communication content, but I’m not necessarily an expert on accessibility.
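
The shallow-structure guidance above might look like this in a DITA map, keeping the navigation tree no more than three levels deep. This is a minimal sketch; the titles and file names are hypothetical:

```xml
<!-- A map sketch with navigation capped at three levels; deeper detail
     stays inside the topics rather than in the navigation tree -->
<map>
  <title>Product guide</title>
  <topicref href="installing.dita">                 <!-- level 1 -->
    <topicref href="installing-hardware.dita">      <!-- level 2 -->
      <topicref href="mounting-the-unit.dita"/>     <!-- level 3: stop here -->
    </topicref>
  </topicref>
</map>
```

Keeping the map this flat means a screen reader user tabbing through the table of contents reaches any topic in at most three moves down the hierarchy.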

GK:                   Any other final advice or words of wisdom that you have to help people who may be starting to introduce accessible content or address accessibility for the first time?

BJ:                    One thing is to realize that you don’t necessarily have to do everything at once. Very often, when people look at accessibility, they feel overwhelmed. I usually recommend a three-pronged approach to implementing accessibility if you haven’t done it before. First, for anything new that you start now, make sure you implement accessibility and follow your accessibility practices. Second, for anything that you touch going forward, whether it’s to implement a new feature or to mitigate a defect, plan for implementing accessibility mitigations as part of that work.

BJ:                    And then for each period of work, whether it’s a sprint or some other form of work, plan implementation of accessibility mitigations in a section of your content to make that whole section accessible or to implement accessibility in that whole section. Also, work with your leadership to determine what aspects of accessibility you need to implement. It turns out that some accessibility mitigations you implement for certain disabilities might not be good for users with other disabilities. And it’s up to your leadership, your accessibility experts, and your legal team to determine which accessibility mitigations are most important for your organization.

GK:                   Thank you so much for all of that fantastic information and for joining us on the podcast today.

BJ:                    Thank you for having me. Glad to join you.

GK:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post DITA and accessibility (podcast) appeared first on Scriptorium.

]]>
Exit strategy for your content operations (podcast)
https://www.scriptorium.com/2021/11/exit-strategy-for-your-content-operations-podcast/
Mon, 08 Nov 2021

In episode 105 of The Content Strategy Experts podcast, Alan Pringle and Sarah O’Keefe talk about an exit strategy as part of your content operations planning.

“You need to be thinking about the what-ifs 5 or 10 years down the road while you’re picking the tool. Are we going to have flexibility with this tool? Is it going to be able to help us support things we may not even be thinking about or may not even exist right now?”

– Alan Pringle

Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about an exit strategy as part of your content operations planning. Hi, everyone. I’m Alan Pringle.

Sarah O’Keefe:                   And I’m Sarah O’Keefe.

AP:                   And today, Sarah and I are going to talk about something that probably doesn’t get enough attention, and that is an exit strategy for your content operations.

SO:                   Yeah, and it seems vaguely impolite to talk about the process of leaving a vendor when you’re planning and thinking about which tools to buy and which systems to build and how to build up your content operations. But I think it beats the alternative, which is not to think about leaving a vendor and then 5 or 10 years down the road, you have to exit and you are truly, truly in trouble.

AP:                   Yeah. And I will admit, I have caught glimpses of side eye from client stakeholders more than once when exit strategies came up during content strategy assessments. We’re talking about getting out of a tool before it’s even selected, and I can kind of understand the thought process: why are we talking about that now? Well, as you pointed out, you really need to talk about it during the planning phase. Otherwise, you’re going to be left with a lot of muck when something happens and you’re forced to leave a tool for some reason.

SO:                   Yeah. The side eye from the vendors is even better when we start asking awkward questions. But consider the alternative: we’ve got projects right now where we are looking at how to exit a particular component content management system and move a customer to a new system, because it’s time, and they need to move for good and valid reasons. And what we’re running into is that because the inbound migration 5 or 10 or 15 years ago didn’t really take into account the inevitable exit, we have huge migration costs. We’ve got relicensing costs. We’ve got rebuilding, recustomization, reintegration. It’s almost as bad as the original project of going from unstructured to structured content. It is super expensive if you don’t have a good path to exit.

AP:                   Sure, and let’s kind of take two steps back. The bottom line here is that planning to get away while you’re choosing your tools is a risk mitigation strategy. It’s a way to keep things from completely blowing up 3, 5, 10 years down the road. So it’s a way to lower your risk. As part of that mitigation of risk, let’s talk about some of the odds and ends that you really need to be thinking about as a way to develop your exit strategy.

SO:                   You know, we talk a lot about standards, and I think everybody listening to this knows that we do a lot of work with XML and a lot of work with DITA-based content. But with that said, you kind of want to start with this question of: am I going to use a standards-based tool… now we’re talking about something like a DITA CCMS or an XML CCMS… or should I use a commercial tool, which maybe isn’t standards-based per se, but has a really good setup that meets my needs? If you can find something that meets your needs out of the box and doesn’t really require customization, that should work for you. But I would argue that the more customization you’re planning and the more complex your setup is going to be, the more important it is to fundamentally have a standard underlying what you’re doing, because otherwise you’re going to be, again, in big trouble when you try to get out.

AP:                   So basically, the more you tinker, the bigger your problem may be when you do need to leave this tool set or tool ecosystem.

SO:                   Right, exactly, because whatever configuration customization thing you do will not transfer over to the next system, whatever that may be. So you sort of look at it and say, well, it’s a one off. I’m going to do this, and as long as we’re in Tool X, this will work, but as soon as we exit Tool X, all that work that I just did has basically zero value.

AP:                   Yeah, and it also sort of… It may force your hand where you are locked in with a system until you can do something about all those customizations, and in some cases you may not be able to do anything about those customizations.

SO:                   Yeah. I mean, we worry a lot about lock-in, getting to a point where you have built a system, a process, a technology stack, a tool set that is so specific and unique to that particular underlying layer that you have that it becomes impossible to get out. The more customization you do inside a tool, the more custom connectivity, the more integrations you build to other tools, the more locked in you’ll be, because, again, if you switch tools, you’re probably going to have to rebuild all of that, and it was daunting to do it once and it’s going to be more daunting to do it again.

SO:                   So the more you integrate and customize, the higher your exit cost is going to be. You have to balance that against the fact that obviously you’re doing the integration because you get productivity, you get value from it. How high is that value, and can you recoup that over, again, three to five years before you are potentially faced with having to switch tools for some external reason that you have no control over?

AP:                   Yeah, and this is where you really have to look at your business case, your investment. Are you going to recoup that money? And if you do, that’s great, but if you don’t and you keep basically investing in these customizations layer upon layer upon layer, it’s going to be very hard to unwind all that stuff, and more importantly, it is going to be hideously expensive, both from a money point of view and a person-hours point of view, to get that stuff recreated.

SO:                   Right. So to take a very concrete example here, if you have a DITA-based CCMS and you build style sheets, DITA Open Toolkit style sheets, to do all of your output, then those style sheets should transfer from one DITA-based system to another, with let’s say minimal-

AP:                   Yeah.

SO:                   … rework. Certainly some CCMSs do have some proprietary stuff going on that you have to either put in or strip out, but overall, something like 90% or 95% of your style sheet should just work if you move it out of one DITA-based system into another one. If, however, you build out your output using, let’s say, a proprietary publishing layer in a particular tool and then you switch tools, you have to start over. So that’s a concrete example of where vendor lock-in would cost money down the road.
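
The style sheet example above can be made concrete. Here is a minimal sketch of the kind of DITA Open Toolkit HTML override that tends to travel between DITA-based systems, because it matches on standard DITA class values rather than anything vendor-specific; the CSS class name is hypothetical:

```xml
<!-- custom.xsl: an HTML output override keyed to the standard DITA
     class attribute, so it is not tied to any one CCMS -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <!-- Wrap every DITA note element in a div with a custom class -->
  <xsl:template match="*[contains(@class, ' topic/note ')]">
    <div class="custom-note">
      <xsl:apply-templates/>
    </div>
  </xsl:template>
</xsl:stylesheet>
```

Because the match expression depends only on the DITA specification’s class mechanism, a customization like this is the portable part; the packaging around it (plugin descriptors, system-specific hooks) is the part that may need rework when you change systems.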

AP:                   And I think it’s important to point out here that it’s this exit strategy or these problems with not having an exit strategy are not just related to tools. There are some things that have to do with finances, contracts, so on, that also have a big part in these kinds of problems. So let’s step back from the tools a little bit and talk about the bigger-picture implications of finances and contracts and that sort of thing.

SO:                   Right. So if I’m… This is a case where the interests of the vendors selling commercial tools, software, and the interests of the customer, the organization buying commercial tools or software, do tend to diverge, right? Because if I’m the vendor, I want the longest-term contract possible. I want you to stay with me. I want you to pay me every year, as we all do, because that allows me then to reinvest in my tool and make it better and keep you as a long-term customer. It also reduces my risk as a software vendor, right? A five-year contract is better than a three-year contract is better than a one-year contract, and especially in a Software as a Service, in a SaaS world.

SO:                   So, okay. Well, that’s fine. But if I’m the customer, then you’re looking at an ROI of maybe two years or three years, and I don’t want to be locked into five years. Concrete examples of things that can happen. The software that I rely on gets bought by somebody else and they discontinue it. They take it over and they discontinue it. I have to exit. The organization that I work for gets merged with another organization, and then another organization, and suddenly we have not two systems, but, like, five different authoring workflows.

AP:                   And we’ve seen that. We have seen that.

SO:                   Yeah, I’m not actually making that one up.

AP:                   No, you’re not.

SO:                   So we have to consolidate because we’re supposed to actually deliver a unified customer experience, which is pretty hard to do with two or three or five CCMSs.

AP:                   Right, and also, most IT organizations are not going to stand for having three versions of a tool that essentially do the same thing when you do merge together. So from a financial point of view, it does make sense, and from a support point of view, to jettison two and stick with one.

SO:                   So two years ago, I picked a system. It’s a good system, but we got merged. Now we have a much bigger group. My sort of facts on the ground have changed. Or, we picked a system, it was fine, but now we’re doing more languages, or, oh, we need to integrate with this new chatbot thing that we’re doing over here in the corner and I don’t have any way of doing that out of the system that I’m currently in. And I didn’t account for that on day one, because it was 10 years ago and chatbots weren’t a thing, right?

SO:                   So those are the kinds of issues that you run into, where change or having to change, having to exit, is basically inevitable. At some point, a new requirement comes along, or your company changes, or you grow, or you shrink, or you change markets, you add localization, you add more localization and more languages. Something happens, and the thing that was a good fit for you is no longer a good fit for you. So what does it look like at that point to exit your business relationship with your existing vendors and your existing set of vendors? If you’re locked in for a really long time, you’re in trouble because you can’t do what you need to do.

AP:                   That lock-in can make things very difficult for you if you need more flexibility and you need to pivot and be nimble and really kind of change course a little bit if you are so locked down in something that doesn’t give you the ability to address those things. So basically you need to be thinking about the what-ifs 5 or 10 years down the road while you’re picking the tool. Are we going to have that flexibility with this tool? Is it going to be able to make some changes and help us support things we may not even be thinking about or may not even exist right now? Some delivery format that we don’t know about. Is this flexible enough to help address a concern we don’t even know about? I mean, that’s the kind of thing you have to be asking yourself.

SO:                   You know, to take a concrete analogy from today, this is exactly like the office space problem, right? Suddenly everybody’s working remote. There’s all this office space. Are we going to use it again? Are we going to come back to our office? The facts on the ground have changed, and maybe the thing that was selected is not the right thing anymore, but here we are with a 5-year or 10-year or 15-year lease, right? It’s exactly the same problem: you get locked in and then things change. Maybe everybody’s working remotely and your particular system isn’t really set up for a distributed workforce.

SO:                   And now back to the CCMS, right?

AP:                   Right.

SO:                   Most of them now, most of the clients we see, are in fact SaaS and not on premises, but you think about those kinds of issues. Well, what if you need everybody in the same building to use the system, and being in the same building is not in fact an option?

AP:                   Yeah. You have zero flexibility in a case like that, so it is definitely a problem.

SO:                   Yeah. It’s not that you made a bad choice. It’s just that there’s new information.

AP:                   Yeah. So let’s talk about dealing with, like you just said, that new information when you didn’t do that upfront planning. What are the ramifications of not thinking about the exit strategy when you’re essentially entering a tool?

SO:                   It just means that, at the inevitable point when you do have to leave the tool for whatever reason, you are then going to have to figure out, what are my options? How can I get out? How bad is the migration going to be? How do I dismantle or rebuild or recreate these integrations that I have? How do I think about the features that I have?

SO:                   One thing I would say is that I would caution people against trying to move from Tool A to Tool B and completely recreating or reproducing the old authoring experience. If you switch tools, and particularly if you switch authoring tools, authoring tools have different strengths and weaknesses, and what you want to do is take a tool and take advantage of its strengths. You don’t want to ignore its strengths because you never did it that way before, and you don’t want to rely heavily on its weaknesses, again, because that’s how we’ve always done it, right? So there’s some change that has to happen there. The authors will probably need some training and some help to shift over, but you really want to think about, well, what’s in here and what’s the state of the art and what are the new things that I can do?

SO:                   But largely, if you have to migrate or if you have to change tools, what you have is, at that point, a tactical problem, right? You just have to do it, and you have to look at the facts as they are, the features that you have available to you, the options that you have, and figure out what to do. But I think I would argue that exit strategy and risk mitigation is something that you should be thinking about or should have been thinking about before the tools and the technology stack and the processes were originally set up. And of course, if that’s not the case or it was your predecessor, then that’s just how it is.

AP:                   Bottom line, an exit strategy should be part of your content strategy. So while you’re doing the assessment, you need to be thinking about this and not dealing with the ramifications of not considering it 5 or 10 years later.

SO:                   It’s a lot cheaper to do it before you build. Yeah.

AP:                   Exactly. And on that note, I think we will wrap up. Thank you, Sarah, very much.

SO:                   Thank you.

AP:                   Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Exit strategy for your content operations (podcast) appeared first on Scriptorium.

]]>
The Scriptorium Content Ops Manifesto (podcast)
https://www.scriptorium.com/2021/10/scriptoriums-content-ops-manifesto-podcast/
Mon, 11 Oct 2021

In episode 104 of The Content Strategy Experts podcast, Elizabeth Patterson and Sarah O’Keefe discuss the Scriptorium Content Ops Manifesto.

“The bigger your system is and the more content you have, the more expensive friction is, and the more you can and should invest in getting rid of it.”

– Sarah O’Keefe

Transcript: 

Elizabeth Patterson:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about content ops and Scriptorium’s Content Ops Manifesto. Hi, I’m Elizabeth Patterson.

Sarah O’Keefe:                   And I’m Sarah O’Keefe.

EP:                   And so we’re just going to go ahead and dive right in. Sarah, let’s start off with a definition. What is content ops?

SO:                   There are lots of great definitions out there written by people smarter than me, but the one that I really like is pretty informal. Content ops is the engine that drives your content life cycle or your information life cycle. So that means the people, the processes and the technologies that make up your content world. How do you create, author, edit, review, approve, deliver, govern, archive, delete your content? That’s content ops.

EP:                   So Scriptorium recently published a Content Ops Manifesto. And in this manifesto, you describe the four basic principles of content ops. So what I want to do is just go through those one by one, and I will of course link the manifesto in the show notes. So the first one you have in the manifesto is, semantic content is the foundation. What exactly does that mean?

SO:                   I wanted in this manifesto to take a small step back from hands-on implementation advice, the things that we tell people to do (“you need to go through and build out your systems, and here’s how you make them efficient”), and focus instead on the principles of what that looks like without getting too much into the details. So with that in mind, each of these principles is intended as a guidepost that would apply to any content operation that you’re trying to build out. Semantic content is information that essentially carries knowledge about itself, or is self-describing. Now, this could be as simple as a word processor file, where you have some paragraph tags that say, “Hello, I’m a heading one,” and “Hello, I’m a heading two,” and “Hello, I am a body tag,” that kind of thing. So you need to have tags, labels of some sort that describe, for each block or little chunk or string of text, what that text is.

SO:                   Is it a heading? Is it body text? Is it a list or part of a list? That kind of thing. So that’s tags. Now, there are lots and lots of ways to do tags across every tool that you could imagine, but you need some sort of semantic labeling. Second, we need metadata: information about the information itself. Usually this is classification tags, so things like, “I am a beginner-level task,” or even, “I am a task. I was last updated on this date. I belong to this product or this product family.” So metadata provides you some additional context about the information and describes it further; it doesn’t really describe the information itself, but rather where the information belongs or who should be using it. If you’re struggling with metadata, take a step back and think about: if I were searching for this information, what kind of labels or tags would I want to use to find what I’m looking for?

SO:                   And then we have hierarchy and sequencing. So hierarchy means that you’re looking at the structure of the content from the point of view of which things are subordinate to which. So let’s say that you have an installation procedure and there are six things you have to do in a specific order and each one of them is a task or a process of some sort. Well, you need to be able to say these six things are in a group. There’s the installation process, which consists of these six things, that’s hierarchy. And then sequencing is… And they have to be done in this order, right? You have to do one, then two, then three, then four. You can’t start with four and then do one or your installation won’t work. So there’s this process or this idea that you’re collecting up information. And when you do these bigger collections above and beyond a tiny little string, you need hierarchy and you need sequencing.
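
The three ingredients described above, semantic tags, metadata, and hierarchy with sequencing, can be sketched in DITA terms. This is an illustrative fragment, not a complete file set; the topic ID, file names, and product name are hypothetical:

```xml
<!-- Semantic tags and metadata: the topic says what it is (a task),
     and its prolog carries classification metadata -->
<task id="install-server">
  <title>Install the server</title>
  <prolog>
    <metadata>
      <audience type="user" experiencelevel="novice"/>
      <prodinfo><prodname>Example Product</prodname></prodinfo>
    </metadata>
  </prolog>
  <taskbody>
    <steps>
      <step><cmd>Run the installer.</cmd></step>
    </steps>
  </taskbody>
</task>

<!-- Hierarchy and sequencing: the map groups the installation tasks
     under one parent, in the order they must be performed -->
<map>
  <title>Installation</title>
  <topicref href="install-overview.dita">
    <topicref href="install-step-1.dita"/>
    <topicref href="install-step-2.dita"/>
    <!-- ...steps 3 through 6, in order... -->
  </topicref>
</map>
```

The tags carry the semantics, the prolog carries the metadata, and the map carries both hierarchy (nesting) and sequencing (document order).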

EP:                   So the second principle that you touch on in the manifesto is that friction is expensive. When we’re talking about friction in this sense, we’re referring to points in the process that slow down productivity. So what are some common points of friction, and what are some things you can do to eliminate them?

SO:                   Yeah, so friction is any time you have really human intervention, right? Because computers are very, very fast at what they do and humans, well, we have other skills, but-

EP:                   We make mistakes.

SO:                   Yeah, so friction is really any time you have human intervention, right? Because computers are very, very fast at what they do, and humans, well, we have other skills, but-

SO:                   And we think these things have gone away, but there are a non-zero percentage of people out there who are doing reviews by the process of, “Let me print this thing out and go give it to you and have you write on the paper and then give me the paper back.” That introduces friction. The problem with friction is if it’s just you and me and we’re working on two or three pages of stuff, and we’re in the same location, not that big a deal. But as you scale, as you have more people and more documents and more languages and more variants, that manual process that used to be okay when we had 10 or 15 or 50 pages of content, becomes unworkable, right? Because it slows you down. So when we talk about friction in a content ops context, what we’re usually talking about is where are these points of manual inefficient intervention and how do we get rid of them?

SO:                   And when you start talking about eliminating friction, we fall back on things that you’ve heard previously in a non-content ops context, automated formatting, automated rendering across all the different formats that you’re looking for, reuse content instead of copying and pasting, connect systems together so that you can share content efficiently, review workflows that are not human dependent and not paper dependent, but rather roles. I need somebody with the role of approver to look at this thing. I don’t need it to be you personally, Elizabeth, and as it happens, you’re on vacation this week, right? So I don’t want to send it to you. And I certainly don’t want to send it to your email specifically. What I want is for the system to say, “Hey, this thing is due for a review and here are the three people that are authorized to do it.”

EP:                   So friction, I mean, it’s going to take time and effort to eliminate that friction, but it’s definitely worth it in the long run.

SO:                   The bigger your system is and the more content you have, the more expensive friction is, and the more you can and should invest in getting rid of it. Yeah.

EP:                   Definitely. So the third principle outlined in the Content Ops Manifesto is to emphasize availability. What exactly does that look like?

SO:                   So is content available? What that means is, if I am your content consumer and I need a particular piece of content, can I even access it, or have you locked it behind a log-in that I don’t know about or that I don’t have credentials for? In that case it’s literally not available to me. The information exists, but I can’t get to it. So that’s question one: have you made it available? In many cases, availability in that aspect is actually synonymous with “if I Google, will I find it,” right? Because I don’t necessarily know where you’ve stashed it, but if I can find it, then it’s available to me. Now, there are outlier cases where you do need to put things behind log-ins for good and valid reasons, and that’s fine provided that your end audience knows, “Oh right, I have these credentials. I’ve signed up for the subscription. That’s where I’m going to go look for the information.”

SO:                   That’s fine. So where do you put it? What are the rights to get to it, right? Do the right people have the right access and do they know about it to get to it? Now, the second factor with availability is actually accessibility. And here, I mean, in the technical sense of, can I consume this content successfully? So there are a bunch of aspects of accessibility which usually have to do with physical limitations. So we’re talking about things like, I’m colorblind. Did you design the content in a way that I can still use it, even if I have some vision limitations? Is the content consumable by screen readers so that if I have a vision impairment, I can use it? If we’re doing a podcast, is there a transcript so that somebody with a hearing impairment or somebody who’s deaf can read the transcript instead of needing to use hearing?

SO:                   So you get into this question of, have you provided ways for people to access the information that allow for the possibility that they have some physical limitation? There are some others around keyboard navigation, right? Can I tab through the buttons instead of having to click on them? Have you allowed for people who have tremors or issues with fine motor control, for whom specifically clicking on a tiny little button on a screen somewhere is maybe not an option? Is there a mobile option as opposed to a desktop? Maybe I’m accessing all your content from a mobile device, and if you haven’t thought about that, then I’m going to have problems trying to read the teeny, teeny tiny print on my not-so-big phone screen, right?

EP:                   And we’ve probably all experienced that and it is frustrating.

SO:                   It’s so annoying. So when we talk about availability, we’re talking about literal availability, like where did you publish it? And do I have access? Talking about accessibility and all the various facets of accessibility, there are lots of useful guidelines on that out there that are more detailed. And then we also need to think about languages and localization. If I’m a non-native English speaker and my comprehension of your text is going to be much, much better in French, which by the way, I can assure you, is not the case for me, you have an obligation to provide that content in French, if you want to market to your primary French speaking audience, right?

EP:                   Absolutely.

SO:                   So you need to think about languages. Localization also ties into the question of, well, if I’m writing content for a particular locale, a particular location, you need to think a little bit about what that looks like.

SO:                   So to take a really basic example, if you’re marketing to somebody in Florida, you probably don’t need to sell them snow pants in October, right?

EP:                   Probably not.

SO:                   They are not buying snow pants in October. So that’s like a really basic localization principle that… You want to think about your market and how your market differs by geography or by locale. That gets tied in with language. But they’re not really the same thing, right? You’ve got geographic stuff, you’ve got regional things and you’ve got different regulatory schemes. So for example, to take the infamous example, any legal advice that you’re giving somebody always ends with “comma except in Louisiana.” So, oh, also don’t give anybody legal advice because we’re not qualified, right. But you have to think about those kinds of locales and the different regulatory schemes to make sure that you’re covered and you’re not giving people bad advice based on making the assumption that we all live in the same spot.

EP:                   Right. So the last principle in the Content Ops Manifesto is to plan for change, which is something that we touch on in a lot of the posts that we publish and the podcasts that we publish. So how do you plan for change?

SO:                   We really are annoying about change management.

EP:                   It’s so important.

SO:                   It’s our favorite word, our favorite phrase, or actually it’s our second favorite phrase because our first favorite phrase is, “it depends.” But let’s say it this way. When you start thinking about content ops and building up these processes and these technologies and these systems that you’re going to use to drive your content engine, the number one thing that I would advise you to do when you do this is to think about your exit strategy. So in other words, I am buying product X and I’m going to put all my content into it, or I am implementing system Y and I’m going to put all my stuff into it and that’s going to drive what we’re doing. I want you on day one, when you’re going into this really cool system that you’ve decided is going to be the be all end all for at least the next couple of years, to be thinking about what if I’m wrong or what if things change?

SO:                   What if that company gets bought by a competitor and they discontinue the product? What if the system that you put in place doesn’t work, or a new requirement comes along and your system can’t accommodate it? You need to be thinking on day one about, “Okay, well, I’m going to go in, but if I have to get out, do I have a way of getting out? What’s my exit strategy? What is the cost of exiting this particular system or process? What is the cost of changing tools and technologies?” I’m not saying you should have a foot out the door. It’s more that we know that change is going to happen.

SO:                   Change is totally inevitable and somebody is going to come along with a new requirement that we’ve never thought about before, and we’re going to have to meet the moment. And so we need to know A, what things am I picking and are they extensible? Can I add on, can I accommodate these new requirements inside the system I’ve built or selected? And if not, how expensive is it going to be to get out? Now, if the answer is, it’s going to be super expensive to get out, but this thing meets 100% of our requirements right now, it’s extensible in these 15 ways and I don’t see a reason that we would need to get out, that’s okay. That’s a decision that you’re making, but you need to do a strategic assessment of, what is my exit strategy and what are the implications of needing to exit from whatever it is that I’m about to pick?

EP:                   Right. And I think exit strategy is a good place to wrap things up. So thank you, Sarah.

SO:                   Thank you.

EP:                   And if you would like to read the Content Ops Manifesto that will be linked in our show notes. So thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The Scriptorium Content Ops Manifesto (podcast) appeared first on Scriptorium.

Transitioning to a new CCMS (podcast)
https://www.scriptorium.com/2021/09/transitioning-to-a-new-ccms/
Mon, 27 Sep 2021

In episode 103 of The Content Strategy Experts podcast, Alan Pringle and Bill Swallow share some considerations for transitioning to a new component content management system (CCMS).

“You need to look at the requirements you have now. Are they being supported or not supported? Do you see this system helping you move forward with your content goals in three to five years?”

– Alan Pringle

Transcript: 

Bill Swallow:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we share some considerations for transitioning into a new component content management system or CCMS. Hi everyone. I’m Bill Swallow, and today I’m here with Alan Pringle.

Alan Pringle:                   Hello everyone.

BS:                   And we’re going to jump into a discussion about when we should be switching our component content management systems. So I think Alan, I’ll probably start off with a question to you. How do you know it’s time to move on with your existing CCMS?

AP:                   Well, everybody’s situation is going to be a little different, but in general, there are some things you can look out for as warning signs that you may need to reconsider your CCMS. One of them: sure, things worked great when you stood the system up, but now, a few years later, you’re finding that it is not scaling to meet your needs. You’ve got a whole lot more content in it. There are feature sets that may not be there that would be very helpful to you. So it’s a matter of: is that system keeping up with your growth and your changes? Is it keeping pace?

AP:                   In regard to the feature sets I just talked about, if you discover that you’re spending a lot of time doing customizations to make things work for you, that may be a warning sign that you need to take a look at what some other systems offer as out-of-the-box features, because you do not want to be in a loop where you are spending a lot of time and money investing in a system by basically doing patchwork add-ons to it. That’s not sustainable in the long run. If there is a system that has the feature you’re looking for out of the box, it may be worth considering that system instead of doing patchwork add-ons to your existing setup.

AP:                   We’ve also seen cases where a client was involved in a merger, and because of that, there were multiple component content management systems in the mix from the technical publications departments of the different companies that merged. So when you find yourself in a situation where you have acquired another company, or you’re being acquired, you may have overlap in your tool ecosystem, and in general, a company is not going to want to support two tools that do the same thing.

AP:                   So you have to look, from a bigger business point of view, at the overarching goals and efficiencies that the company wants to achieve. And some of those efficiencies may be: we’re not going to have two CCMSs here, we need to migrate everything to one. And I think it’s worth mentioning in that case, just because you’ve got two systems in house, you may want to look at a third option, so that you’re really not picking winners and losers, because everyone has to move. I am not saying that is the perfect solution for everybody, but it’s absolutely something you should consider if you participate in a merger and have some overlapping systems.

BS:                   So everyone shares the pain pretty much. Okay, so let’s say we made the decision that we have outgrown our existing CCMS. How do we start to evaluate new options?

AP:                   Well, you need to gather information, and you can do it internally, kind of be your own consultant, or you can hire someone to come in to help you do this. Basically, you need to take a look at the requirements that you have now and how well they’re being supported, or not supported, as the case may be. And kind of break out your crystal ball: where do you see things in three to five years? Do you think this system is going to support you and some new things you may need down the road? So that’s the kind of thinking you have to do. How well are you being supported in the present, and do you see this system helping you move forward in three to five years with your content goals?

BS:                   And I think once you start putting all this information together, at that point, you may want to consider doing a request for proposals from multiple different vendors. And in that case, definitely include your existing vendor because there may be something that you may not currently have in your existing configuration that they may be able to offer as well. Plus you’ll be able to use them as a baseline against your other options.

AP:                   Yeah, it’s not necessarily that you have to immediately assume that your current vendor is no longer going to be part of the picture. There may be a chance that they have new offerings, new features like you mentioned, and you can use the RFP process to kind of uncover some of that too. And from a business procurement point of view, I am sure your procurement department is not going to be disappointed to get a chance to renegotiate a contract. That’s just gross. And I know it sounds very matter of fact, but it’s the truth. It’s a matter of renegotiating and looking at a tool and seeing if it’s supporting things and what kind of funding is going to be required with any update that you have with that tool, if you choose to stick with it.

BS:                   Okay. So we’ve identified issues with our existing CCMS. We’ve gone through and identified a new option, whether it’s to stay with the existing one with some changes or to move to a new system. What are some of the common issues or roadblocks that you might encounter as you start to switch systems?

AP:                   This is true anytime you switch technology. Even if, for example, you were on an Android phone and you changed to iOS on an iPhone, there are going to be some features in one operating system that are not exactly equivalent on the other side; you’re going to lose some features and you may gain some features. So you may have something set up that is very specific and tailored to the particular tool, the particular CCMS, you’re using now. Is there anything in that that is not going to translate well or come over to the new system? And there are several components to this. Are there features you were using that are specific to that particular CCMS and not supported, because it’s a proprietary feature, in whatever you’re moving to? That’s one consideration. And then another side of that is, do you have any connectivity, any connections to other kinds of systems?

AP:                   And this can include a learning management system, a digital asset management system, a translation management system. Are those connections that you have, can you get the equivalent setup in the new CCMS? Are there automatic API connectors from your new system to these things? Are you going to have to rebuild or completely recreate your existing connectors when you move to a new system? So you’ve got to look at anything that is very particular to the CCMS that you’re currently in and how well that will transition over. And then you have to think about your bigger tool ecosystem and how those things are connected and how you’re going to basically reconnect everything together when you switch to a new CCMS.

BS:                   So I’d also expect in this case, if in your existing CCMS, you’ve been making a lot of customizations on your own and hacks and whatever else to get things to work properly, you’re probably going to have to find either a resolution for those or unwind them, even in your content, perhaps, as you start migrating to a new system.

AP:                   Exactly. And this goes back to what we were talking about earlier, where if you have done a ton of customization to your CCMS, at what point do you say, “Enough is enough. I can’t keep adding these custom hacks to this tool, because it’s becoming inefficient”? That very much ties into what you’re talking about here.

BS:                   Okay. So we’ve talked about problems in the existing, evaluating new options and problems when you’re probably migrating. So what can you do to make this transition a success?

AP:                   Well, this is a tiresome piece of advice, but it’s solid advice: you need to make a transition plan. This is not something you can just jump into. You need to take a look at your “real work schedules,” because you do not want to be making this transition when you have deadlines, deliverables, or anything going on at your company like a new product release coming out. That is not the time to do this. So you need to step back, look at what’s coming schedule-wise in the next few months, figure out when would be a good time to do this, and then start giving some thought to, “Okay, what might be the first thing that we can try to move over?” You may want to do a pilot and move over just some of your content to be sure that everything has stood up correctly, instead of going whole hog and doing everything at once. Those are two things that immediately pop into my mind.

BS:                   And probably keep both stood up and keep using the old one as your production system until everything is verified as complete on the new one.

AP:                   Absolutely. I think you also need to take a very deep breath and realize things are going to go wrong, and be flexible and ready to deal with things that go sideways, because they will. There are going to be some things you may have an inkling will be a little challenging, but there may be other aspects you haven’t even considered that cause you problems. So you can’t go in with a super rigid idea that we must hit this exactly right, because in general, technology is going to wag its finger in your face and say, “I do not think so. I’m going to cause you a problem here.” Some planning can minimize those things, but I don’t know about you, I’ve yet to see any transition from one tool to another, CCMS or otherwise, that was perfectly smooth with no hiccups. I have yet to see that ever happen, period.

BS:                   No, there’s no golden system. Going back to your phone analogy between Android and iPhone, there are excellent things about each one of them, but they also both have their problems.

AP:                   Exactly. And then finally, because you are moving to a new tool, you’ve got to realize the skills people had in the old toolset are not going to be quite as useful. So you’re going to have to provide training and support to be sure people can remap the skills they had from the old tool to the new tool. And that may be a little rough, especially if people have invested a lot of time and thinking into workarounds to get things to work in the old system, and those things are no longer available. That’s a lot of muscle memory you’re going to have to undo with some training and best-practice information, so people don’t keep doing those workarounds, because they’re no longer needed.

BS:                   All sound advice. And I think we could probably wrap up here. Thank you, Alan.

AP:                   Sure.

BS:                   And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 


The importance of terminology management (podcast)
https://www.scriptorium.com/2021/09/the-importance-of-terminology-management-podcast/
Mon, 13 Sep 2021

In episode 102 of The Content Strategy Experts podcast, Sarah O’Keefe and Sharon Burton of Expel talk about the importance of terminology management.

“If we don’t give customers the information to understand what we’re telling them, they won’t be successful and we have failed.”

– Sharon Burton

Featured image: bbbrrn © 123RF.com

Transcript: 

Sharon Burton:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. My name is Sharon Burton, and I’m your guest today.

Sarah O’Keefe:                   And my name is Sarah O’Keefe, and I’m hosting. I delegated reading our bumper to Sharon because, well, there were some attempts and it didn’t go well. But, hopefully the rest of this episode will be more professional because we’ve got Sharon in charge.

SO:                   In this episode, I want to talk to Sharon about terminology management. Sharon Burton is a longtime friend of mine and also a senior content strategist at Expel. Sharon, welcome and thank you for taking on guest hosting.

SB:                   You’re welcome. I’m very happy to contribute to the overall mirth levels.

SO:                   This is going to be trouble.

SO:                   Tell us a little about yourself and your job at Expel, and what Expel does.

SB:                   The honest to goodness truth is I’ve done pretty much everything there is to do in this field. At least, it certainly feels like that.

SB:                   What I’m doing at Expel, I’m salaried at Expel, which is also new. I’ve not had a lot of salaried jobs. We work in the cybersecurity space. This is a new space for me, which is exciting. One of the things I love about our field is you always get to learn new things. I’m learning about cybersecurity. What we do is we are your offsite security management staff. There are groups of people in mid to larger companies called a security operations center.

SB:                   A medium to large company will have a group of people staffed called SOC, S-O-C, security operations center. Those kinds of people will monitor all of your hardware, and your software, and make sure that the people logging into the networks are the right people and all of that. The problem with that, and there is a big problem with that in the cybersecurity industry, is multi-fold.

SB:                   Number one, there aren’t enough people out there who are trained to do this, flat out aren’t. There’s a huge deficit of people. Number two, the people in the security business, because there aren’t a lot of them, they job hop a lot. Because as soon as they get bored, they can go get another job doing something interesting elsewhere, so you have a lot of staff turnover. And number three, sitting there and watching the logs of all of this stuff, all day long, is mind-numbingly boring. So you have staff shortage, mind-numbing boring and people job hop.

SB:                   What Expel does is we are your, if you will, offsite SOC. But we’ve got a whole bunch of tools and technologies, and all kinds of fun things that we’ve developed, so that we don’t bore our people. We’ve got all kinds of bots that do exciting and fun things. And we’re a young company, we’re only five years old. When I started, they knew I was the first content anybody, and they hired me because they knew that to move the product forward in any way, shape, or form was going to require content strategy. Not just content, lots of people are creating content here, but an actual strategy.

SB:                   When I first started here, I met with the CEO, and the COO, and a couple of other people and I said, “Okay, you hired me. Let’s talk about why you hired me. What problems do you see as we need to solve?” Both the CEO and the COO, two of the three founders, said, “We’re a content company that happens to make some products, and the products happen to be in the cybersecurity space. But, if we don’t provide content to our customers to understand what a threat is, things that we’ve handed to them and said this is a threat, you should go fix this. Or, you’ve got an intrusion, you should go fix this. If we don’t give them the information to understand what we’re telling them, they won’t be successful and we have failed.”

SB:                   I went, “So, you mean that we’re a content company that happens to make software?” And they said, “Yes.” I went, “I think I’m in love, perhaps more than HR is comfortable with.”

SO:                   Awesome. I wanted to talk to you about terminology management, because you’ve … Well, as you’ve said, you’ve done a lot of things but we had to pick a topic, so we came up with this one because I think it’s near and dear to your heart, and it’s also maybe not well understood.

SO:                   Can you tell everybody just what is it? What is terminology management and why does it matter?

SB:                   There are certainly people, much smarter people than I am, who could talk about this. But, I’m going to talk about it because it’s something that I’m currently involved in, part of the foundation I’m building here at Expel.

SB:                   We all know that there are a lot of words. I have been accused of having them all and using them all, all the time. Go ahead and laugh, Sarah, I know. But, there are a lot of words. Technical writers know that we should call something the same thing all the time. Unlike when we were taught to write in school when we were told, “Oh no, our reader will get bored if you always call it a field. You should call it other things.” The reality is the technical writing, techcomm community, we know once you decide to call that thing a field, by God, that thing needs to be called a field every time because otherwise, you’re going to confuse your users. That’s terminology and that’s making your terminology consistent.

SB:                   But, companies are starting to figure out it isn’t just the user guide or the online help where this matters. It matters when the salespeople are talking to customers or potential customers. It matters when marketing is talking to potential customers or to customers. It matters on the blog, it matters in the post-sales content, which is my happy place. But, it also matters in the customer support database, it matters in the UI. Because if we don’t figure out what stuff is called and then use those words, we look like we’re all self-employed, or something. There’s that customer experience.

SB:                   There’s also the translation. If you don’t have your terminology managed and somebody walks in your office and says, “Oh, we just closed a deal and all we have to do is deliver in German, in six weeks.” If you’re laughing thinking, “Oh, that never happens,” Sarah and I are here to tell you we don’t have enough numbers to count the number of clients we’ve dealt with who have that exact situation happen. “What do you mean we can’t deliver in German? Well, we’re just going to translate the content.”

SB:                   Well, if every time we say “click OK,” everybody says it a different way … I worked with a client once where I did a quick analysis on just a subset of the docs, and they said “click OK” in over 50 different ways. 50. “Click the OK button, click OK button, click OK, select the OK button.” Well, if we had translated that at 25 cents a word, per language, we would have paid for every one of those variants. Because it’s $50 million in German in six weeks, we don’t have time to go fix that, so we translate it right now. We fix it after we translate it, which means your translation memory is now no good, or at least of limited value. It gets so expensive.

SB:                   Now, at Expel, we are not currently localizing, and I don’t know that we ever will, because the international language of cybersecurity is English. But we have to act as if that person is going to walk into your space and go, “German, $50 million, six weeks.” If you’ve done your terminology management, if you’ve gotten your language groomed so that it’s all consistent, you can go, “Well, that’s not optimal. I think we’ll be okay,” instead of putting your head down on the desk and bursting into tears, which is usually the reaction.

SB:                   That’s why terminology management matters, because it’s customer experience and it’s literal dollars and cents.

SO:                   I think even setting aside localization, the point of the thing should always be called the same thing. It’s not a baby seat, a car seat and an infant seat, and a safety seat.

SB:                   And, a booster seat.

SO:                   It’s one of those. And a booster. Yeah, it’s one of those. Now frankly, I’m not in that particular business and I don’t particularly care which one you pick, but I care a lot about you picking one of them.

SB:                   Yeah.

SO:                   Even if you’re only operating in one language, there’s that consistency issue of, as you said, using the same term for the same thing so that people don’t get distracted wondering, “Wait, why did they change? Why is it different in this document, or in this chapter, or over here? Or, why is the marketing content different from the techcomm support content? This is weird.”

SO:                   How do you do this? We have convinced you, the gentle listener, we hope, that you should consider terminology management. But, what do you do? What’s your first step?

SB:                   Well, I’m in a really fortunate spot because my company got it, they literally hired me to do this stuff. I’m in an absolute sweet spot. I just said, “Well, it’s time for us to now take on the phase of the project plan where we start doing terminology management.” They went, “Oh good, I was wondering when we were going to get there.”

SB:                   But, I have also worked with places that needed to be convinced this was a thing. One of the ways to do it is to get a subset, a representative subset of docs. I don’t mean go look for the worst examples, I mean are you using Flare? Good, then grab the big Flare project. Flare is a great way to do this, because you probably have Analyzer. If you have Flare and you have Analyzer, and there are other ways to do this, but this is a great way to do it if you’ve got that tool, take a look for phrases that are almost but not quite the same. Analyzer will let you run that report. That’s how I got the click okay, all the different ways to click okay. You start getting your arms around phrases that are used all the time.

SB:                   I’ve spent much of my career in software, so things like click okay, accessing menus, the basics of the style guide, but the style guide is like an honor system. There are better ways to do this.
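[Editor’s note: for readers without Flare and Analyzer, a rough stand-in for that “almost but not quite the same” report can be sketched with Python’s standard library. This is illustrative only; the sample phrases and the 0.5 similarity cutoff are assumptions, not anything from Analyzer.]

```python
# Rough sketch of flagging near-duplicate phrasings, similar in spirit
# to the "phrases that are almost but not quite the same" report
# described above. Sample phrases and the 0.5 cutoff are assumptions.
from difflib import SequenceMatcher

phrases = [
    "Click OK.",
    "Click the OK button.",
    "Click OK button.",
    "Select the OK button.",
    "Choose your language.",
]

def normalize(s: str) -> str:
    # Case and trailing punctuation shouldn't count as differences.
    return s.lower().strip(" .")

def variants_of(target: str, candidates: list[str], cutoff: float = 0.5) -> list[str]:
    """Return candidate phrases whose similarity to target meets the cutoff."""
    return [
        p for p in candidates
        if SequenceMatcher(None, normalize(target), normalize(p)).ratio() >= cutoff
    ]

print(variants_of("Click OK.", phrases))
```

Note that a plain character-similarity cutoff is crude: it catches “Click the OK button” but misses “Select the OK button,” which is exactly why dedicated terminology tools do more sophisticated matching.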

SO:                   For those of us who don’t work at Expel, what kinds of challenges might you have come up against in the past, when working at other less enlightened places? What kinds of pushback, or problems or issues do you run into when you tell people, “We are going to manage terminology?”

SB:                   I am formulating a hypothesis about the tech industry. I’ve been working on this hypothesis for about five years. That most software companies, for sure, don’t realize that they are in the content business. They think they’re in the software business, they think what they’re selling is a software product. But in fact, they’re in the content business. Now Expel, as I said, I’m incredibly fortunate, I know how fortunate I am.

SB:                   As I’m looking at where thinking is changing around content, and the value of content pre and post sales, I’m thinking that companies are more in the business of content and they just happen to create a product that they sell. But it’s got to have a large content ecosystem, otherwise it’s not going to be able to be used. I think about that, but I’m realizing that an awful lot of companies don’t know they’re in that business. It can be very difficult to get a company to recognize they need to control their terminology because they don’t see a value in the content, or they see a limited value in the content.

SB:                   One of the ways I’ve tried with other customers, other clients, a lot of places you can convince them to have a style review for content before it gets released. That means that a human being has to read it, read the content, they have to have the style guide memorized, and then they have to make sure that that content meets the style guide requirements. For terms, for how we talk about stuff, all of that stuff. That’s time consuming, labor intensive and fraught with errors because humans are full of errors. We want to be perfect but the reality is we are not, we are wonderfully imperfect.

SB:                   So you take the number of people who are doing that, the percentage of their time they are doing it, and then you figure out what that costs the company, fully loaded. It’s a straight business case. And then you look at terminology management products; there are a couple of them out there. And then you start looking at what it’s going to cost to do this programmatically, versus having people do it onesie-twosie. That is a way to go about it, but a company has to know it hurts before it’s willing to stop the bleeding.
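[Editor’s note: the “straight business case” Sharon describes can be sketched in a few lines. Every figure below is a made-up assumption for illustration, not data from the episode.]

```python
# Straight business case: manual style review vs. a terminology tool.
# All figures are illustrative assumptions, not numbers from the episode.
reviewers = 4                      # people doing manual style review
review_share = 0.15                # fraction of their time spent on it
loaded_cost = 120_000              # assumed fully loaded annual cost each

annual_manual_cost = reviewers * review_share * loaded_cost

tool_license = 25_000              # hypothetical annual tool license
residual_share = 0.03              # review time remaining with tooling
annual_tool_cost = tool_license + reviewers * residual_share * loaded_cost

print(f"Manual review: ${annual_manual_cost:,.0f}/year")
print(f"With tooling:  ${annual_tool_cost:,.0f}/year")
```

Plug in your own headcount, time share, and loaded costs; the comparison only persuades when the inputs are yours rather than these placeholders.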

SO:                   So there’s some pushback, just on the grounds of content is not important, which is pretty problematic.

SB:                   It is pretty problematic. The good news is, over the course of my career, we’ve gone from tech writers are secretaries to content strategy, bringing people in as senior content strategists, because they recognize how important content is. Now, that’s the arc of my career and it’s a beautiful thing. But, that level of enlightenment is not, perhaps, pervasive throughout the land. There are still kingdoms who don’t feel that content is as important. We’ll get them, eventually. Eventually, they will figure it out. It’s just it can be frustrating and hard.

SB:                   Because I think I am of the belief that what we do matters, in the content world. It matters because it lets people have content that will let them do the important things that they’re trying to do. Because of that, they deserve the very best that we, as an industry, and we as individuals, can give them. I really believe this.

SO:                   That was something I wanted to ask you about. We see terminology and terminology management introduced very often in the context of, “We need to mature our processes, we’re going to put in place some sort of content management, content management system. Then, we’re going to formalize our style guide and embed it using terminology management software,” those kinds of things.

SO:                   But, I feel like is there a best practice here? Which one should you do first? Do you do the terminology work and then the content management? Or, do you do content management first? Or, do you do them simultaneously? What do you think?

SB:                   At Expel, because we’re young, because we don’t have a full-time tech writer, we’ve got a contractor who we love, we’ve not built up that side of the house. Again, we’re young, we’re still in startup mode, we’re still in, “Take that hill! Okay, let’s take that hill.” We are not ready for content management, in the bigger picture. We’re building up a knowledge base. There was no knowledge base when I started here 10 months ago, there is now a knowledge base. I’m very proud of that. Is it where I want it to be? No. But, it’s only been on its feet for, what, April, so five months, six months. Toddling along now, it stopped the zombie walk.

SB:                   I wanted to get the style guide and the terminology management done early, early, early because I think this is the foundation that we can build the house on. I have been where we had the content management stuff in place, but we had no terminology management. That meant that somebody, part time at least, had to go through the content management system on a fairly regular basis and align the content. That’s labor intensive, but if you have an intern or a very young person, that can be fun for them for a while, until they get bored with it. But, it still has to be done.

SB:                   It depends on where you are on the spectrum. If you’re an established company with a lot of content, I’d go for the content management system first. I think the payoffs are going to happen faster there. Unless maybe you’re delivering in five languages, then maybe I’d go for the terminology management first. What’s bleeding?

SO:                   In short, it’s the mantra of the consultant.

SB:                   It depends.

SO:                   It depends.

SB:                   No, I would ask, “Where’s your pain point?” If you’re spending too much on localization, then I’d look at why you’re spending that much. Is it because you have no terminology management? You’re not using a style guide, you’re not applying style? Can we solve it there, or do we need to go all the way back to your copying and pasting from document to document?

SB:                   And by the way, everybody knows, every time you copy and paste, a kitten gets hurt, so don’t copy and paste.

SO:                   Yeah, “don’t hurt the kittens” is probably as good a place as any to wrap this thing up. Thank you, Sharon. This was very interesting, and I think helpful. I hope that all goes well at Expel and wherever you may be.

SB:                   You are welcome and thank you so much.

SO:                   Thank you. And, thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The importance of terminology management (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:23
Life with a content management system (podcast) https://www.scriptorium.com/2021/08/life-with-a-content-management-system-podcast/ Mon, 23 Aug 2021 12:00:22 +0000 https://scriptorium.com/?p=20494 https://www.scriptorium.com/2021/08/life-with-a-content-management-system-podcast/#respond https://www.scriptorium.com/2021/08/life-with-a-content-management-system-podcast/feed/ 0 In episode 101 of The Content Strategy Experts podcast, Elizabeth Patterson and Sarah O’Keefe talk about what life is like with and without a content management system (CMS).

“You have to decide, by looking at your particular organization, whether you need what a CMS will give you. You will get improvements in consistency and automation for formatting and traceability. You can get improvements in translation because you have more consistent content and better workflows.”

– Sarah O’Keefe

Related links: 

Twitter handles:

Transcript: 

Elizabeth Patterson:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about what life is like with and without a content management system. Hi, I’m Elizabeth Patterson.

Sarah O’Keefe:                   And I’m Sarah O’Keefe.

EP:                   And today we’re going to dive into the world of content management and CMSs. So I think it would be great to start with a couple of definitions. Sarah, could you tell us what content management is, and also what a content management system is?

SO:                   Content management, according to Wikipedia because that’s always the right place to go, is a set of processes and technologies that support the management of information, basically. So collecting, publishing, managing, editing, delivering. A content management system or a CMS, then, is software that helps you do content management. So how do you create, how do you modify, how do you deliver digital content? Within the CMS world, we then distinguish, there are hundreds, if not thousands, of CMSs with different kinds of sub-features or sub-specialties, such as learning content management systems for learning content. But in our world, there are a couple of important ones. One is the distinction between a back-end content management system and a front-end CMS. A back-end CMS is where you park the content that you are creating, editing, reviewing, and approving. And a front-end CMS is where you park the content that you’re delivering.

SO:                   So a lot of today’s websites, maybe most of today’s websites, run on web content CMSs. So it’s a delivery system of some sort that controls the display of what we’re doing and what we’re dealing with. Now, in addition to all of that, in our world of structured content, you also talk about a component content management system or a CCMS, and that is a specialized back-end content management system that lets you manage typically XML, but structured hierarchical content. It typically does not have formatting associated with it. That’s the job of the front end delivery system, whatever that may be. But a CCMS is there to help you manage modular, smart, intelligent XML content. If you’re involved in any sort of content operation, if you work in content and you have any scale at all, then you know that managing the content that flows through your operation is just an enormous challenge.

SO:                   Keeping track of who’s writing what, and what’s already been written and, “Was this delivered, and is it up to date. And when is the next time that we have to update it, and when does it expire? This thing should go away once a certain event happens or after a certain amount of time.” So a CMS can help you keep track of your content and do a lot of the heavy lifting around that sort of governance, but also around authoring, delivery, management, all the things.

EP:                   Right, because there’s so much involved when we’re talking about content management. And so really what I want to talk about today are some of those different things that you are going to deal with when it comes to content management and what those might look like with a CMS or without a CMS. So I think a good place to start would be traceability. This is really important because, especially if you’re in a regulated industry, there’s a lot of legal stuff associated with that. So we can start with a definition of traceability.

SO:                   So traceability means that you can connect the change that you’re making in your content with the reason that you’re making that content change, or possibly with the person that made the content change and the person that approved it. So you want the ability, and as you said, especially in a regulated industry, you want the ability to say, “Hey, somebody reported a mistake in our documents on this date. And we tracked that mistake. And then we went over to our content management system, or over to our content corpus, and we made a change related to this defect that was reported. And then we published it on thus and such date.” So traceability means that you’re following that change from the request to the content change that was made to the approval to the publishing and delivery, and possibly expiring the incorrect version that was in there. Now, traceability without a content management system, almost certainly means that some very depressed person has a spreadsheet.

EP:                   A big spreadsheet.

SO:                   A big spreadsheet. And I’ve said this before, but people say, “Who has the number one market share in content management systems?” And the answer is Excel. That is the number one way that people manage content. It happens to be a really painful way of doing it, but that is in fact by far the most common way of doing this. So you create a terrible spreadsheet, you track the inbound request, you track who you assigned it to, you track when they made the change, you track when they publish the change. And somewhere there’s somebody with a full-time job of just keeping track of that in the spreadsheet. If you have a content management system, then what you can do is you can embed the request for the update or the correction or the change into the content itself.

SO:                   Or you could take the change request, which, if this were software, we’d be talking about bug tracking, and insert that ID into the CMS or into the content itself. And then when you publish the change, the ID is carried along. So when you look at something, you say, “How was this paragraph modified?” You can trace back to what happened. “Why was that paragraph modified? Who modified it? When did they modify it?” So that’s traceability. And if you’re doing video game documentation, then traceability is probably not your top priority, but if you’re in medical devices or pharma, or potentially machinery that people operate and can get hurt if they don’t operate it correctly, you do develop a pretty solid interest in traceability.
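The chain Sarah describes, from reported defect to published change, can be sketched in a few lines. This is a hypothetical illustration, not any particular CMS’s data model; the record names and IDs are made up:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContentChange:
    """A content edit that carries its originating request ID along."""
    request_id: str   # embedded with the content, so it survives publishing
    author: str
    approved_by: str
    published: date

def trace(request_id: str, changes: list[ContentChange]) -> list[ContentChange]:
    """Answer: who changed the content for this request, who approved it, and when?"""
    return [c for c in changes if c.request_id == request_id]

# "Somebody reported a mistake" becomes request DOC-123; the fix carries that ID.
history = [
    ContentChange("DOC-123", "pat", "lee", date(2021, 8, 5)),
    ContentChange("DOC-999", "sam", "lee", date(2021, 8, 6)),
]
print([(c.author, c.published.isoformat()) for c in trace("DOC-123", history)])
# [('pat', '2021-08-05')]
```

With the ID stored alongside the content rather than in a separate spreadsheet, “why was this paragraph modified?” becomes a lookup instead of a manual audit.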

EP:                   Right. And speaking from someone who has plenty of past experience with spreadsheets, it’s really easy to make mistakes when you’ve got a really long spreadsheet. And so when we’re talking about the medical industry, that can be very problematic when people’s lives are at stake.

SO:                   Right. And in addition to that, it’s just mind numbing, right?

EP:                   Mm-hmm (affirmative).

SO:                   The computers are really good at keeping track of stuff like this, and humans are really bad at it. So let the computer do it. I mean, I just don’t have any interest in having to manage the monster spreadsheet of death. I don’t want to.

EP:                   Right, work smarter not harder.

SO:                   Exactly.

EP:                   So let’s talk some about collaboration, because when you’re working in any team, you’re going to have to collaborate. Or if you want to be successful, you’re going to have to collaborate. When you have a team with multiple writers, things can get confusing if you don’t have the right processes in place. What might collaboration look like with and without that CMS?

SO:                   I think probably all of us, in days past, have worked in organizations where the collaboration process was literally that you would say, “Oh, I need to update this piece of content.” And you would pop up from your cube and yell at the people in the cubes around you and say, “Hey, is anybody working on X, Y, Z document?” And they would all say, “Nope, you’re good. You can go work on it.” That works pretty okay in a group of, say, three to four people who are all in the same location, working at the same time, and don’t have meetings where they might miss somebody popping up in the cube farm and asking that question. We need something a little more sophisticated than that to address, A, you have 20 writers and we can’t have people popping up all the time.

SO:                   Plus your 20 writers are not in fact in the same location all the time or ever. And you need to just have a much more formal way of dealing with this. So if we need to collaborate, if you and I, just the two of us are working on a single piece of content, okay, we can create a Google doc and we can work in there together. And that would work pretty well. It gets a little weird if you get up to five or 10 or 15 people all in the same big document. At that point, you start thinking about like, “Oh, hey, Elizabeth, why don’t you take section one and I’ll take section two and then we’ll put them together later? We sort of chunk it down.” Or, “I’m working on a white paper and you’re working on a white paper, or we’re working on two different articles.”

SO:                   Okay. Well, now we have to think about, “I want to make sure that the changes that you make are reflected in my document. And by that I mean, whatever word choices you make or terminology that you choose, we need to be consistent about that. You might’ve written a really great product description, which I want to use in my document. And I don’t want to copy and paste because later you’re going to go back and change the product description and update it and correct it. And I just want that to cascade into my document.” So the collaboration becomes partly, “How do we author consistently? How do we establish a consistent voice and tone? How do we make sure our terminology is aligned?” And there’s not so much the content management system itself, but some of the things that you can layer on top of that. And then reuse, “I want to go and find that chunk of content that you wrote that I want to reuse.”

SO:                   And that’s much easier to do in a CMS versus saying, “Hey, Elizabeth, where’d you put that product description? Or where’s that logo?” What we don’t want is for people to write a bunch of content and stash it in their own private folders that nobody else has access to, because then you can’t share. So a CMS, they all of course are different, but they’re going to give you the ability to understand what content you have, where it’s being reused, what was changed recently, “Oh, I see that somebody touched the product description file. I should go look in there and see if that affects the content where I’m using the product description.” And again, lots of people are doing this with spreadsheets, and it’s terrible.

EP:                   So you just mentioned people storing that document in private locations. And that’s how you end up with different versions of the same document, which leads us to our next topic for discussion, which is consistency. So what might that look like if you have a CMS and if you don’t have a CMS?

SO:                   So versioning is a key part of that. Say I want to be consistent about using the same bio for a person. Every time I publish a blog post, let’s say from one of our coworkers, we want to make sure that the same bio appears at the bottom of that page. We don’t want to copy and paste the bio in there a million times. What we want to do is just tell the CMS, “Stick the bio at the end of every single blog post written by Bill.”

EP:                   Right.

SO:                   Okay. And then that’s updated of course in a central location. And if you update it, the older posts get the updated bio, that type of thing. So you have some pretty straightforward version control over your content. Also, I mentioned terminology. So using the same words to mean the same things across all your documents. Terminology management is theoretically possible without a CMS, but in practice, it’s one of the things that people very often integrate into a CMS build. And then there are two other things which have to do with formatting consistency.

SO:                   So if you think about just pushing content, you want to publish a document or an article or a book or whatever, you want to have some formatting consistency. You want all your notes to look the same. You want all your warnings to look the same. You want all of your little summary paragraphs at the beginning to look the same. Because if they don’t, you make life much harder for the person who’s reading the document. I mean, if I read a magazine article, I slowly learn that the byline for that author is always at the end of the document, those kinds of things. With CMSs, now the ones that we work with, the component CMSs, have pretty much stripped off all the formatting and apply it upon delivery.

SO:                   So that gives you a really rigorous degree of consistency in terms of formatting, but even an unstructured CMS, more of a web CMS, does have the ability to do template-based publishing. So you have a magazine article template, or you have a blog post template, or you have a knowledge base article template. And that means that your document formatting, when you deliver it, is going to be consistent. If you live in the non-CMS world, in a file-based workflow, then you probably know that rebranding happens, right?

EP:                   Mm-hmm (affirmative).

SO:                   So your company gets acquired by another one, or you just decide to change your logo, or you decide to change your company name, and you’re looking at the set of 1,000 or 10,000 or 100,000 files in some word processing or page production software, and you have to rebrand them all. You have to go through there and replace all the logos and replace every mention of your product name, or your company name with the new one. That is-

EP:                   Lot of work.

SO:                   … it is crazy expensive. And so we’ve had cases where actually moving into a content management system with all the headaches and all the costs that that entails, was justified because it was actually cheaper than going through and rebranding thousands of InDesign files one at a time.
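The single-sourcing pattern Sarah describes, where a bio lives in one place and formatting is applied only at delivery, can be sketched like this. The template, field names, and bio text are all hypothetical:

```python
# Single source of truth: edit a bio here, and every rendered post picks it up.
bios = {"bill": "Bill writes about content strategy."}

# Formatting lives in the delivery template, not in the stored content.
post_template = "<article><h1>{title}</h1><p>{body}</p><footer>{bio}</footer></article>"

def render(title: str, body: str, author: str) -> str:
    """Apply formatting at delivery time; the stored content stays format-free."""
    return post_template.format(title=title, body=body, bio=bios[author])

before = render("CMS basics", "Why structure matters.", "bill")
bios["bill"] = "Bill works on localization strategy."  # one central update
after = render("CMS basics", "Why structure matters.", "bill")
print(before != after)  # the older post picks up the new bio on re-render
```

The same idea scales from bios to logos and product names, which is why a centrally stored asset makes a rebrand a single edit instead of thousands.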

EP:                   I think this has been a very insightful discussion, and I really want to close things out with something that everyone is going to ask about. You can tell them all of the great things about a CMS and why they should have one and how it’s going to make their life easier, but they’re going to want to know about cost. And a CMS does cost money. So why is the investment worth it?

SO:                   Yeah. So I guess we should. And this is probably a good point to mention that we at Scriptorium do not get paid by the CMS vendors, all appearances-

EP:                   Yes, correct.

SO:                   … to the contrary. And it’s also probably worth noting that there are actually CMSs… I mean, there’s a wide range of cost, from zero to millions of dollars.

EP:                   Right, depends on what you’re looking for.

SO:                   Depends on what you’re looking for. And there are open source CMSs, not so much CCMSs, not so much component content management systems, but there are open source content management systems. So that’s at least theoretically free, right?

EP:                   Mm-hmm (affirmative).

SO:                   Except of course it’s never free. It may be license-free, but there’s going to be cost. So why is the investment worth it, or is the investment worth it? You have to decide, by looking at your particular organization, whether you need what a CMS will give you. You will get improvements in consistency and automation for formatting and traceability. You can get improvements in translation because you have more consistent content and better workflows. So you have to look at those issues and those costs and then decide whether the investment in a CMS, and of course all the pain of getting there, is worthwhile to get those improvements. Among our customers, there are basically two common justifications that we hear for moving into a content management system for the first time. One is mergers. And I would put rebranding as a sub-issue underneath that.

SO:                   But with a merger, what typically happens is that you have two or three or five groups that each have their own content workflow. They were all doing things different ways because they were two or three or five different companies. And what you can do is you can consolidate. Now you could of course consolidate onto a single file-based workflow, but usually when you merge, you end up with a much bigger group. If you had five groups of three people, and now you have one group of 15, if you have 15 writers, you can probably justify a CMS on efficiency alone.

EP:                   Right.

SO:                   So mergers is a big one, to allow you to consolidate your tool set, not have five different tool sets that you have to support. And then the other one is localization and translation. As your organization gets bigger and has to go global, you have to start… First, it’s like, “Oh, we’re going to have to do Spanish. And oh, we’re going to have to do Canadian French. And oh, we’re going to Europe, but we’re only going to do four languages in Europe. We’ll do FIGS, French, Italian, German, Spanish.” And then, “Oh, whoops, we’re shipping into Russia and Turkey.” And, “Oh wait, we’re going to East Asia.” And next thing you know, you have 20 languages. The inefficiencies in a file-based workflow on two or three or five or six languages get multiplied. Every time you add a language, you add inefficiency or you duplicate or replicate that inefficiency. So when you have 20 languages, it gets really painful. So localization, which means streamline the content development so that the translation workflow goes better, is the other big justification that we see for moving into a content management system.
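Sarah’s point that file-based inefficiency is replicated per language is easy to make concrete. In this back-of-the-envelope sketch every number is hypothetical; the point is only that rework grows linearly with language count, which is why fixing the source workflow pays off more the more you localize:

```python
def rework_cost(defects_per_year: int, fix_cost_per_defect: float, languages: int) -> float:
    """Each source-content defect must be patched once per target language."""
    return defects_per_year * fix_cost_per_defect * languages

# Hypothetical figures: 50 defects a year, $200 to patch each one in each language.
for langs in (4, 6, 20):
    print(f"{langs:>2} languages: ${rework_cost(50, 200.0, langs):,.0f} of rework")
```

Going from FIGS-scale coverage to 20 languages multiplies the same source-side problem fivefold, which is the arithmetic behind using localization to justify a CMS.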

EP:                   Well, thank you, Sarah. That was a lot of valuable information.

SO:                   Well, thank you.

EP:                   And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Life with a content management system (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:41
Content strategy success at Crown (podcast) https://www.scriptorium.com/2021/08/content-strategy-success-at-crown/ Mon, 02 Aug 2021 12:00:13 +0000 https://scriptorium.com/?p=20454 https://www.scriptorium.com/2021/08/content-strategy-success-at-crown/#respond https://www.scriptorium.com/2021/08/content-strategy-success-at-crown/feed/ 0 In episode 100 of The Content Strategy Experts podcast, Bill Swallow and special guest Jodi Shimp discuss their experience with digital transformation and implementing a new content strategy at Crown Equipment Corporation.

“The initial and earliest win in the project was the go-ahead to even bring on consultants to help us determine what the scope would be and what the true need would be across all the different groups.”

– Jodi Shimp

Related links: 

Twitter handles:

LinkedIn profiles:

Transcript:

Bill Swallow:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we’ll talk with Jodi Shimp about her experience with digital transformation and implementing a content strategy at Crown Equipment Corporation.

BS:                   Hi, everyone. I’m Bill Swallow. And today I have a very special guest, Jodi Shimp joining me. Hi, Jodi.

Jodi Shimp:                    Hi, Bill. Hello, Bill. Hi, everyone.

BS:                   Thanks for coming here. So before we dive in, can you tell us a little bit about yourself?

JS:                    Yeah. So, like Bill said, I am Jodi Shimp with Crown Equipment Corporation. I have been working with Scriptorium implementing a content strategy for the past, probably, seven years. So we started from a very unstructured content development process, and I started that as a technical writer, and over the past seven or so years, we’ve been working on a big digital transformation of all that content at Crown.

BS:                   And what was your reason for starting that project?

JS:                    For Crown, there were really two major starting points that got our strategy rolling, from people just talking about a need for something different to actually moving things forward. And for the main content teams in marcom and techcomm, it was the fact that we had overlapping product in different regions and overlapping content to support that product in different regions, being supported by two or more different content teams. For executives, the big pain point was the need to better support our global growth and our burgeoning global market. Their need was to fix translations, because at that point we did have a patchwork of translations and content processes, but we just needed to get that overall into something smooth that felt like a consistent process as opposed to ad hoc responsiveness.

BS:                   So it was more or less both getting your arms around those source content development problems. And then also being able to get your arms around the translation spend.

JS:                    Right. Thankfully it aligned very much to be two birds with one stone, if you will, because the problems that the content creators were seeing in their source content were what was actually causing a lot of the problems in the localization processes and content. So fixing the source was really the key to getting it all right.

BS:                   And I suppose that came with a bunch of wins during the course of the project.

JS:                    It did. So really, the initial win and the earliest win in the project was that initial go-ahead to even bring on consultants to help us determine what the scope would be and what the true need would be across all these different groups. I was told quite a few times that I wouldn’t be able to get financial support for a non-engineering project. And I was fairly new to Crown. In reality, the company really does a great job in investing heavily outside of engineering and design and development and all kinds of other places, but I didn’t know that.

JS:                  I think the key was really listening to the executive pain points and goals, and then determining how the things that the content teams needed to change could be improved to truly align, to support the company objectives and then move everything forward.

JS:                    Another big win for us was the approval to invest in that proposed strategy and actually begin the project. I ended up presenting the vision and objectives to the entire executive suite, including the president, owner, and all the VPs. It was another case where my determination, when I was told it probably wouldn’t go forward, really came into play. And I think you do need someone who is willing to put themselves out there when there’s some doubt on things, but it was definitely the right thing for the content teams. And it was the right thing for the overall business strategy and objectives at that time.

BS:                   So really bringing it back to what the core pain points were that the company as a whole was seeing, rather than focusing on “the writers could be more efficient if they did X.”

JS:                    Right. Because it really did come down to focusing on what was important to executives and the overall business goals. Instead of spending a lot of time discussing all the ways that a content strategy would make things better for content creation teams, I focused on working with different directors and above to highlight the parts that would further the primary goals of each group and of the overall company.

BS:                   So with all of that going on, is there anything you’re particularly proud of with regard to those goals?

JS:                    Yeah. So, really the overall strategy. We proposed a multi-layered digital transformation, and that crossed all kinds of departments and locales. We knew that the project would need to include authoring guidance with terminology and style. We knew that it would require structured authoring, a translation management system, and then a content management system to really get it done and done well. And then even more important than all of those things, we didn’t want to automate bad processes. We knew it wouldn’t be short and quick, and we knew it wouldn’t always be painless.

JS:                    But one of my favorite days in the entire project was pretty early on, after we had gotten support from the executive VPs. There was a meeting with executives from our regional headquarters across the world, and the senior VP of Engineering was talking with other VPs before the meeting started, and he began discussing one of the large visuals. And when I say large visuals, I mean we had a content inventory printed out, and it was crazy to see all the different languages and all the different content. It literally covered an entire wall of this giant long room in one of our buildings. And he was discussing that with some of the other VPs in there. And then before you know it, he was passionately presenting what I had shared with him only days before.

JS:                    So that was the point where I was like, not only does he understand the goals of something that’s completely outside of any of his verticals, but he understands how it affects the company as a whole and his vertical specifically. And so once I realized that we had support at that level outside of our own vertical, I knew that we could actually accomplish this. So, that was super exciting.

BS:                   So he was totally bought in at that point.

JS:                    He was, and that was a really important win for the strategy.

BS:                   And it sounds like, before, you had a bunch of different groups kind of supporting their own regional needs or their own product line needs. I know from working with you that you had the opportunity to create a more central group to support all that. And how did that help things?

JS:                    We did. So after the original strategy was approved, for the first year, there was a lot of solo work that happened, but we were able to show how a team would better support the initiative long-term. As we proved things out and actually implemented pieces of the strategy, at each point we were able to show where people could be very effective long-term in managing all of these different things around content strategy, whether that would be in the governance area, or kind of a systems administration area, as content strategists, and then especially on the localization team.

JS:                    So those turned into full-time positions, which turned into their own small department over time, that’s still running very lean, but quite effectively. It’s definitely shown the value.

BS:                   That’s awesome.

BS:                  And I’m sure it really helped with managing all the change and being able to roll out all the training and being able to support all these different teams having that central group.

JS:                    It definitely did. And from a localization standpoint, again, for example, having experts within the company that people know they can go to means that not every group is trying to completely reinvent the wheel every time they want to add something.

BS:                   Can you speak a little bit more about the change management process that you used and were there any really big obstacles that you had to overcome in that regard?

JS:                    Yeah. That was probably-

BS:                   Probably a loaded question.

JS:                    Yeah. And it was probably the most unanticipated thing that I experienced in the project, because I’m not averse to change as an individual. And so I guess I did not understand the depth of the change management requirements that such a big change would cause for people. If I were to go back and do things again, that would probably be the place where I’d spend more time with the different departments before the project even started, to talk about “What does that change mean? What is it going to look like? What is it going to look like when it’s messy, because it isn’t always perfect and straightforward? So how are we structured, and how are we anticipating the need for adapting the plan as we go or implementing that change?”

JS:                    There was an opportunity for the team to do a lot of empathetic listening, to really determine the pain points that we could solve, pain points that sometimes people didn't even realize existed. But they also couldn't always see how the transformation would be needed in the coming years to support the growth that was coming, the digital change that's coming across the world. So we did a lot of design thinking sessions, a lot of show and tell to get people on board. In the beginning, we tried to do everything all at once and we stumbled there. So we really focused in on one area and one group and one thing to get right. Once we got that right and we started being able to show that to the other groups and do more show and tell sessions, then we were able to get other groups more rapidly on board and more willing to deal with the painful pieces.

JS:                    One of the wins on that, though, was really finding the go-to person in each department, finding that expert. We found that when we could get the expert on board and really listen to their thoughts and their concerns, help alleviate those concerns, and get them really engaged, then they could bring the rest of their team on board with them as well.

BS:                   Excellent. So you mentioned that the change management was a really big obstacle for you. Did you do anything during the project specifically to combat those particular obstacles around change management and what worked well?

JS:                    Yeah, so I mentioned a few minutes ago that one of our biggest obstacles was really knowing where to start. Since that first year was such a struggle and I didn't really have a team and my direction was just to do it all, we did try to do it all. And once we got a small team, we were still trying to do it all. But we were able to then really step back one day and take a look and say, we keep trying to do everything and we just keep hitting walls and we keep going in circles, and we get something done and then we feel like we're redoing it over and over again.

JS:                    With all of that complexity in mind, we really talked about and presented to our director at the time: can we break this up into smaller goals? Because tackling it all at once, we were getting a lot of resistance, and a lot of teams felt like we were just spinning and wasting time and never getting anything done. So what if we changed our approach and, instead of doing it all, really focused in on a certain team and a certain set of processes or content type? And that really seemed to help, because people at Crown stay there for a really long time, even their entire careers. And so someone always knows who to call to get something done. It's really great from a get-it-done perspective, and we have a lot of people who are always trying to do the right thing. But answering those questions immediately and doing things immediately, sometimes at the expense of a process, is really hard on processes.

JS:                    So while we were configuring our content management system and the workflows that were part of that and everything else, we were sourcing a centralized translation management system. So we were working in two parallel paths, but on small departments and small areas first and getting it right there. And then as we brought in different groups and different content types, we’ve changed how a lot of things are done from a process perspective, who is involved in the content creation, when content creation begins, how versions are controlled and released, and all of those things. We’ve gotten to the point now where content development, terminology development, and things like that are actually a part of our engineering product development processes. And for a manufacturing company, that was a really big deal.

BS:                   It’s a big change.

JS:                    It's a huge change. And the content management team, we learned so much. Like I said, we all learned a lot and there was evolution in our process too. We had had very much a waterfall project management plan in the beginning. And now we're in a quasi-agile approach because some of our software development teams work that way.

BS:                   So you basically started small, focused on very specific things with a tools-last approach, which is phenomenal, being able to do that, because a lot of times you choose a tool and you get the blinders on that come in the box with the tool and never really look at different ways that you could be doing things. But it sounds like even after you had the workflows and the tools in place, you still went back and reworked some of those workflows: how people work together, what the content needs are, how things need to happen. So that's phenomenal, being able to put that all together and still have an evolving ecosystem with regard to your content.

JS:                    I often get asked the question, “Well, Jodi, when are we going to be done?” And at first I kept thinking, oh my goodness, when are we going to be done? That kind of bothered me and pressured me in the beginning. And then I realized that, well, the content will never be finished. We’re always developing new product. We’re always developing new content. There’s always going to be new end points and new delivery methods. So in reality, there never is a done. Especially as digital will continue to change and transform how people interact with the content and interact with our product, there will always be a need for continual development and continual improvement. So switching to that kind of quasi-agile project management, it is going to be that way, I think, for a long time.

JS:                    And it’s important not to get stuck exactly where we are right now, because if we do get stuck, then in another 10 to 15 years, or maybe even half that time at the rate things change, then there’s going to be another group that’s sitting there saying, well, that’s the way they used to do things, but we need to change everything again, to do it forward. And I think this at least goes a long way into future-proofing our content and giving the opportunity for change, having much more knowledge about the content itself and the content being smarter and having all that semantic tagging and everything.

BS:                   So since you’re never done, I assume you have to kind of show some kind of return on investment over time. So what are you measuring to be able to show success?

JS:                    Yeah. So that was one of the things, being a manufacturing company, KPIs are important. And I’m sure they’re important in every industry, but definitely something that was hard to do. And I think that was a real struggle for all the content development teams at one point was to show the value of what they bring to the table because content development is looked at as a cost center in a lot of situations. If you’re producing 1200-page service manuals, because you have to, to support a product, it’s a lot different than selling those. 

JS:                    So it was really important to figure out ways that we could show the value and the return on investment for what we had done. One of the things that we couldn't get a handle on was exactly how much the entire organization, in all the different countries, was spending on translations. And that was because some groups did have translation as a line item in their budget, and other groups just bulked that in with other costs in other cost areas. So it was really hard to tell how much we were spending. We weren't spending it all with the same vendors. We weren't spending it all out of the same budget lines, like I said. We could kind of have an idea about what we might be spending, but no one really knew the answer to that.

JS:                    Establishing those KPIs to show a baseline of where we were starting and where we were going took a little while, but we were able to do it. So now we publish content for much larger audiences, in up to 37 languages, depending on what the specific content is. And even though our annual translation spend is higher than what it would have been in the beginning, our coverage is much more consistent, less frustrating, and much more extensive. So we were able to start showing a translation savings. Compared to off-the-shelf, if we were just going to a translation supplier and handing them a piece of content and getting it back, we do know how many words we're translating. So we can say, if we were just doing it the old way, versus with all of this terminology management and translation management and translation memories, we now save about $1.2 million annually on those translations.

BS:                   That’s a lot.

JS:                    It's a lot. It is a lot. And that made it really easy to show an ROI faster. We had said four years is what we thought it would take, and we did it in 18 months. So that was really good. And now we've started developing KPIs on our content reuse, and that's been something that our content strategy manager has been putting together with our supplier.

JS:                    We're also working right now on a new digital content delivery system that will take full advantage of all the technical content that's produced every year, which is very extensive, and allow us to really publish all that translated content digitally and take advantage of all the work that was done on the backend of content development. And this new system will give us a new way to deliver all of that across the globe.

BS:                   That’s fantastic.

JS:                    It's really great to hear the executive updates where other teams throughout the company actually talk about how the content management team and those systems are allowing them to achieve the goals they were trying to deliver.

BS:                   That’s great. I really appreciate you coming on here to kind of tell your story and I’m so glad we were able to get you as a guest here.

JS:                    Thank you.

BS:                   Thank you.

JS:                    Thank you. My pleasure.

BS:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

 

The post Content strategy success at Crown (podcast) appeared first on Scriptorium.

The evolution of smart content (podcast)
https://www.scriptorium.com/2021/07/the-evolution-of-smart-content-podcast/
Mon, 26 Jul 2021

In episode 99 of The Content Strategy Experts podcast, Alan Pringle and special guest Larry Kunz of Extreme Networks talk about the evolution of smart, structured content.

“I’m a huge believer in big picture. We really need to stand back and ask ourselves, ‘What is this really all about? What are we trying to accomplish?’ It’s not about the content. It’s about the customer.”

– Larry Kunz


Transcript:

Alan Pringle:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content, in an efficient way. In this episode, we talk with Larry Kunz, about the evolution of smart, structured content. Hey everyone. I’m Alan Pringle. And today, we have a special guest on the podcast. I’ve got Larry Kunz here. Hi Larry.

Larry Kunz:                    Hi Alan. It’s great to be here. Thank you so much for having me. It’s a real pleasure.

AP:                   Absolutely. And I want to give our audience a little bit of understanding about your background. So would you kindly introduce yourself, and tell us a little bit about your experiences in the content world?

LK:                    Sure. I've been in technical communication, mostly in the computer industry, for more than 40 years. I won't say exactly how many. I was working for IBM when structured content came to be a thing, when DITA was being developed. I wasn't one of the people who was developing it, but I was a very early user, and very quickly came to understand the value of structured content. I really thought it was a good thing, and my opinion has never changed. I have done time in marketing communication. And I've done training and consulting. But when the income tax form comes around every year, I still put technical writer as my occupation.

AP:                   And I remember, at one time, I had technical editor on mine. So I totally get that. And because you do have that really deep expertise, and we all know you got started when you were an infant. We all know that you got started very, very early in this industry. But I think it would really help this discussion. And what I'm going to talk about is how smart, structured content has evolved. And for what it's worth, I've been doing this for over 30 years now. And I will not be more specific than that. You've got a little more experience than I do. But I'm glad to be paired up, and to have this discussion, because I have seen things change quite a bit over the years. But on the flip side of that, sometimes, as they say, the more things change, the more they stay the same.

AP:                   So let's talk about that, first, to level the playing field. I want to talk about, basically, what is smart structured content? Now, on the Scriptorium side, this is how we define smart content. It's modular content with tags and metadata. And the formatting is separate. It's applied later, based on the intelligence that you build into that tagging and metadata. Now, I know before we started this podcast, we were talking about ideas. And you said you had a little bit of an issue with the term “smart content.” So I would very much like to hear your perspective on that.

LK:                    Sure, Alan. And I think your definition is an excellent one. I like to swing it around to the point of view of our audience, or our reader, rather than the content itself. And I don’t have a better term than smart content. Perhaps effective content is better. But it’s content that gives our readers the information they need, when they need it, in the proper place, the proper context, and adaptable to whatever format. It helps them complete a task, or make a decision.

AP:                   And I don’t think we’re too far apart, much at all, beyond some semantics there. Because basically, to make that customer facing content as flexible and useful, you’ve got to build in that intelligence.

LK:                    Absolutely right, Alan.

AP:                   So let’s go back, because you’ve already mentioned that you were involved in the early times. And I am not slamming you at all, because I was using some of these same tools, by the way, back in the day. When is the first inkling you got of this smart, structured content? Can you take us back to that moment?

LK:                    Well, I think it was, again, at IBM, when I realized, in this new format, they called it an SGML format. And I don't want to throw a lot of alphabet soup out-

AP:                   Yeah.

LK:                    … But that we were no longer talking about, let’s do a line break, let’s change the font, let’s change the indentation. We were talking about, this is a section, this is a paragraph, this is a list. That was the first inkling. And as I said, at the time, it made a lot of sense to me. I took to it really quickly. It wasn’t until later though, that I understood the potential, that by doing things that way, we could, as you said, modularize the content, and apply formatting later. And have a whole variety of uses for it.

AP:                   And my experience is very similar. I started working, early in my career, on projects for IBM. And we used their SGML, which I believe was called BookMasters. Is that correct? It’s been a while.

LK:                    That’s correct.

AP:                   And I used BookMaster. And that was my first job. And I think I’m fortunate, in the sense that my first job was more working with SGML with structured content. So I really didn’t have this notion of desktop publishing, because it was so much in its infancy. The whole, what you see is what you get. I didn’t have that mindset shift to make, because I was thrown right into the structured content pond. Here you go. But later on, when desktop publishing came along, I started seeing people who were much more focused on the formatting, because you could basically make what a page looked like, or a representation of what a printed page would look like, on your screen. How have you dealt with those very different mindsets in your career?

LK:                    I’ve tried to show people the potential of what you can do with structured content. And candidly, it’s been hard, because I think most of us have seen the effects of structured content, and what things it can do in a consumer context. Obviously, e-commerce, shopping on the web. You can break content down by facets. If you’re ordering clothing, size, color, style. The business to consumer, or B2C world, has done a good job of that. And the B2B world has really lagged behind.

AP:                   Right. Right.

LK:                    So that is changing. But-

AP:                   It is.

LK:                    … But I think the one way that I can help people understand, is by saying, “Well, when you go home from work, or maybe while you’re at work, I won’t tell, you’re shopping. You’re shopping for clothes, or gifts, or electronics.” And this is the sort of thing you’re experiencing. This is the customer experience that we can create, using smart content.

AP:                   Exactly. And that's a really good analogy. To take something so everyday and ubiquitous, like online shopping is for many of us, and to turn that on its head, with more of a content-focused lens. Because to me, this is where I really started seeing the value of smart structured content. I was working on the documentation for a very early laser printer for Lexmark, which had just spun out of IBM, by the way. So I'm really dating myself by saying that.

AP:                   But we basically wrote documentation for multiple models that were very similar and had overlapping features. And we didn't have to maintain completely separate bunches of content for every one of those models. We were able to reuse a lot of that content. And to me, now, I cannot stress how lucky I am, or was, to fall into a job that introduced me to that fairly early on. How you could do that reuse when you have content that has that intelligence built in to say, “This is for model A. This is for model B. This is for model C.” And then mix and match those parts to create your end product, which, at the time, was a printed manual.
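The model-based reuse Alan describes is what DITA now handles with conditional processing attributes. This is an illustrative sketch only: the element names are standard DITA, but the model values and topic content are invented for this example.

```xml
<!-- One topic, single-sourced for several printer models.
     At publish time, a DITAVAL filter such as
       <val><prop att="product" val="modelC" action="exclude"/></val>
     decides which paragraphs survive into each model's manual. -->
<topic id="print-test-page">
  <title>Printing a test page</title>
  <body>
    <p>Press the menu button and select the test page option.</p>
    <p product="modelB modelC">To print on both sides, enable the duplex
      option before printing.</p>
    <p product="modelC">Run the color calibration wizard before printing
      the test page.</p>
  </body>
</topic>
```

Publishing the same source once per model, each time with a different filter, yields the mix-and-match manuals Alan describes without duplicating the shared content.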

LK:                    Yeah. You were fortunate. And I’m afraid, I see a lot of writers who still struggle with doing that, because you have to understand reuse. You have to know something about metadata and taxonomies. And this is a hard concept. And I don’t know the answer to that, but it’s something that we have to make sure that everyone has enough familiarity with, to really use these tools that are available to us, effectively.

AP:                   I think it's also worth noting, I think in some cases, people may be so focused on the tool, and on using it to its maximum ability, that they sometimes can't see the bigger picture, especially if things need to shift away from that tool because that tool no longer supports whatever business drivers are behind, for example, a smart content initiative at a company to move away from desktop publishing. If that has been your world, and you're an expert at it, I can understand why people would be reluctant to want to give that up, because they are a leader in that skillset.

LK:                    That’s a good point. And you mentioned big picture. I’m a huge believer in big picture. We really need to stand back and ask ourselves, what is this really all about? What are we trying to accomplish? And it’s not about the content. It’s about the customer. Helping them get the information they need, just in time. I use the supply chain analogy. So get that information to them, just in time, in the right place, the right content, when they need it.

AP:                   Speaking of bigger picture, what are some of the other business drivers you’ve seen, pushing people and companies, to go to more of this modular, smarter content?

LK:                    Well, this is fairly recent. And I’m delighted that I’m seeing it. But people are understanding more and more, that our content is part of the overall customer experience. The silos are coming down, especially between marketing and tech pubs. Last week, I heard Megan Gillhooly say that our audience is made up of the same people who go home and shop online. And so, even if we’re writing highly technical manuals for a business audience, we can do these things. It’s part of their customer experience. Marketers no longer talk about a sales funnel, where a customer is drawn in, and then the sale is made. They understand that marketing continues after the purchase. It’s about building engagement, building loyalty. And our content can do that. That’s beginning to be seen as a business driver. The content truly is a business asset. And I say, I’m seeing this recently, because it’s been more than 20 years that structured authoring has been around. But really, only in the last four or five years, have I seen this mindset starting to take hold.

AP:                   And I agree with you. And I think it's very good news. I think taking down the walls separating content types, some of which are fairly arbitrary, does make a huge difference, and is a huge driver for some of these initiatives. And before we wrap up, what's the piece of advice you would give to people who are just starting their careers in content, in regard to structure? What would you say to them as they're getting started?

LK:                    Well, first and foremost, I would, again, say, stand back and make sure that you can see the big picture. And there are so many demands on our time and attention every day. Going to stand up meetings, hitting a deadline, putting out all those fires. It’s really hard to remember what we’re doing here, that we are providing information for our customers, again, in that just in time fashion. And trying to build engagement, and build loyalty. I think if practitioners can keep in mind those principles, what we really are doing here, why we’re doing this job, it will help them embrace the possibilities of structured authoring, and look beyond the silos, and really think about what is in the content, that we need to give to our customers.

AP:                   Larry, that is a great piece of advice. And I think it’s a great place to wrap up. Thank you so much for your time and perspective. I think it will be very helpful to people.

LK:                    Well, thank you again, Alan, for having me participate on this podcast. It’s a privilege and a pleasure.

AP:                   And we share that. Thank you so much. Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post The evolution of smart content (podcast) appeared first on Scriptorium.

The pros and cons of markdown (podcast, part 2)
https://www.scriptorium.com/2021/07/the-pros-and-cons-of-markdown-podcast-part-2/
Mon, 12 Jul 2021

In episode 98 of The Content Strategy Experts podcast, Sarah O’Keefe and Dr. Carlos Evia of Virginia Tech continue their discussion about the pros and cons of markdown.

“If you want to make a website and you need to write the text in a fast way that does not involve adding a lot of the brackets that are in HTML syntax, I think that’s the main use for markdown.”

–Dr. Carlos Evia


Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. My name is Sarah O’Keefe and I’m your host today. In this episode, we continue our discussion about the pros and cons of markdown with Dr. Carlos Evia. This is part two of a two-part podcast. So when we talk about markdown, because I think that probably most of the people on this podcast in general are more familiar with DITA. When you talk about markdown, what is the sort of use case for markdown, the clearest possible place where you say “oh, this is a case where you definitely want to use markdown.” What are those factors?

Dr. Carlos Evia:                   You need to make content that is going to be published mainly to the web. And I say mainly because with markdown now, the processing side has changed. In the beginning it was “let's process it with this tiny, tiny tool that will only convert to HTML or XHTML,” but now there are many other tools that can process and transform markdown into other things, to create that multichannel publishing that we also do in DITA.

CE:                   So, I think the main use case is if you need to have something that is going to be published or presented in a website, and you do not want to write HTML. It's a shorthand, just like when we were back in junior high, when the cool thing was that you would take a course in typewriting so you could work with computers. This is back in the Fred Flintstone days.

CE:                   But before you could touch the keyboard, you had to take a course on shorthand, and shorthand had several syntaxes. I think there are two main systems of shorthand. You had to learn how to do it with a pencil, actually a special pencil. And there were different notations that you would use. And then, once you mastered those things and you could take dictation super fast, you could go and transcribe it using the keyboard.

CE:                   So I think that’s kind of the equivalent of markdown. If you want to make a website and you need to write the text for your website in a fast way that does not involve you adding a lot of the brackets that are in HTML syntax, I think that’s the main use for markdown. Write it following this very simple text-based syntax.

CE:                   And then there will be a tool that will transform it, mainly to a website. But like I said, there are some tools, and these are things that I use all the time. I use Pandoc, for example, and Pandoc is a tool that can transform a markdown file to a ton of things, including XML. So I can send my markdown file or files to HTML, to EPUB, to DocBook. There's no native transform for DITA. And I considered building one a few years ago, but Pandoc is developed in Haskell and I don't understand Haskell as a programming language. So I gave up. But that's the main use. The main use case is you want quick HTML. So go ahead and use markdown.
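Carlos's point that markdown is essentially shorthand for HTML can be seen in a toy converter. This is only an illustrative sketch, nothing like a real markdown processor: it handles just ATX headings and paragraphs.

```python
import re

def md_to_html(md: str) -> str:
    """Convert a tiny markdown subset (ATX headings, paragraphs) to HTML."""
    html_blocks = []
    for block in md.strip().split("\n\n"):       # a blank line separates blocks
        block = block.strip()
        m = re.match(r"(#{1,6})\s+(.*)", block)  # leading #'s mark a heading
        if m:
            level = len(m.group(1))              # number of #'s = heading level
            html_blocks.append(f"<h{level}>{m.group(2)}</h{level}>")
        else:
            html_blocks.append(f"<p>{block}</p>")
    return "\n".join(html_blocks)

print(md_to_html("# Installing\n\nDownload the package.\n\n## Requirements"))
```

Real tooling does this job at scale: for example, `pandoc notes.md -o notes.html` produces HTML, and `pandoc -t docbook notes.md` produces DocBook XML, which is the multichannel path Carlos mentions.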

SO:                   What are some of the limitations that you found in markdown, the factors that caused you to look at it and say, this is not going to be a good fit for what I’m doing?

CE:                   The biggest problem is that it's not really structured text. The structure provided by markdown is mainly at the block level. You can have a heading, you can have a subheading, you can have paragraphs, and then you can have a couple of inlines. But there is no real structure like we have in XML, and particularly in DITA, that allows you to put attributes on a sentence or even a word to tell it how to behave, to tell it how to look, to filter it out or filter it in when you create user-aimed documentation. So that's one of the biggest challenges when you're working with markdown. If you only have one version of content to publish, and there are no filters involved, you don't have to meet the needs of different audiences, you don't have to meet the needs of different platforms: okay, use markdown.

CE:                   And people here are going to be like, “wait a minute! Well, there’s this flavor of markdown that if you add a YAML header and you put a bunch of little pairings of variables, and then you start spicing it up with more of those squiggly brackets and semi-colons,” yes, I agree. But that is not markdown. That becomes something new, a new spaghetti kind of thing that actually, John Gruber, one of the guys who invented markdown, he doesn’t like it when you start adding things to your markdown because it breaks with that idea of making it simple.
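The flavored markdown Carlos is pushing back on usually looks something like the following sketch. The field names are invented for illustration, and everything between the `---` fences is YAML metadata rather than markdown:

```markdown
---
title: Restarting the service
audience: administrator
platform: linux
---

# Restarting the service

Run the restart command from a shell with root privileges.
```

Gruber's original markdown says nothing about such a header; each tool that recognizes it, and each set of keys it accepts, is effectively its own dialect, which is exactly the fragmentation being described here.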

CE:                   So that’s the main limitation that I see from my perspective. When I started talking about it in my classes, when I started using it for my specific publication needs, the structure happens at the block level.

CE:                   Beyond that, it kind of becomes the worst enemy of the content specialist, which is that blob, the blob of text that we all fear, that we see when we're working in a word processor and there's no real structure behind it. That can easily happen in markdown: you're just writing paragraphs and there's really no structure to it. And again, not every document, not every website in the world needs serious, intense structure. But if you're writing for an audience of human beings and a potential audience of machines, machines that are going to be taking your content to do some machine learning or artificial intelligence, that are going to be sending it to voice assistants and to the dashboard of your fancy Tesla, markdown is going to face those limitations: your text is mainly a blob with a header and a subheader, but it's not really structured in a way that enables behind-the-scenes computation. And again, you can claim that “oh, but my flavor, if you add these twenty-five hundred other characters,” yeah, but that is not simple markdown.

SO:                   So I know that you’ve been involved with Lightweight DITA and are leading that effort along with a couple of other people. Does that have potential to kind of unify markdown and DITA, unify these two use cases in some way?

CE:                   Yep. That's precisely one of the ideas behind Lightweight DITA. My colleague Michael Priestley came up with the idea of Lightweight DITA in 2015 (Don Day and Michael Priestley came up with the idea of DITA itself, by the way). They wanted to have a simple way to represent the most frequently used elements of DITA in a smaller set of XML tags. And as they started working on it, they realized, wait a minute, what if we also create a way to do this subset of elements in HTML? And it was a logical thing that, as markdown became more popular as a shorthand approach for creating HTML, why don't we have a Lightweight DITA flavor that is in markdown, where you write it in simple text and then it becomes those HTML elements?

CE:                   And somebody a few years ago sent me a paper, an article, that said DITA is a universal publishing solution. And I want to say Lightweight DITA is not universal. Lightweight DITA is ‘pluriversal’ because it allows different languages to be merged into DITA-like workflows. And at the end, when you publish, when you create a document and you give it to your users, they will not know what came from markdown, what came from HTML, and what came from XML. When they get their document or their website or whatever it is that you’re going to transform to, it all looks like it came from the same source. And that is one of the biggest principles behind the design and development of Lightweight DITA. We want to take topics that follow some basic rules, and you can create them in XML, in HTML, or in markdown.

CE:                   And they all live together in a map and you can transform them, then develop documents that, when you give them to your users, they won’t know what came from where. So it’s a ‘pluriversal’ approach instead of the universal “use this one way.” No, we want to be open to possibilities. So that’s what we do with markdown, and by design we have really tried to avoid creating our own complicated flavor of markdown in Lightweight DITA that has a ton of bizarre characters. We don’t want to over-spice our markdown with squigglies and brackets and stuff.

CE:                   So one of the principles is, okay, do you want to bring a markdown file to a Lightweight DITA party? Bring the most basic one, a couple of hashtags for a Heading 1 and a Heading 2 and two paragraphs. Bring it on, put it in a DITA map or a Lightweight DITA map, and it will work. It will publish. It will transform. And that’s something that you can do now. And to be honest with you, it’s something that you probably could be doing, and some people have been doing, since 2016, when the DITA Open Toolkit started having a version of Lightweight DITA. So I know people that do that on a daily basis. They mix their DITA with markdown topics and nobody’s complaining.
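A minimal sketch of what that party invitation can look like, assuming DITA Open Toolkit conventions (the file names here are invented; DITA-OT uses the `format` attribute to flag topics that aren’t XML):

```xml
<map>
  <title>Mixed-source deliverable</title>
  <!-- a classic DITA XML topic -->
  <topicref href="overview.dita"/>
  <!-- a plain markdown topic, flagged so the toolkit parses it as markdown -->
  <topicref href="quick-start.md" format="markdown"/>
</map>
```

Running the map through DITA-OT, for example with something like `dita --input=main.ditamap --format=html5`, renders both topics in one output, so readers can’t tell which source format each page came from.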

SO:                   Yeah, we actually do have a couple of clients that are doing some version of that. And it’s been interesting trying to figure out how to bring those things together and unify them. I think in their case, it very often comes down to markdown being more convenient for the people who are creating it, and very often it’s stashed in something like Git or GitHub. And then they have this full-on narrative authoring environment, which is the DITA content, but they need to unify the two, as you said, for delivery purposes. So they slurp the markdown into DITA and then out through the DITA processing pipelines.

CE:                   And you know what, when I teach my students how to use DITA with markdown, everything lives in the same GitHub repository. And the most beautiful thing that we have seen is that you can have one GitHub repo that has some topics in DITA and some topics filed in markdown. And then inside that same repo, you have a DITA map, or an XDITA map in the case of Lightweight DITA, that brings them all together. And from that, you can build and publish whatever you want. But in another repository, which might be yours or might be your classmates’, you can have a submodule that borrows that repo. And that other repo can be a headless CMS source that builds a website with React or whatever it is. And it’s using the same markdown files that the other repository is using to build something in a Lightweight DITA workflow.
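Editorially, that two-repo setup might look like this (the repo and file names are hypothetical):

```
docs-source/                  # shared repo: DITA + markdown topics
├── overview.dita
├── quick-start.md
└── main.ditamap              # map that brings both source formats together

cms-site/                     # second repo: headless CMS / React site
├── content/                  # git submodule pointing at docs-source
└── src/
```

The site repo would pull in the shared content with something like `git submodule add <url-of-docs-source> content`, so both builds read the very same markdown files from one source of truth.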

CE:                   So see, they can work together. And because we’re avoiding over-spicing the markdown with squigglies and whatnot, those files still work, and they work in both systems. There you have it: a repo that builds something in a Lightweight DITA workflow, embedded as a submodule in another repository that is using React or a static site generator. You’re using the same markdown text, and you can have them both living together and nobody complains.

SO:                   Well, I think I’ll leave it there because nobody complains. Seems like a good closing for this podcast. Carlos, thank you so much for coming in on this. I’m going to leave some resources, both on markdown, but also on Lightweight DITA in the show notes and I think your background information is in there and we’ll have a couple of other things. So with that, thanks again. And thank you for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The pros and cons of markdown (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:03
The pros and cons of markdown (podcast, part 1) https://www.scriptorium.com/2021/06/the-pros-and-cons-of-markdown-podcast-part-1/ Mon, 28 Jun 2021 12:00:34 +0000 https://scriptorium.com/?p=20401 https://www.scriptorium.com/2021/06/the-pros-and-cons-of-markdown-podcast-part-1/#respond https://www.scriptorium.com/2021/06/the-pros-and-cons-of-markdown-podcast-part-1/feed/ 0 In episode 97 of The Content Strategy Experts podcast, Sarah O’Keefe and Dr. Carlos Evia of Virginia Tech discuss the pros and cons of markdown.

“I think markdown has a huge user base because most people need to develop content for the web. But there’s a set of people that need to be working in something more structured for a variety of reasons, and those are the ones who use DITA.”

–Dr. Carlos Evia

Related links:

Twitter handles:

Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. My name is Sarah O’Keefe and I’m your host today. In this episode, we discuss the pros and cons of markdown with Dr. Carlos Evia. Dr. Evia is a professor and associate Dean at Virginia Tech and also the Chief Technology Officer in the College of Liberal Arts and Human Sciences. Additionally, he’s an expert on DITA XML and has worked toward bringing structured authoring concepts into university curricula. This is part one of a two-part podcast.

SO:                   Carlos, welcome and welcome back to the podcast.

Dr. Carlos Evia:                   Yeah. Thank you for having me again on the podcast.

SO:                   Well, welcome back. And let me start with the basic question and the theme for this podcast, which is what is markdown?

CE:                   Ay yay yay. Well, that’s a tricky thing, because if you go back to the 2004 definition from John Gruber, markdown was supposed to be a very simple text-to-HTML syntax that would kind of look like, I don’t want to use the word structure, but here I am using structure, like a structured email message or the kind of structured text that we all used on Usenet. For all you youngsters out there: when the web was this new thing and the internet was pretty much text-based, you could get all your entertainment, but it was text. To make the text readable, we used some hashtags and underscores and asterisks to emphasize and highlight components. Markdown came to life as, part one, precisely that: a simple syntax that would make text easy to read, easy to digest, easy to understand. But then the second thing that markdown had was a little tool that would convert that syntax to actual HTML, because people were writing HTML and they would be like, oh, brackets, who needs that?

CE:                   Then you wouldn’t need to have brackets. You would just write following that syntax. And then there was a little tool that would attach to blog engines, like Movable Type back in the early 2000s, and that would automatically convert that text-to-HTML syntax to actual HTML, or back in the day, XHTML, that would be presented to web browsers. And that’s it. That’s where markdown was. But I think the evolution of markdown has gone in very interesting ways, not because of the developers or the creators of markdown, but because of the use cases that users have given to markdown.
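To make that “little tool” idea concrete, here is an editorial toy in Python, not Gruber’s actual converter, that handles just two of the constructs mentioned in this episode: hash headings and asterisk emphasis.

```python
import re

def tiny_markdown(text: str) -> str:
    """Convert a tiny subset of markdown to HTML: # / ## headings
    and *emphasis*. A toy sketch, nothing like a full implementation."""
    html_lines = []
    for line in text.splitlines():
        if line.startswith("## "):
            line = f"<h2>{line[3:]}</h2>"
        elif line.startswith("# "):
            line = f"<h1>{line[2:]}</h1>"
        elif line:  # any other non-empty line becomes a paragraph
            line = f"<p>{line}</p>"
        # *word* becomes <em>word</em>
        line = re.sub(r"\*([^*]+)\*", r"<em>\1</em>", line)
        html_lines.append(line)
    return "\n".join(html_lines)

print(tiny_markdown("# Hello\nThis is *easy* to read."))
# → <h1>Hello</h1>
#   <p>This is <em>easy</em> to read.</p>
```

The point of the sketch is Gruber’s original division of labor: the syntax stays readable as plain text, and a small, mechanical translation step produces the HTML for the browser.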

CE:                   And now you can see people who think of markdown as a, I don’t want to say complete, but a partial workflow for developing or storing or presenting or publishing content. And I think that’s kind of weird, because if you were to see some of the flavors of markdown that are out there that add lots of squigglies and chains of semicolons and colons to make the content more structured, or behave more like something that is not just plain text, that’s not markdown, because it’s really breaking with the principle of making it easy to read and making it just plain text. That was a long, long answer to tell you what the original intention behind markdown was and where some flavors or versions of markdown are today.

SO:                   Okay. It started out as super simple and now it’s getting increasingly complicated. And I think for those of us that live in the XML and DITA world, there’s a good bit of, I don’t know, infighting or conflict between the markdown people and the DITA people. Not everybody falls on one side or the other of that fence, but there definitely seem to be two factions. Why? Why are those two groups fighting?

CE:                   Are they fighting? I don’t know about fighting. And let me tell you something. I think that markdown and DITA live in parallel (well, not quite parallel, because they have intersecting points) universes of content creation. And I think that the fight is something that is being represented by at least three types of individuals. Number one, publishers of self-authored, non-peer-reviewed books who write something and say, “This is the way,” like the Mandalorian. “This is the way and you have to follow this way.” And because they self-publish, they are not peer reviewed; it’s my way, and you’re going to think that what I propose is the way to do it. And if you don’t agree with me, don’t read my self-published, non-peer-reviewed book. But if you do, you’re going to probably think, okay, that is the way, and I’m going to think about it. That’s one group of people that are like, yeah, there’s this fight.

CE:                   The second group will be people who, let’s be honest, get paid to say that. We know some people on the Twitterverse who try to create this fight of DITA versus markdown, markdown versus DITA, depending on who you think should go first. And I don’t think there’s a real war. It’s just that people get paid to do that in order to sell a product. DITA versus markdown: DITA be bad, markdown be good, if you buy my content management system or my blogging platform or whatever it is that I’m selling, and I get paid to tweet that there’s a war.

CE:                   And number three in these types of individuals who kind of support this war or conflict: friends of the DITA world who have tried to reach out to the markdown-based crowd. They went to one conference or they did one presentation about DITA, and they were a little sad or disappointed because not everybody in the audience immediately jumped and said, “I love you, DITA.” Oh, kiss, kiss, hug, hug, I’m abandoning everything else. When they came back to the universe of DITA users, DITA developers, they were like, oh, the markdown people, they don’t like us. But I don’t think there’s really a war. I think that the big population, and it’s huge, of people who use markdown as it was intended, as it was developed, as it was created, as a text-to-HTML tool, many of them don’t even know that DITA exists, because they don’t have a use case for DITA. They have a use case for a simple shorthand approach to creating HTML, and they use markdown. It’s not like they’re saying, we hate DITA, because they don’t even know what DITA is.

CE:                   And on the other hand, we have people who use DITA because they are in highly structured, highly regulated environments, and they use DITA because that works for them. But in their everyday lives, say they want to build a website, or they want to put a comment on somebody’s blog, they use markdown. And that’s my case. I live in the DITAverse, but when I need to make a quick website, or I need to tell my students how to do something super simple that is going to be only posted on a website, we use markdown. And up here in my classes, I think we’ve been teaching markdown since 2005 or something like that. And at the same time we’ve been teaching DITA, sometimes in the same class or in different courses, since, I don’t know, 2002 or something like that. I don’t think there’s a war. I think it’s different use cases, and I think that those funding the idea of a war fall into those three groups of individuals that I presented. But you can have them both and use them for different purposes, and I think life can be good.

SO:                   Okay. It sounds as though you’re going to take the grownup perspective on this.

CE:                   Well, I can be mean.

SO:                   Which good for you.

CE:                   And tell you that.

SO:                   Okay. I’m still thinking about the Mandalorian analogy and what I want to know is in this scenario, who is the child?

CE:                   I think the child, I don’t know, because people think that markdown is new, but it’s not a new thing. It’s been around formally, as both of those components, the syntax and the tool to transform that syntax to HTML or XHTML, since early 2004. And if you just take the syntax of writing using the hashtags and asterisks and underscores, well, that’s existed since the early 2000s, if not earlier than that. And that’s about the time that DITA came out of IBM and became a standard. I don’t think there’s necessarily a component in this situation that is going to be the equivalent of that child who has these powers that are in development and can bring hope to the galaxy and what have you. I don’t know. I don’t know if there’s a case in here where somebody’s a David or a Goliath. I think they’re about the same age, and it’s just a matter of the user base.

CE:                   And like I told you before, I think markdown has a huge user base because most people need to develop content for the web. If they’re not using Facebook or Twitter, if you’re writing content and you’re publishing your own stuff that’s going to be on a website, you need to do HTML, and ain’t nobody got time to hand-code HTML anymore, so markdown is an approach to do that. But there’s a small set of people, very highly specialized content specialists, that need to be working in something way more structured for a variety of reasons, such as you know and as the audience of your podcast knows, and those are the ones who use DITA. But I think they can both be good people, and they can both be powerful Jedis in different environments.

SO:                   And with that, I think we will wrap up part one. We will be back to continue our discussion about markdown with Dr. Carlos Evia.

SO:                   Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The pros and cons of markdown (podcast, part 1) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 11:43
The importance of content governance (podcast) https://www.scriptorium.com/2021/06/the-importance-of-content-governance-podcast/ Mon, 07 Jun 2021 12:00:59 +0000 https://scriptorium.com/?p=20370 https://www.scriptorium.com/2021/06/the-importance-of-content-governance-podcast/#respond https://www.scriptorium.com/2021/06/the-importance-of-content-governance-podcast/feed/ 0 In episode 96 of The Content Strategy Experts podcast, Elizabeth Patterson and Gretyl Kinsey talk about the importance of content governance.

“An important part of governance is knowing that changes can happen. Keep your documentation in a central place where everybody can get to it and understands how it’s updated. If you don’t, some groups may start creating their own and that can result in unofficial documentation that doesn’t necessarily capture what should be captured.”

–Gretyl Kinsey

Related links: 

Twitter handles:

Transcript:

Elizabeth Patterson:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about content governance. Hi, I’m Elizabeth Patterson.

Gretyl Kinsey:                   And I’m Gretyl Kinsey.

EP:                   And I think we’re just going to go ahead and dive right in, Gretyl. So could you start out by giving us a definition of what exactly content governance is?

GK:                   Sure. So when we talk about content governance, we are talking about a formal system of checks and balances where we are defining the responsibilities, the accountability, the roles, everything that’s involved, and measuring quality at each step in your content development process. And unfortunately, it is not as much of a priority as it should be at a lot of organizations, but it is critical for success. So it needs to be a big part of your content strategy.

EP:                   And I think it’s really important to note here, if you are in a regulated industry, you must have a plan for content governance and accountability in place, or you get shut down. And if you’re not in a regulated industry, that doesn’t mean you’re off the hook. You still need to document so that everyone knows what’s going on, what the plan is, and you can have that consistency in your organization.

GK:                   Absolutely.

EP:                   So when you are getting this plan in place, what types of things should you include?

GK:                   One is just standards for your content. So defining your content model and how that is going to be maintained and governed going forward. What are the workflows and systems that are going to be in place to do that? So that might be whatever your toolchains are for authoring, for publishing, for review, approval, editing, all of those kinds of steps all the way out through publishing and delivery, and all of the updates that are involved to your content. So just all of those kinds of workflows for the actual development; ensuring that all of that goes smoothly is a big part of your governance.

GK:                   Another one is just defining the roles and responsibilities. So making sure we know who is in charge of doing what particular thing. I think one area where that’s really, really critical is if there’s any sort of change to the content at an overarching level. So if it’s something like branding terminology that changes, if a logo changes, that kind of thing we’ve seen with rebranding before. Who’s responsible for making sure that information gets disseminated out to everybody? Who is in charge of making sure that you’ve got processes in place so all of that goes smoothly with your content, and that it’s not this really awful manual process that takes forever?

GK:                   So that’s another part of it, those roles and responsibilities. And then finally, a big piece is looking at the future. So what is your roadmap? What are your goals? What are your plans? And we like to look at the short-term and long-term future when we’re helping our clients plan for this. So something like, what are your goals in the next year or two, and then five years out, 10 years out, realizing that, of course, the longer you go into the future, the more that can shift and change. But as long as you are looking toward the future and where you ultimately want to be, then that means you can future-proof your content and build that into your governance, and always have that be something you keep in mind so you don’t lock yourself into one path with nowhere else to go.

EP:                   Right. Absolutely. And we talked about standards for content being a part of this content governance, and a part of that are the terms and vocabulary that you’re going to be using and your style guide, that type of thing. And so I want to get into touching a little bit about documenting things because you’re going to have so much information. You need to have it documented somewhere. So Gretyl, could you touch a little bit on what you should document when you are putting this plan together.

GK:                   Yeah, like you were saying, there are all kinds of things that are important to document, and you do need to document everything. Because what happens if you don’t is that inevitably change happens: people leave jobs, departments shift, all kinds of things can happen. And if something’s not documented and that knowledge is just in someone’s head, then if that person leaves, it’s lost. So it really, really is critical to get all that documented.

GK:                   And with the content itself, that’s things like your structure. So if you’ve got a content model, why is your content model built the way it is? What were the decisions that were involved in getting there? What does your structure handle? If you’re in DITA, is there specialization? All of those kinds of things, and in particular around reuse as well: if you have smart structured content and you’re doing reuse, documenting the mechanisms that you’re using and why, all of that reasoning behind it, is really, really important to capture.

GK:                   It’s also really important to capture any kind of standards. And this is not so much about the content structure, but all of the little things that a structure itself can enforce. So things like your style guide, language usage, all of that stuff, it’s really, really important to document that as well. Alongside that, terminology is an important piece. And this gets into things like your branding. So the way that your company name is always written. I know we’ve talked to some people who actually had that as an issue where when they went through a rebranding, getting the company name updated, something as simple as that was really challenging because there wasn’t a documented process for dealing with it. And same for if you’ve got lots of different product names, really important to document that and say, “This is the official name we’re going with.” Because I’ve seen some cases where one department is using whatever the working title of a product was and the other one is using the official branded term. So getting all of that documented and standardized in one place is super important.

GK:                   Taxonomy is another big one. And this is a place where you can capture some of that. Taxonomy refers to how you categorize things so that people can sort and filter and find things. And so this really feeds into search. And it’s really critical for a lot of the metadata that you are going to have on your content. So that data about your data: capturing things like what the content is used for, and what are the roles of the users who are involved with it? And some of the branded terminology often gets captured as part of a taxonomy as well, because things like your products and the way that content is organized according to product names, product families, that sort of thing can make a difference in how customers search for it. So all of that, it’s really, really important to have that documentation.
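As a hypothetical illustration of how that metadata can live with structured content: in DITA, a topic’s prolog can carry audience, product, and keyword metadata drawn from a governed taxonomy. The element names below are standard DITA; the values are invented.

```xml
<prolog>
  <metadata>
    <!-- who the content is for -->
    <audience type="administrator"/>
    <!-- the one official, documented product name -->
    <prodinfo>
      <prodname>Acme Widget Pro</prodname>
    </prodinfo>
    <!-- taxonomy terms that feed search and filtering -->
    <keywords>
      <keyword>installation</keyword>
      <keyword>configuration</keyword>
    </keywords>
  </metadata>
</prolog>
```

Because the values come from a documented taxonomy rather than each writer’s memory, delivery systems can filter and search on them consistently.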

GK:                   And then I think also when you are in a content management system, whatever tool you’re working in, document those processes as well. A lot of times the tool will have its own workflows to handle that. But it’s important for when you’ve got a new person, a new writer, or a new editor who comes on board and suddenly has to start using this, some kind of a quick start guide for that person that helps them navigate that system is another good piece of documentation to have.

EP:                   Right. And I think going back to when you mentioned planning for the future, and then also the rebranding situation, it’s important to keep in mind that a lot of these documents are going to be working documents. They’re going to change. You might have updates to your terminology and to your style guide. And so not to get completely set on it, but understand that while it’s really good to have that basis, that it might be changing as your company grows.

GK:                   Yeah, and I think that’s part of governance too: knowing that those changes can happen, keeping your documentation in some sort of central place where everybody can get to it, and everybody understands how it’s updated and when it’s updated. Because we’ve seen something happen a lot of times where, if there wasn’t really this good, solid central documentation that was updated and distributed periodically, some groups would just start creating their own. And of course, you don’t want that, because then you’ve got this unofficial offshoot documentation that doesn’t necessarily capture what should be captured. So really a big part of that governance is always keeping those documents updated and making sure that people get those updates as they’re made, that they’re not waiting and they’re not using old information.

EP:                   Right. So we talked about content governance being something that needs to be a priority, but can often be overlooked. And something that is also often overlooked is archiving content. So should you include a plan for that as a part of your governance strategy?

GK:                   Yeah, absolutely. And like you said, this is overlooked all the time. A lot of people just… when content gets old, it gets out of date, it becomes legacy content. They don’t know what to do with it, but it’s still around and it’s still there. And I think it’s really important, as part of those roles and responsibilities we talked about, that the responsible person or team go through the content. And when something is five, 10, even 15 years old, ask yourself, is it still relevant? Can this be deleted? Does it just need to be archived and kept somewhere, but not deleted, in case it ever needs to be brought back?

GK:                   What are the guidelines around going through your content every so often and making sure that you are dealing with this archival and legacy content in a rational way? Because what we often see is, if you’re doing something like a conversion, any kind of content overhaul, any kind of major update or improvement, and you haven’t been pruning and dealing with your legacy stuff all along, then all of that suddenly becomes a big question of, what do we do with it? And you have to make a whole lot more decisions at that one time. Whereas if you deal with it all along, it’s a lot easier to manage if you ever have any sweeping changes to your content.

EP:                   So as you’re getting this plan in place, do you have any tips for getting the team on board with all the changes?

GK:                   Yeah, and I think that’s an important thing, because change management is one of the hardest parts of any project. People naturally are resistant to it and they want to see what the benefit is going to be. And I think that’s important. So make sure that when you are putting a plan in place for content governance, you get input from everybody who’s going to be affected. And this is something we do at Scriptorium as part of our content strategies. We talk to all of the different content stakeholders, because that really helps you know that there isn’t a need somewhere that’s being left out, or some group somewhere that’s being ignored, and that the content governance plan actually does encompass everything.

GK:                   So get that input from everybody, get that constant feedback, keep that input going, because as you said, Elizabeth, your content changes over time, your internal documentation about your content changes over time, your future goals change and evolve. And so I think not just getting initial input, but getting ongoing input from everybody is really important. And then also just be really transparent about the changes that you’re making. Don’t try to trick anybody or say, “Oh, this is going to be really easy,” if it’s not.

GK:                   Be honest about what people are going to have to expect and support them. Make sure that you understand, yes, there is going to be some resistance. You’re going to have to help them through these changes if you are making a major change to the way that you are creating and governing your content and that you need to build that support in place so that people don’t get left behind and don’t get overwhelmed by those changes. And then I think going forward, once they get over that hurdle, if your content governance plan is good, it should continue to mitigate that change and make it easier for everybody going forward.

EP:                   Right. Definitely. So I think communicating, being transparent, this is going to tie into the next question, and this is how I’m going to wrap things up. But what else can you do when you’re executing your governance strategy to make sure that it’s successful in addition to that transparency and that communication?

GK:                   Yeah, and I think a lot of this is just wrapping up and reiterating some of the things that we’ve said, but my three big tips are: one, like I mentioned, have that small team, or even just one person, in charge of your governance. Like I mentioned, don’t just rely on the tools. Have that human intervention, have that person or that team who’s responsible. So that’s one way to make sure that you succeed.

GK:                   Another one is to communicate your plans and your updates regularly. Have some sort of schedule in place, maybe, where you say every so often, and we see this sometimes in agile, it might be a sprint schedule, it might just be some other internal schedule you come up with. But every so often, here’s when the updates are going to come out: when content has been changed, when documentation about our content has been changed, when we’re rebranding, when we can expect a new product to be added, any of those kinds of things. Have that regular update schedule so people know when to expect that it’s coming.

GK:                   And then finally, like I mentioned, keep getting that input from your content creators, keep those lines of communication open. And that way, if somebody has a problem, if something about your governance strategy is maybe not working so well, then they can tell you before it gets bad. And that way you can make sure that it does succeed.

EP:                   Right. Absolutely. Well, thank you so much, Gretyl, for being on the podcast today.

GK:                   Yeah, absolutely. Thank you.

EP:                   And thank you all for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The importance of content governance (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:41
DITA 2.0: What to expect (podcast) https://www.scriptorium.com/2021/05/dita-2-0-what-to-expect/ Mon, 17 May 2021 12:00:02 +0000 https://scriptorium.com/?p=20333 https://www.scriptorium.com/2021/05/dita-2-0-what-to-expect/#comments https://www.scriptorium.com/2021/05/dita-2-0-what-to-expect/feed/ 2 In episode 95 of The Content Strategy Experts podcast, Sarah O’Keefe and Kris Eberlein (chair of the OASIS DITA Technical Committee) discuss the upcoming release of DITA 2.0. What can you expect if you are currently in DITA? And what do you need to know if you are considering DITA?

“If you’ve been shoehorning diagnostic information into troubleshooting topics,  you’re going to have a good semantic place to put that content with DITA 2.0.”

–Kris Eberlein

Related links: 

Twitter handles:

Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

SO:                   In this episode, we talk about what to expect with the upcoming release of DITA 2.0. Hi everyone. I’m Sarah O’Keefe and I have Kris Eberlein joining me today. She’s the Chair of the OASIS DITA Technical Committee. Hey, Kris, welcome to the podcast.

Kris Eberlein:                   Hey, Sarah. Thanks for having me. I’m delighted to be here.

SO:                   So happy to see you virtually. So today with the great opportunity to talk to Kris, we wanted to talk about the upcoming release of DITA 2.0, and in particular, talk about this from the perspective of current and future DITA users. We’re not going to do a full overview of the specification, but I will include a link to the draft specification and some other DITA resources that Kris has shared with us in the show notes. So if you need to do some basic research about DITA 2.0, you’ll have those resources there.

SO:                   But with that said, what I want to talk about today is where this is going and try and gain some of your perspective, Kris, on what’s happening here. So let’s talk about current DITA users. If I’m a current DITA user and I’m in DITA 1.2 or 1.3, what’s the most important thing? Or what are the dozen most important things that I need to know about DITA 2.0?

KE:                   Well, the first thing everybody needs to know, and this is users, tool vendors, the community, is that DITA 2.0 is our first release that is not backwards compatible. For all the DITA 1.x releases, DITA 1.2 and DITA 1.3, the DITA Technical Committee really went to a great deal of trouble to ensure that all the changes we made were backwards compatible. There just comes a time when you need to do some housekeeping, when you need to do cleanup, make changes, correct design mistakes, and have a backwards incompatible release. And that is DITA 2.0.

KE:                   So it’s going to present some new challenges that folks who have been in DITA for a while and have maybe gone from 1.1 to 1.2, or 1.2 to 1.3, haven’t experienced so far. It’s going to be a release that is well worth current DITA users upgrading to. We have added very robust support for audio and video. And I think probably for the first time, it’s going to make it fairly easy for folks to really have multimedia in their content without jumping through unnecessary hoops.

KE:                   And just in general, improvements to hazard statements, to simple table, and just a whole lot of nice cleanup. But it does mean that folks that are currently in DITA and who are looking towards upgrading to DITA 2.0 in the future are going to have some planning and some work to do. And the very first thing I think people need to pay attention to is performing a content audit and assessing whether their content contains deprecated items that have been removed in DITA 2.0.

KE:                   The biggest items that I think are going to hit folks with existing content are the alt attribute used instead of the alt element, and the navtitle attribute used instead of the navtitle element. Those attributes are not included in DITA 2.0. And if your content has them and you move forward to DITA 2.0, you’re going to see breakage. So plan on doing cleanup of your existing content if you have any of these items that we’re deprecating.

KE:                   And just as a side note, those attributes have been deprecated since 2010. So it’s not as if we’re pulling the rug out from under people. We’ve given notice for a long time, “These are deprecated. They’ll go away in the future.” And DITA 2.0 is that future point.

SO:                   So 10 years seems like a reasonable timeframe.

KE:                   One would hope.

SO:                   I heard a rumor about steps. Tell me about steps and what you’ve done.

KE:                   Well, one of the things we’ve done in DITA 2.0 is we have removed substeps; instead, we’ve enabled steps to nest. This was done really at the request of many, many users who said, “We want to be able to reuse our steps. We want to be able to reuse steps that have substeps, and we’re running into problems because maybe we have substeps in one topic and they need to be steps in another topic. And just this whole structure of steps and substeps is impeding our reuse.”

KE:                   And so the Technical Committee listened to that and we made that change. So I think that’s going to be a change that will really affect almost every implementation’s task topics. The good news is, that is going to be a very simple change to make across a body of content using scripting or search and replace. And the DITA Technical Committee will be producing some documentation and some scripts and some other utilities to help people do this sort of migration. I also fully expect that CCMS vendors will be providing their customers with a certain level of support.

SO:                   So basically if I have a task topic today, then my first level steps would be step and my second level of steps would be substep. And first of all, I would get rid of substep and just make it step. So I would have a step with a nested step for the second level steps. And then you’re saying you could actually have a third level of steps or fourth or fifth or …

KE:                   Oh, it could nest to depths of being ridiculous. And I hope people don’t do that. I mean, it’s certainly possible to implement some Schematron rules that would restrict the level of steps one could do. But we know nowadays that if people want to have infinite levels of steps, they use the info tag and put an ordered list within it, and so forth.

SO:                   What this means, as you said, is that a thing that used to be a substep, which is now just a regular step, it would make reuse much easier, because I don’t have to worry about the fact that it was a second level step over there, and I want it to be a first level step over here.

KE:                   Absolutely.
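The scripted substep-to-step migration Kris mentioned earlier could be sketched as a simple search-and-replace over the topic source. The assumption here, that the 1.x substeps/substep elements map directly onto nested steps/step in DITA 2.0, should be verified against the final specification and whatever migration utilities the Technical Committee publishes.

```python
import re

# Hypothetical tag renames for the substep-to-nested-step migration.
# Verify the target element names against the final DITA 2.0 spec.
RENAMES = [
    (re.compile(r"<(/?)substeps(\s|/|>)"), r"<\1steps\2"),
    (re.compile(r"<(/?)substep(\s|/|>)"), r"<\1step\2"),
]

def migrate_substeps(xml_text):
    """Rename substeps/substep tags in one topic's XML source.

    The substeps pattern runs first so that the shorter substep
    pattern never clips the longer tag name.
    """
    for pattern, repl in RENAMES:
        xml_text = pattern.sub(repl, xml_text)
    return xml_text
```

In practice an XSLT or DOM-based transform would be safer than regexes for production content, but this shows why the change is mechanically simple.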

SO:                   Awesome. Okay. So for current DITA users, I guess there’s also the tools issue, right? I mean, there’s, as far as I know, very little out there right now that supports DITA 2.0, because I mean, it hasn’t been released yet, so it might be asking a bit much. But that’s something to keep an eye on.

KE:                   It is. I mean, right now the DITA Open Toolkit has a certain limited support for DITA 2.0. Oxygen XML Editor ships DITA 2.0 DTDs. But yeah, as of yet, tool vendors have not started making changes to their applications. You can create a DITA 2.0 topic in your Oxygen Editor, but if you want to use their insert Table Wizard, and you’re trying to insert a simple table and add a title to it, which is permitted in DITA 2.0, you can’t do that. There’s no support in the Oxygen Wizard for that yet.

SO:                   Do you have an idea of timing? Not so much for the tool vendors, but for the specification?

KE:                   I’m hoping that we’ll have the specification and the standard released in early 2022. Or sorry, early 2023. I wish it could be 2022. And this is one of the things that folks are always asking me, “Why does it take so long?” The wheels of standards organizations turn slowly. And to be honest, that is a good thing. Standards live for a long time, and you really want us to get things right.

KE:                   The reason why we’ve got about a year and a half of runway from now is that, although we have just about finished all of the grammar files, the DTDs, the RNG, the things that codify the rules for the standard, we’ve still got a lot of editing and reviewing of the specification to go. And all of that has to happen before we kick off the OASIS approval process, which takes six to eight months.

SO:                   So if I’m a current user, it sounds as though I need to start thinking about this and doing some research and doing maybe some planning, but there’s not an immediate crisis action.

KE:                   Oh, absolutely, no immediate crisis action. It is a good time to start thinking and planning. If you’re a company with a decent sized DITA implementation, this is the time to appoint somebody to be your DITA 2.0 captain. To do research, to look at your company’s content, to think about what it means for your implementation moving forward.

SO:                   Okay.

KE:                   It’s a good time to test drive DITA 2.0. It’s a little too early for people to put DITA 2.0 in production. I think that really can’t happen until there is a little more support from tools and from the DITA Open Toolkit, or, if you’re not using the DITA Open Toolkit, from whatever processor you might be using to pre-process your content or to generate your output, your PDFs, your HTML5.

SO:                   So it sounds like that’s our action item, is to do that research and, “Hey tools people, tool vendors, we need you.” So what about the future users? So, we’ve been talking a little bit about the current users. People who have working implementations, who could be looking at this and thinking about upgrade paths, but what about the people who are just considering DITA? They’re not using it, but they’re thinking about it. Should they be planning to implement in DITA 2.0, or should they just jump into 1.3? What would be the best solution there?

KE:                   It really depends on their timeframe. If you’re starting to implement tomorrow, you need to stick with DITA 1.3. And to those folks, I would say, be very careful to not use deprecated items, to not use elements, attributes, or things that are being removed in DITA 2.0. Obviously, if you’re writing task topics and you need substeps, you need to use your substeps.

KE:                   If you are looking now and maybe thinking your implementation is going to happen six, nine months, or a year from now, you might be able to use DITA 2.0, particularly if you’re using GitHub as your repository. Again, I think the tools are really going to be the gating factor for people to be able to use DITA 2.0 in production.

SO:                   It sounds as almost like you’re saying if it’s a small project that would be done in three months or six months, that’s a quick and dirty. Or not even quick and dirty, but a smaller company with a smaller content set, they might jump into DITA 1.3 and then make the move, but it wouldn’t be that big a move. Whereas if it’s a big, huge enterprise behemoth that moves slowly, this might need to be on their radar.

KE:                   Yes.

SO:                   I mean, even today.

KE:                   I think it’s good to be on everybody’s radar, whether you’re tiny, small, medium, large, or ginormous. This is the time, this is the time to appoint somebody to be your DITA 2.0 resource person, to learn about it, to do that content audit. To figure out what are all the moving parts in your DITA implementation that will be affected?

SO:                   Are there any particular maybe industries or subject matter areas where DITA 2.0 would be particularly helpful, or not helpful as the case may be?

KE:                   Well, one of the things we did redesign pretty extensively for DITA 2.0 is the hazard statement element.

SO:                   So this is warnings, danger, that kind of thing?

KE:                   Yeah. Warning, danger, caution. So very much used for machinery, for medical devices, for anything that has to comply with particular standards, like particular ISO standards around hazards or ANSI standards.

KE:                   So I think we have really listened to folks in those industries and the ways in which the hazard statement element that was introduced in 2010 was falling short. I think it’s going to be very helpful for folks in those industries. And also, if you really have a pressing need for a lot of multimedia content, the support for multimedia we’re adding is very closely tied to multimedia support in HTML5. And so I think that’s going to be very good news for a number of implementations.

SO:                   Okay. And is there anybody that you would put on the low priority, “You don’t need to do this right now, you can probably …” Who can wait?

KE:                   Wait to prepare for DITA 2.0, Sarah, or probably don’t need to go to DITA 2.0?

SO:                   More like, “This is going to be lower priority for you,” for whatever reason. Who is that person where you would say there’s not a lot here that’s going to be urgent.

KE:                   Well, I think if your company is offering content in DITA 1.2 or 1.3, and everything is working just fine for you, you’re not experiencing any difficulties with hazard statements, and you’re not trying to do filtering on bookmaps, then it’s going to be lower priority.

SO:                   Do you have any other big picture advice for people that are thinking about this? Anything that we haven’t covered that people should consider or know or think about?

KE:                   Well, I do have one thing I’ll add, you asked who I thought would really benefit from DITA 2.0. And I think companies, if you’re using the troubleshooting topic, one of the key things added in DITA 2.0 is we added structured elements for providing diagnostic information. So if you’ve been shoehorning diagnostic information into troubleshooting topics, now with DITA 2.0 you’re going to have a good semantic place to put that content.

SO:                   Oh, that’s good news.

KE:                   So that’s another area that folks will really get to see some benefit.

SO:                   Okay. Well, Kris, I really appreciate your time today and sharing all your hard-earned wisdom and knowledge about what’s coming up. And I think with that, I’m going to close it out. As I said, I will leave some additional DITA 2.0 resources in the show notes so that you, the listener, can do your research and figure out where this is going. And we will make sure that we have some way of contacting Kris if you have any inquiries about the DITA 2.0 committee and those kinds of things.

SO:                   And with that, thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post DITA 2.0: What to expect (podcast) appeared first on Scriptorium.

Understanding content migration (podcast) https://www.scriptorium.com/2021/05/understanding-content-migration-podcast/ Mon, 03 May 2021 12:00:54 +0000 https://scriptorium.com/?p=20317 https://www.scriptorium.com/2021/05/understanding-content-migration-podcast/#comments https://www.scriptorium.com/2021/05/understanding-content-migration-podcast/feed/ 1 In episode 94 of The Content Strategy Experts podcast, Bill Swallow and David Turner of DCL take a look at content migration and discuss all of the players and parts involved.

“It’s not just about moving the content and loading it to the new system. You actually have to transform the content from the unstructured formats.”

–David Turner, DCL

Related links: 

Twitter handles: 

Transcript:

Bill Swallow:              Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we discuss content migration. Hi, everyone. I’m Bill Swallow, and today I have a special guest, David Turner of Data Conversion Laboratories, also known as DCL. DCL is an industry leader in data and content transformation services and solutions. Hey, David.

David Turner:                   Hi, Bill. Thanks so much for including me today.

BS:                   Oh, thank you for joining. And today, we’re going to take a look at content migration and talk about the players and parts involved.

DT:                   Yeah.

BS:                   I think to kick it off, what is meant by content migration?

DT:                   Well, that’s a good question. It’s actually a broad term. But in general, you’re just talking about moving content in whatever formats to some kind of a new repository. In the work that we do at DCL, that typically means somebody’s implementing a component content management system, or maybe moving from one CCMS to another, or a lot of times we work in scholarly publishing where they’re changing website hosting platforms. All that to say, it’s not always the most popular conversation. I think I heard one technology provider recently say, “Migrations are death.” But they are an important conversation, and those are the kinds of content migrations we typically work on.

BS:                   Alright. Why might you need to migrate content then?

DT:                   Well, depending on your use case, you actually might not have to do a lot of content migration. Some platform vendors will encourage you just to start from scratch, or you might even be able to write a script to just lift and load content. If you’ve got really well-formed content, it can just work. But I think in most cases, you typically need to be thinking about the migration strategy, specifically if you’re moving from, say, an unstructured content management workflow to the SCM, or structured content management, space, like we’ve seen in tech docs, or are starting to see a lot more in life sciences and educational publishing. In these instances, it’s not just about moving the content and loading it to the new system; you actually have to transform the content from the unstructured formats, like Word or InDesign, into component-based formats, like DITA or other flavors of XML. And, ultimately, you have to do that in a way that minimizes manual cleanup.

DT:                   Now, on the scholarly publishing side, it’s a little different. You’re typically not necessarily moving to a new kind of XML. You might be taking decades of content and just updating those content models. So really for them, they’re looking to try to clean things up, get rid of some warts, make sure that links are working, things like that.

BS:                   And I can imagine it’s not particularly easy to move from something that’s unstructured, like Word or InDesign, into some kind of structured content like XML.

DT:                   No, absolutely not. I personally didn’t understand how difficult it was when I first started, but all it takes is spending a day trying to convert a Word document to some DITA document, and you’ll pull your hair out, even if you have some technology that’ll automatically do it. So typically, you’ve got to think about these things with a big picture, and you got to really approach them in a strategic way. So in any case, while a lot of your tech providers don’t like to necessarily emphasize the need for content migration, it really can be a critical piece.

DT:                   One of my favorite quotes, I think from the SAS Institute, says that “Bad data is the leading cause of the failure of IT projects.” And I think you can just insert the word “content” in there as well. The data that is in and around your content, if you don’t get that right, is going to cause your project to fail.

BS:                   Oh yes, definitely. So what’s involved in a content migration then in that case?

DT:                   Well, can I give the favorite answer of all consultants, technology providers and service providers? I’ll just say it depends.

BS:                   We use that one too.

DT:                   I think that’s a very… Honestly, it does depend on a lot of different factors. I say, “First of all, how much content are you moving?” If you’re just moving a little bit of content, maybe it’s just some simple in-house expertise. But if you’re bringing content together from a lot of different legacy systems, you might need a lot more help. “How big of a change is this for your team?” For a scholarly platform migration, there’s very little change management. But if you’ve got a team of medical writers who are used to working in Word and are now going to be working in DITA or some other flavor of XML, that’s a huge change management endeavor, so there’s going to be more involved with that migration. And then honestly, when you look at content formats, that’s going to be another piece of this. “Is scanning required? Is this just Word content, or do you have Word and InDesign and PowerPoint and RoboHelp and FrameMaker? And do you have a lot of duplicate content?” So there are a lot of things that go into it that’ll help determine what you need.

DT:                   But in general, I’d say you can probably group the players in the ecosystem into three big groups. First of all, is the technology vendors? So that would be your platform provider, or if you’re moving to a CCMS, that CCMS provider. Typically, there are some add-ons to that. So with the CCMS, sometimes they have an onboard editor for doing the XML. But other times, you’ll want to bring in a third-party editing tool, some sort of a structured content authoring or editing tool. And similarly, you might have some providers on the back end that are going to automate your export formats or manage your delivery out to different places. So those are the key players, I think, in terms of the technology piece.

DT:                   From a services side, depending on how big of the engagement is, you’re probably going to have a systems integrator of some kind. You’re probably going to need to have a good conversion vendor, and almost definitely, you’re going to need to have a good consultant. And really, that’s for the services side.

DT:                   And then I think internally, you also need to be thinking about who your players are. An internal project leader, ideally, somebody who sits in between the content people and the IT side. Because in my experience, a lot of times, a project that is just led by the tech side, those tend to fail. And similarly, if somebody is just trying to lead it from that content team side, they’re going to run into a lot of trouble. But if you can have that person in the middle who speaks both languages and could be that champion, that’ll really help you to have success. And then that person needs to also cultivate some other internal project champions. Another sure way to fail is to have a good internal project leader, and then a year from now that person goes someplace else and nobody else is there to pick up the mantle. So that person’s got to be really good at spreading the gospel, if you will.

BS:                   Yeah. Yeah, it’s very true, because once all of these other players leave, all you are left with to keep things going is that internal team.

DT:                   Yeah.

BS:                   So, yeah, if the internal team doesn’t have a game plan going forward, then the whole initiative, really, can fail.

DT:                   Absolutely.

BS:                   All right. So you have all of these players, all of these different parts going on. How do they all work together?

DT:                   Well, I think the technology vendors, it’s pretty self-explanatory, and most of the time the technologies have been made to work together. So you’ll have the CCMS, which is your place to store the content, manage the content, share the content, reuse the content. And then there’ll be an editing tool that’s typically already been integrated in some way, and then the rendering tools and things like that. On the service provider side, if I were going to start one of these projects, I would start probably with the consulting piece. Having a consultant, they can ask the hard questions, help develop those internal leaders, implement the change management. And really, one of the things that I think is most critical is being able to stay focused on that big picture. From a format side, they also help to do things, like establish content models, content standards, content workflows, et cetera. Have I left anything out on the consulting side? I think you might have some expertise there.

BS:                   Yeah. I think you hit all the big ones. But yeah, the big one around there is change management, because any kind of project where you’re moving from one technology base, or several into another, basically everyone’s whole world is going to change at that point. So being able to make sure that you have all your ducks in a row with regard to every aspect of that process really helps. And again, it helps inform that team and that team champion and the ones that they’re working with to keep things rolling, to have that game plan going forward.

DT:                   Absolutely. So after the consultant, I think a good place to get involved now is with the conversion vendor. The conversion vendor is going to then take a look at this valuable asset, this content that you have, and is going to help you to meet those content standards that were established by the consultant.

DT:                   At DCL, we actually do a lot of upfront analysis on these kinds of projects to optimize the content for whatever the new platform is and to minimize any cleanup. So many people look at these projects and think it’s an all-or-nothing proposition: “I have to bring everything or nothing.” But we can take a measured approach, maybe start with a small amount of content, then ingest one content type a little further down the road, then another after that. We’ll convert those content formats, we’ll provide QA, we’ll help clean up metadata.

DT:                   I should probably also just caution you, don’t overlook this part. Sometimes you’ll have people that haven’t worked with a DITA integration before, and they’ll think, “Ah, can’t we just write a script?” Internal developers look at this, and they go, “Oh, we should be able to just automate this.” But I would again caution you, because bad data kills 80% of projects. Your content is not an afterthought, it’s an asset. And you’ll actually spend a lot more fixing bad content transformations later than just investing well in the first place.

BS:                   Yeah. It’s a garbage in, garbage out thing.

DT:                   Absolutely. And your users’ experience is going to depend on that. What they get from your content really is a reflection of you as a company, and it’ll lead to more sales or it’ll hurt sales.

BS:                   Yeah. Couldn’t agree more.

DT:                   Of course, the systems integrator, they’re going to be handling the plumbing, making sure that the tech environment works properly, whether it’s cloud-hosted or internally-hosted. They’re going to try to make sure that all the technologies are working together seamlessly. Maybe they’ll take “this is how we’re going to do the workflows” and actually implement it, and make sure the inputs and outputs are working, et cetera.

DT:                   And then the other really important piece, as I said before, the internal players, they’re going to work with a consultant to make sure that the company has everything it needs and learns to stand on its own. Because, as we talked about before, the consultants and integrators, eventually, won’t be there every day. So this internal staff needs to make sure that things are documented, needs to make sure that they’re actually able to use this content, which is why it’s important to get the conversion done right at the beginning, and then be able to help get the company culture to adapt to this new technology and these new processes, to really ensure that long-term success.

DT:                   I guess I would say in summary, there’s a lot of moving parts, but knowing these players and how they fit in and placing some value on that is going to make that a lot easier. And I think, ultimately, will help you to put together a plan that’s palatable for your management when it comes to migration.

BS:                   Yeah, absolutely. Well, I think we’re going to cut it off here, but thank you, David. This has been a great chat and a lot of great information in there.

DT:                   Well, thanks so much. And I look forward to maybe doing another one of these in the future.

BS:                   That would be great. Alright. Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post Understanding content migration (podcast) appeared first on Scriptorium.

DITA for small teams (podcast) https://www.scriptorium.com/2021/04/dita-for-small-teams-podcast/ Mon, 05 Apr 2021 12:00:34 +0000 https://scriptorium.com/?p=20245 https://www.scriptorium.com/2021/04/dita-for-small-teams-podcast/#respond https://www.scriptorium.com/2021/04/dita-for-small-teams-podcast/feed/ 0 In episode 93 of The Content Strategy Experts podcast, Gretyl Kinsey and Sarah O’Keefe talk about how to determine whether DITA XML is a good fit for smaller content requirements.

“Scalability or anticipated scale is actually a good reason to implement DITA for a small team.”

–Sarah O’Keefe

Related links: 

Twitter handles: 

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about how to determine whether DITA XML is a good fit for smaller content requirements. Hello, and welcome. I’m Gretyl Kinsey.

Sarah O’Keefe:                   Hi, I’m Sarah O’Keefe.

GK:                   And we’re going to be talking about small DITA in this podcast. So just to set the scene, what do we mean when we talk about small in this context?

SO:                   So when we talk about small DITA or small DITA requirements, it could be a variety of things, but basically, a smaller company, a limited number of content creators, and/or a small content set. So instead of tens of thousands of pages translated into 50 languages, we’re talking about two or three thousand pages in four languages, or 500 pages.

GK:                   Right. And sometimes, as far as the actual content production people go, maybe it’s just one writer, maybe it’s a small team of two or three or five. And maybe it’s also something like you have a fair number of contributors who are part time, but you only have maybe one or two people who actually gather all of that content and put it together. So the total operation for that content production is pretty small scale.

SO:                   Following up on that, I think we all know what we mean by a big group, a big implementation. So it’s almost helpful to look at small DITA as being not large. Not tens of thousands of pages, not 50 writers, not a ton of languages, not a ton of scale. So it’s one of these environments where you don’t have the slam dunk business requirement, because you have so much stuff.

GK:                   Yeah, absolutely. We know that DITA is typically a good fit for a larger team, like you’ve said, because it really saves a lot of cost from the single-sourcing angle. So for example, the larger the content set you have, the more potential you probably have for reuse across that content set. And that means that you can save a lot more from establishing that single source of truth in DITA, whereas when you’ve got a smaller content set, you may not have that much reuse, or what you have may not justify the cost of that setup.

SO:                   Yeah. I mean, it’s really common, I think, to see organizations that have a small chunk of content with actually zero reuse. So you look at it from the, do you have a business case for DITA point of view, and for reuse? The answer is absolutely not. Because there isn’t any.

GK:                   When we look at some of the other factors that typically work for these larger groups too, another one is localization. And in a lot of ways, that one stems from reuse, because the more languages that you translate into, the more times that you have to pay, and if you are using copies rather than true reuse, that makes your costs go up. But if you have a smaller team, or maybe you’re delivering to a smaller market, and you don’t have a lot of reuse, or a lot of localization, or maybe any localization, then again, it becomes a little bit difficult to justify something like DITA. Whereas when you do have a lot of localization, especially on top of a lot of reuse, then that does justify it for a larger team.

SO:                   Right, exactly. Because the more content you have, the more time and money you’re going to save by automation. I mean, you automate, and then you automate across 10, 15, or 20 languages, and you automate across all your deliverables. And all that stuff adds up. When you start talking about a team of 20 people who spend 10, 15, or 20% of their time producing all these different channels or deliverables, and you automate that away, that’s a huge gain in productivity. When you have one person working through those things with limited or no localization, then the value of that is just not there. So far, we’re doing an excellent job of convincing everybody that if you have a small team, you probably don’t need DITA.

GK:                   Well, it really depends. There are some circumstances where it can be a good fit, and some where it’s not such a good fit. It really depends on your specific situation. When it comes to determining that fit, it may not always work out, because you may see that the cost of standing up the DITA environment outweighs the benefits, like we just talked about. Even if you have a use case for DITA, whether it is reuse, localization, or more automation in your publishing, if you don’t have enough content to justify that, then your management may just look at your setup and say, “Well, yes, DITA could get you all these things that you’re asking for and make things easier, but it’s never going to recoup the cost of the initial stand-up.”

SO:                   So what are those things? I mean what does it look like to have a small DITA group, or a group for whom a small DITA implementation makes sense?

GK:                   So one example is if your content is high-value, and what I mean by that is that the content is worth a lot to a lot of different people. Maybe it needs to undergo some sort of digital transformation process where it can be delivered to the right channels, and it can be remixed and reused and repurposed in all sorts of different ways; there’s a lot of demand for that content. So even if there’s not much of it there, that content still has so much value that the benefits you would get out of putting it into DITA outweigh those costs.

SO:                   So we’ve seen this I think with content that is regulatory, not necessarily regulated, but in fact the regulations themselves which are then distributed to lots of different people. So if your content is in fact the standard that says, this is how you should be doing things in XYZ industry or in XYZ organization, that may be a candidate. The other place that we’ve seen this is in high-value educational content, which might not be a ton of pages per se, but it’s the standard curriculum or it’s referenced material that explains to you how to do a particular thing, or how to get a particular certification.

GK:                   Absolutely. And when it comes to that content, that’s where the value of it and the need for it for that audience really make the difference. And I will also add that some of the clients we’ve worked with that have this use case do actually have a larger content set, but they’ve got a smaller team working on it. And so that’s where it really becomes this question of, is DITA a good fit? They get a lot of those benefits out of having a pretty decent volume of content, and then they can use the small team as almost more of a justification, because they can say, well, it makes these two or three or five writers’ lives easier if they don’t have to do so much manual wrangling of that content, and they really can get the efficiency out of producing that high-value content.

SO:                   Right. I mean, it’s probably worth noting that when we talk about high-value content (and we’ve described a couple of content types), the reason it makes sense to put that into DITA is that it enables you to label the content in useful ways. So you can have an element called regulation, or you can have a specific table that’s repeated over and over again with elements that describe what’s in the table, so you can provide these labels that give you information about what’s going on in the content. And because of the remixing that you’re talking about, having specific labels instead of just heading or paragraph allows you to capture what’s going on in the content, and then remix it downstream and do what you need to do with it.

SO:                   So the content is high-value in the sense that it gets remixed, repurposed, distributed, used by a lot of people, and it’s worth putting into DITA, because that allows you to give it those labels that make that remixing and repurposing better.

GK:                   Absolutely. And that gets into another use case that we’ve seen where maybe a smaller content set or a smaller team can benefit from DITA, which is that the customers are demanding a type of content delivery that is not possible with your current setup. So if you’re in some sort of desktop-publishing-based setup, or you’re not digitally delivering your content but there’s a need for that, then that’s another area where you can evaluate and say, does it help to have DITA? And in particular, when customers start demanding personalization, they want custom content that’s delivered to them based on maybe the products that they’ve bought, or the services that they have decided to use from your company. If you get that semantic tagging in there and you have everything structured, then that delivery becomes possible.

SO:                   And I think it’s worth noting here that we still today see a steady flow of customers who tell us things like, “Well, we’re authoring in some desktop publishing tool and we’re producing PDF for our content. We really need to put this on our website, not as a PDF but actually as some sort of web HTML, for the first time, and we’ve never done it.” There are enormous numbers of organizations out there that are still in that boat. So for those of you that are listening to this thinking, but everybody’s on the web, the answer is that, in fact, lots and lots of people actually are not just yet.

GK:                   Right. And that’s something that surprised me a little bit with how frequently we still do see that. And I think that’s especially the case with some of these smaller teams, because they just don’t have the resources to make that jump, to make that digital transformation. So I think that, that’s where it really gets down to this point of looking at, is DITA a good fit? Because, it can make that leap over to that digital delivery possible.

SO:                   Yeah. I mean, we’re still seeing, it’s not super common, but we’re still seeing legacy content in PageMaker, in QuarkXPress, in Interleaf, and I’m going to stop there before I dig myself further. Those people are out there, and you’re not alone.

GK:                   Absolutely. So one other use case, too, that can help if you have a small content set, or a small team, is looking across the organization at a broader level. Are there multiple departments that you have at your company that maybe need to share content, but they are limited in their ability to do so? Maybe they’re working in very distinct silos. Because if you look at it that way, even if each team is small on an individual basis, when you put all of that content together, then it starts to add up, and you maybe start to have more of a use case for something like DITA to save you costs on the entire content set when you put it together, and to also look at those benefits that you can start to get reuse that you couldn’t have before.

SO:                   Yeah. I mean, we always start with the tech comm group as the default. And that’s, I mean, 100% where DITA lives to begin with. But the groups that we see here are technical training groups, who probably are reusing content from tech comm. And also, increasingly, the sort of technical marketing or sales enablement groups that are producing pretty scary white papers and other kinds of marketing materials that are not just a short product description or a one-page data sheet, but rather a longer-form document (not long-long, but longer) that really could benefit from this. And in addition to the content reuse and sharing that you’re talking about, there’s also value in sharing the channels, the delivery channels. So if I have to build out a delivery channel for HTML, or for PDF, or for a portal, or whatever else, it’s really handy to be able to share that with those two or three or five other departments, so that we can all take advantage of that infrastructure, instead of having to build it two or three or five times for all your different silos.

GK:                   Yeah, absolutely. And this is another case that ties back to what we were just talking about: you’d be surprised how many teams are still out there that are entrenched in desktop publishing and haven’t gone digital. We see a lot of cases along similar lines where we go in and ask these different departments, how are you sharing content right now? Because you might have the training group that needs to reference something in the technical manuals; you might have a marketing team that’s using some of the training materials in their presentations. And when you ask them how they’re sharing, they just say, “We’re going to their published documents and copying and pasting into our own systems,” which of course gets everything out of sync, has a lot of issues with version control, and introduces a lot of inaccuracies. And so there are still many, many cases where this is the only way that they can make those connections and use that content. And once you open those doors and have everyone working together with DITA as the basic framework, then it really gets rid of all of those barriers and those silos, and allows production to be much more efficient across the board.

SO:                   So what about a component content management system, a CCMS? We’ve been talking about DITA in small teams, but is a CCMS a requirement as part of this?

GK:                   It’s actually not. And I think a lot of people misunderstand that as well. They think that if you go the route of a DITA setup, you have to have something like a CCMS as part of that to get the workflow benefits it provides for things like reviews and approvals, and the sort of end-to-end authoring-to-review-to-publishing pipeline. But you actually can work in DITA without a CCMS. And we’ve seen several examples of these smaller teams doing this as a cost-saving measure. So they might use something else, maybe something like Git, for version control, and then they’re just working on DITA in some sort of web editor, and using the DITA Open Toolkit for publishing. And they don’t have it all connected in a CCMS, but they have enough coordination, because it’s a small team, that everyone is able to communicate about the process. And they don’t necessarily need that overhead of a CCMS to make things work.
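
A rough sketch of that lightweight setup: each topic is a plain XML file kept under Git, and the DITA Open Toolkit builds the output from the command line (the file names and step text here are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<!-- replace-filter.dita: a minimal DITA task topic, versioned in Git -->
<task id="replace-filter">
  <title>Replace the filter</title>
  <taskbody>
    <steps>
      <step><cmd>Power off the unit.</cmd></step>
      <step><cmd>Remove the old filter and insert the new one.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Publishing is then a single DITA Open Toolkit command such as dita --input=main.ditamap --format=html5, which a small team can run locally or from a CI job, with Git standing in for the version control a CCMS would otherwise provide.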

SO:                   Yeah. I mean, to be clear, you do get additional functionality from the CCMS. It’s just that in a small team, you can do sort of an 80-20 solution: you can get 80% of the functionality with source control, and the last 20% would be nice to have. And if you have a big team, you’re going to need it. I mean, we should be careful with this one. All of our CCMS partners are going to yell at us. But the bigger your team is, the more value you get out of a CCMS. If your team is smaller, there is still value there. It’s just that the overall value is smaller, because your team is smaller. And so you can look at that. But certainly there are lots of instances where the smaller teams need a CCMS, and of course, they’re going to scale it appropriately. They’re not going to spend huge amounts of money on a CCMS, but there are some out there that can be very reasonable.

GK:                   And we’ve seen a few cases, too, where a smaller team will start in one CCMS that is designed for a team of that size; vendors have different levels and plans for different sizes of teams or different amounts of content. And teams can start small and work their way up. Sometimes they do that within a single CCMS and upgrade their plans. Other times, they might change from one CCMS to another, depending on how they grow and scale over time. And we’ve also seen some companies use the homegrown approach of managing their version control and their authoring and publishing themselves, temporarily, maybe for a few months or a few years, while they get to the point where they truly do need a CCMS. And they have that stopgap period. So there are a lot of different ways that you can approach things. And it’s all very flexible, because it can change over time. And hopefully it will change as you grow.

SO:                   I mean, that might be… Scalability is a really interesting point. Because one option, or one strategy that you might use is that you look at your company, your organization, you say, we’re going to grow. We’re a hot startup, we’re in the space. We’re going to grow, we have to scale. And in that case, you might take a hard look at implementing DITA now while you have a small team. You may or may not have a really great justification for it with your team of two or three or five. But you know that you’re going to be 20 in a year or in a year-and-a-half. And at that point, you’re going to be working at lightspeed. And actually taking that pause and doing a big implementation is going to be problematic.

SO:                   So scalability or anticipated scale is actually a pretty good reason to implement DITA for a small team. Like we’re this big now, we know we’re getting bigger, we know what’s going to happen, we’re going to build this out now while we have a small content set, it’s going to be relatively easier to do it instead of waiting for that challenge to snowball, and then having to really do a big conversion process.

GK:                   Yeah, absolutely. And I think that just gets into the idea of future-proofing your content strategy and building that in as part of it. When we come in and do an assessment, which is how we would help make that determination of whether DITA is a good fit for a smaller team or not, that’s a big part of it: we look at what problems you’re trying to solve right now versus what your goals are for the future, and the things that you anticipate happening in the next year, in the next five years, 10 years down the road. And we try to figure out how we can help you come up with a plan that takes that into account. And that’s where that scalability piece is really, really important. Because if you anticipate that need early enough, you really can save a lot of cost and effort and headaches when it comes to actually getting DITA in place.

SO:                   Yeah, it’s much easier and cheaper to do this when you don’t have a ton of existing content in some other format.

GK:                   Yeah, absolutely. So is there anything else that you want to talk about when it comes to advice for small teams using DITA?

SO:                   We haven’t really talked about conditionals and variant content. That would be another thing to just sort of keep in the back of your mind. If you have content variants, DITA’s pretty good at that. And it’s one of the things that it handles well that can be problematic elsewhere.

GK:                   Yeah. Conditionals really play into the personalization angle. We’ve seen that with some of the small teams that we’ve worked with where that’s been a necessary part of making their personalization happen. So that’s definitely a big thing to keep in the back of your mind along with all the other things that we’ve talked about.
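
As a small sketch of how conditionals drive personalization in DITA: conditional attributes mark the variants in one source, and a DITAVAL file selects what each audience receives (the attribute values and step text here are examples):

```xml
<!-- In the topic: both variants live side by side in a single source. -->
<steps>
  <step audience="admin"><cmd>Open the server configuration console.</cmd></step>
  <step audience="user"><cmd>Ask your administrator to update the setting.</cmd></step>
</steps>

<!-- user.ditaval: passed to the build to strip admin-only content. -->
<val>
  <prop att="audience" val="admin" action="exclude"/>
</val>
```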

GK:                   So with that, I think we can go ahead and wrap things up. So thank you so much, Sarah.

SO:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com, or check the show notes for relevant links.

 

The post DITA for small teams (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 20:44
How to align your content strategy with your company’s needs (podcast) https://www.scriptorium.com/2021/03/how-to-align-your-content-strategy-with-your-companys-needs-podcast/ Mon, 22 Mar 2021 12:00:18 +0000 https://scriptorium.com/?p=20230 https://www.scriptorium.com/2021/03/how-to-align-your-content-strategy-with-your-companys-needs-podcast/#respond https://www.scriptorium.com/2021/03/how-to-align-your-content-strategy-with-your-companys-needs-podcast/feed/ 0 In episode 92 of The Content Strategy Experts podcast, Elizabeth Patterson and Alan Pringle share how you get started with a content strategy project and what you can do if you really don’t have a solid grasp on your needs.

“It’s about opening yourself up to getting feedback from someone who’s done this stuff before, and may come up with some solutions that you didn’t necessarily consider in your own thinking.”

–Alan Pringle

Related links: 

Twitter handles: 

Transcript:

Elizabeth Patterson:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we’re going to talk about what options you have when you know you need a content strategy but can’t get a handle on your needs.

EP:                   Hi, I’m Elizabeth Patterson.

Alan Pringle:                   I’m Alan Pringle.

EP:                   Today we’re going to discuss how you get started with a content strategy project and what you can do if you really don’t have a solid grasp on your needs. I’ll kind of start things off and just share that when we have introductory meetings with potential clients, there’s often a problem or a pain point that they express to us, but there can be a disconnect between understanding what you need to do and what you want to do in order to fix that. Alan, I want to ask you this question. Why do you think it’s common that we see that disconnect?

AP:                   To me, it’s very similar to when you go to the doctor, for example. You have got some pain or ache and you can’t quite figure out what’s going on, because guess what? You’re not medically trained so you go to the doctor and he or she looks at you and says, “Based on these symptoms, here are the systems in your body that could be contributing to that problem.” Again, because you don’t have medical training, the doctor may come back with some suggestions that you would have never have thought of, because guess what? You’re not a doctor. That’s kind of how I see it. You have an issue, a pain and ache, and in this case it’s content related, if you’re talking about content strategy projects, and you go to an expert and say, “We’ve got this going on, how can we fix this and make it better?”

EP:                   And that’s a very good point. I think also there’s a bias there, and it can relate to this doctor analogy too. If you take really good care of your health but you’re having some sort of issue, you might not really think clearly about some other causes and going to the doctor would help you. It’s the same thing with a company. You might be biased because you’re inside that organization and you’re not thinking about it as thoroughly as you should.

AP:                   Right. That gets into the whole third party thing.

EP:                   Absolutely.

AP:                   It’s like you go to a friend for advice. If you have got relationship problems or whatever, or you’re buying a house for the first time, what do a lot of people usually do? They go and talk to a friend who’s been through something similar to get their input on it because they’ve been there. Again, it’s about kind of opening yourself up and your mind to getting feedback from someone who’s done this stuff before, and will probably come up with some solutions that you didn’t necessarily consider in your own thinking.

EP:                   Right. That really pulls us into the next question, which is what you can do. One of the responses to that are to look into doing some sort of discovery project with a third party. Could you speak a little bit to what a discovery project is?

AP:                   Sure. When clients come to us, they usually say, “We know we have this content-related problem.” What we do is say, “Okay, let’s take a look at that.” It becomes part of a bigger engagement, essentially, because what we need to do is back up a little bit from that pain point. We need to figure out what the big overall business goals for the company are, and then we can say, “Okay, this pain point is likely happening because it’s not aligning with this particular requirement.” People usually come to us when something’s wrong or broken, just like you go to the doctor. It’s not usually, “I feel great. I’m going to the doctor.” It doesn’t work like that, generally. Something’s wrong. They come to us. What we want to do is take a look at what’s broken, take a look at the big overarching business goals, how that content problem ties into them, and what you can do to better align that content pain point with the business goals of the company and fix that problem.

EP:                   Right. When we do discovery projects, there can be differences depending on the type of project it is. But overall, discovery projects are very similar. We’re identifying gaps. We’re identifying tool possibilities. We’re putting together a map of solutions for your content project. They all look very similar in that sense.

AP:                   Right. There’s an overarching kind of theme or goal to these things. I’m glad you mentioned tools, because there’s often this temptation when you’ve got some kind of problem: “Oh, I’m going to get a piece of software that will magically fix that.” It doesn’t usually work like that.

AP:                   That’s why I think it helps to have someone come in to evaluate and articulate what the requirements are, to help you build that list so you can pick the right tools. There needs to be this conversation. There needs to be a lot of back and forth among different people in your organization, whether they are directly or indirectly affected by content, in the case of a content strategy project, of course.

AP:                   A lot of what we’re talking about applies to business in general, but of course our focus is more on content, because you want to get the big picture and get those requirements laid out and then find the tools and solutions that fit that thing and make those goals come to life and work. If you don’t do enough discovery and you don’t put a lot of thought process and you just pay attention to marketing, or what you heard another company did, that gets into a dangerous territory where you may not get return on your investment when you buy a tool and it doesn’t turn out as you anticipated.

EP:                   Absolutely. You’re talking about how important it is to talk to all of these people so that you can pick the right tool. Oftentimes stakeholder interviews are part of a discovery project so that the third party goes in and they talk to the different stakeholders that are going to be affected by that tool, find out what their pain points are, and then that helps to identify a tool that’s going to really solve the problems or fit your solution.

AP:                   Right. I think it’s also worth pointing out too, on these discovery projects, this is just about looking at tool options and suggesting the possibilities, and then later, what we generally do is have a phase where we configure and implement the tools.

AP:                   This is also very helpful from a business point of view. If you go into relationship with a consultant or a vendor or whomever, it’s probably good from a business point of view to separate out the discovery part from the configuration implementation part, because what if you get into the discovery part and you discover that you and that consultant are not sinking. It happens sometimes. You can’t quite sync, so that relationship probably shouldn’t continue.

AP:                   From a strictly business point of view, it’s a good idea to separate the discovery out from the configuration, the implementation. Also, it is very good from a budget point of view, say, “In this fiscal year, we’re going to lay out the discovery work and come up with a roadmap, which is a result of your discovery project.” Then later in the next fiscal year is when we’ll start buying the tools and implementing them, say, over the next two fiscal years or something like that.

EP:                   Oftentimes that roadmap and the results from your discovery project are what help you, or can help you to get buy-in from upper management to actually get the funding that you need for that tool or the implementation phase.

AP:                   Well, really, from my point of view, too, when you mention upper management, they need to be part of the discussions from the get-go. They need to be part of those interviews, because they need to articulate what they see the requirements as being. They also need to hear about what the issues are in the content world. Because they’re the ones, A, like you mentioned, who have the money, and B, they’re the ones that also have the vision of how to reconcile those things. It points to a thorough communication system that you have to set up when you’re doing these interviews. You don’t just pick the people that are immediately affected by content: the creators, the people who read it, edit it, review it, and distribute it. You also need to talk to your executives to find out what they expect from the content and how it aligns with their vision.

AP:                   You need to talk to your IT people, for example, because they’re controlling likely maybe some web servers that your web content lives on. They may be controlling tools. They’re the ones that do inventory of tools and decide, yeah, we’ve already got a tool that does this. Why are we going to get another one? It’s not a situation where you need to be in a vacuum. A discovery process is talking to people who were directly and indirectly impacted by decisions. And you have to include those people who have the bigger overarching vision, for lack of a better word, for where your company is headed.

EP:                   Right. We’re talking here about discovery as a way to solve a problem. Are there any other reasons that a company might consider a discovery project?

AP:                   Well, we tend to focus on the negative things, and we really have done that in this podcast. Oh, it’s a pain point. It’s the bad things. But in this process, you also have to look at the things that are working and figure out a way to translate or move those over into your new process or whatever new systems you’re going to recommend, to be sure those things are handled right. You’ve got to look at the good and the bad, but the bad is what usually brings people to us. We also have to recognize, as consultants and as the people who help run these discovery projects, that you have to have a really good ear and listen, and find out about the things that people like and that in some way need to be kept as things move forward with improving whatever it is that is indeed broken.

EP:                   Right. Another reason to consider a discovery project, and this isn’t a problem, but there might be a merger and bringing in a third party to help with that merger and bringing content from two different companies together and making a plan, that can be very helpful because there’s a lot to unpack there.

AP:                   Right. There’s something to be said for a third party, especially in this case, because you’re often going to have two basically completely duplicative systems that are pretty much doing the same thing around content. From an IT point of view, keeping both is probably not ideal in the long term, at least usually it’s not. It is not a hard-and-fast rule here.

AP:                   It’s a good idea to have someone come in and to take a look who has had experiences helping other people with mergers, like you’ve mentioned. Is it likely you’re going to have someone on your staff that has gone through that? If you have, that’s great. Use that resource. But a lot of times you haven’t, and that again points to, let’s talk to a third party who recognizes the issues and the challenges surrounding our merger and content and how they can help us figure out how to integrate things better.

EP:                   Right. Alan, well, thank you. That was really helpful. Thank you for joining me today.

AP:                   Sure. Enjoyed it.

EP:                   With that, I think we’re going to wrap up. Thank you all for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post How to align your content strategy with your company’s needs (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 12:32
Using text strings and microcontent (podcast, part 2) https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-2/ Mon, 08 Mar 2021 13:00:33 +0000 https://scriptorium.com/?p=20192 https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-2/#respond https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-2/feed/ 0 In episode 91 of The Content Strategy Experts podcast, Gretyl Kinsey and Simon Bate continue their discussion about using text strings and microcontent. This is part two of a two-part podcast.

“Make sure that their voice is heard. All groups that are using your strings need to have some input or have a way of communicating their needs to the organizations controlling those strings.”

– Simon Bate

Related links: 

Twitter handles: 

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our discussion around text strings and microcontent. This is part two of a two-part podcast.

GK:                   So then, yeah, we talked about all the considerations for content creation. What about content maintenance?

Simon Bate:                   Well, of course, in content maintenance, again, we get back to the same metadata that I keep harping on: having strings, and having descriptions of the strings, helps people perform maintenance tasks. So again, you may have particular reasons for saying things in a particular way, and it’s always good to pass along that knowledge. And not just as folklore within your company; actually write it down, and the closer it is to the thing it’s describing, the better. So if you’ve actually got a piece of metadata along with your string that says, “These are the considerations that we need to use when working on this string,” then that’s a good thing to do.

SB:                   Here’s another translation issue, and this is more a maintenance thing with translation. And that is, when you’re sending something off to translation, particularly when it’s individual strings that are just sort of individual pieces of information in isolation, you need to pass along information to the translators that say, “This is where this is going to get used.”

SB:                   And again, this gets back to what I was saying earlier, when you have a string in isolation, a word in isolation, like the word file, it could mean it could be a noun, or it could be a verb. The translator needs to know that piece of information.

SB:                   Along with maintenance comes sort of a management aspect. It’s always a good idea to identify a responsible individual or group that oversees things like the aging of content. At some point, somebody has got to go through all these strings and say, “Oh, this one’s no longer used, this is no longer necessary,” and remove them from the database. Also, what you’ll find, and this is a classic reuse issue, is if you have a string and it’s reused on a number of devices, and then one device changes slightly in a way that renders the string ambiguous or no longer meaningful, you may need to decide, okay, it’s time to create a new, separate version of this string for use in this case. And so it’s always good to have some group to oversee or control how often strings get modified or how often they get forked into separate strings for separate uses.

GK:                   Yeah, absolutely. And having that kind of an individual or a small team in charge of that is especially important if you have multiple departments, or maybe even every content-producing department across the entire organization all coordinating and working together. You might have everybody in one repository or in separate, but connected, repositories and they’re all collaborating on the content, and they all need to make sure that there is kind of that one person or one group in charge that can make sure that one department isn’t just kind of going off the rails somewhere and making decisions that could affect all the rest.

SB:                   Yeah. And also, at the same time, it’s really good to make sure that people don’t feel left out of the process. So you also need to make sure that their voice is heard, for all groups that are using your strings, that they all have some input or have a way of communicating their needs to the organizations controlling those strings.

GK:                   Yeah. And I think this really gets back to what we talked about way earlier in the beginning, about how a lot of people, when they’re in the planning stage, they might start with that spreadsheet. And I think if that originated out of one department, it’s really important to make sure that any other groups who are going to be involved in the content are all aware of what’s in there. They maybe get a chance to come in and add their own ideas or their own requirements to that spreadsheet. Or if they’ve got a spreadsheet of their own, that it all gets consolidated. And that’s probably one of the best times in your project to choose who this responsible individual or group is going to be. And then that way they can take it smoothly or as smoothly as possible. It’s never perfectly smooth, but they can take it as smoothly as possible from that planning stage in something like a spreadsheet into your actual DITA environment.

SB:                   Yeah, that’s right.

GK:                   So now let’s talk about that third piece, which is content delivery. So what are some of the implications on the delivery end for microcontent and strings?

SB:                   Of course, the main idea about delivery is, you’ve got these strings and they’re in your repository in some form, and they have to be output in a form that can be consumed by the device with the software that’s going to be using those strings. And of course, if you’re maintaining your content in DITA, the good thing is, you can then use the DITA Open Toolkit to transform your strings in your CCMS into some delivery format. Among the typical delivery formats, we’ve seen plain text, and sometimes there’s a need for comma-separated values. But JSON is, nowadays, a very common format for encoding information. Many devices are set up to consume it.

SB:                   One of the real advantages of JSON is that in any number of languages now, the JSON files can be opened and consumed as a database almost immediately. JSON actually uses JavaScript syntax for marking it up. So you can actually take a JSON file and just plug it into JavaScript, and all the information is available using standard JavaScript selectors. For other languages, it’s only a little bit, only slightly, more difficult, but it’s a very flexible tool and a very effective way of saving information away.
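To make this concrete, here is a minimal sketch (not from the episode; the file contents and identifiers are hypothetical) of how a JSON strings export can be consumed almost immediately as a lookup structure:

```python
import json

# A hypothetical exported strings file: string IDs mapped to UI text.
exported = '{"btn_start": "Start", "msg_done": "Operation complete"}'

# One call turns the JSON text into a native dictionary,
# ready to be queried like a small database.
strings = json.loads(exported)

print(strings["btn_start"])  # look up a string by its identifier
```

In a browser, the same file can be handed straight to JavaScript’s `JSON.parse` and read with ordinary property access, which is the immediacy Simon describes.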

SB:                   So of course, when you’re creating your output format, the software or the device that consumes those strings is going to have very specific expectations of those strings. It’s going to have expectations about the format of the strings, and expectations about the tagging. So how are each of those individual strings identified? In some cases, you don’t have much say over that. The hardware department is usually months ahead of you in terms of development, and by the time you deliver, everything’s on silicon and there’s nothing you can do. If you’re lucky, you can start early enough and begin to negotiate, to discuss with the hardware team what might be necessary for supporting these strings. Are there things they haven’t thought about? And then a dialogue can occur between your groups about what they expect and what you can deliver.

GK:                   Yeah. And I think we’ve seen a few examples of this just in the way that some companies organize their departments. In a lot of cases, there’s a separate content department that does all of your technical documentation. Maybe there’s another one for your marketing, another one for your training. But we’ve also seen a few companies that do something more like, there’s a department for each product line, and they have their own writer or two who reports to that group or is embedded there. And that really shows that the company values its content and is willing to have that conversation earlier in the game. And I think that, even if you do have more of that traditional structure where you have a content department, it’s still important to have people from that department going out and talking to your product developers, to your hardware team, and coordinating earlier rather than shoehorning either group into something.

SB:                   Yeah, absolutely. Communication among groups is essential.

GK:                   Yes.

SB:                   It’s essential for good product delivery.

GK:                   So what are some other considerations around JSON?

SB:                   Now, these are some considerations for JSON, but they may be applicable to other output formats. This is probably built in by the people consuming the JSON, but you do need to figure out a way of escaping the JSON markup characters in your strings, particularly the quote signs. So you have to have some way of encoding a quote, so that when they receive the quote sign, they can decode it.

SB:                   Now, the reason I call that out is, within JSON the quotation mark, double quotes, is used to delimit a string. So we have an open quote, close quote, but if you should actually need to use a quote in the middle of that string, you’ve got to do something so it’s not interpreted by JSON as being part of the JSON itself. That’s a syntax violation. Quotes need to be encoded, but it’s also good to do the same thing for other JSON markup that may occur within your strings. For instance, square brackets, curly braces, and the colon are also characters that are very important to the JSON syntax, and probably should be encoded.

SB:                   Another thing really worth considering is that strings in JSON, once you say, “Oh, this is a string,” that’s just all it is. It’s just a sequence of characters. And so if you’re thinking about something like DITA markup, within a DITA paragraph, you start a block object, a paragraph, and you can write characters, and then you can drop in, inline, DITA markup that does things to the characters. Like you could say, “Change to bold,” “Change to italic.” There’s a number of other things you can do for markup within a particular paragraph.

SB:                   If you take that paragraph and move it to JSON, all you’re going to have is the series of the characters in that paragraph with no markup, because there’s nothing within JSON for an individual string to have inline markup.

SB:                   Now, there are a couple of ways of dealing with it. One is actually to have more complex JSON, where the transform outputs individual objects for each piece of text in each individual format. So you might have an object that is just regular text, followed by an object which is italicized text, followed by another object which is normal text.

SB:                   Another way of dealing with this, and again, it’s based on the consuming system, if it can deal with this, you can actually drop something, an encoded form of markup, within your strings. So you might use a whole separate character set other than the JSON characters to say, okay, here’s regular text and here’s where italic text would start. And then here’s where italic text gets turned off again, and so on. But again, a lot of this is based on the consuming system. A lot of this gets back to our previous discussion about, you’ve got to be talking with the people designing the devices or consuming the content, and they have to know that inline text may be an issue, or they may say, “No, we’re not going to deal with inline text at all.”

SB:                   One final thing about generating output generally, and this applies at all levels. We see this a lot, so it’s always good to call it out. If a particular form of output needs to be in a different case, say all uppercase, or InitCap, or something like that, your sources don’t have to be in all uppercase, because that kind of character change can always be done by your output formatter. So as you’re transforming it in the DITA Open Toolkit, you can just say, “Oh, output this string, and while you’re outputting it, change the case to uppercase.” So it’s always good to keep case changes to a minimum within your sources.

GK:                   So now that we have talked through, we started off with content creation, went into maintenance, and wrapped up with delivery, I think one really good way to kind of tie all of this together is to talk about project management expectations. And we already touched on it a little bit. We talked about content governance and having that small team or an individual to sort of oversee everything, but what are some project management expectations that people need to keep in mind when it comes to working with text strings or microcontent?

SB:                   The main one gets to sort of business common sense. You may have very high expectations about what you can do, and the people who are taking in the content may have high expectations about what they can do, but at some point, there may need to be a middle road. There may need to be some compromise between what is the best you can do, what the device is capable of doing, and what you are capable of delivering.

GK:                   Yeah.

SB:                   So there has to be a compromise.

GK:                   And I think a lot of what informs that compromise is cost and time. So you might have the technical capability of doing something, but if you don’t have the budget for it, if it doesn’t make financial sense, or if it’s going to really conflict with some of the time constraints you have, some of your deadlines, then it might be something to kind of consider for the future and slowly roll that out. But that’s one of those things where you can’t just have everything and you do have to make those compromises.

SB:                   Yeah, yeah. In my role as programmer, I often think, “Well, everything is possible. The question is, is there money to do it?” So I think the other thing to keep in mind with project management is that with strings, short text strings, microcontent, you can do a lot, and it’s a good way of maintaining things, but it can’t necessarily solve all the issues that you may have. There may be a need to go other ways, like full-text strings, or other considerations for how you encode or transform this information.

GK:                   Yeah, absolutely. And I think that really is a consideration, not just for strings or microcontent, but for everything: there’s no sort of one-size-fits-all way to solve all of your problems. So it really does get back to what we talked about in the beginning, about planning your strategy and figuring out how the use of microcontent fits into your larger solution.

GK:                   Well, thank you so much, Simon.

SB:                   Absolutely.

GK:                   And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post Using text strings and microcontent (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:35
Using text strings and microcontent (podcast, part 1) https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-1/ Mon, 01 Mar 2021 13:00:42 +0000 https://scriptorium.com/?p=20188 https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-1/#respond https://www.scriptorium.com/2021/03/using-text-strings-and-microcontent-podcast-part-1/feed/ 0 In episode 90 of The Content Strategy Experts podcast, Gretyl Kinsey and Simon Bate talk about using text strings and microcontent. This is part one of a two-part podcast.

“They’re starting to get the idea of taxonomy and how important it is for all parts of their business to communicate using the exact same language. If this can be captured and put in one place, then those strings can be available to everybody.”

– Simon Bate

Related links: 

Twitter handles: 

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about using text strings and microcontent. This is part one of a two-part podcast.

GK:                   Hello and welcome everyone. I’m Gretyl Kinsey.

Simon Bate:                   And I’m Simon Bate.

GK:                   And we’re going to be talking today about text strings and microcontent, so I think the best place to start is just by defining what each of those things are.

SB:                   Yeah. Well, text strings, it’s one of these things that grew out of computer science, and all it really means is a sequence of text characters. Usually, it’s a paragraph or shorter. It’s more often just a sentence or a snippet. It’s, essentially, a series of individual characters.

SB:                   Microcontent is a little bit more specific, and it has a number of definitions. The first was around 1998, when Jakob Nielsen coined it as a small group of words that can be skimmed to get a basic idea. And the idea here is something like a title or a headline, where you just look at it and you can see immediately what’s there. Since then, it transmogrified, and now it often refers to small information chunks that can be used alone or in a variety of contexts. And of course, some of the big contexts that people are really interested in using microcontent for now are chatbots and voice response systems.

GK:                   Yeah, absolutely. And that gets into one of the next things I wanted to talk about, which is all the different ways that we’ve seen this concept of microcontent come up in our work with clients. We’ve had all kinds of different requests around using microcontent and use cases for it. So what are some examples of that?

SB:                   Oh, we’ll get things like, people say we need all our strings maintained in just one place, so that we can then use those individual strings on a number of device interfaces. Sometimes people have a localization issue. They’ve got these strings on their devices, wherever their strings are used. And in this case, I’m talking about strings that are often used as part of an onboard device control system, so you get a small little display panel and it pops up some words, some phrases or instructions about what someone’s supposed to do. Some of these are localized to many, many different languages, and so it’s good to be able to have those strings all in one place and be able to localize them in a good, organized way.

SB:                   Other people will say they’ve got a new device coming online and it needs its strings in JSON format. And so, they need to know how to write and maintain the content and then export it so then it can be available in JSON.

GK:                   And just for context, for those who may be unfamiliar, what exactly is the JSON format?

SB:                   Yeah. JSON is a format that actually grew out of JavaScript. And it is a text way of simply describing a structure. You can think of it in many ways. It’s very much like XML, not in the way that it looks, but in the way that it’s organized. Essentially, within JSON, you have the idea of individual objects, and those objects can be arranged in arrays, objects can contain objects, and so on. So it’s a way of labeling and structuring information.

SB:                   Sometimes we have requests, people just need company-wide consistency of terms. They’re starting to get the idea of taxonomy and how important it is for all parts of their business to communicate using the exact same language. If this can be captured and put in one place, then those strings can be available to everybody, and everybody, when they need them, they can say, okay, I need the string that describes this thing. And so, quite a lot of the time we hear that request.

SB:                   Another one is simply what we described earlier: they say, we’re implementing a chatbot, and we need a way of creating and maintaining the strings. And of course, when you have a chatbot, there’s loads of metadata that goes with those strings. So they have context, and that has to accompany them. And sometimes we get people who’ve actually worked out a lot of these problems before, but they say our spreadsheet solution isn’t workable anymore. So essentially, they’ve got all these strings, but they’re maintaining them just as a spreadsheet: trying to add new columns when they add languages, or add new columns when they add additional uses of a term, and so on. In some ways, they’ve actually got some of the issues worked out. They just need a good repository for storing the information.

GK:                   Yeah. And that really gets to some of the things we’ve talked about in some of our previous episodes around taxonomy and metadata, that planning things out in a spreadsheet is a really great starting point, but you do eventually hit this turning point where it’s not really going to be sustainable as you scale up. So that’s really a great point when you reach that to say, okay, maybe we do need to look at a different way to work with these strings.

SB:                   And that goes in two directions too, because there’s the spreadsheet itself, the structure of the spreadsheet and trying to keep the information in it, and then there’s the whole management issue of who has the spreadsheet? What’s the latest spreadsheet? Now, of course, with Microsoft Office online and tools like that, maybe the spreadsheet can be shared around, but there is often an issue when you’re maintaining things in a spreadsheet about who has control over it. Whereas if you have a CCMS or something like that, control over the content is much more easily managed.

GK:                   Yeah. You have a lot better control, like you said, over the content governance aspect, whereas when everything is just in a spreadsheet, you’re locked out in a way and it’s not as easy to disseminate that information to everybody. And this gets into the next thing that I wanted to discuss too, which is the idea of planning, because I think spreadsheets are a lot of times the starting point for that. So when people start to plan for the use of these text strings or microcontent, what are some of the factors that go into that?

SB:                   Well, there are really three angles to planning your text strings or microcontent, and they are content creation, the maintenance of that information, and then delivery. All of these factors inform your final design. And it’s a mistake to try to tackle these in order and say, “Oh, let’s design the creation first and then handle maintenance and delivery.” They all have to be developed at the same time.

GK:                   Yeah, because all of them really play into each other in different ways. And when you are coming up with that strategy and figuring out that plan, you have to think across all three of those different angles. So let’s dive into each of those a little bit more and talk about, first, the aspect of creating content.

SB:                   Yeah. So often, when you’re creating the content, an XML solution or particularly DITA works very well for maintaining the content.

GK:                   And then, with one of the things that we see often with DITA is reuse, so how does that come into play?

SB:                   Well, it depends really how it’s going to be used, because there are two ways of looking at it or two ways of using reuse. One is you may be creating strings and those strings are output. And then, that output is reused across a number of individual devices. And then, there’s also the true DITA sense of reuse, in that you create these strings and you make them available for reuse across topics. You could actually need strings for both of those purposes, but in some ways, that’s going to inform your decision about how exactly you store the content in your storage solution.

GK:                   Yeah, absolutely. And we often see the need also for different forms of the same string. So, for example, a version of it that is abbreviated. So how is something like that identified?

SB:                   A lot of it is that you have to know your content. Look at your content and know how it’s used. And this is a really big thing when you get to device strings, because there are times when the same string, or a string with the same idea, may need to be expressed in a number of different ways. You may need a short form of the string. It could be a phrase or a sentence or an instruction, but there could be some applications where there’s not that much screen real estate, or they need a smaller number of characters to communicate the same thing. So you may actually need two different forms of the same string: a long form and a short form.

SB:                   And then, as we were talking about abbreviations, there are also times when you have a string or you have a term, and sometimes you just need to use that term abbreviated. You could actually be using these strings to kick out labels or something that go on a display panel or something like that. Sometimes those need to be abbreviated.

GK:                   Yeah, absolutely. I want to talk about some of the other considerations that go into this. And one of them that’s a really big one is metadata. So how does that come into play?

SB:                   Yeah. This is absolutely a big area of what you have to think through in planning your project. There are two main areas where metadata is going to come into play. One is for your authors, because as they’re creating the content, they need to know: what is the string for? What’s the final purpose of the string? Where is it going to be used? And that then informs, also for the authors, what considerations they need to use when writing or maintaining it. You may need to leave an instruction behind to say, this can’t be any more than 50 characters, or corporate or legal has made a particular decision about what we can say here or what we cannot say here. So that kind of information is really useful if it can be maintained in metadata, along with that string.

SB:                   Now also, there’s how the string itself is going to be used, the consumer end. So there may be something like an identifier associated with that string, because when you create a GUI, every component in the GUI will have an identifier. And if you can use that identifier to link to the string, you know which string is going to be used for which component in the GUI. And when I say GUI, of course, I mean graphical user interface. And then there are things like keywords that might be used by a chatbot or a voice response system.

SB:                   One good thing to know about metadata is that there are emerging standards for some of it. In particular, tekom is building a metadata standard. It’s been out for a little while now. It’s called iiRDS, the intelligent information Request and Delivery Standard. And it’s a standardized metadata vocabulary for describing technical content. It’s sensibly built, because there are some things that are just fixed and standard pieces of metadata, but it’s also built to be expanded. You can add your own content to the metadata, because of course, every use, every application of these strings is going to have its own special needs, its own special considerations.

GK:                   So in addition to metadata, another big consideration is localization, right?

SB:                   Yeah. There are a number of considerations you have to apply if localization is going to be one of the reasons why you’re creating these strings, or if your strings are going to be localized in the first place. Number one is that there’s often a difference in the length of strings for different languages. If you’re going to be localizing English text, say to German or Russian, there’s a great expansion in the length of the strings. Now, the people building the devices where your strings might be used will also have to know that the device itself is going to be marketed in other areas. They’re going to have to be able to accommodate these longer strings. This eventually comes down to the creators, and they have to know that within a particular language, the strings may have a maximum length. There may be a maximum screen size. And again, that gets back to the metadata that describes the string itself and what that string is for.

GK:                   So what about cases where people are trying to think about localization and they’re doing some shortcuts or workarounds and saying, hey, we can just have this one piece in the middle of a sentence be a string, right?

SB:                   Yeah. Well, that can work. And unfortunately, it works very well in English, but doesn’t necessarily work very well in other languages. And there are a number of things to consider. Even in English, there are some issues if you’re just going to be substituting a single word. For instance, the indefinite articles a and an depend on the following letter: is it a vowel or is it a consonant? So just trying to swap out a single word there is going to be problematic. It gets worse when you get into other languages, gendered languages. There are a number of other considerations to keep in mind there. So you just have to be careful, know your languages, and set your expectations for what languages you’re going to go to.

SB:                   Another thing to consider, and this is not just for string substitution, but if you’re using short words, individual words: again, English has this nice facility where we’ll have words like file that serve both as nouns and verbs. And so, you could write file and it’ll work very nicely in one use, but when it gets translated, there’s a question. Is this to be used as a noun, a label, file? Or is this actually an imperative, a verb denoting some action: do you have to file? As you’re thinking about localization, again, it’s really important to keep these things in mind. And again, this is where metadata describing what the string does comes in.

GK:                   Yeah. This is something that we often caution people about when we know that localization is on the table, or if they think it might be as they grow in scale. And that’s one of the reasons why some of the general advice that we tend to give is that if you are going to make a short phrase or a single word into a string that can be reused in different places, stick to something that’s going to be pretty safe, like a product name or a company name, something that may not even be localized depending on how you do your branding. The risk around how the word is used is a lot lower there than it is with a normal word that’s part of your text. If it’s part of your brand and terminology, it’s likely to be a little bit safer when it comes to making it a string.

SB:                   Exactly. So yeah, stick to product names and things, or just consider keeping it at the sentence level, so your string is the whole sentence. Even if that string has to get translated every time into a different language, in some ways that’s going to give you much more predictable results than trying to do any of this swapping.

GK:                   Yeah, absolutely. So if you’re using DITA and you’re working with strings or microcontent, what are some of the possible models that you might use for that?

SB:                   There are a number of ways you can look at it in DITA. And of course, a lot of this is informed by how your strings are going to be used. One approach, which some people might arrive at fairly quickly, is the idea of using keys. And there are some advantages there, but keys used directly may also run into some issues. They’re fine for single strings in isolation, but if the key itself needs to have any kind of DITA markup, you run into problems, mostly because of the DITA content model and what is allowed inside keyword, which is the element you’re going to be using if you’re using keys for short pieces of text.

SB:                   Now, I say directly, because we can also use keys to identify glossary entries. And a glossary entry topic is actually something really worth considering for storing these strings, because it already has a number of elements for usage and for different forms. It has elements that identify acronyms and expanded forms, and DITA itself is set up to process these with the abbreviated-form element. There's a lot of good material in DITA here. You may also want to consider not using glossary entry straight, but specializing it. That always has the advantage that you're working with much better semantics: if you specialize, you can tell your users exactly what they're going to be doing.
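To illustrate that glossary-based approach (the topic content and filename are invented), a glossentry topic can carry the expanded form and the acronym, and abbreviated-form can then reference it by key:

```xml
<!-- Glossary entry topic, e.g. abs.dita -->
<glossentry id="abs">
  <glossterm>Anti-lock Braking System</glossterm>
  <glossdef>A system that prevents the wheels from locking under hard
    braking.</glossdef>
  <glossBody>
    <glossAlt>
      <glossAcronym>ABS</glossAcronym>
    </glossAlt>
  </glossBody>
</glossentry>

<!-- In the map: bind the glossary topic to a key -->
<glossref keys="abs" href="abs.dita"/>

<!-- In a topic: processing chooses expanded or short form on rendering -->
<p>The <abbreviated-form keyref="abs"/> engages automatically.</p>
```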

SB:                   Of course, there is a downside to using the glossary entry, and that is the overhead. It essentially means that for every string, you have to create a new topic, so this is potentially a vast number of topics. For some uses, that might be okay. For others, you may want to pull things together more, so you might consider creating topics, organizing those topics with sections, and then within those sections defining individual words or strings. There are a number of different ways you can do it, and again, specialization is an option. For several clients working with strings, we've actually created specializations that help them manage the individual strings and the metadata that goes with those strings.

SB:                   Another thing we've seen is using tables. Tables, of course, get back to the spreadsheet idea, but set that aside for a moment. The nice thing about a table is that you can have a column for the string itself, a column for the ID, a column describing where the string is going to be used, columns for abbreviated forms, and so on. The advantage over a spreadsheet is that if you're going to be translating, the translation occurs at the topic level, so you'll have a separate version of that same topic in each language.
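A minimal sketch of that table model (the topic ID, string IDs, and column choices are all hypothetical) pairs each string with its metadata in a simpletable:

```xml
<topic id="ui-strings">
  <title>UI strings (English)</title>
  <body>
    <simpletable>
      <sthead>
        <stentry>ID</stentry>
        <stentry>String</stentry>
        <stentry>Where used</stentry>
      </sthead>
      <strow>
        <stentry>btn-save</stentry>
        <!-- The ph id makes the individual string addressable by conref -->
        <stentry><ph id="btn-save">Save changes</ph></stentry>
        <stentry>Label on the save button in all dialogs</stentry>
      </strow>
    </simpletable>
  </body>
</topic>
```

A topic could then reuse one string with something like `<ph conref="ui-strings.dita#ui-strings/btn-save"/>`, and the whole topic, table and all, is what gets translated, one version per language.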

GK:                   We are going to wrap things up here and continue our discussion in part two. So thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Using text strings and microcontent (podcast, part 1) appeared first on Scriptorium.

The misuse of metadata (podcast)
https://www.scriptorium.com/2021/02/the-misuse-of-metadata-podcast/ (Mon, 15 Feb 2021)
In episode 89 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow talk about strategies for avoiding the misuse of metadata in DITA XML-based content.

“The more you fine-tune how your content model needs to operate, the easier it’s going to be to move it forward over time. The more you start taking shortcuts and using metadata for purposes other than what it was intended for, the more problems you’re going to have.”

– Bill Swallow

Transcript:

Gretyl Kinsey:                   Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about strategies for avoiding the misuse of metadata in DITA XML-based content.

GK:                   Hello and welcome, I’m Gretyl Kinsey.

Bill Swallow:                   I’m Bill Swallow.

GK:                   And we are going to be talking about all the different ways that we have seen metadata get misused in DITA XML. Before we dive into that subject, Bill, what is metadata in the context of DITA? And how is it used just for anyone who may be unfamiliar?

BS:                   In the context of DITA, metadata is a series of elements and attributes that are applied to your DITA content in order to give it some meaningful purpose. A lot of times we see it as profiling metadata, so being able to set, for example, an audience on a topic to say, “This is only for beginner people.” This way, when you publish your output, you can turn on or off your beginner audience content and produce either a beginner guide or a more advanced guide without the beginner information in there. Metadata also allows you to do more interesting things with your content. One example that we see with metadata in the standard DITA implementation is around notes and warnings and cautions. They’re all the same root element of note, but you can use a type attribute to set whether it is a note, whether it is a caution, a tip, a warning, a danger flag or what have you. That’s an example of how metadata can influence the type of content that you have in DITA.
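To make those two examples concrete (the attribute values and sentences are illustrative), profiling metadata and the note type attribute look like this in topic markup:

```xml
<!-- Profiling: this paragraph survives only in builds that include the
     "beginner" audience -->
<p audience="beginner">Click the gear icon to open the settings panel.</p>

<!-- One root element, different renderings, driven by the type attribute -->
<note type="tip">You can press Ctrl+S to save at any time.</note>
<note type="warning">Disconnect power before opening the enclosure.</note>
```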

GK:                   Yeah, absolutely. We essentially describe it as data about data. It's all of the information in your DITA content that does not actually get printed, or electronically distributed, on the page. It's everything that makes the content work behind the scenes, but it's not actually part of your published output. It can influence the way the output is produced, as Bill described, and the way content is sorted, searched, and delivered, but it's not something that you actually see, like the words on the page. And I think that can cause a lot of the confusion that leads to misuse, because when people come into DITA from a desktop publishing mindset, the way metadata works there is quite a bit different. Getting into the mindset of proper metadata use in DITA is a really big shift. I want to talk about some of the examples of misuse that we have commonly seen and fixed across a lot of different cases.

BS:                   I think probably one of the most common examples we've seen with regard to metadata misuse has been using metadata, or generic metadata buckets, for very specific purposes. One of these is the outputclass attribute, which a lot of people end up using as formatting instruction within DITA. That kind of breaks the rules of DITA itself, because you generally move to DITA so you can separate your content from its formatting. But here, we often see outputclass equals red or outputclass equals 16 points, where they're adding instruction that wasn't built into the transform itself in order to tell the transform how to render a piece of content. And it blows my mind, but it is one of the most common things we've seen.

GK:                   Yeah. And that just, again, it comes from that mindset of working in something like desktop publishing, where you do get to control all the little bits and pieces of the formatting at an individual level. And when you go into something like DITA, where your formatting is separated from the content and is automated, it can be really, really difficult to get your mind past that shift. And so then we see a lot of instances where people go, “Oh, I’m really limited. I can’t make this one piece of text bold anymore or I can’t turn this green anymore.” They just find the workaround of putting an output class on it. And what happens over time is that that misuse of the output class attribute ends up completely defeating the purpose of having automated formatting, because you’ve put all these overrides everywhere. And if you actually had a more legitimate use of output class, then that’s kind of ruined too by the fact that you have misused it in all of these places.
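As a sketch of the anti-pattern versus the intent (the element choices and text are illustrative), compare formatting smuggled into outputclass with semantic markup that lets the transform decide presentation:

```xml
<!-- Anti-pattern: presentation instructions baked into the content -->
<ph outputclass="red 16pt">Save your work first.</ph>

<!-- Better: say what the content is; the stylesheet decides how it looks -->
<note type="important">Save your work first.</note>
<uicontrol>Save</uicontrol>
```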

BS:                   Another bit of metadata misuse that we've seen is using one metadata element or attribute for a purpose it wasn't designed for. One example: you might be setting audience to, let's say, a different country, which kind of makes sense if you want to be able to filter certain types of content for certain geographies. But really, that level of metadata should be held in the xml:lang attribute, where you're describing which language and which country this content is aimed toward. If you're labeling something for a German audience, regardless of whether it's in English or in German, you really should be using the xml:lang attribute as opposed to profiling it for a German audience. Now, there are some differences you can get into as to whether you want to include or omit certain types of information for a particular audience, but in general, you have to be clear about which elements and which attributes you're going to use for which specific purpose.
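A small sketch of that distinction (the topic content is invented): locale travels in xml:lang, while audience stays reserved for reader type:

```xml
<!-- Locale belongs in xml:lang, here German (Germany) -->
<topic id="setup" xml:lang="de-DE">
  <title>Einrichtung</title>
  <body>
    <!-- audience stays reserved for reader type, not geography -->
    <p audience="administrator">Führen Sie das Installationsprogramm mit
      Administratorrechten aus.</p>
  </body>
</topic>
```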

GK:                   Yeah. And audience is a really interesting one because I’ve seen a few different cases where if the audience that a company is delivering for is really complex and there are maybe a lot of different ways that they need to kind of parcel out the content for different chunks of that audience, that just the default metadata for audience in DITA isn’t really enough for them. And some of the workarounds that I’ve seen them do are they’ll use audience for kind of one facet of their total audience and then they’ll go in and pick another metadata element or attribute for another facet of their audience, when it’s really not designed for that. And so that points to the fact that if you’ve got complexity that’s not really built in, that you need to start looking at a more effective way to handle that than just shoving it into a metadata element where it doesn’t actually fit and where it’s not designed for it.

GK:                   Because then what happens down the road is that if they later need to use that repurposed metadata element for its actual intended purpose, it's already taken up with however they've described that piece of their audience, and they have to do a lot of reworking. It really is important to think about this. And I know we've talked about this in some of our other podcasts: planning out a taxonomy and thinking about your metadata as a whole before you go in and just start assigning it to DITA elements and attributes.

BS:                   Right. And even if you're not misusing an attribute, a lot of times we see cases where othermeta is just used throughout an entire content set. You're essentially defining custom metadata, which is good, but you're doing it in a very generic way that usually involves a lot of user error, because everything is hand-typed at that point.

GK:                   Yeah. I've seen a lot of instances where that's just used as a catch-all, a place to shove anything that doesn't fit into the other existing metadata categories. And then what happens later, when you need to organize things better, is that everything has been shoved into othermeta and there's not really an easy way to parse it back out and define it without doing a lot of work. It really is, like we said, helpful to plan this out ahead of time, think about all the different metadata that you're going to need, and figure out where and how it fits into DITA's metadata structure.
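A sketch of why the othermeta catch-all invites drift (the name/content pairs are invented): both halves are free-typed strings in the topic prolog, so nothing catches a typo:

```xml
<prolog>
  <metadata>
    <!-- Free-form: both name and content are hand-typed -->
    <othermeta name="region" content="EMEA"/>
    <!-- This typo validates just fine, and silently breaks any
         downstream processing that looks for name="region" -->
    <othermeta name="regoin" content="emea"/>
  </metadata>
</prolog>
```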

BS:                   Absolutely. The more you fine-tune exactly how your content model needs to operate, the easier it's going to be to move it forward over time. The more you start taking shortcuts at the beginning and using metadata for purposes other than what it was intended for, the more problems you're going to have unwinding that as your content set grows and your publication breadth grows. You're just going to run into problem after problem, so it's best to do it the right way. Rather than shoehorning a bunch of metadata into random elements and attributes, and using othermeta and outputclass wherever you want, do you want to talk about what might be a better approach?

GK:                   Sure. One, of course, is specialization, which is DITA's ability to create custom elements and attributes based on the structure of existing ones. This is absolutely something that you can and should do with metadata if you have that need, and it's one of the more common areas where we see specialization among our clients. A lot of times, the actual topic and map structure is fine, but there are metadata requirements that just do not fit within what's available by default in DITA. Coming up with some sort of taxonomy before you start putting everything into those default DITA elements and attributes can help you see where you might need a specialized element, or set of elements or attributes, for a specific type of metadata. That really gives you a roadmap for how it might work.

GK:                   And one thing to look out for: if you start with a set metadata structure, and then things change over time and you start noticing a pattern, that maybe you are using othermeta a lot, or outputclass a lot, for the same kind of thing over and over because it just doesn't fit, that can oftentimes be a little red flag that says, hey, maybe we need to go back, take another look at this, and think about specialization for that kind of information.

BS:                   Right. A lot of people are really hesitant to look at specialization because they assume it means customizing the DITA model and doing very high-tech, difficult things. But from a metadata perspective, it really is the best way to get in there and make the model work for your content and your needs. And the beauty of the specialization approach is that once you've implemented it, it carries forward. You can update DITA from version to version, and your specialized elements and attributes will just work. It's not divorced from the model; you're not dealing with some FrankenDITA thing that's never going to be able to be updated again. It's really the ideal approach to wrestling with metadata and making sure that you have the right buckets for the right types of data about your data.

GK:                   Yeah, absolutely. I want to talk about another feature that can easily be misused but also it can be really helpful if it’s used correctly and that is subject scheme. And subject scheme is basically a special type of map that’s available in DITA that allows you to bind specific values or sets of values to attributes. And this can kind of work sometimes as an alternative to specialization if you don’t really have a compelling enough case for specialization yet, but you still need some sort of a custom set of values for your attributes.

GK:                   Some examples we've seen: if we go back to the audience example, and you want to define a list of different pieces of your audience that's more complex than something like beginner, intermediate, advanced, you can set up that list in your subject scheme, including hierarchical lists of values. It really makes it a lot easier for your writers to avoid mistyping something, because they get a pick list that comes from the subject scheme. But again, it's also something that can really easily be misused.
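A minimal subject scheme map along those lines (the value hierarchy is hypothetical) defines the controlled values and binds them to the audience attribute, which is what gives authors a pick list:

```xml
<subjectScheme>
  <!-- Hierarchical list of controlled values -->
  <subjectdef keys="audience-values">
    <subjectdef keys="operator">
      <subjectdef keys="field-operator"/>
      <subjectdef keys="control-room-operator"/>
    </subjectdef>
    <subjectdef keys="maintenance-tech"/>
  </subjectdef>
  <!-- Bind the audience attribute to exactly those values -->
  <enumerationdef>
    <attributedef name="audience"/>
    <subjectdef keyref="audience-values"/>
  </enumerationdef>
</subjectScheme>
```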

BS:                   Absolutely. And the other piece about a subject scheme is that you can set it up so that you have a finite list of values, and only those values are available. That really allows you to provide only the values that you have built-in handling for. If you are experimenting with different types of metadata and you don't necessarily want them in production, you can exclude those from some of the lists, depending on the authors who are working with them. You might have an experimental batch of authors working on the next latest-and-greatest batch of content, while a lot of the content is more in maintenance mode or is using the existing publishing workflows that you have established.

BS:                   You can limit the metadata values to just what’s in the subject scheme, that’s all they have available so they don’t accidentally create something or mistype something that is not going to be handled. Because usually the publishing instruction will basically say, “I don’t understand this value. I’m just going to throw it away and just go with the content that’s there.” And that could be very detrimental if you’re publishing something that has metadata applied to it that says, “Do not publish this in this scenario,” in which case, then you get content that you didn’t intend in your output.

GK:                   Yeah. One of the other ways I've seen people misuse subject scheme is as a stopgap between having no specialization and eventually specializing their metadata. Over time, it starts to become really unwieldy; they're trying to shove too much complexity into some of those lists of values and make it a substitute for specialization, when really it's not. And that's another one of those things to look out for as a red flag. Just like in your content, if you find that you are using outputclass excessively for the same thing, or shoving too much into othermeta, if you get into a subject scheme and you realize that it's not actually helping with the complexity of everything you need to capture, that's another red flag that says, "Hey, we should look at specialization."

BS:                   And likewise it hearkens right back to your taxonomy as well, because at the point where you’re using subject scheme, it should be reflecting what’s in your taxonomy. If there are additional things that you need to add that aren’t available in your pick list for the subject scheme, chances are they’re also missing from your taxonomy, which means you have some more thinking to do on exactly how you are categorizing your content.

GK:                   Absolutely. We've talked about taxonomy as a way to make sure you avoid that misuse of metadata. Another one I wanted to bring up is not just defining a taxonomy, but also defining your formatting and presentation needs as much as you can upfront, because that's also going to play a role in where you might need custom elements or attributes that can drive things a lot better than just using outputclass all over the place.

BS:                   That's a good point. And I think the final thing we want to mention is that you want to future-proof your content model as much as possible, so that these needs are either expected, or at least your model can grow as others' expectations of that content model grow. Having specific metadata that's specialized for your exact content is going to make it a lot easier to introduce new values and to constrain against specific values for that metadata, and having a mature taxonomy model will also help you in that regard.

GK:                   Yeah. If you think about future-proofing, and about planning your taxonomy, your publishing needs, and your distribution around that, then that will really help shape the way you think about your metadata use, and make sure that you allow for the growth and scaling that should happen in your company if you're going in a successful direction. Before you just start going into DITA and building the metadata, it really requires that level of forethought to make sure that you're not going to misuse anything.

GK:                   And we’re going to wrap things up there. Thank you so much, Bill.

BS:                   Thank you.

GK:                   And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


Understanding information architecture (podcast)
https://www.scriptorium.com/2021/02/understanding-information-architecture/ (Mon, 01 Feb 2021)
In episode 88 of The Content Strategy Experts podcast, Alan Pringle and special guest Amber Swope of DITA Strategies talk about information architecture.

“Information architecture is a role, not necessarily a position, but by ignoring it, you end up without the discipline and the consistency that really enables great customer experiences.”

– Amber Swope

Transcript:

Alan Pringle:                   Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we talk about information architecture with special guest, Amber Swope, of DITA Strategies. Hey everybody, I'm Alan Pringle, and today we have a guest on the podcast, Amber Swope, of DITA Strategies. Hey there, Amber.

Amber Swope:                   Hi there, Alan.

AP:                   So, let’s start with the basics here. Give me your definition of information architecture.

AS:                   Well, as you know there is no common definition of information architecture-

AP:                   No.

AS:                   … So rather than getting frustrated by that, I chose to treat it as an opportunity to own the version of it I want to have. So I go with the Samantha Bailey definition: information architecture is the art and science of organizing information so that it is findable, manageable, and useful. I really like that definition because it acknowledges that there's art and science to this practice.

AP:                   Also, there’s not a whole lot of jargon in that definition, I appreciate that a lot too.

AS:                   Yeah. And the science part in particular is easy to see. For instance, if you're using an open standard like DITA, you could take five IAs, give them the same challenge, tell them which version of DITA they're going to use, and they would probably come up with solutions that are 80% consistent. But that 20%, that art, is where different information architects bring their experience to bear and potentially give you something slightly different, which is why it's always great to have more than one information architect on a project.

AP:                   Sure. And there is absolutely an element of judgment call to it as you have said, it is not just a straightforward everybody’s going to do the same exact thing. There is no book basically that tells everyone how to do it exactly the same.

AS:                   And I also take this a step further and make a delineation between management information architecture and delivery information architecture. And I found that most information that is available for information architects is dedicated to delivery information architecture. That is the architecture, the structure, the metadata, et cetera, that is required to deliver information on a specific platform. So a mobile app, or a website, a portal, a working environment.

AS:                   And then there’s what I tend to do which is management information architecture. And the difference is that I’m tasked with creating an information architecture that can support omni-channel publishing to any of those platforms. I tend to work with companies that are trying to have a single source of truth that they manage in DITA but that serves different platforms, and each one of those platforms will have its own architecture because that architecture supports the display, usability, findability, et cetera, of that information.

AP:                   I think that is a very important distinction to make and it hearkens back to something that started in the late 1990s, the idea of single-sourcing, where you basically you have a source that is then output into a bunch of different formats and you’re not writing specifically for one format type.

AS:                   And that’s particularly powerful. And when you think about content that is to support learning, if you have content and you want to send it out to an LMS, you’re not going to structure it just for the LMS in the management architecture but the LMS, that experience is so important to learners that that architecture needs to be fully developed. And when you work in these larger projects the biggest challenge is first getting folks to acknowledge that they actually need information architecture as a separate discipline, and then next understanding that they need more than one. And understanding what those roles are and communicating which direction the requirements are going. And the reality is they’re both going both ways, and that leaves a lot of opportunity for some great collaboration but also an opportunity for some miscommunication.

AP:                   Sure. And I think this makes me want to ask the question, when should a group of content creators, a company, a department, whoever, when should they be thinking about information architecture? What’s kind of an inflection point where you say, “We really need to buckle down and think about this seriously?”

AS:                   Well, when I speak to a group of information developers or tech writers or whatever label you want to use for people who are creating content, I asked them, “How many of you are information architects?” And very rarely does anyone raise their hand. Then I ask, “Do you control the table of contents? Do you put in keywords or index words?” And everyone raises their hands. So, everyone’s doing information architecture as they create content in these organizations. I think the question really is when do you need to acknowledge it as a separate discipline? And I would say as soon as you have more than one deliverable. Because if you look at high-tech companies, one of the classic questions is, well, where does the troubleshooting information go? Does it go in the user guide? Does it go in a surfs guide? Where does that go?

AS:                   Well, that’s an architecture question. And if you have guidelines that indicate that user guides have this information, getting started guides have this information, administration guides have this other information, and it’s okay to have the same information in more than one deliverable or it’s not, that’s architecture. And I feel that it’s a disservice to not acknowledge that everyone’s doing it already. It’s a role, not necessarily a position, but by ignoring it you then end up not having the discipline and the consistency that really enables great customer experiences.

AP:                   I think that’s a very great point you just made. As people are creating content they are adding intelligence to it. They are categorizing that information oftentimes without even realizing that they’re doing it.

AS:                   And if you have more than one author then you have different people’s ideas and opinions and judgment calls. And I would argue that many of the style guides that teams have that allow folks to be more consistent, actually, most of the time incorporate a lot of the architecture. And that it might be helpful for teams to look at that information with a critical eye and say, if it’s about what information goes where, and what’s the structure of a specific deliverable, maybe it’s worth calling out into a separate section of the guide and acknowledging that this really is different than the words that you choose or how you format something.

AP:                   With IA, is it generally a project by itself or is there some trigger, some bigger corporate initiative that may make that happen or put attention toward it?

AS:                   I would love there to be projects where someone calls me up and says, “We just really want them have a great IA.” That never happens. Folks call up and say, “We are having this type of a business challenge. We understand that baking the structure of the content into the DTDs or baking it into the CMS structure, or completely ignoring the structures, the metadata that we need is causing us pain.” And then because IA is around the structure of the content and the metadata, and particularly if you’re working in XML, you don’t give people the raw XML, you always process it or render it with a transform. So it’s always going to be bound to additional work, you’re not going to go and change the IA and then not be able to generate the content. The simple answer is it’s always been part of a bigger project.

AS:                   When I look at projects like this I see the business question, what are we trying to achieve? And then I think of three dials or areas where we can control and make accommodations and improvements, architecture, technology, and process. And most challenges require some work in all three areas. What can the architecture do to give you more consistent, well-structured, more powerful content? What’s the technology that’s required to perhaps present that information in a better way to meet the user need? And then process, well, what processes need to change in order for us to produce the right content in a timely fashion?

AP:                   Those are really good ways to break it down. But if I’m talking to a C-level person, an executive, the person who has the money in their hand, how do you communicate to them about the importance of IA because I’m pretty sure telling them we’re going to get spiffy new tools is not the way to win that argument?

AS:                   Well, and it’s a challenge because everybody wants a simple answer, particularly in the U.S. where we get judged quarterly by our success or our failures. Most of these projects to make significant change or improvement take longer than a quarter, so the whole budget question is always difficult. The first challenge I think is for them to take a business challenge and understand when content is involved at all.

AP:                   Yes.

AS:                   Because once we say, “Oh, content is part of the solution,” then it immediately is, well, it’s not just the words but it’s also the structure, and at that point we can introduce the discussion around architecture, the role of architecture. And I’m actually working on a book with a coauthor about this exact challenge, is how should management understand when a business challenge involves content and when simply buying new software won’t be the answer because there’s always going to be some sales person out there offering them some sexy new software and telling them that it’ll fix everything.

AP:                   Indeed. And it usually doesn’t, says the narrator.

AS:                   Well, I would say it always doesn’t because the idea of buying software without understanding the inputs and outputs of it and the role of the people using it, that process, that’s how you end up with shelf-ware.

AP:                   Exactly. And I’ve seen that happen so many times I can’t even tell you, I’m sure you have too.

AS:                   Oh, yeah.

AP:                   Yeah. So there’s got to be some process here, some way to map this out, especially to get that buy-in for the vision and then to implement it. Is there a loose process? Now, I realize this is a huge, huge leading question that we could talk about for hour upon hour. But is there some kind of a loose outline of how these projects go?

AS:                   Definitely. As with any challenge, we want to start off with what the definition of success really is. Because we don’t want to just make change, we want to make improvement, and how will we know when we’re done if we don’t know what the goal is? In larger projects and larger organizations, a lot of times they’ll have a content strategist, and the content strategist is usually the person who defines what success is. They work with the management team, understand what the challenge is, and say, “Oh, okay, let’s talk about what success looks like from the content’s point of view.”

AS:                   Then of course we want to understand the current status, so we do some assessment to understand why the current content is falling short, whether it’s its structure, its delivery, the actual words, or it’s in the wrong language, and what we are currently working with. For a lot of teams one of the biggest challenges is that they have multiple instances of the same content. It’s easy enough to write, but then when you go to update, it’s like Pokémon: you have to go catch them all, and you never do, because you might be new, you might be busy, or you might not even have access to the repository that has that fifth instance of the content. And that is why we really want to get to a single source of content in a management architecture, so that when people need to update the content they simply do it once.

AS:                   After we know what we have, we want to look at the future state, what the future state should look like, taking the idea of success and making it concrete in a way that we can then start building toward. And then once we have this, from the architectural point of view I’m going to start looking at the deliverables, really looking at them and saying, “Okay, what’s this deliverable type? What’s its purpose? How should it be delivered? Is it for one or more audiences?” And when I say deliverable, I mean a manual, an article, a course. If you have a mobile app with glossary quizzes, what is that thing that the end user consumes?

AS:                   And based upon the purpose and who it’s for, then we can start looking at, well, what kind of content needs to be here to meet that purpose? And once we have the idea of what success looks like for the deliverable, then we can look at the content types. In some organizations the content types are super basic. They have concept, task, reference, glossary, maybe some troubleshooting. If you’re in education you’re going to have learning objectives, you’re going to have questions, you’re going to have overviews, you’re going to have summaries. And the more specific your industry is, I have found, the more content types you potentially have.

AP:                   Yeah. We’ve had that experience as well, I agree with that.

AS:                   When I say content type I’m not necessarily referring to an official topic type. For instance, you don’t have to specialize to get a content type. If the content structure is the same, you really just want to identify the purpose of the content so that you can empower a more nuanced delivery. So for instance, if you have glossary terms in the glossary structure, for instance in DITA, maybe you want to indicate that this is a vocabulary word for a specific industry, or you want to say, “Oh no, this is a chemistry formula.” That’s a very specific purpose. And you don’t have to specialize, you can use the base topic, but then you are identifying for downstream systems what the content type really is.
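As a concrete illustration of what Amber describes, one common way to flag a content type on a base DITA topic without specializing is the outputclass attribute, which downstream systems can read. A minimal sketch; the topic id, title, and attribute value here are hypothetical:

```xml
<!-- A base DITA topic: no specialization, but the outputclass
     value tells downstream systems this unit is a chemistry
     formula. The id, title, and value are illustrative only. -->
<topic id="molar-mass" outputclass="chemistry-formula">
  <title>Molar mass</title>
  <body>
    <p>The mass of one mole of a substance, expressed in g/mol.</p>
  </body>
</topic>
```

A delivery platform could then route or style anything tagged chemistry-formula differently from a plain vocabulary word, with no change to the topic structure itself.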

AS:                   And when we know that, then we can look at its structure. And our goal is to be super clear with the people creating the content about the purpose they’re writing for, because creating smaller, modular, structured content is still a new concept to a lot of content authors. And even though it started way back with information mapping and has been used through multiple systems including DITA, the idea that you would write and store pieces of content for different purposes is still a big change for lots of authors.

AP:                   It is and if it were not we wouldn’t be employed, quite frankly.

AS:                   Indeed. And so once we have some idea about what the structure should be, then we’re going to do some proofs of concept and try out some lightweight mock-ups and understand how things come together. And what I typically do is I start with what they do now and replicate it, and then we start thinking about the art of the possible. Because we’re not being brought in to ask them to recreate what they have now; they have a business challenge that what they have now doesn’t meet. Understanding what that change is, what that delta is, is really important from a structural point of view, because we can’t help the authors make that journey to the new format and the new structure unless we fully understand it. So I’m a big fan of doing a proof of concept and understanding with the stakeholders why what they have doesn’t work, because me telling them that usually is not enough.

AP:                   No, but it does help that you’re a third party voice coming in there. And I’m actually very glad that you brought up proofs of concept because there is always on these kinds of projects a chicken and egg challenge with the tools and technologies. If you’re doing the information architecture and laying that all out, at that point you often don’t have the tools that will do the transformation, or the tools that they’ll be using for authoring. So how do you balance that lack of tools and doing these proofs of concept? How do you handle that chasm, for lack of a better word?

AS:                   Well, I start with what DITA gives us for free. Because it’s an open standard, we have the Open Toolkit, and a lot of the authoring vendors provide multiple transforms. And so I go with what I have available. Because it’s a proof of concept, I don’t want to invest development time if I can help it, and I see what I can get. So for instance, if I’m trying to show people how they can get different types of associations that can be represented as links in the output, I’ll just use the HTML5 transform, just to show them, hey, this is what you get. That’s particularly useful when you’re trying to explain to people why they will no longer have to manually manage and type the link text for all their links, particularly if they’re hierarchical.
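The linking behavior Amber describes can be sketched with a DITA map relationship table; the filenames and title here are hypothetical. When a map like this runs through the DITA Open Toolkit’s HTML5 transform, related links are generated automatically, with link text pulled from each topic’s title, so nobody types or maintains it by hand:

```xml
<map>
  <title>Widget documentation</title>
  <!-- Parent/child links are generated from the nesting -->
  <topicref href="c_widget_overview.dita">
    <topicref href="t_install_widget.dita"/>
  </topicref>
  <topicref href="t_troubleshoot_widget.dita"/>
  <!-- Associative links come from the relationship table:
       topics in the same row link to each other in the output -->
  <reltable>
    <relrow>
      <relcell><topicref href="t_install_widget.dita"/></relcell>
      <relcell><topicref href="t_troubleshoot_widget.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

If a topic title changes, every generated link picks up the new text on the next build, which is exactly the manual link-text maintenance that goes away.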

AP:                   Yeah. And I know, having worked with you on some past projects, you’ve often not even touched DITA tools to do a proof of concept. For example, doing mock-ups of a table as it stands now and then a future-state table using Excel or Word, to show the differences and what’s possible without actually having to touch DITA. Because you have that DITA knowledge, you can translate it in a way that’s very visual and help people understand without them, or even you, having to touch the DITA code. I think that’s also very helpful.

AS:                   Well, that’s the thing and this is the chicken and egg part of it that you mentioned, Alan, which is, I’m trying to help folks understand what they can do with their content and they shouldn’t have to know DITA in order to be able to communicate their needs to me.

AP:                   Absolutely. Really, it is a situation where they need to bring their expertise, and that is with the current state and with how process flow works and how information flow works. And it has to be combined with your expertise in DITA, or whatever other model that may be; in your case it is usually DITA. You’ve got to find a way to bring those two things together and have them in sync for these projects to work, at least that’s my point of view.

AS:                   And I’m a big fan of using diagrams, because first of all I’m very graphically oriented, I love a good picture. And second, it allows me to help folks see past their words to see their structure. And the first version of the diagrams I do has no DITA in it. For instance, if I were doing a diagram of a glossary unit, I wouldn’t even need to say the word topic, I’d say a glossary unit. It’s like, okay, we start off with the obvious: we have a term and a definition. Do we need abbreviations or some other alternate form? Okay, let’s talk about the alternate forms that you want. Do you need usage notes? For instance, for the folks that did an application, a mobile app that tested glossaries, they’re basically digital flashcards.

AS:                   We had to say, “Oh, we need pronunciation here as well.” And that has nothing to do with the DITA elements I would use to support it; it’s helping the client communicate to me what success looks like. And like I said, I love using diagrams. I usually have two sets. I have a set for the structure, and then I create a second set that says, “Oh, here’s the DITA element,” and potentially the attributes I’m going to use to create and structure the information to meet the structure that they told me they needed.
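The glossary unit Amber diagrams maps fairly directly onto the standard DITA glossentry topic. A minimal sketch with a hypothetical term: the standard elements cover term, definition, abbreviation, and usage note, while pronunciation has no dedicated element, so where it lives (a generic data element, a specialization, or delivery-side metadata) is an implementation choice, not something the standard dictates:

```xml
<glossentry id="gloss-dna">
  <glossterm>deoxyribonucleic acid</glossterm>
  <glossdef>The molecule that carries genetic information.</glossdef>
  <glossBody>
    <!-- Usage note: the purpose/context of the term -->
    <glossUsage>Vocabulary word for the biology unit.</glossUsage>
    <!-- Alternate form: here, the common abbreviation -->
    <glossAlt>
      <glossAbbreviation>DNA</glossAbbreviation>
    </glossAlt>
    <!-- Pronunciation would need a project-specific home,
         e.g. a <data> element or a specialization -->
  </glossBody>
</glossentry>
```

This is exactly the two-set diagram move: the client names “term, definition, abbreviation, usage note,” and the architect maps each box to an element like the ones above.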

AP:                   Yeah. And it’s almost baby steps, starting out simple and then adding another layer on top of it and that makes a great deal of sense to me.

AS:                   And I use the same ones over and over. I actually have a toolkit that I sell, that is the toolkit that I use with my clients. So not everybody has the opportunity to bring in a consultant but if you want the tools that I use you can get them.

AP:                   Absolutely. And before we wrap up, is there any one stumbling block that you can think of that really stands out based on your past experience where you can give a simple piece of advice to get around that stumbling block when you’re working on an IA project?

AS:                   I think that the biggest one is recognizing first that architecture is a separate discipline. And the second part of that is that you may have more than one architecture. Most companies I see have multiple ways that they are producing their content now, and if we want to get into a management architecture we have to look at the input into that architecture and say, “How do we harmonize?” And I like the word harmonize because it allows me to express that we’re not making everything exactly the same, whereas if I said something like normalize, it would imply massive change.

AS:                   Oh no, harmonize: I want everything to work together in one repository, or repositories, so that it all has the same structure, and then we can look at the downstream implications. So for instance, if you’re doing a chatbot and you also have self-guided troubleshooting and basic user manuals and FAQs, we should be able to structure the content in the management IA and empower it with the correct metadata so that you can deliver that content in the way it needs to be delivered. Because for each of those platforms, which could be radically different if you think about the difference between an FAQ and a chatbot, that delivery IA is radically different.

AS:                   And most folks have been thinking about it from the idea that, oh, we’re just going to push it out and it’s going to magically work on that platform. They need to understand that they will have two different IAs, and take the effort to trace back from the delivery platform to the management IA to understand when metadata gets assigned, and to which units. Is it one unit, is it a group of units, is it based on a map? Whatever that is. And recognize that there might be times when metadata never makes it back to the source, that it may need to be managed in different places. And so this idea about metadata being used to power the content needs to be discussed in the context of multiple architectures.
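One way to picture metadata powering multiple delivery channels from a single management IA is conditional attributes on a map’s topicrefs. The deliveryTarget attribute is standard in DITA 1.3; the filenames and attribute values here are hypothetical, and, as Amber notes, in practice some of this metadata might live in the delivery platform rather than back in the source:

```xml
<map>
  <title>Support content</title>
  <!-- One source unit flagged for two delivery platforms;
       a .ditaval file or the delivery pipeline filters on these.
       Values and hrefs are illustrative only. -->
  <topicref href="t_reset_device.dita"
            audience="end-user"
            deliveryTarget="chatbot faq"/>
  <topicref href="r_error_codes.dita"
            audience="service-tech"
            deliveryTarget="helpcenter"/>
</map>
```

The same units can then feed an FAQ page, a chatbot, and a help center, each with its own delivery IA, without touching the content itself.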

AS:                   And I find that that’s an evolving conversation. Once we talk about it that way, a light bulb goes on for people, but I wasn’t having that conversation two years ago, and I should have been. It just became clear to me over the last couple of years that that is where I can really help folks understand how they can power their content in new and better ways. Maybe they can even use their existing content and just add a new delivery channel, whether or not they have to actually go back and touch their source, or whether there’s an opportunity to power it from a different place.

AP:                   That’s really good advice, I appreciate that. And I’ll be sure to include your website in the show notes so people can find you and continue this conversation with you. And with that, Amber, I want to thank you for your time, this has been a great conversation.

AS:                   Well, thank you, Alan. You’ve given me an opportunity to talk about one of my favorite subjects, I love talking about architecture.

AP:                   Well, we’re glad to do it, thanks again. Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Understanding information architecture (podcast) appeared first on Scriptorium.

Finding the value when selling structure (podcast) https://www.scriptorium.com/2021/01/finding-the-value-when-selling-structure-podcast/ Mon, 11 Jan 2021 13:00:07 +0000 In episode 87 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Nenad Furtula of Bluestream talk about finding the value when selling structure. Why do so many tech pubs departments fail to get support for structured content and what can we potentially do to change that?

Transcript:

Sarah O’Keefe:                   Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about finding the value when selling structure with special guest Nenad Furtula of Bluestream. Why do so many tech pubs departments fail to get support for structured content, and what can we potentially do to change that? Hi everyone. I’m Sarah O’Keefe from Scriptorium and I’m here with Nenad. Nenad, you’re over there in sunny Canada?

Nenad Furtula:                   Thank you, Sarah. Always good to hear your voice and talk to you. I’m located in Vancouver, British Columbia.

SO:                   Nenad, tell us a little about yourself and Bluestream and what brings you to the structured content conversation?

NF:                   Of course. Yeah. My role at Bluestream is, I guess, one of the two managing partners, and I also manage all the business development and marketing activities at Bluestream. Bluestream has been around since 1997; we initially were an XML database company and shortly after that transitioned into content management. We’ve been doing content management for a very long time. Around the time we got into this business, about 2005, DITA came about, and so we built a product called XDocs, which is a component content management system. For the past 15 years or so, I’ve been selling the value proposition of our flagship product.

SO:                   Right. And you and I have had many conversations, at many conferences, with many drinks, about the industry, and it’s always interesting to hear what you think about it. And so today I wanted to ask you specifically about what I think is your, and perhaps my, number one business problem, which is: why is it so hard to sell structured content at the executive level? When we go in and we’re selling to potential clients, why is that so hard with the execs?

NF:                   I guess the famous line, Bill Murray from Caddyshack, is that to catch a gopher you have to think like a gopher. If you think about roles in an organization, executives have a role, and predominantly they are concerned with growth, business growth, and returning shareholder value, and sometimes stakeholder value as well, right? Classical product documentation, when we talk about structure for example, is generally seen as a low-level cost center. It’s necessary when you have to release a product, but it’s not really at the forefront of business thought. It does not generate revenue and it does not necessarily improve your organization’s image like marketing does, right? So that is a problem.

SO:                   And it doesn’t bring in new leads, right?

NF:                   Exactly, exactly. It’s a cost center, that’s the problem, right? It’s not a priority, and it’s also a low-level cost center, meaning the expenses are not overbearing, essentially, right? Just to give an example: when you’re talking about, say, Salesforce, they write that check every year, no problem, right? Whereas when you’re talking about a component content management system, that becomes a bit of an issue.

SO:                   Basically, we have to show that this type of content, product and technical content, does in fact add value to the business, or I guess maybe more accurately, doing it better adds value to the business, right?

NF:                   Well, we have to show where it adds value. I think that’s key, and we have to think about how it does that, right? I think that I am guilty of this, as I think many of us were in the beginning. Let’s roll back 10 years or so: we were really focused on content reuse and how great it is for the documentation lifecycle, how it improves the processes and reduces publish times, and executives don’t really think about things like that. Right? The focus in the last five years has really been to sell the value of information and also show where it brings the most value to the organization.

SO:                   Yeah. I’ve been warning people that focusing on cost avoidance is pretty much a straight train to the land of commoditization.

NF:                   Right.

SO:                   Which we don’t want actually.

NF:                   Right.

SO:                   What about DITA? Is there a different argument there or is it the same?

NF:                   Well, it’s sort of worse, right? That’s the problem. Again, going back 10 years, we were telling the world how great DITA is, right? I think it frightens some people too, because it is great, it’s a wonderful standard. At the very essence of it, it’s just a technology that helps you deliver structured content, right? Executives care even less about it. But where I’ve found the DITA argument, in particular the standard argument, helpful is when you’re trying to mitigate risk. Because the question inevitably comes up: we’re bringing in this new tool, and are we going to be vendor bound, right? Here we say, well look, when you’re going with something like DITA, which is a standard and not a proprietary schema (there are a bunch of those out there), you are essentially mitigating risk. To me, that’s the most valid argument. You can talk about how there’s a community, there’s thought share and all that wonderful stuff, but it really comes down to: can I switch vendors, should I need to? And yeah, you can, because you’re working with a standard.

SO:                   Right, so you have a risk mitigation and then I’ve talked about it a little bit as an enabling layer in that there are things that you want to be able to do with your structured content and the people who built out DITA originally thought pretty carefully about what those things might be. There’s a lot of stuff in there that’s useful if you have the typical kind of structured content. Okay. We know that we can do some cost avoidance, some lower expenses, but we don’t really want to focus on that too much. What other kinds of things, what other kinds of value propositions do we have then?

NF:                   Well, when it comes to value proposition, I mean, it depends on the organization and it depends on the industry. We’ll get into the industry later on in this call, but the true value proposition in my mind has to show an ROI, right? We get asked for this all the time. In particular, in my line of work, when I’m working with procurement, when I’m working with technical documentation managers trying to sell the value proposition internally, it’s all about the ROI. The number one point when it comes to ROI is: is this going to have an impact on my revenue? If you can show that structure, or going towards structure, is going to impact your revenue, you have a pretty good argument. That’s a good starting point, right? And not everybody can show that. Not every industry is capable of showing that.

NF:                   Now of course, the second point, as you mentioned, is the impact on expenses and reducing expenses. It is about lowering translation costs and making these departments more productive, if you would, right? That’s a big one. The third point, pardon me, is that through documentation you can enhance the end user experience with your product. That’s a very interesting point to make, because we’re no longer shipping 500-page PDFs. We’re shipping help centers that give you an answer to your question, right? Talk about enhancing experience with the product: you were looking for an answer, and it’s there. But then, like I said, it’s much more difficult to quantify.

SO:                   Yeah. I think you’re right. We’ve run into some other related things to what you’re talking about. I don’t know where you put this, but regulatory compliance and making it easier to deliver the right content that your regulatory body requires and doing it correctly the first time means fewer holdups in your regulatory experience, right? Fewer calls from the regulators saying, “Hey you didn’t do this,” or, “Hey we’re not going to approve your product unless you give us X, Y, and Z.” You give them exactly what is required and accurately the first time. You talked about risk earlier in a technology context. We talk a lot about risk mitigation as a value proposition that if you have a transparent, traceable, et cetera kind of process, you can reduce the number of mistakes you make in your content, right? And if you do make a mistake, you can fix it and be confident that it’ll get fixed everywhere, which reduces potentially your exposure from a product liability point of view, right? If you ship a possibly dangerous product, dangerous if used incorrectly and you don’t provide good instructions, you’ve got some exposure there, so that’s a concern.

NF:                   I agree. I agree.

SO:                   Yeah.

NF:                   That was the preamble to the question. The answer was it really depends on the industry.

SO:                   Mm-hmm (affirmative).

NF:                   That’s what we’ve seen. I’m sure you’ve seen the same thing. Adoption of structure in these regulatory-driven industries was much quicker, right? Pharma jumped on this early on. Medical device manufacturers, we’ve seen them adopt structure for that very reason early on.

SO:                   Right. To your point, risk mitigation. Yep.

NF:                   Risk mitigation. Exactly. I think that should have been a fourth point: industries driven by regulation. They just have to do it.

SO:                   Well and I guess they recognize the value, right? Because they know what the consequences are if they don’t do it right. For a lot of other people, the consequences are kind of squishy.

NF:                   They are. They are.

SO:                   I did want to ask you about cost centers because you mentioned them and I sort of twitched because we have seen a pattern, especially recently, where you have a technical publication or information development group that actually charges their services back to the in-house business units. If I’m tech pubs or whatever they’re called, then every time I produce a document for a particular product line, I charge back my time or the team’s time, to that business unit. Oh, we spent 30 hours, we spent 100 hours on your document so you owe us 100 hours times our internal magic bogus rate.

SO:                   What they’ve run into is that if they layer in something like structured content, or let’s say they’re sharing content, and so I write content for business unit A, but then I actually use that content again for business unit B. Well, business unit B pays eight minutes and business unit A pays three hours because that’s what it took me to write that piece of content, but then I reused it. If I have better efficiency, I charge back fewer hours which means the team gets less budget the following year and there’s no provision really to fund the infrastructure, to fund the build of structured content or the maintenance of the style sheets or anything like that. And this may be an unanswerable question, but I’m looking at this and saying, this doesn’t work. This cost center approach doesn’t work.

NF:                   Well, it sort of works for certain organizations and not for others. Certainly in large organizations this is the case. But we have yet to run into, or I have yet to run into, a case where the technical documentation department has become so efficient that they are getting their budgets cut. That’s just my experience; I personally haven’t seen it. The other reason is that the demand for information is growing as well. There is more information, there are more product lines. Maybe that’s why I haven’t seen that myself, but certainly it could be a problem.

SO:                   Yeah. It’s not common, but we’ve seen it a few times, and we keep saying, well, you have to account for the shared infrastructure somehow. I think the challenge is that when you move to structure, there’s more shared infrastructure and less hourly billing back, and that’s what you want, because more reuse equals lower translation costs and all the rest of it. You mentioned different industries have different arguments for structure, and we kind of touched on regulatory and risk management and what that looks like. What are some of the other examples of that, where a different industry or a different vertical might care about different things when they’re looking at structured content?

NF:                   Yeah. I’m actually glad that we went through regulatory first, because the two examples that I had in mind compare, say, a classic software vendor to someone like a heavy equipment manufacturer, right? Their arguments for structure are going to be different, we’ve found anyway. When you’re producing software manuals, say you have a software product like we do and you need user manuals and such, basically it is a straight-up cost to the business to develop that. Okay? Just to start with, set aside the fact that your processes are going to be better while using structure, you’re going to be more efficient and all that, and your localization costs are going to be lower.

NF:                   What you really need to do is focus on information flow, and you need to figure out which recipients of that information get the most value. In the example of a software company, quite often we see these delivery platforms emerging, and that’s the argument, right? The argument is: we need to go into structure so that we can have a better delivery mechanism for our documentation, so that, for example, we can reduce the burden on our support organization. Okay? And voila, here is your delivery platform, right? What’s interesting about that argument is that we’ve seen a lot of software companies actually fail to sell structure to management, because they can’t get money and budget for a tool like a CCMS.

NF:                   But then, it’ll be much easier for them to sell a delivery platform because it’s outward facing. Right. And the whole argument there is about information flow: hey, look at my end user. They’re interacting with this documentation, with our product. Again, you’re enhancing the end-user experience with the product and you’re reducing the burden on support. Right? And that works very well for, say, a software manufacturer or software vendor. Whereas if we take a look at someone like heavy equipment manufacturing, and Bluestream has really niched into that vertical quite a bit over the years, they have a completely different requirement, and their requirement is much more sophisticated when it comes to delivering information for the use of this equipment. Right?

NF:                   Well, first of all, the equipment has a long lifespan, right? This equipment needs to be serviced, and a fair portion of a company’s revenue is associated with servicing that equipment, as well as selling spare parts, if you would, right? So when you look at that information flow, when you think about who the recipients of this information are that really matter, well, they become the service personnel, either third-party or internal, who have to service these machines for many, many years. And of course, they have to sell parts. And so those aftermarket parts, I should say, and that service become a big part of the company’s revenue story. And so when you’re going into a situation like that, what you’re going to talk about is increasing the sale of spare parts, and that has all the attention of management.

NF:                   So I’ll give you an example. We’re dealing with a very large train manufacturer; they’re actually worldwide. And I was looking at the business case that they presented. We’ve been dealing with this customer for about four years, but I remember the case they presented to management: 95% of the business case was focused on increasing the sale of spare parts, whereas 5% focused on basically increasing the productivity of some 70-plus technical writers. And that says it all, right? Where’s the focus? Well, the focus is on fulfillment in that particular case, right? So, very different than what we see in the example that I gave earlier, like the software industry. And so the focus has to really be adjusted to the industry that you’re selling into, or the industry that you’re in, essentially.

SO:                   Yeah, that’s interesting. And I think we’ve seen that as well: on the software side, with some exceptions, but in general, the focus is on cost savings and also on velocity, time to market. Because software gets distributed electronically. This sounds dumb, but some of us are old enough to remember the literal version: we have a contract, and our client is required to get this piece of software by close of business on December 1st. And if you miss the 5:00 PM FedEx pickup at the office, that means you have a 9:00 PM cutoff at the airport. And if you miss that, you’re putting somebody on a plane at 6:00 AM to fly to California, holding a CD in their lap, so that they can walk into this business and deliver the software on time. Right?

SO:                   That’s how it worked in the olden days. And now you obviously distribute it via a patch or an electronic download or whatever, and that entire shipping process went away. And it took content a long time to catch up to that distribution mechanism. Eventually, we had PDF and we could distribute electronically. But at first it was kind of a big problem. And so software is interested in speed, velocity, time to market, cost savings. And then as you said, manufacturing really has more of a two-part sale, right? You sell the core product, but then there’s this long lifespan of maintenance and updates and service and spare parts. It’s just a much, much different chain. We’re also seeing an awful lot of companies getting into fleet management and service management.

NF:                   That’s right.

SO:                   So they actually go from being a product like a manufacturing company to being also a software company, because they’ve got the database of all of the equipment that they’ve sold you and think of airlines, when is it due? When is this plane due for maintenance? Keeping track of that is actually a service. So now this distinction between product and service is starting to blend.

NF:                   Well, you know who actually defined that initially? Believe it or not, it was Xerox. Xerox is a big partner of ours, and they were for many years. Everybody thought Xerox was about copiers, right? Yeah, sure, they sold copiers, but the bulk of their revenue came from servicing those copiers. Xerox actually is not a products company; it’s a services company. Right? So it’s true.

SO:                   Then what about organizations where the content is in fact the product?

NF:                   Yeah. So there we see a lot of folks generating learning content, training content in particular. We have a number of customers in those fields and, interestingly enough, they caught on to XML early on, probably about 15 to 20 years ago. Right? And so they’ve paid attention to this stuff and, for the most part, built their own systems. That’s what we’re seeing. A lot of proprietary systems, a lot of proprietary XML. And so for them, getting into structure is much easier. And for them, embracing something like DITA makes sense. The challenge of course becomes how easy is it to use? How easy is it to author?

NF:                   And this is where DITA maybe did a disservice to some of us, because it’s so powerful and yet so complex. In fact, I just had a conversation last week with someone who said, “Gosh, we can’t do this. It’s too complicated. We’re going to go with something different.” Right? And so anyhow, I know there’s a discussion around DITA Light and all that wonderful stuff. But those organizations who sell content as their primary business are embracing this, and they really are coming on board. The other one is insurance. Sarah, we’re seeing insurance companies. I mean, structure makes a lot of sense for insurance. We’re seeing airlines embrace this. Of course, that’s a regulated industry. We really are seeing an uptake in structured content. There’s no question about it. The last few years have been, in my mind, a time of change.

SO:                   Yeah, which sounds like some good news for all of us. So well thank you. I appreciate this because I think there’s a lot of food for thought in here and obviously you’ve not just thought about this, but had to think about this in the course of your business. And I think it’s helpful to me to chew through all these things and contemplate what they’re like. I’m going to, with that, wrap this one up. Thank you, Nenad. I appreciate it as always.

NF:                   And thank you, Sarah. Thank you for having us on. Like I said, this topic is near and dear to us and should anyone want to discuss further, I’m sure they can reach us at www.bluestream.com.

SO:                   Yep. And we will drop that in the show notes, along with some other contact information so that you know where to find everybody. Thank you to our audience for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for the relevant links.

 

The post Finding the value when selling structure (podcast) appeared first on Scriptorium.

Steps to structured content (podcast, part 2) https://www.scriptorium.com/2020/12/steps-to-structured-content-podcast-part-2/ Mon, 14 Dec 2020

In episode 86 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow continue their discussion about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like.

“Step five is when you’re thinking even your structure is structured. You’re really thinking about how to take this to the highest possible level, how to get the most out of your automation, and how to make sure that the way you’re delivering your content is maximum efficiency.”

– Gretyl Kinsey


Transcript:

GK:                   Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about the steps to structure, how to move from unstructured content to structure and what each level of maturity looks like. This is part two of a two-part podcast. Hello, and welcome. I’m Gretyl Kinsey.

BS:                   And I’m Bill Swallow.

GK:                   And today we’re continuing our discussion about the steps to structure. We previously covered steps one and two, the unstructured phases, and step three, which is getting to structure. Next is step four, which is customized or specialized structure. So could you tell us a little bit about what that means compared to the baseline structure?

BS:                   Sure. So once you have everything in your structured format, chances are you’re going to start finding little bits of difference or dissonance between the type of content that you’re producing and what the structure will allow you to use. You may say, “Well, we have this very specific type of paragraph, or this very specific block of content, that doesn’t really fit into the structure in its own native form.” We want to be able to handle it, and we want to call it something unique. We want to be able to structure it uniquely as well, yet still use it within the framework of everything else we’re doing. So this act of specialization or customization is kind of the next step, because now you’re looking at the structure and saying, “This is great, but we can do more with this.” So you’re fine-tuning and tailoring things a bit more so that you can label your content more appropriately to your needs and handle that content in ways specific to how it will be used.
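
As an illustration of what Bill describes (our sketch, not from the episode): in DITA, specialization derives a new element from an existing one, and the `class` attribute records that ancestry so generic tools still know how to process it. The `<partsList>` element and the `parts-d` module here are hypothetical:

```xml
<!-- Hypothetical specialization: a company-specific <partsList>
     derived from the generic DITA <ul>. The @class value records
     the ancestry, so standard DITA processors fall back to plain
     list handling while custom tooling can treat parts lists
     specially (for example, linking part numbers to a catalog). -->
<partsList class="- topic/ul parts-d/partsList ">
  <li>Toner cartridge, part 014-2231</li>
  <li>Fuser assembly, part 014-8804</li>
</partsList>
```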

GK:                   Yeah. Absolutely. And I think this is an area where we start to really see a lot more work on the kind of metadata and taxonomy side of things because that’s when you start thinking, “Okay. Now that everything actually is structured, now we can think about how this content needs to be organized, how it needs to be sorted and filtered, how both our authors and our customers need to be able to search for the particular information that they need within this content set, how we might need to do something like personalized delivery.” So once you kind of have that foundation laid down with just the basics of structure, that’s where you really kind of start to think about: Okay, how do we want to customize our metadata? And how do we want to build out some sort of a taxonomy that we can support with metadata so that the content is not just tagged in structure, but it’s also organized? And there is information about the content itself being captured in a way that makes it a lot more flexible.
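
For instance (an illustrative sketch, with invented values), DITA topics can carry this kind of metadata in a `<prolog>`, so taxonomy terms travel with the content and drive sorting, filtering, and search:

```xml
<!-- Topic metadata in a DITA <prolog>. The audience, category,
     keyword, and product values here are illustrative; in practice
     they would come from the organization's taxonomy. -->
<prolog>
  <metadata>
    <audience type="administrator"/>
    <category>Maintenance</category>
    <keywords>
      <keyword>fuser</keyword>
      <keyword>replacement</keyword>
    </keywords>
    <prodinfo>
      <prodname>Model X200</prodname>
      <vrmlist><vrm version="2.0"/></vrmlist>
    </prodinfo>
  </metadata>
</prolog>
```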

BS:                   Right. And what’s really driving a lot of this is not only the different types of content that a company might produce, but it’s also starting to hit that personalization note with people and being able to drive content dynamically to them that is of their immediate interest, rather than generic content that might be suitable for any audience.

GK:                   Yeah. So this is where, if you’ve got that structure in place and you’ve started to do those customizations, you can do some kind of dynamic delivery. Your users might sign into a portal, and it can pick up information about that user based on their login, and then feed them the content they need without them having to dig through and search for it. So that really takes your use of content to a higher level than you were able to reach before. This is still a structured step, but it’s really enhancing it and taking it to the next level.
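
One common mechanism for this kind of audience-specific delivery in DITA (a sketch we’ve added; the attribute values are invented) is conditional profiling in the source plus a DITAVAL filter applied at publish or delivery time, so one source produces different output per user:

```xml
<!-- Source content profiled by audience -->
<p audience="administrator">Log in with your admin credentials.</p>
<p audience="operator">Ask your administrator for access.</p>

<!-- filter.ditaval: applied during publishing to exclude content
     flagged for audiences other than the current reader -->
<val>
  <prop att="audience" val="operator" action="exclude"/>
</val>
```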

BS:                   Yep. And the next level beyond that would be the next step, step five. Once you have everything in step four done, which is all of your customization, step five builds upon that even further, implementing additional dynamic capabilities for your content.

GK:                   Yeah. So step five, we’re kind of thinking of this as even your structure is structured. You’re really thinking about how to take this to the highest possible level, how to get the most out of your automation, and how to make sure that the way you’re delivering your content is maximum efficiency. And this is what I think of as the differentiating factor between simply moving to structure versus true digital transformation of content. I know that’s something we’ve talked about in some of our other webcasts and podcasts and posts; this idea of digital transformation has been an industry discussion as well. But this is where we tend to think of truly transformed content: content that is a lot more personalized, where you’re really making the most of your automation and your efficiencies. And the content itself is not just one single digital delivery, but something a user can customize, mix and match; it can be really, truly personalized.

GK:                   So this is where you’re really, really looking at: What is the most that we can do with structured content beyond even steps three and four? How can we really continue to take it to the next level and make sure that it keeps on scaling as the company grows?

BS:                   Yep. Step five tends to be incredibly specific from implementation to implementation. One company will be doing things one way in a structured environment; another company might be using the exact same underlying structured framework, but organizing their content and doing completely different things with it. Essentially, every case we’re seeing of a company that’s at, or looking to move to, step five in their structured progress is a unique engagement. It’s a unique way of looking at content, based on what that company specifically wants to do with their content.

GK:                   Yeah. And this is really where, if you’ve got most of your content problems solved with structure but you have a few of these edge cases and unique requirements, some additional customization would take it to that next level; that’s what we consider step five. And as you said, it is unique from company to company. But it’s also important to consider when you’re still in steps three or four, thinking about what your future requirements might be, and making sure that you don’t lock yourself out of them. Let’s say you just got to step three: you’ve just moved to structure, and you know roughly what your five-year plan is, maybe not specifically, but you have some ideas of things you want to be able to do with content in the future. It’s always important to keep that on your roadmap and keep an eye on it. You don’t want to build something in a way that, when you do reach the maturity of step five, requires a massive amount of cleanup or lots of tedious fixes here and there to get to that point.

GK:                   And I know we’ve talked on at least one of our other episodes about how important it is to plan, to be very careful, and to spend a lot of time on that planning. Especially when you’re going from step three to step four and thinking more about your metadata and your taxonomy, that has a lot of implications for something like step five as well, when you’re really maximizing your content potential and your efficiency. When you’re building those structures and thinking about a taxonomy and how you want to organize your content, make sure you don’t lock yourself out of those future requirements.

BS:                   Yeah. You always want to keep some options open there, because things will continue to shift and change, especially as your requirements change, or if you acquire another company or are acquired by another company. You want that nimbleness still built in, and room for improvement or room for change still available, and not to just nail everything down and call it done.

GK:                   Absolutely. So on that note, what are some tips for moving to structure? If you are kind of at maybe a step one or a step two, how do you eventually get all the way to step five or close to it? And how do you do that as efficiently as possible?

BS:                   The first step is to wrap your head around the strategy for your content: where it’s going to go, how you’re going to author content, what your future state looks like. A lot of the things we’ve been talking about, not just in this episode but in many of our podcasts: building the content strategy that gets you from where you are to where you want to be. Make sure that you have some kind of roadmap or framework for each of the steps you want to take, so that you understand the scope of work required to move from one step to the next, and have some criteria so you can measure what done looks like and whether you’ve accomplished what you wanted in that stage. So not just: Are you done? But: Is it working?

GK:                   Yeah. And I think it’s also really important, when you’re coming up with that strategy, to build in some backup or contingency plans for when things don’t go the way you think they will. That’s why it’s important to look further out toward the future: if you’re at a step one or two right now, go ahead and make your ideal plan for step five, but know that there’s going to have to be some flexibility in how you get there. So you may want a few backup options of things you would achieve in steps three and four before you get to that ultimate goal.

GK:                   Another tip that I want to bring up: like we said, when you go from that second step to the third, where you’re cutting over from unstructured to structured, it’s really important to come up with a conversion strategy, because that’s where you’re going to be getting all of your content out of one format into another and migrating it into whatever tools or systems are going to be managing that content. And that’s why we really emphasize having a step two and not just skipping from step one to step three, because that really helps the conversion strategy. Things to think about at that stage are, one, how much content cleanup has to get done on the front end versus the back end, so pre- versus post-conversion.

GK:                   And what can you do to minimize the amount of human intervention or manual cleanup that you’ll have to do? Because the more content you have, the more time it’s going to take to convert everything, so the better off you’ll be if you can automate it. And that’s why having as clean a content set as possible really helps with that conversion strategy. So before you convert everything, it’s really important to think about what’s highest priority, what state your content is in, and what kind of cleanup you’re going to have to do on either end of that conversion.

BS:                   Yep. And once you get to step three, you no longer really have a conversion path to worry about, but you need an exit strategy going forward, whether you’re at step three or step five. You need an exit strategy for your content if you do need to change tools again, so keep a lot of that in mind when you’re selecting things. It’s not necessarily that one tool is going to be bad and another better from an exit strategy point of view. But you need to understand how these new tools and systems work with your content, so that if you do need to move from tool A to tool B, you know how you can export the content, and which handling capabilities from the old tool need to be redone or otherwise implemented in tool B. And have that in mind going forward. Moving to structure generally gives you some degree of portability with your content. But again, your mileage may vary depending on the tool choices you make and the types of structure you’re looking at.

GK:                   Like you said, having an exit strategy is so important because as we’ve mentioned in a lot of our other discussion on this episode, things do change. When you go through all of these updates to your content process over time, things change. And when you are kind of moving through those structured steps, so going from step three to step five, a lot of your decisions are going to be driven by the changes that happen in your organization, and the new requirements and the new demands that you’re going to face over time. So you have to think about how your processes need to scale up to meet all of the changes that are going to come, and just sort of use that as your guide. Update the roadmap that you come up with at the beginning as you get new information, and just kind of constantly keep your eye on that so that you can ultimately sort of move through from step three, to four, to five over time, just based on what’s happening at your company.

BS:                   And also keep in mind, jumping from any given step to the next, you need to make sure you have a clear understanding of the benefits you’re going to get by making these improvements, in order to get buy-in, not only from the people who hold the money you’ll need to purchase new tools or provide training to your team, but also from your team itself: buy-in to the idea that we’re going to work differently, and this is why it’s going to help you going forward.

GK:                   Yeah. That benefit is important because you don’t want to just kind of move from let’s say step four to step five without a good reason for it. You have to be able to explain, here’s why we’re doing this, and here is how it’s going to improve content production going forward. And with that, I think we can go ahead and wrap up. So thank you so much, Bill.

BS:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Steps to structured content (podcast, part 2) appeared first on Scriptorium.

Steps to structured content (podcast, part 1) https://www.scriptorium.com/2020/12/steps-to-structured-content-podcast-part-1/ Mon, 07 Dec 2020

In episode 85 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow talk about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like.

“It’s important to keep in mind when you move from step two to step three that your authoring tools may change. The writers might have gotten used to working with one set of tools in steps one and two. But as you move to structure, the tools that you’re using for unstructured content may not support the underlying framework for the structure that you’re moving forward with.”

– Bill Swallow


Transcript:

Gretyl Kinsey:                  Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about the steps to structure, how to move from unstructured content to structure, and what each level of maturity looks like. This is part one of a two-part podcast.

GK:                  Hello and welcome everyone. I am Gretyl Kinsey.

Bill Swallow:                   And I’m Bill Swallow.

GK:                  Today we’re going to be talking about structured content and all the different steps it takes to get there. Let’s just go ahead and dive right into what is the first step or the baseline when we’re talking about moving from unstructured content to structured.

BS:                   Well, I guess the very first step that you’re on is that you have content.

GK:                  Yes.

BS:                   Congratulations. You have content. It exists. It’s probably written well. It’s probably being authored by a bunch of different people, using a variety of different tools. Basically there’s no general rhyme or reason as to how the content is being produced. It looks good, it serves its purpose, it’s published, it’s out there, people are reading it, but there’s generally no underlying structure. You might be using Microsoft Word and various other tools, with no actual templates involved; all the formatting is ad hoc and hand-produced.

GK:                  Yes. I think this is what we consider as the baseline or the bare minimum when it comes to content. It’s there, it’s well-written, it’s usable and you have it and it’s working, but you’re not really able to leverage it necessarily and do a lot more sophisticated things with it, and so you may have some limitations if you’re at step one. For example, with how you publish your content if everything is very manual in the process of creating it, then that’s probably true on the publishing side as well. So you’re not really getting mass automation there. You may also be limited in your ability to share content across maybe different departments, different types of documents. A lot of times when we see companies who are in what we would consider the step one, they tend to be in silos with unstructured content, and so you’ve got sort of different types of unstructured content all over the place and none of it is really connected or working together.

BS:                   Right. With regard to being able to share the content, there’s also that issue of copy-paste that we end up seeing a lot. This happens a lot in this first step: if you need to share content, or you need to reproduce the same content in multiple formats or in multiple documents, there’s copying and pasting going on, which just adds to the whole snowballing effect of trying to manage your content. If you need to make an update, you then have to find everywhere you’ve copied and pasted that information throughout all of the documents and deliverables that you’ve produced.

GK:                   Yeah, and sometimes this can really have a snowball effect. Say you have different departments that produce content, and maybe you don’t have as much of a problem with separate silos, but you do have this issue where there’s no connectivity. Let’s say you’ve got some folks over in training who need to reference information from the official technical documentation in their training materials, and they go over and don’t necessarily grab the latest and greatest version, but copy and paste some of the documentation from somewhere, and that gets into the training materials. So there’s not really any sense of version control. There’s not really an enterprise-level sense of how the content is being used and maintained. That can become a big maintenance issue over time as you need to grow and scale.

BS:                   Yep. With regard to growing and scaling, and with regard to leaving this first step behind, what is, I guess, the next level that we’re going to: step two?

GK:                   Step two is when you’re using templates and a consistent style in your content. This is where, for example, if you are working in something like Microsoft Word, FrameMaker, or InDesign, you actually have templates set up. So you’re not just ad hoc creating different styles all over the place. You’ve got something that can’t necessarily enforce that style, but at least gives you a guideline to work within and some parameters to use as a starting point. That can really help improve the consistency of your content and make sure that everything follows a pattern that isn’t exactly structured, but is approaching structure. This is something I tend to refer to as implied structure, because it’s not actual enforced structure on your content yet, but it’s that intermediate step to getting there.

BS:                   With that implied structure, there’s also usually a style guide that will go along with it that will further help people follow the same structural composition when they’re authoring. So it’s not just the templates that are in place that they always use heading one for the first level heading in a document or use a particular note style if they need to produce a note in their documentation, but there is also a style guide that says, this is how the content should be arranged. Not only going through and saying these are what all the different styles afford, but this is generally how you approach building documentation. This is the type of content that you want to put in this type of section in whatever you’re writing.

GK:                  Yeah, absolutely. I’ve seen some company style guides also address things like branding consistency. So if you do have a lot of different departments creating content, there is something that says here is the logo you always use, here is the official way that you refer to the company, here are the official list of product names, that sort of thing so that there’s not an inconsistency there that just makes your company look unprofessional. We also can see it sometimes if your company is doing any kind of localization, if your content is being translated and there are particular things to avoid or ways that you want to phrase things that help make translation easier, that can be included in a style guide as well.

BS:                   Yep. Really, it also comes down to that level of organization of content within the documents you’re putting together. So if you’re putting together training materials or some kind of repair guide or something that’s very procedural, you generally want to have a section that says, “Okay, we’re going to start with a heading. We’re going to introduce the topic using these types of paragraphs. Then we’re going to break into a subheading and perhaps give a list of all the parts required, if you’re doing some maintenance or all of the things you need in order to complete a particular procedure. And then jump into the procedure itself, perhaps with another heading or with some other section delineation there.”

BS:                   That way the style guide allows the writers to understand, when they’re going to write something for a particular audience, that they have this structure in place that they can follow. Again, it’s an implied structure. There are no set rules enforcing it, other than the style guide and whoever enforces that coming down on the writers and saying, “No, you must do it this way.” It at least gives you a starting point to be able to make your content look and feel the same, regardless of who’s authoring it and regardless of what tools they’re using to do it.

GK:                  I think that’s a really important foundation to get in place before you move on to step three. What is step three in this process?

BS:                   Step three is actually using structure: being able to identify that there is a need for this level of consistency and this level of rules, and adopting a framework that builds those controls in. By structure, we’re talking about something like XML, or DITA, which is a flavor of XML, or SGML, an old-school one that’s still around to some degree. It’s essentially a technological framework that says, “Here are all of the types of content that you have, and this is how they all play together. This is where they’re allowed, this is where they’re not allowed, and this is how they all flow together as well.”
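
To make that concrete (a sketch we’ve added for illustration, not taken from the conversation), here is what a minimal DITA topic looks like. The element names come from the DITA standard; the content itself is invented:

```xml
<!-- A minimal DITA concept topic. The DITA grammar enforces the
     rules Bill describes: a title must come first, an optional
     short description next, and body content only inside <conbody>.
     A validating editor will reject anything else. -->
<concept id="copier-overview">
  <title>Copier overview</title>
  <shortdesc>What the copier does and who services it.</shortdesc>
  <conbody>
    <p>The copier prints, scans, and faxes documents.</p>
    <ul>
      <li>Print speed: 40 pages per minute</li>
      <li>Duty cycle: 100,000 pages per month</li>
    </ul>
  </conbody>
</concept>
```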

GK:                  Yeah. So this going from step two to step three is really the break point between unstructured content and structured content or between that implied structure we talked about and an actual structure. I think that’s why it’s so important that if you are going to move out of your unstructured content and get into true structure, that you do have that intermediary step one to step two, because if you try to go straight from step one into step three it’s probably not going to be a very clean migration over into structure. So if you’ve already laid that groundwork and you have that implied structure in place in step two, it puts you in a much better position to go on to step three.

BS:                   Yep. Not only do you have the content aligned so that you can convert it to some kind of structured format, it makes that conversion process a lot easier. If you have step two in place and you have these solid templates that you use, and you have this consistent writing format that you’re using, you can automate that process to some degree, or if not completely to get it to a structured format. But it’s also important to not skip step two, because you want your authors to be able to acclimate to now writing in a structured format. If they’re used to just doing whatever they would like as long as the end product looks good and reads well, they’re not going to come around to the idea of authoring in a structured environment very willingly.

GK:                   Yeah. This is, I think, the biggest challenge that we see when a company goes from unstructured content to structured content: the big mind shift that has to happen. That’s why I think it’s important to have that step two, so that people get accustomed to working in something that’s like a structure, even if it’s not a programmatically enforced actual structure. That way the mind shift doesn’t have to be as big, because that’s where you see a lot of resistance to change that can really get in the way of your progress.

BS:                   Yep. It’s important to keep in mind when you move from step two to step three, that your tools may change, your authoring tools. The writers might have gotten used to working with one set of tools in steps one and two, where they were unstructured but perhaps following a style guide, perhaps using different templates, but as you move to structure, the tools that you’re using for unstructured content may not support the underlying framework for the structure that you’re moving forward with.

BS:                   Often we see a little bit of reluctance among the authors to move towards structure because the tool set is going to change. And what they’ve been accustomed to using perhaps for many years, they need to abandon, and they need to adopt a new tool with a new user interface, with a new underlying file format that they are just not accustomed to. Things may look a little strange, especially when you’re moving to structure using XML or so forth, that doesn’t have formatting necessarily applied to the content itself. They’re not accustomed to seeing a different representation of what they’re authoring than what will be delivered to the other people. So what they’re authoring in and what it looks like to them is not what it’s going to look like to the person who’s reading the finished produced deliverable.

BS:                   That’s a little jarring for some people. A lot of care needs to go into making sure your team is aware of these changes and that they have the training and the support necessary to make that leap.

GK:                  Yeah, absolutely. I think it’s a really intimidating thing because suddenly you’re going from, like you said, something where you can actually see what the finished product will look like as you’re working to something where you really have no idea. If you are moving to structure for the purpose of automating your publishing processes, for example, then you’re going to have one tool for authoring, you’re most likely going to have some other tool or suite of tools for content management, and then another tool or suite of tools for publishing, and all of those pieces are separate. So if you are used to everything being in one tool together, where you write everything, you review everything, and then you just export it directly to publish from that same tool, and then suddenly you’re in this very different framework, it is a shift in not only the tools themselves, but how you work.

GK:                  It’s really, really important to make sure that nobody feels like their concerns fall by the wayside or that they’re getting left behind, but that instead they are supported, because really there are a lot of benefits to this. I think that’s the main thing: convincing people, here is how your life will be so much easier if you’re not dealing with all of those problems we talked about earlier, with copying and pasting, not knowing where your content lives, and not knowing what version is up to date. Going to structure can really help fix all of that. But it is that big change, and you have to get people over that hurdle.

BS:                   Right. If they’re accustomed to producing multiple different types of deliverables, for example, a PDF and some HTML from a particular content source, it’s going to make their lives a lot easier on the publishing side, because that can be done automatically. At that point, you’re really removing the writer from the process of publishing, and their job is to make sure the content is structured appropriately and written correctly. At that point then automation takes it to the publishing stage.

GK:                  Yeah. Another thing that you get at this stage that I think it’s important to call out is that you get to leverage smart reuse. So instead of copying and pasting information, instead of finding workarounds to share it, you can actually have a single source of content that gets used in multiple places. That again is another shift in mindset, right? But that’s also a major benefit that you get out of going to structure. That’s something again, that should be a major part of training for writers.

GK:                  On a lot of the client projects I’ve worked on, we end up doing a split where we start with basic structured authoring training, and then we usually do a separate training session or series of sessions specifically on reuse for each company because each organization is going to have its own reuse strategy and its own reuse requirements. Being able to leverage that finally is a really powerful thing, and it’s important to have that as part of the training that you do to support the authors.
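For readers who want a concrete picture of the smart reuse discussed above: in DITA, one common mechanism is the content reference (conref), where a topic pulls in an element maintained in a single shared file instead of copying and pasting it. A minimal sketch follows; the file name, IDs, and wording here are invented for illustration only.

```xml
<!-- warehouse.dita: a "warehouse" topic that holds shared content -->
<topic id="warehouse">
  <title>Shared content</title>
  <body>
    <p id="safety-note">Disconnect power before servicing the unit.</p>
  </body>
</topic>
```

```xml
<!-- Any other topic reuses that paragraph by reference, not by copy -->
<p conref="warehouse.dita#warehouse/safety-note"/>
```

When the shared paragraph changes in the warehouse file, every deliverable that conrefs it picks up the update at publish time, which is exactly the single-source behavior described here.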

BS:                   And of course, once you hit the structured stage, there’s nowhere else to go. Step three is the final step, right?

GK:                  Oh no. There’s much more. We will be covering that in part two of this podcast. For now, thank you so much, Bill.

BS:                   Thank you.

GK:                  Thank you for listening to The Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Steps to structured content (podcast, part 1) appeared first on Scriptorium.

The personalization paradox (podcast) https://www.scriptorium.com/2020/11/the-personalization-paradox-podcast/ Mon, 16 Nov 2020 13:00:44 +0000 https://scriptorium.com/?p=19999 https://www.scriptorium.com/2020/11/the-personalization-paradox-podcast/#respond https://www.scriptorium.com/2020/11/the-personalization-paradox-podcast/feed/ 0 In episode 84 of The Content Strategy Experts podcast, Sarah O’Keefe talks with Val Swisher of Content Rules about why companies fail and how to succeed at delivering personalized experiences at scale.

“It all has to be completely standardized in order to be successful. There have to be small, individual, standardized chunks of content that are devoid of format that can be mixed and matched. Then the output can be personalized to the person who asked for it and sent to them at that moment in time.”

—Val Swisher

Transcript:

Sarah O’Keefe:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk with special guest Val Swisher of Content Rules about why companies fail, which seems terrifying, and also how to succeed at delivering personalized experiences at scale. And I imagine you’re going to tell us those two things are related, like do one to avoid the other.

SO:                   So, hi, my name is Sarah O’Keefe and I’m here with my special guest Val Swisher, who is the CEO of Content Rules. Val and I have some common affinities for a variety of causes and some needlework and some other fun stuff like that. And we run similar businesses so we actually talk quite often. Today, we’re going to attempt to distill that into a useful podcast for you. So wish us luck. Val, hi.

Val Swisher:                   Hey Sarah, how are you?

SO:                   I’m good. How are you doing over there?

VS:                   I am doing just fine here.

SO:                   Excellent.

VS:                   It’s a new day.

SO:                   It is a new day. For context, we are recording this on November 9th?

VS:                   9th.

SO:                   9th in 2020 so you can take that away for whatever you want. But Val, tell us a little bit about Content Rules and what you do over there.

VS:                   Well, we do a lot of similar things to Scriptorium, don’t we? Since you and I are in similar businesses. So as you said, I’m the CEO of Content Rules, and I started the company in 1994, and we do a variety of things that are all related to content. We develop content with contract writers and editors and course developers and all those kinds of folks. We do a lot of content strategy work, helping customers move from an unstructured environment to a structured environment, or helping customers with their global content strategy and how to go global, what they need to do to go global. And we also help customers optimize their content using special software that allows them to program in their style guides and terminology, and make sure that their content is as good as it can be. So those are all of our different service lines.

SO:                   Yeah. And you’re right, there is a good bit of overlap. Although the funny thing I think is that we don’t actually see that much customer overlap, which is probably why we manage to get along, which is helpful.

VS:                   Undoubtedly. It is interesting though.

SO:                   So, you have a book you’re working on?

VS:                   I do. I do. I am working on my fourth book. This book is titled The Personalization Paradox: Why Companies Fail and How to Succeed at Delivering Personalized Experiences at Scale.

SO:                   Okay, so let’s start with failure. It’s 2020 so I feel like that’s where we need to start. Why are companies failing at this?

VS:                   Okay. So there are a few reasons that companies are failing. The first thing is that they focus on the wrong place. Companies have spent years focusing on the delivery of personalized experiences, the delivery mechanisms of content. They’ve spent a lot of time, a lot of money, all around how they’re going to deliver content. And that’s the wrong place to start, they need to start with the content. And when you start at the end, rather than the beginning, you’re kind of setting yourself up to fail. So that’s one reason.

SO:                   So you’re saying they should start at the very beginning and that’s a very good place to start?

VS:                   Indeed. I could break into song right now.

SO:                   Okay.

VS:                   Yes. So starting with the content is the most important thing you can do. It doesn’t matter what delivery mechanisms you have, if you don’t have your content set up to deliver personalized experiences, it’s not going to work. So that’s the first problem.

VS:                   The second reason companies fail is that they are into the new, great, bright, shiny object. So they keep buying tools and they don’t think about it before they buy tools. They’re just like, “Oh, let’s buy this tool. This’ll do it. Oh, let’s buy that tool. That’ll do it.” And my new saying, I have a new saying, if you take the same crappy content and put it into your new expensive tool, you will end up with expensive crappy content.

SO:                   That seems accurate.

VS:                   So once again, we are starting at the content. And then the third reason is the same old silos that we’ve always had. I mean, we’ve been talking about silos for decades and decades, and if you really want to deliver personalized experiences at scale, you’re really going to need to play well with each other. This silo thing gets more and more difficult. So those are the reasons.

SO:                   So those are the three. Okay, so what are we talking about here? When we talk about personalization, what does that mean? What is a personalized experience?

VS:                   So personalization is when we deliver the right content to the right person at the right time on the right device in the language of their choice. Some people refer to it, and we’ve started referring to it, as the Amazon experience. When I log into Amazon, boy, they know me really well. They show me everything I want to buy right now. They’re like, in my brain, “Oh, Val, she likes shoes, we’re going to show her these boots,” this sort of thing.

VS:                   More and more, we’re coming to expect the content that we receive from a company to match what we need rather than having to go hunt for it. In fact, I was talking to someone over the weekend about this, and he was telling me how frustrating it is when he goes out to a particular financial site that actually has all his information. Rather than just showing him what he needs (they know what funds he has and all of that), they make him search for stuff nonstop. And he’s like, “They know all about me, why am I putting this information in? Why can’t they just show me what I need?”

SO:                   That would be nice. I actually saw an example of this that I thought was fantastic and it was a credit card company believe it or not. And this was so stunning because they did the right thing. My mind was blown. So what happened was, now this was of course in the before times, I had bought a plane ticket because I was going somewhere and I went on to the credit card website to do something and was looking at my list of transactions and there was the charge for the airline, right? And underneath it, it said, essentially, “Hey, you’re traveling overseas. Would you like to set up a travel alert?” And I thought, well, that’s pretty good.

SO:                   Now, I’ve since seen a different version of this, where I actually got an email that said, “Hey, we noticed you bought a plane ticket and so we automatically set the travel alert for the place you’re going,” which was actually even better. But I was stunned because it was so unusual. Normally you have to dig through 18,000 menus to find the travel notification. Okay, in the olden days, children, we used to do this thing called getting on airplanes and we would go places, we would leave our house and go to this big building and then we would get on the small tube in the sky and go places, yes. So, anyway, sorry, bad example right now. So personalization really just means deliver reasonable information, right? I mean, is it fair to say, you’re not really talking about, it doesn’t have to be that personalized, it doesn’t have to be, “Hey Val, here’s your stuff.”

VS:                   It’s a really, really good point. It’s very interesting you should even talk about that. When we’re figuring out how to talk to the customer, we need to be super careful about how we do that. It is so contrived, dear blank, and then they use the wrong name or it says dear [first name], because something’s screwed up. It really just means, give me what I need. Honestly, I don’t care if you know my name, as long as you give me what I need.

VS:                   We’ve been working on making it easy to find content for hundreds and hundreds of years, literally. In fact, I was doing some research, and back in the first century, there was a man named Pliny the Elder, not to be confused with the beer from the Russian River Brewing Company called Pliny the Elder. There was a guy called Pliny the Elder, and he wrote a 37-volume work, which was like an encyclopedia at the time, of the natural world. And book one was an index to the other 36 books. This was in the first century.

VS:                   So we’ve tried everything as we’ve gotten more and more technologically advanced. We’ve had the card catalog for libraries, and we’ve had indexes and tables of contents and lists of figures and lists of tables, navigation on a website, navigation in any type of app or training or whatever.

VS:                   We’re at the point where people don’t want to have to pull that information. All of those ways of searching, all of those ways of finding content, are ways of pulling that content. The onus is on the person looking for the information. We don’t want that anymore. We want it pushed to us automatically, just push what I need right now. I don’t have time to look in book one to see that what I want is in book 28. We’re out of that kind of time. The expectations are really different. So it’s not new.

SO:                   No, but we seem to be sort of bad at this. I mean, there’s the creepy version, right? Or there’s the failure, dear first name, which is terrible. And then, I mean, you mentioned Amazon, but my experience with Amazon is like, “You bought a washing machine, you’re obviously starting a laundromat. Let me sell you some more washing machines,” right? They seem to have kind of lost the chain there between somebody bought a washing machine, maybe I should sell them detergent. And so they’re not quite there yet, but you buy these big appliances and they immediately assume you want more like that. And so there’s something not quite right with that algorithm, but setting aside that example and thinking more about the business content that you and I mostly deal with, why are people so bad at delivering relevant content?

VS:                   Well, again, I think it’s because they’re focused on the wrong things. For a very long time, we were focused on trying to figure out enough information about you that we could go get the content for you. And even 10 years ago, 12 years ago, there were companies focused on that problem, how are we going to get enough information about you so that we know what to target our ads, so we know what to advertise to you? And now it would be we know what content to deliver.

VS:                   That problem has been solved. I mean, big data is here. We have more of a problem with controlling all the information they know about us than with gathering it. We know that, we see it every day. It’s the creepy factor, and on the one hand, it’s uncomfortable. On the other hand, if you want only the content that you want to see delivered to you, then I’ve got to know a whole bunch of stuff about you. So it’s time to start focusing on the ways that we create, manage, publish, and deliver the content.

SO:                   And so you talked about content and you talked about, I mean, there are certainly tools that can help with this, but they won’t help unless you do the content first. What about the silo issue? What are the problems there? What are the failures there?

VS:                   Where do you begin? I mean, I once saw you do this fantastic presentation at a conference where you brought up a manual, and it had nothing to do with the marketing content. This happens all the time: you see a company’s marketing messages and examples and illustrations and positioning and terminology and the way they talk about the product, and then you move over to the knowledge base, or you move over to training courses or technical documentation, and we have four different descriptions of the same widget when really we need to be sharing one description of the widget. So the more content that we each make in our own silo, the worse the problem is, because now we have too much content. It is all kind of sort of the same, but not really; we cannot reuse it across silos; we’re restricted in terms of what we can deliver. We can only deliver that which we create. It’s expensive, it’s inefficient, it’s often inconsistent. There’s nothing good about it. So silos get more and more exacerbated when we try to deliver personalized experiences at scale. Same problems, just maybe exponentiated a tad.

SO:                   So what does that mean? I mean, are we talking about one monster piece of software to rule them all?

VS:                   Well, I would say we actually need to step back from the software and really focus on the content, because how people store and manage and publish the content is definitely a challenge to solve, but we need to teach people how to create the content. And you know this as well as I do: the only way to deliver a personalized experience at scale is to write your content in very small units. Call it a component, call it a chunk, call it a topic, call it whatever you want, but it’s a very small unit that’s self-contained, that can be mixed and matched with other small units, devoid of format so the format comes in at the end. You have this library of searchable, tagged, findable units of content that at the point of delivery can be mixed and matched so that an output is built, a format is applied, and publishing happens on the right device at the right time, etc.
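To make that concrete, here is one possible shape for such a unit, sketched as a DITA concept topic. The IDs, metadata values, and wording are invented for illustration; any component-based XML model works the same way: the chunk carries metadata rather than formatting, so a delivery system can find it and assemble it with other chunks.

```xml
<concept id="widget-overview">
  <title>Widget overview</title>
  <prolog>
    <metadata>
      <!-- Metadata, not formatting, travels with the chunk so that
           delivery systems can select and assemble it -->
      <audience type="administrator"/>
      <keywords><keyword>widget</keyword></keywords>
    </metadata>
  </prolog>
  <conbody>
    <p>A widget connects your account to the reporting service.</p>
  </conbody>
</concept>
```

Format is applied only at publish time, which is what makes the same small unit usable in a PDF, a help site, or a chatbot answer.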

SO:                   Yep. And I’m totally there with you, but all the non-tech writers just ran screaming from the room.

VS:                   I know they did. I know they did. They ran, they’re hyperventilating, but it gets worse for them. It gets worse than that, actually.

SO:                   Tell us more.

VS:                   It does, sorry.

SO:                   It’s 2020. Tell us more.

VS:                   Well, so here’s the paradox. The paradox is that in order to be successful with this, in order to be successful mixing and matching these little components so that they create a thing that’s specific for you or specific for Tom or Sally or whoever, each one of those components needs to be standardized at every level. The terminology needs to be standardized, the grammar needs to be standardized, the style needs to be standardized, the tone of voice needs to be standardized. It all needs to be standardized to avoid creating an experience that is disjointed, one that at best kind of reads funny or looks funny because we’re not calling a widget a widget, we’re calling it 20 different things, and at worst completely confuses the person you’re delivering it to.

VS:                   It all has to be completely standardized in order to be successful with this. So they have got to be small, individual standardized chunks of content, devoid of format that can be mixed and matched so that at the point of publishing, that output is personalized to the person who asked for it and sent to them at that moment in time. So yes, everybody’s now screaming. “You’ve taken away my creativity, danger Will Robinson! Creativity, creativity.”

SO:                   And I’m really sad right now that this video will not be captured on podcast. Excellent robot impersonation.

VS:                   You can see me with my hands like a robot. Yes, sir.

SO:                   Okay. So having covered all the 2020 buzzwords, COVID travel, etc. What about artificial intelligence? Is that going to help us with this mess?

VS:                   So it will. It’s going to fundamentally change the way all of this happens. With today’s technology, we have some constraints. One of the constraints is that we have to tag each piece of content with enough appropriate metadata that systems can locate each chunk of content that needs to be delivered for your personalized experience. So that’s the first constraint that AI is going to pretty much mitigate. When AI engines become ubiquitous, the cognitive system sets up its own matrices; we don’t tell an AI system, “Here are your tags.” We tell it, “Here are the things that go together,” we train it with a whole bunch of information, and then it continues to figure it out on its own. So the locating of the content is going to be much easier.

VS:                   Also, AI systems can look through any kind of content. It doesn’t have to be structured content. It can look through emails and social posts and all kinds of other content in order to grab what it is you need at that moment in time. And it does it really fast, and it learns over time what’s correct and what’s not correct. So with the whole process of locating that information and grabbing it, the accuracy percentage goes up, right? The longer it goes on, the more likely it is to be accurate. So that’s one way.

VS:                   The second way is that right now, we are constrained by output types. We really do have to define the output type. In the AI world, we won’t need to; it will just send you information. It will be able to know on the fly, “Oh, this is what you need, I’m going to take all these different pieces and I’m just going to send it to you.” We won’t need to define in advance what it’s going to look like. It will be able to do that on its own. We’re not there yet, we’re definitely a few years away minimum, probably… I mean, you and I have plenty of customers that aren’t even at the point of being in structure yet, right? They’re just getting there. So I think there’ll be companies that can leapfrog right to it once AI systems are all over the place, but for now we are constrained, and AI will take those constraints away.

SO:                   So that’ll be fun and hopefully not at all troubling. All right so it sounds as though we’re going to need this book. So is it out yet? Where can we get it? When can we get it?

VS:                   Any minute now. So the book is not out yet. It’s November 9th. It was supposed to be out at the end of October, but it’s 2020 and nothing happened on time in 2020. It will be out in the very beginning of 2021. You’ll be able to get it on Amazon, or you’ll be able to order it from XML Press. And again, the title is The Personalization Paradox: Why Companies Fail and How to Succeed at Delivering Personalized Experiences at Scale. And I should mention that I do have a coauthor; her name is Regina Lynn Preciado. Regina and I have worked together for, we got to 15 years and it just got blurry beyond that because we’re old, we’ve worked together for a very, very long time. She’s a phenomenal content strategist, and I’m really happy to have collaborated with her on the book.

SO:                   Awesome. So we’ll add all of that information to the show notes and hopefully with any luck XML Press or Amazon or somebody has a pre-order page up.

VS:                   XML Press does and the Content Rules website also does.

SO:                   Okay, great. So we’ll add some version of those. And I think with that, Val, thank you so much, I’m going to wrap this up. This has been the most fun I’ve had today by a long shot actually.

VS:                   Oh, goodie. That’s cause you like my robot impersonation. Danger, danger.

SO:                   The robot was very helpful. So thanks again, and hopefully I will see you in person at some point in 2021 and not just on a screen because I’m kind of over the screen thing, but we’re lucky that we get to work at home, but…

VS:                   We are. And thank you so much for inviting me on and it’s always fun to talk to you.

SO:                   You too. So with that, thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The personalization paradox (podcast) appeared first on Scriptorium.

DITA: The next generation (podcast) https://www.scriptorium.com/2020/11/dita-the-next-generation-podcast/ Mon, 09 Nov 2020 13:00:44 +0000 https://scriptorium.com/?p=19996 https://www.scriptorium.com/2020/11/dita-the-next-generation-podcast/#respond https://www.scriptorium.com/2020/11/dita-the-next-generation-podcast/feed/ 0 In episode 83 of The Content Strategy Experts podcast, Gretyl Kinsey and Jake Campbell talk about the next generation of DITA. What happens when you need to update your existing DITA structure?

“When you’re building everything out the first time around, you can do as much user acceptance testing as you want—but the best user acceptance testing is going to be live testing.”

—Jake Campbell

Transcript:

GK:                   Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about the next generation of DITA, what happens when you need to update your existing DITA structure? Hello everyone and welcome. I’m Gretyl Kinsey.

JC:                    And I’m Jake Campbell.

GK:                   And we’re going to be talking about updating your DITA content structure today so I think we want to start by just briefly talking about DITA itself and the kind of different generations or versions that it goes through for those who are unfamiliar. Jake, can you just give us a little overview of that?

JC:                    The earliest version of DITA that I’m familiar with working in actually started back in 2006, DITA 1.1, and that lacked a lot of the modern conveniences that we’ve become accustomed to in DITA today, particularly when it comes to actually customizing the DITA structure. You weren’t able to do things like specialize attributes, and some reuse capabilities, I think, were kind of limited. Now we’ve got a lot of very specialized topic types. We have a broad suite of specializations available out of the box for purpose-built usage, things like the troubleshooting domain, or some of the more specialized elements, like the hazard statement for when a standard note won’t do.

GK:                   Right. And I think a lot of these differences come between each version of DITA, because we’ve gone, as you said, from that earliest 1.1 to 1.2, and now we’re in 1.3. I think one of the big driving forces behind that has just been this idea of seeing how people use DITA, how people need to use DITA, and what changes need to be made to that out-of-the-box content model to help make sure that all those features are available. There have definitely been, I think, a few major evolutions that we’ve seen.

JC:                    Yeah, definitely. And a lot of what’s available now is kind of in response to what people have needed. We’ve actually seen with some clients who need to move their content model from DITA 1.2 to DITA 1.3, or in some cases DITA 1.1 to DITA 1.3, that there are some things they had specialized, or that they had built specific semantic structures around, that are now actually part of the base DITA model as of 1.3.

GK:                   That’s actually a really good segue into the next question I wanted to ask, which is what are some of the reasons that you might want to update your DITA content model? And I think you already kind of touched on that with the idea of being able to include features that you couldn’t before and then suddenly those start to become available in the latest version of DITA.

JC:                    Yeah. And the most sweeping way you realize that’s happening when you move into a new version of DITA is in some of the topic types that become available. I remember when the DITA 1.3 specification was just starting to come out and there were some rumblings about it being released, there was some discussion around the troubleshooting topic type, which is a further specialization of the task type. And there were a lot of people talking about how that was really important, because there were semantic structures in it that specifically said, “These are the problems you’re seeing. This is what could cause these problems. This is a way to solve that problem.” Whereas before, you would have had to specialize a task structure or create specific semantic structures using out-of-the-box components in order to contain that kind of information.
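As a sketch of what that looks like in markup, here is a minimal troubleshooting topic. The element names follow the DITA 1.3 troubleshooting topic type; the printer scenario and its text are invented for illustration.

```xml
<troubleshooting id="printer-offline">
  <title>Printer shows as offline</title>
  <troublebody>
    <!-- The problem the reader is seeing -->
    <condition>
      <p>The printer appears offline even though it is powered on.</p>
    </condition>
    <!-- Each troubleSolution pairs a cause with its remedy -->
    <troubleSolution>
      <cause>
        <p>The printer has lost its network connection.</p>
      </cause>
      <remedy>
        <steps>
          <step><cmd>Restart the printer.</cmd></step>
          <step><cmd>Rejoin the printer to the network.</cmd></step>
        </steps>
      </remedy>
    </troubleSolution>
  </troublebody>
</troubleshooting>
```

Before DITA 1.3, you would have had to fake this condition/cause/remedy semantics with sections in a generic task or with a custom specialization, which is exactly the gap described here.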

GK:                   Yeah. If you have a case where you need to support some kind of structures and then you see that those are becoming available in the next version of DITA, that’s a really good time to kind of evaluate what you’ve got now and think about when it’s going to be the best time to make this move over into the new version and kind of clean up some things that had to be specialized before. Because I think one thing that we try to recommend is to only specialize as much as you need to and to use the out of the box features. Keeping an eye on what becomes available out of the box over time is a really smart thing to do and can definitely be the case for making some tweaks and updates to your existing DITA structure.

GK:                   Another thing that can kind of guide you along that path is if you get into a situation where you start to change your technology, so maybe you’re looking at a different content management system, you’re looking at maybe some different publishing outputs, you’re looking at new authoring tools, any or all of the above. And you have developed your existing DITA content model in ways that sort of aligned with the current tool set you have now. But now that you’re looking to change, then that’s a place to start evaluating is there anything in the DITA model that needs to change too, as we start to change software and technology?

JC:                    Yeah, I’m sure we’ve touched on this on the podcast in the past, but once you start looking at a proprietary tool, they probably handle things in a very specific way in order to achieve their goals. And that usually means that there may be some compromises or accommodations that need to be made in order to actually make that work. Some CCMSs will use more of a database model for containing all of the different information that you want to have there. It treats these individual elements and files as objects within a database. There may be something on the CCMS side that equates to the ID attribute that you need on your DITA topics, but isn’t actually using that ID attribute that’s on those DITA topics. You may need to take a look and see if that might be something that’s locking you into that particular technology, depending on what kind of move you want to make from there.

GK:                   Yeah, absolutely. And I think, especially if you did some sort of a specialization, if you did some sort of workarounds on your content model that were designed to accommodate the current authoring workflow and content management and publishing workflow that you have, and you realized that that has created a little bit of lock-in with your current tools and you need to change, then I think that really presents a good opportunity to say, “Well, if we are going to make this change anyway, we really need to look at the DITA itself and figure out how that has to change too.” And then consider whether the latest and greatest version of DITA that’s out there can help. If you’re currently in 1.1 or 1.2, going to 1.3 might make that change easier.

JC:                    Yeah. And when you’re looking at moving your content model over as well, you might want to take a look at, do you have any customized output, any custom DITA transforms that you’ve built around your particular specialization or any changes you’ve made to your content at the same time?

GK:                   Definitely. And there’s one other thing I want to touch on here too, which is that if you have any sort of new requirements that come up, so let’s say you have a new product that you need content to support, or let’s say that you are extending something about the particular information or metadata that you capture around your existing products and you need to extend your DITA content model, especially if you have specialized it, then that can also be a driving force to look at, well, does the current version of DITA that we’re in support that? Or should we also just look at moving to the latest and greatest, going to DITA 1.3 at the same time? Will that help make anything easier if we’re touching up an existing specialization?

JC:                    Yeah. And when you’re thinking about that as well, it’s important to identify what kind of gaps you’re currently seeing when you’re thinking about making that kind of move, because you should always start with a gap analysis, for want of a better phrase, to say, “What needs do we have that aren’t being served? And how can we rectify that? Can we make any of those kinds of changes with what we have now? Or do we need to move somewhere else?” And I feel like that’s really going to be a driving factor in, do we make this kind of big jump into a new version?

GK:                   Absolutely. What are some things to consider when you’re approaching a DITA remodel?

JC:                    I’ve always said that when you’re building everything out the first time around, you can do as much user acceptance testing as you want, but the best user acceptance testing is going to be live testing. Even when you’re in production and you’re happy with what you’ve got and it’s working and it’s not posing a significant problem, it’s still a good idea to keep taking the temperature on these things, to see, do we have any content authors who are running into problems? Are we running into any weird corner case issues with some of our broader content now that we’re actually out in production with this? Definitely see where you aren’t being served by your content, and what you can do to make sure that you’re getting everything that you need out of it.

GK:                   Yeah, absolutely. I think that leads into a lot of the steps that we try to take with our clients when they come to us and say, “We need to restructure our DITA. We know that our content model isn’t working, but we’re not quite sure how to go about making that change.” One thing that we do, and it really helps to have those metrics, Jake, that you were talking about, is suggesting that the company evaluate which parts of the DITA structure still work. What should you keep? Which parts of the model are still going to be functional for you after you change? And then which parts are not serving you so well? Then you have the roadmap you need to start making a plan for how you’re going to change those things that aren’t working. And when you know that, that makes it manageable because you’re not just going, “Oh my gosh, we have to change everything.” You have a really specific plan for how you’re going to tackle that.

JC:                    Yeah. It’s not that unusual to kind of look back on the initial development process that you did with your specialized content or with the current model that you have and just kind of compare it to what you’re actually getting out of it. If you’ve been through this once before, you probably already have some sort of roadmap that says, “This is what we have done in the past,” and you can kind of use that to measure up your current state with where you thought you would have been.

GK:                   Yeah, absolutely. I think it is really important to learn from those lessons of the past if you have been through this before, because if you’re in DITA now, you did initially go through some path to get there, whether it was just starting in DITA or going from some sort of unstructured content to DITA. You do understand kind of what it takes to develop a content model in DITA, what it takes to understand the structural needs that you have and then how to take that forward. It does give you a baseline of lessons learned for what to do when you do this remodel.

GK:                   And I think one thing to really think about and be cautious about, which is something, Jake, that you touched on a little bit earlier: when we’re talking about designing specializations and content models around your tools, around your content development workflow, it’s really important to be careful about doing any sort of workarounds or specializations that are specific to a particular tool, because that does lead to a certain degree of lock-in, and that’s something I think that you could avoid going forward if you’ve already done that once.

JC:                    And it’s also important to try and think about where some of this information is being stored. I know that metadata is kind of a weird, squishy concept because metadata is information about data. It doesn’t have a lot of inherent meaning sometimes. It can be kind of hard to think about, and it’s not unusual for some parts of metadata to be stored within the CCMS rather than stored within the source. Thinking through what you’re trying to do with your metadata structure when you build it out, where you’re going to store it, how it’s going to be used, and where it’s going to be available is all really important when you’re thinking about tool selection and when you’re thinking about how to model your content.

GK:                   Yeah, absolutely. And I think I’ve seen in many instances with clients I’ve worked with that there’s kind of a hybrid, where there will be some metadata that’s stored in the DITA content itself and other metadata that is managed and stored by the CCMS and by the tools. And so there’s a balance there that you have to think about. And if you are doing a DITA remodel, that gives you an opportunity to revisit your taxonomy and to think about metadata beyond just what’s in the content itself to how it’s used overall. And that’s where we get back to this idea of gathering those metrics from your customers about how they’re using your content. You gather metrics from your authors about how they’re creating content and what some of the roadblocks are that metadata can help solve. And that can give you a lot of good information about how to approach metadata and how you might want to remodel that as part of your overall DITA restructure.
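For metadata that lives in the DITA source itself, the standard home is the topic’s `<prolog>`. A small sketch (the audience, category, and product values are invented):

```xml
<task id="replace-filter">
  <title>Replacing the filter</title>
  <prolog>
    <metadata>
      <audience type="administrator"/>
      <category>Maintenance</category>
      <keywords>
        <keyword>filter</keyword>
      </keywords>
      <othermeta name="product-line" content="WidgetPro"/>
    </metadata>
  </prolog>
  <taskbody/>
</task>
```

Metadata the CCMS manages, such as workflow state or internal object IDs, typically lives outside this markup entirely, which is where the hybrid situation described above comes from.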

GK:                   Another thing to think about is the migration process. How is content going to be migrated from the current DITA structure you’re in to your new one? And are there any concerns around scripting and automation that can be addressed on the content model side to make that easier when you’re having to go through and rework the way that all of your DITA content is tagged?

JC:                    Yeah, it’s tricky when you’re looking at migrating into a newer version of DITA from an older version. By default, DITA is backwards compatible. Theoretically speaking, you could open up any file that was created in DITA 1.1 in something that’s using the DITA 1.3 definitions and it should open up just fine. And it most likely will. It just won’t be as fully featured. When you’re looking at moving from an older version of DITA to a newer version, at baseline, the biggest thing you should be looking at is, what are we looking to get out of this migration? Is it just to get us to a new starting point so that moving forward, our content can be richer and take advantage of the features that are afforded by this new environment? Or are we looking to try and leverage some of those new features in existing content? In which case you really need to do an analysis of why you’re moving and put together a plan for how you can fill those gaps.
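Part of why that backwards compatibility tends to just work is that the commonly used OASIS public identifiers in DITA doctype declarations are unversioned (versioned variants also exist), so an older file resolves against whichever DTD version your toolchain ships. For example, a declaration like this looks the same for a topic authored under 1.1, 1.2, or 1.3:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="intro">
  <title>Introduction</title>
  <body>
    <p>This topic validates against the 1.1, 1.2, or 1.3 topic DTD.</p>
  </body>
</topic>
```

The newer element types only become available once your catalog points at the newer DTDs, which is why older files open fine but aren’t “fully featured.”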

GK:                   Yeah. And I think that’s especially important if you have done any sort of specialization that may not carry over so well to the latest version of DITA, or if it needs to be completely reconfigured or restructured because DITA 1.3, for example, supports something that you built a specialization for before, when it didn’t exist in DITA by default. That’s something to build into your analysis and into your plan: not just thinking about what the new structure looks like, but how are we going to move our content over? And what are the priorities? What content needs to be re-tagged and restructured first?

JC:                    Yeah. And to bring it back just for a second to “we’ve specialized and we’re moving”: did the newer version of DITA actually implement the thing you specialized already? We’ve actually seen instances in the past where there was a specialization, and in migrating to a new version of DITA, we found that not only did that new structure exist, it was also named the same. You need to take a look at that and make sure, if there is something new that fills the role that you wanted to fill, would it be better to keep what you already have with your specialization? Or migrate your existing content over to that newer structure?

GK:                   Yeah, absolutely. When you are developing your very first DITA content model, what are some safeguards that you can build in to avoid headaches if you do have to update it in the future? I know some companies are able to kind of stay with the same content model for a long time, but I do think it does become sort of inevitable that after years and years, you will probably need to or at least want to switch to whatever the latest and greatest DITA version is. And so, how can you set up your very first, your initial DITA content model to make that as smooth as possible and to kind of future proof it for later versions?

JC:                    The best advice that I could give is kind of what I’ve been hitting on as we’ve been going throughout is figuring out why you need to get this set up. And if you have a really good understanding of why you’re doing something, you’ll be able to better define what you can use out of the box or to identify the places where you’ll need to specialize. And if you have a really good understanding of the reasons that you’re doing this, you will most likely have an easier time of reacting if that needs to change later.

GK:                   Yeah, absolutely. There’s something that we’ve said in many of our podcasts before, but the more upfront planning that you do and the more you kind of analyze your specific needs around why your content model should be a certain way, that will really, really help make sure that you make the right decisions and that you kind of avoid things that will become pain points down the road. And I think in particular, when you’re looking at sort of broader information architecture decisions, things like your taxonomy and metadata, things like how your content is organized, how it’s structured, how it’s broken up into the different DITA topics and maps that you have, how reuse is set up, all of that. The more planning that you do upfront, the less chance that you’ll probably have of having to do major reorganization on that in addition to just updating your DITA version.

JC:                    Yeah. And also, just to get into the specifics of it real quick: I deal a lot with actual DITA transformations, taking your DITA and turning it into something different. And on that side of things, the most compelling reason that I’ve found for specialization boils down to two cases. Either we need to make sure that our content gets treated in a specific way once it gets turned into a PDF or into HTML or whatever, so you need a specific semantic structure to key off of in the transform so that it gets treated a particular way. Or, “oh no, our SEO is bad; we need to make sure that our metadata is being handled properly.” It’s not only knowing what you want out of your content model, but how your content model is going to deliver that for you once you actually start generating your content with it.
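In DITA Open Toolkit transforms, that “keying off” is conventionally done by matching on `@class` attribute tokens rather than element names, so a specialized element inherits base processing until you explicitly override it. A sketch, where the `acme-d/acme-warning` specialization is an invented example:

```xml
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Matches our hypothetical <acme-warning>, specialized from <note>.
       Anything not matched here falls back to the base note processing. -->
  <xsl:template match="*[contains(@class, ' acme-d/acme-warning ')]">
    <div class="acme-warning">
      <xsl:apply-templates/>
    </div>
  </xsl:template>
</xsl:stylesheet>
```

This class-token convention is what makes specializations relatively cheap on the output side: you only write templates for the behavior that differs.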

GK:                   Yeah. And that actually brings up a couple of points too that are sort of general advice. They may not apply to every company, but they are things that we tend to advise people to consider, at least when it comes to designing your content model. When you’re talking about the concerns around your output, one thing that we do really stress is to focus on semantic-based tagging and not on tagging that is going to help you get a lot of specific formatting-based edge cases into your outputs. And again, that’s just because the entire point of DITA is to have your formatting separated from your content itself. Building things into the actual structure, building a bunch of specializations that are there just to address formatting concerns, is generally not the greatest idea, especially if you think about how in the future things with your outputs may need to change alongside your DITA version itself. That’s one thing that we really caution people about.

GK:                   And another thing is with regard to specialization itself, we tend to advise sticking to the DITA standard out of the box, as much as you can and only specializing when necessary. And again, that’s just because when you do specialize, if you need to make a change later, that’s always a little bit more difficult than just going kind of from an out of the box structure to another out of the box structure. That’s kind of just a little bit of advice we tend to give. Of course, there are always going to be exceptions. There are always going to be some companies that really do need heavy specialization, but we just advise that you try to keep that to semantic reasons rather than just doing it because you can or because you have some concerns around formatting.

JC:                    Yeah. When you’re looking at specializing like that, you really want it to be about making sure that your content is semantically rich. If you have a product name that’s italicized because your style guide says, “Product names are italicized,” you don’t want to be wrapping that in just an i-tag, because that just means it’s italics. It doesn’t mean it’s that kind of content. And you want to try and find ways to enrich your content, not just make the content fit a guide.
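As a concrete sketch of that point, compare presentational markup with markup that says what the content is. The product name WidgetPro and the `<product>` element (a hypothetical specialization of `<ph>`) are invented for illustration:

```xml
<!-- Presentational: only says "render in italics" -->
<p>Install <i>WidgetPro</i> before continuing.</p>

<!-- Semantic: the markup says this is a product name; the
     stylesheet decides to render it in italics -->
<p>Install <product>WidgetPro</product> before continuing.</p>
```

With the semantic version, a style guide change (say, product names become bold) is one stylesheet edit, not a content-wide retag.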

GK:                   Yeah, absolutely. I think the biggest things to remember are just put semantics first, put structure first and do as much of that upfront planning as you can around the semantic needs that you have so that one day when you do have to go into the next generation of DITA, that can be done as seamlessly as possible. With that I think we’re going to wrap things up. Thank you so much, Jake.

JC:                    Yeah. Thanks for having me. It’s great to be here with you.

GK:                   And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post DITA: The next generation (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 22:56
Taking a phased approach to your content strategy (podcast) https://www.scriptorium.com/2020/10/taking-a-phased-approach-to-your-content-strategy-podcast/ Mon, 26 Oct 2020 12:00:17 +0000 https://scriptorium.com/?p=19964 https://www.scriptorium.com/2020/10/taking-a-phased-approach-to-your-content-strategy-podcast/#respond https://www.scriptorium.com/2020/10/taking-a-phased-approach-to-your-content-strategy-podcast/feed/ 0 In episode 82 of The Content Strategy Experts podcast, Elizabeth Patterson and Bill Swallow talk about taking a phased approach to content strategy when you have limited resources and how you can prioritize that approach.

“It’s really easy to allow your scope to expand. Try to keep it finite. Try to keep the phases small.”

—Elizabeth Patterson

Related links: 

Twitter handles:

Transcript:

Elizabeth Patterson:                   Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we look at taking a phased approach to content strategy when you have limited resources and how you can prioritize that approach. Hi, I’m Elizabeth Patterson.

Bill Swallow:                   And I’m Bill Swallow.

EP:                   And today we’re going to talk about taking a phased approach to your content strategy. So the first thing that we’re going to hit on is why exactly companies take that phased approach, which is something that we are seeing more and more frequently with the companies that we’re working with. And the number one reason for that is going to be limited funding, and limited resources. So when you are moving forward with a content strategy or an enterprise level content strategy, oftentimes the price tag on that is going to be pretty steep, and when you try to pitch that to upper management, it can be really difficult to get that approved. So breaking your approach to your content strategy up into phases can make those smaller price tags more appealing to upper management, and therefore it’s easier to get it approved.

BS:                   Also, what we’re seeing a lot now are more enterprise-level implementations of a content strategy, and it is almost impossible to accurately scope out that entire implementation from start to finish. So it’s much easier to break it up into chunks, and that way you have a clearer idea of what needs to happen. And usually these implementations take months, if not years. So taking a phased approach keeps you on task.

EP:                   Right, and when you’re sitting down thinking about an enterprise level content strategy, and you’re coming up with a list of all of the things that you need to accomplish, that gets to be a really long list and sometimes things change, and so having those phases helps you to better prepare for those changes so that you don’t have this huge plan mapped out and then all of a sudden it’s completely different by the end of it.

BS:                   Right. Some of these phases could be as small as evaluating a new tool set, or it could be doing a content analysis to see what needs to change in either how you’re writing or how you’re managing the authoring process. It could be larger like implementing a tool set and running a bit of content through it. But by having these phases, you have a very finite start and finish. You know what your starting point is, you know where your end goal is. You can roughly scope out the amount of time that it’s going to take to get the work done. You kind of know how many resources you’re going to need, or you’re able to adjust a timeline based on the number of resources you have. And you know the rough costs that you’re looking at, to say this quarter, we’re going to focus on buying and implementing the software. Great. So your primary cost aside from a little bit of resource time is going to be the cost of the tools that you purchase.

EP:                   Right, and this approach, taking the phased approach, is going to look different for different companies because you have different needs. So what we’re talking about now might not look exactly like it’s going to look for your company. This is just sort of a general outline.

BS:                   Exactly. Some companies focus more on localization improvements, other ones focus on more authoring improvements, or they focus on systems integrations. There’s a wide variety of reasons why people would adopt a content strategy, and the phases that are involved are going to vary from case to case.

EP:                   Something that you might want to consider, if you do have these limited resources, and not just with limited funding, it can be valuable without that as well, but is a proof of concept. So completing a small project that can then show your upper management, show your company, that this is going to be worthwhile.

BS:                   Right. Especially if you’re doing something completely new from what you’ve done in the past. You want to be able to have something to say, here, I’ve proven that this can work.

EP:                   So I want to talk a little bit about prioritizing a phased approach, because I think that this is sometimes a question that we get. Really the first thing that you’re going to need to do is to clarify the problems that you’re trying to solve. That can take the form of an assessment. So you could have a content strategy assessment done by a consultant that’s going to help you to identify your gaps and then make recommendations for those gaps. And you’ve got to be able to pinpoint those things before you can get any further. Trying to decide where you’re going to go, what tools you’re going to use, before you even know what problems you’re trying to solve is a big mistake.

BS:                   Right, and it’s not to say that you can’t do it internally either, but getting some kind of an outside view, even if it’s just to look over what you’ve put together, as far as the assessment work that you’ve done, getting a third-party to go in and say, yes, this makes sense. Or did you think about this? Or what about this over here? It kind of brings a bit of clarity to what it is you’re trying to do before you actually start spending a lot of money on new tools, on training, on migrating your content, or what have you.

EP:                   You also want to try to get everyone on the same page. So starting with this assessment and really identifying those problems and helping other people at your organization to understand what the goal is can be very helpful, because company politics can be pretty nasty and pretty difficult to work with. So you want to have everyone get as close to a shared understanding as possible of where you’re going with this project, because if you all have different goals in mind, that can make it very difficult to prioritize as well. Because everybody’s going to have an agenda. You want to have that end goal in mind and have everybody understand it, so that you can work together to accomplish it.

BS:                   It’s not to say that the actual focus or the actual approach isn’t going to change either. So while some people might have some reservations, they may not be able to fully articulate it. But if they at least know what the end goal is, they’re more inclined to kind of go along with the early stages, and usually at that point, once you start getting a couple of phases in, you really start seeing how everything is going to start coming together, or not. And you’re able to make those fine adjustments or you’re able to stop and redirect before things get too far off the rails, and that usually helps people see where things are, see where the end goal is, and then start understanding where they fit in within the full scope of the strategy.

EP:                   That can be really helpful, too, with this phased approach: you stop and you think, this is something that we need to tackle from a different direction. And because you’re moving through that phased approach, you’re able to do that. There’s nothing worse than making a decision quickly because you have to, and then regretting that decision later, which we see very often.

BS:                   Measure twice, cut once.

EP:                   Absolutely. So I do want to talk a little bit about some of the things that you really need to watch out for when you are taking a phased approach, and that kind of goes into what we were just talking about. You have to be patient sometimes. So you’re moving through this in phases, funding at your organization may be coming through slowly and in chunks, but you want to do it right. By doing it in phases, you’re giving yourself that opportunity to catch things as they happen. But sometimes you’re going to have people on your team that just want to get it done. They just want to go full throttle. With a phased approach, you have to be a little bit more patient with that.

BS:                   Right, and I will put this out there right now. Your first phase, or even probably your first two or three phases, really should be more analytical in nature. Being able to get your arms around things. It depends on, obviously, the size and scope of what you’re trying to get done. But if you are approaching a new content strategy and you jump in phase one with “let’s pick some tools,” you’re doing it wrong. You’re doing it wrong. The goal is not to use new shiny tools, although it’s always fun to get new stuff and play with it and be able to do new and interesting things. But you want to make sure that those new and interesting things fit where you need to go, and that you’re not losing track of all the other contingencies on your content that still need to be met. So you might be able to hit the highest priority on your end goal, but all of the subsequent needs are left hanging. That’s somewhere that you definitely don’t want to be, especially after you spend a significant sum of money on new software and new tools.

EP:                   I think we’ve said this until we are blue in the face. It’s in so many different blog posts and podcasts, is that tools should definitely not be the first thing that you choose. You’ve got to identify the problems you’re trying to solve first.

BS:                   And it still needs to be said because it’s still a knee jerk reaction that… You can’t help it because it’s a very tangible thing that you can implement and say, look, new, shiny. It’s going to work. But it’s really one of the last things that you want to do. You want to get all your planning done upfront, then focus on the tool sets that best match what you discovered during the planning phases that help you achieve your goals. Then toward the final end of the phases, you then have the implementation work, which is usually extremely substantial, and then your training and maintenance going on forward.

EP:                   Another thing to keep in mind is that it’s really easy to allow your scope to expand. So try to keep it finite, try to keep these phases small. Don’t give in to those knee-jerk reactions and pick a tool set before you’re ready. Just know that the phased approaches do give you more flexibility when it comes to scope.

BS:                   If you have a pilot project, you also want to keep that scope small. Use a very small content set. Make sure that you have something defined from start to finish. So your pilot should involve a bit of authoring, should involve a bit of review, should involve a bit of publishing, and then seeing what that looks like, so that you have something tangible to poke at. The more you increase that scope, so if you’re going from, let’s say, 10 documents to a thousand or even a hundred, you’re increasing the level of effort and the complexity of getting that proof of concept done. The point is not to get all of your content through in that proof of concept. It’s just to say, see, this is possible. Now we can expand the scope and take a look at it from a bit wider stance.

BS:                   That’s also not to say that you want to jump from one phase where you have a very finite, very controlled proof of concept to “let’s do everything now.” You want to break it off into pieces, so if you have multiple product lines, if you have multiple companies under your corporate umbrella, you don’t want to throw them all in at once. You want to take one through and see how it works, see if anything else needs to be adapted. So go back to some of the analysis work that you did and make sure that nothing has changed there, and also check your horizon and make sure nothing is changing out there, and then you can proceed with the next phase.

EP:                   And let the phases do their job. Avoid those quick fixes, even if you feel like it’s something that you have to do. We did a podcast a couple months back on quick fixes, which I will link in the show notes, but that can end up costing you a lot more money in the long run, and if you already have limited funding, this can be disastrous.

BS:                   Exactly.

EP:                   Also, the phase that makes the most sense to start might not be the phase that’s going to seal the deal with your stakeholders. You need to set those crystal clear expectations upfront, and, again, have everyone on your team sit down and understand the project. Talk about it, get people on the same page, because without having those expectations in place, you’re going to have problems along the way.

BS:                   Not everyone is going to be able to play in every single sandbox along the way. You’re going to have to bring a few people in at a time, when it’s relevant for them to be involved, and make sure that that phase addresses the concerns that they have around the goals that you’re trying to meet in that phase as best possible. Then bring another crew in later for a subsequent phase. If you try bringing everyone in at once, and you try to tackle everyone’s needs at the exact same time, the scope is just going to expand exponentially because now you’re starting to really bring in all of the dependencies and discrepancies with how people work, and rather than trying to focus on making sure that they’re all being addressed, you’re spending the time mitigating a lot of conflict between the groups saying, well, mine should take priority because X, Y, and Z. You have two, three, four people saying that and suddenly nothing gets done because everyone’s bickering.

EP:                   So I know we were just kind of talking about some things that can be really intimidating, but overall taking a phased approach to your content strategy has a lot of benefits to it, and a major benefit is that you are biting off small chunks. So you’re going to address problems as they come up, rather than having really big surprises later on when you’ve already made it so far in the project, and then you have these unexpected expenses. Now, sometimes those things can still happen, but you’re really reducing the risk for that, which is important, especially if you have limited funding.

BS:                   You’re basically taking a lessons-learned approach as you go. So you can scope things out, you can hit your target, and even if you’re a hundred percent successful, you’re probably going to have some takeaways that are going to adjust how you move going forward. So taking that phased approach really does allow you to stop and pivot along the way until you get to exactly where you need to be.

EP:                   So I think that that is a good place to wrap up. Thank you so much, Bill.

BS:                   And thank you.

EP:                   And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Taking a phased approach to your content strategy (podcast) appeared first on Scriptorium.

]]>
Duration: 14:47
Document ownership in your content development workflows (podcast)
https://www.scriptorium.com/2020/10/document-ownership-in-your-content-development-workflows-podcast/
Mon, 05 Oct 2020 12:00:44 +0000

In episode 81 of The Content Strategy Experts podcast, Gretyl Kinsey and Alan Pringle discuss document ownership and the role it plays in content development workflows and governance.

“You’ve got to quit the focus on the tools. The tools are not going to solve mindset problems. Those are two distinct different things. You’re talking about technology, and you’re talking about culture. Culture is a lot harder to change.”

—Alan Pringle

Related links: 

Twitter handles:

Transcript:

GK:                   Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about document ownership and the role it plays in content development workflows and governance.

GK:                   Hello, and welcome to the Content Strategy Experts podcast. I’m Gretyl Kinsey.

AP:                   And I am Alan Pringle.

GK:                   I want to start off this discussion about document ownership with just asking a very basic question. What is it? What is document ownership?

AP:                   Document ownership means answering the question, who is responsible for the creation of this content, for the review of this content, approval of it, any other things that you do around content? So who is responsible for basically the parts of that life cycle?

GK:                   Absolutely. I think it’s important to point out too, that those responsibilities for all those different aspects of the content and that development workflow are different from one organization to the next, and it depends on things like the size of your content team, the resources that you have available, the kinds of content you’re creating. We’ve seen some organizations where there’s just a really small team in charge of creating content, and so you might have one person who kind of owns the entire document life cycle from its creation all the way to its approval and release, and then in other cases, things are a little more segmented. You might have some folks who are in charge of writing, some who are in charge of editing, some who give the final approval. So, it really kind of depends on the organization, but there is a tendency, I think, for there to be some kind of an ownership model in place so that all those responsibilities are laid out and everyone knows what has to happen to get that content out the door.

AP:                   There’s another kind of side angle to this, another kind of ownership: what happens if your company is acquired? What happens if there is a merger? Then you’ve got these two corporate cultures and what they perceive as the correct document ownership process, and you’ve got to figure out how to integrate those two together. So it’s like ownership on top of ownership, and that can be quite the challenge.

GK:                   Oh yes, absolutely. I want to talk a little bit about that challenge and how it kind of feeds into some other challenges that we see a lot around document ownership. One, of course, is just how document ownership differs when you look at an unstructured versus a structured content workflow. When you’ve got an unstructured workflow, then I think we more frequently see cases where documents tend to truly be owned by a specific person, a specific group, someone who’s responsible for the document from end to end, whereas in a structured workflow, since the content is more modular and you tend to have things like components or topics, the content is broken up into smaller chunks. Then, the ownership is not necessarily of an entire published document, but over the kind of pieces and parts that go into that document. So, when you’ve got a workflow where you can mix and match and reuse topics and your final published documents have more flexibility, then that changes the way you have to think about ownership.

AP:                   Right. It really has to. What you’re talking about is basically more of a printed book model, where you’ve got one monolithic thing at the end, and it made sense to say, okay, I’m going to own this, or this author is going to own this. But when you’re starting to take a more modular route and a bunch of pieces and parts are coming together to create a document, a deliverable, a book, a help set, whatever it is, it does require a really big flip in your mentality about ownership.

GK:                   Yeah, absolutely, and so you kind of think about how are we going to approach ownership in a structured workflow? Instead of it being based on documents themselves, it might be something like a particular subject matter or a product line. A person or a group might own one product family or product suite instead of an individual document, and you may also have people in charge of whatever subject matter that they are experts in. So you may have some folks over here who are in charge of, let’s say engineering, and you may have a set of folks over here who are in charge of something else. So, you’ve got these different, more subject-based types of ownership roles than looking at really just who owns a document from its inception to its publication.

AP:                   This in some ways parallels agile software development, that whole change in mindset from waterfall development to agile development. I’m not going to get into that because I know it can be contentious and people use those words a little differently, but the same idea is still there: breaking things down into smaller parts. I think that very much applies to what you’re talking about here.

GK:                   Absolutely, and then you have to think about a few other things that are more on the management and workflow and governance side as well. So, instead of just saying who’s responsible for a document or who’s responsible for a subject within a document, you have to think about things like reuse and linking strategy, taxonomy and metadata, personalization requirements, all of those sorts of things. It’s really important to have some sort of an ownership model for those aspects as well, because if you are just thinking about it from more of a document point of view, then those aspects that reach across documents won’t have any sort of person or group in charge of them. So that’s something that you have to consider for your ownership model if you are in a structured workflow.

AP:                   Another aspect of this too: sometimes you can take ownership a little too far and try to reinvent the wheel inside a company. For example, say you have a strong web presence marketing group. I’m going to assume there is some kind of taxonomy in place in regard to products, possibly how they’re organized on the website, things like that. So, there is some kind of hierarchy there to describe your products, your services. If you’re writing for another department, for example, let’s just say the product documentation team, the product content people, you need to get that existing taxonomy and then add your two cents to it. Don’t redo the whole thing. So yes, you need to get your part in there, but don’t assume that ownership of that means it is yours. This could be more of a company enterprise-level thing, and you need to bolt your part onto that.

GK:                   Yeah, absolutely. I think this really gets into the idea of how working with structured content actually opens doors to scaling up and addressing content across your entire organization, really getting it in at that enterprise level and making things consistent. So it is really important not to have this kind of siloed or segmented ownership, regardless of whether it’s at the document level or at some other kind of organizational level. It’s really important to think about, “Okay, we’re in structure, so obviously this model of document-based ownership isn’t going to work. So how do we take our ownership across the organization, collaborate with other departments, and use what they’ve already done, and they can use what you’ve already done?” That way, it eliminates a lot of wasted time, as you said, reinventing the wheel.

AP:                   Yeah, and something you just said, talking about silos there, it just occurred to me. When you own an entire book, and yes, I know that’s kind of 20th century, but I’m going to use that word anyways, when you own a book, if you think about it, that is a silo right there on its own in a lot of cases.

GK:                   Absolutely.

AP:                   So it’s basically breaking that book up into pieces and parts, and there’s a parallel there to what you were just talking about, more of an enterprise approach to things. Yes, there is organization, there is a method to the madness, but when you get down to it, it is a bunch of pieces and parts that are shared, and that’s the bottom line from my point of view.

GK:                   Yes, absolutely. It’s about that modularity, that granularity, and having those flexible and shareable pieces. That kind of brings me to the next question I want to ask, which is about the shift in mindset. So we talked about how it really is a very different mentality between the way that you would own documents versus own these modules or parts. So, when a company shifts from an unstructured to a structured content development workflow, how can they make that transition easier with that document ownership mindset?

AP:                   Well, first thing, you’ve got to quit the focus on the tools. The tools are not going to solve mindset problems. Those are two distinct different things. You’re talking about technology, and you’re talking about culture, and guess what? Culture is a lot harder to change.

GK:                   Yes.

AP:                   You can train someone how to use a tool proficiently. That is not the problem. It is getting them to buy in to using that tool that is the huge problem. So you have to realize, merely buying the tool is not going to solve your problem. You have to address culture and change management through good communication and training. I sound like a broken record. I think I’ve spoken about this a zillion times on this podcast, so I’m not going to dig into that again, but basically culture, culture, culture. That is very important. The tools are not going to take care of that for you.

GK:                   Right, and I want to reiterate, training is important, but it is only one piece of it. As Alan said, it’s about thinking about that culture and not just providing the baseline training, but the true support that people need to make that shift and to understand it is going to be a major change in the way they work. It’s going to be a major change in the way they think, and so it’s really important, I think, to really show them the value of what moving to structure is going to buy them. So, as a content creator, it might do things like eliminate a lot of manual processes and inefficiencies and it might help things be more accurate, so it’s really important to show them that and help them understand, even though, yes, I know this is a big change, here’s what you’re going to get out of that change, and make sure that they don’t feel like they’re left behind and just left in the dust. They need to be supported and to be brought along so that that really big mindset shift does not cause problems.

AP:                   There are a few ways you can approach this from a mindset point of view. Number one, people are going to be learning new skills that make them more marketable. Now, if you don’t want to lose your best people, that can be kind of a hard sell, but you are giving people new skills that make them more marketable in the world, in the professional world, and that’s something that is not a bad thing to let people know. When we are making this change, you are getting new skills. So, that’s a great thing too. Once again, we come back to the whole idea of silos. You’ve got silos among departments, you’ve got silos among publications.

AP:                   Well, I think what I’m kind of headed to is that you can have a silo of your own brain and experience, thinking, “This is the way things have to be. This is why they are. This is what works for me.” Well, unfortunately, you are a part of a bigger corporation, just like a content module is part of a bigger group of documents, customer experiences, whatever. You are one part of this, and you’ve got to figure out how what you’re creating fits into the bigger picture, this giant puzzle.

GK:                   Yeah, absolutely.

AP:                   So, it’s a huge, huge shift in how you think, and it can be very daunting. I am not going to say it is an easy thing, because it absolutely is not easy for the authors, the content creators, the reviewers, and it is not easy for the people who are trying to manage and wrangle all of the expectations, the cultural shifts, and so on.

GK:                   Yeah, and I think that brings up an important point about content governance and why it’s really important to have that as part of your strategy and to have resources available for that, because that is going to help provide some of that continuity and that support for all the people who are actually creating the content and managing and publishing it. If you’ve got a strategy in place and someone who is dedicated to all of the governance around content, making sure that this shift from unstructured to structured actually goes through and actually works correctly, then that’s really going to, I think, help to smooth things over because as you said, it really is difficult. It’s a big adjustment and it’s important to think about that as part of your strategy and not leave it out and make sure that you do have those resources available for it.

AP:                   Yeah. To me, the most important thing I think I can end with is it is not just about switching tools. It is not. It is about culture and making that shift in mindset, and that is critically important. If you don’t take care of that, you have just flushed away thousands or millions of dollars. It is that simple.

GK:                   Absolutely. So one other question I want to ask based on this, too, is that I think we’ve seen several instances where this happens: you’ve got this mind shift happening, people are struggling to adjust, and one of the excuses that sometimes gets brought forward is, “We don’t actually own the documents, the customers do.” That’s something I think people put out there as an excuse not to change, and so I want to ask how you should approach that kind of situation, when things are being deflected off onto the customers as the document owners.

AP:                   The customer’s experience is very important. You know, that is true. However, they are one stakeholder in this content experience. They are not the only ones who have a say in this. At the end of the day, while your customers are buying from your company, they are not the ones directly paying your salary, and I think it would behoove people to think about that. Yes, you advocate for your customers and do right by them, but realize they’re not the only people who are involved in a change like this.

GK:                   Absolutely, and I think it’s also important to consider, when you say that the customers are the ones who own your content, is the content actually serving them? Because in a lot of cases, one of the reasons companies move to a structured workflow is because customers are having trouble finding the content they need at the time they need it. So, if you really are truly concerned about your customers using that content and owning it, in a sense, then your first priority should be to think about, “How do I need to make the content findable, usable, and really serve the customer’s needs?” At the end of the day, even if the customer owns the content, in a way, they don’t own the processes. That’s on you and your organization. So it’s really important to think about the bigger picture again, in that sense, and ask yourself, “Am I really trying to serve the customer, or am I just using this as a front to avoid change?”

AP:                   Absolutely. Is it real or is it deflection? That’s a great question to ask yourself.

GK:                   So I think really the main point we want to make about all of this when it comes to document ownership is again, as we’ve said, it is about that mindset, and when you change your processes, it’s really important to be adaptable and to understand that the way you may have owned a document in the past may not always work and it’s just really important to be flexible and to understand, document ownership can mean a lot of different things. Content ownership can mean a lot of different things, and what’s really most important at the end of the day is what kind of ownership model is going to be the most efficient and most effective for the company.

AP:                   To me, that’s the most important thing that you’ve said toward the end, for the company, not for you, yourself, not for just your department, but for the company.

GK:                   Yes, and I think that’s a good place to wrap things up. So, thank you so much, Alan.

AP:                   Thank you.

GK:                   And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Document ownership in your content development workflows (podcast) appeared first on Scriptorium.

]]>
Duration: 18:18
Information architecture in DITA XML (podcast)
https://www.scriptorium.com/2020/09/information-architecture-in-dita-xml-podcast/
Mon, 14 Sep 2020 12:00:07 +0000

In episode 80 of The Content Strategy Experts podcast, Gretyl Kinsey and Sarah O’Keefe discuss information architecture in DITA XML and other forms.

“You have to look at information architecture in metadata starting from a taxonomy point of view. This means you are looking at the structure of the content as well as the organization of the data that’s used for search and filtering.”

—Gretyl Kinsey

Related links: 

Twitter handles:

Transcript:

Gretyl Kinsey:                   Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we discuss information architecture in DITA XML and other forms.

GK:                   Hello and welcome. I’m Gretyl Kinsey.

Sarah O’Keefe:                   And I’m Sarah O’Keefe.

GK:                   Today, we’re going to be talking about information architecture. So I think the best place to start is just defining broadly what information architecture is.

SO:                   And that sounds so simple, and yet we’re going to hit our first snag, because if you go look at this, you’ll discover that everybody in content, across all the different aspects of it, has an opinion about what constitutes information architecture. I think probably the easiest place to start is to say that if you’re looking at a website, then the way that that website is organized and structured, how the content is arranged hierarchically, you start at the top, you go to the about page, you drill down to the team or the company history, that’s information architecture.

GK:                   Right. And that can extend not just to the way a website is organized, but whatever your delivery method is. So the same thing, if you’ve got a print-based piece of content, it’s that hierarchy, it’s how is it organized into maybe chapters or parts, and that really can apply across all different types of content. I think this is a good place to mention that it is really important to know your terminology and define it, because when you’ve got lots of different types of content that you might be working with, you might get some confusion going on if you don’t really clearly define what IA means.

SO:                   Right. Exactly. And we’ve had some kind of hilarious run-ins with this, where we’re sitting in a meeting and we’re talking about information architecture, and what we mean is how things are encoded in the DITA files, which we’ll get to in a minute, and it turns out that our counterparts in, let’s say, content design or UX or something like that are thinking much more about the website delivery layer, and nobody is thinking about the print, right? So we have to really be careful about this and make sure that when we say IA, we know which one we’re talking about and at which level.

GK:                   Absolutely. You did mention DITA, so I want to talk about that next. So what is the difference when you’re talking about DITA-specific IA? How would you define that?

SO:                   So in DITA, when we talk about information architecture, what we’re usually referring to is how exactly we are structuring the content and marking it up in DITA. So which topic types are you using, and what goes into each kind of a topic? Let’s say you have a bunch of reference information. Well, the decision to, for example, put all your terms and definitions into the DITA glossary is, I mean, extremely sensible, right? But that’s a decision, and sometimes you might discover that you need a reference topic that the out-of-the-box reference topic doesn’t really help with, so you go down the road of specializing. And then I think, Gretyl, you’ve run into some stuff with DITA metadata as well.
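As a rough illustration of the kind of markup decision Sarah describes, here is what a term captured in the standard DITA glossary entry topic type looks like. The element names are out-of-the-box DITA; the term and definition themselves are invented for the example:

```xml
<!-- A minimal DITA glossary entry topic (standard glossentry shell).
     The id, term, and definition are hypothetical examples. -->
<glossentry id="taxonomy">
  <glossterm>Taxonomy</glossterm>
  <glossdef>A controlled hierarchy of terms used to classify
    content so it can be searched, filtered, and organized
    consistently across deliverables.</glossdef>
</glossentry>
```

Putting every term in its own glossentry topic is one information architecture decision; specializing a new topic type, as mentioned above, is another.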

GK:                   Absolutely. And that’s an area where it’s kind of its own thing. You have to look at information architecture in metadata starting from a taxonomy point of view. So it gets into not just what is the structure of the content, but what is the organization of the data about your content that’s used for search and filtering and organizing it and making sure that everybody can find it. So that’s something I think even before you start building out the content structure itself, it’s really important to think about that piece of it, because in DITA, you can have specialized metadata structures as well. So if that’s something that you’re going to need, it’s really important to plan that out and think about it and make sure that’s part of your IA.
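As a sketch of what Gretyl means by metadata structures in DITA: search-and-filter data typically lives in a topic’s prolog rather than in its body. All values below are hypothetical, and the `product-line` name is invented for illustration:

```xml
<!-- Hypothetical concept topic showing prolog metadata used
     for search, filtering, and taxonomy alignment. -->
<concept id="widget-overview">
  <title>Widget overview</title>
  <prolog>
    <metadata>
      <audience type="administrator"/>       <!-- who the topic serves -->
      <category>Installation</category>      <!-- taxonomy category -->
      <keywords>
        <keyword>widget</keyword>            <!-- search keywords -->
        <indexterm>widget, overview</indexterm>
      </keywords>
      <othermeta name="product-line" content="widget-pro"/>
    </metadata>
  </prolog>
  <conbody>
    <p>...</p>
  </conbody>
</concept>
```

Specialized metadata, as Gretyl notes, would extend structures like `othermeta` or `audience` with organization-specific elements and attributes.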

SO:                   Right. We talk about DITA information architecture, and of course, whatever it is that we do in coding, the DITA files is going to feed into the information architecture on the delivery side, whether that’s website or print or app or it could be any number of other things.

GK:                   Absolutely. So I want to talk about some situations where you might need to deal with both a DITA information architecture and other types of information architectures. I think we touched on at the beginning that you might actually have IA coming from different approaches, and you have to have that conversation to make sure everybody is on the same page. So one scenario that I can think of right off the top of my head, that lots of our clients have dealt with, is when you have different departments that are in different content workflows but need some way to connect their content.

GK:                   And maybe they’re not necessarily all going to work in the same information architecture, they’re not all going to be in DITA, but each department still has their own IA for their content and they need a way to make it play nicely together, and there can definitely be some challenges and things to figure out when that’s the case.

SO:                   Yeah. DITA versus DITA is one thing, and that can be a challenge. But perhaps that’s not actually the biggest challenge, because you also see DITA versus non-DITA content. So there’s a merger, and the company has five departments and eight different authoring tools, and you think I’m exaggerating, but I’m not, right? So you have all these different groups, they have all these different authoring tools, they’re delivering to all these different places, and at some point you have to step back and say, “Well, wait a minute. What about the poor customers that are looking at all this stuff together? How are they going to access this information successfully, and what do we need to do to make it consistent such that they can actually have some sort of a fighting chance of finding what they’re looking for?”

GK:                   Yeah, absolutely. And when that sort of thing happens, when you’ve got something like a merger, or even whether it’s a true merger of companies or just sort of a merger of departments within a company, you’re looking at maybe DITA in one place and then maybe things like Word, FrameMaker, InDesign, all manner of other things and other places. And it’s really important especially when you can’t necessarily get rid of one of those sort of non-XML flavors of content production, where you actually need something like InDesign for example, it’s important to think about the implied structure of that content and the enforced structure of your DITA content and how to make sure that when all of that content is packaged up and published for delivery, that it all works nicely together.

GK:                   Again, back to what I mentioned earlier about taxonomy and searchability, making sure that everything is organized in some way that the customers are not going to get confused, they’re not going to start complaining to support that they can’t find what they need. It’s really important to think about, “Okay, if this content is developed in different work streams and different ways but it still needs to be coordinated and shared, how do we make sure that those different information architectures work well together?”

SO:                   Yeah. Then there’s a similar but different use case, right? Which is the, we have all this DITA content, which typically is going to be some sort of technical documentation, technical product content, that kind of thing, that forms a corner of the website. So you have what most of our customers call the dot-com. It’s company.com. It’s the main website with all the marketing information. And somewhere on that website, there is a button or a link or something that says documentation or support or additional information or technical literature, literature library. I’ve seen all these kinds of names.

SO:                   And the information that lives in that technical documentation corner of the dot-com is coming out of DITA with its own information architecture, but then the overall website has a, I guess, big picture IA. So one of our pretty common jobs is to try and bring those two things together so that we can feed the DITA based technical information into this corner of the dot-com and make sure that the people accessing it, accessing the website in general, get consistent information, they get consistent user experience, customer experience, on the information that they’re trying to access even though the website was built by one team, probably, and the tech docs were built by a completely different team using a different technology stack, different systems, different everything, but it’s still possible to make them consistent.

GK:                   Absolutely. One area that I’m starting to see more and more is the idea of delivering content through dynamic delivery portals. So that’s another layer that you have to think about. Some of the companies I’ve worked with that are doing this have to think about information architecture on both the backend, so how are they actually structuring the content itself? And then also the front-end, how is that getting delivered through a dynamic portal? How does it have to be tagged and structured to work with the way that the portal actually gathers the content up and delivers it to the customer?

GK:                   Then if you’ve got a portal for a portion of your content, so like you were saying, Sarah, if it’s for documentation or for online help or training or something like that and it sits in a corner of the website, then you have to think about how does that fit in with everything else and are there other departments that are also serving up their content dynamically as well? How can you make that play nicely together? So it really is a lot to think about and to plan.

GK:                   I think one thing I’ve seen that’s helped a lot is just having dedicated content resources or content team that sits above all these different departments and looks at how the pieces of the puzzle come together and can say, okay, “This group over here has one information architecture, this group has another. Here are maybe some tweaks or changes that have to be made to make sure that’s going to work with how you’re publishing your content through a portal onto the website.”

SO:                   Yeah. I think that’s a really good point, and your distinction between back end and front end, I think, can be very helpful. Back-end information architecture is how you are encoding the DITA files, how you are encoding the source files, whatever those may be. And then the front-end IA is essentially how you are presenting them, how your end users experience this information. Now, what’s interesting is that you probably want to have some consistency between those two things. I mean, it can be a little challenging if you have a back-end information architecture that in no way represents what you’re trying to do on the front end.

SO:                   That’s probably not going to end so well. But when you start looking at this from a development and a skillset point of view, I think it is actually very helpful to think about, “Okay, we’ve got to do some back end encoding for DITA and we’ve got to do some front end encoding for user experience and we need to make sure that those two things are in fact compatible.”

GK:                   Absolutely. I want to, for the rest of this discussion, focus on that back end specifically and what happens when you’ve got some scenarios where you are sort of merging or bringing different types of content together. One instance that I’ve seen is when you’ve got to take content from other sources into a DITA-based single source of truth. This is something that we’ve seen a lot where you may have a lot of legacy content, you may have content in different types of documents. So you may have a lot of Word files, you may have a lot of FrameMaker files, things that are not working together well when it comes to getting your content all searchable and in one place and reusable.

GK:                   In those cases, people sometimes make the decision, “Let’s bring it all into DITA and really get that maximized reuse that we don’t have right now.” So one big challenge I’ve seen is how do you make that decision of how you’re going to take the content from those other sources and make them work with whatever DITA information architecture you’re going to have. I think this hits a lot into the process of conversion and making those decisions. What content are you going to keep from your Legacy content? What content are you going to throw out? Then of course once you’ve decided what’s your important content that you have to deliver, then you look at that implied structure that I mentioned previously, that any content even if it’s in something like Word, FrameMaker, InDesign, even if it doesn’t have an enforced tag-based structure, it’s still going to have an implied structure hopefully.

GK:                   Hopefully it’s not just no style guide and complete chaos all over the place. But typically it does have some sort of an implied structure. You see patterns in the types of headings that you have, the types of content that you’ve got. You may have lots of reference information like you were saying earlier, Sarah, or you may have a lot of task-based information. So looking at that implied structure and seeing how it fits into the information architecture structures that DITA offers by default is a good starting point, and that helps you make those determinations that we were talking about upfront of, do you need specialization? How are you going to organize your metadata? All of those kinds of questions. That’s sort of the starting point, is what’s that implied structure and how does that kind of carry over into an enforced structure?

SO:                   Yeah, I think that’s right. In addition, if you think back to the wonderful days of printer technology, there’s this concept of the gamut when you print, which is the range of colors that you can produce on a given printing press with a given set of inks, right? And there are certain colors that you simply cannot produce. So for example, if you want something to look metallic, you usually have to put in a special metallic ink to make that happen. You can’t get metallic out of the traditional four-color CMYK; cyan, magenta, yellow, and black. So the gamut is helpful to think about because… I mean, you mentioned FrameMaker.

SO:                   There’s specific things that you can do in FrameMaker where you can get a little creative with the stuff that you’re putting in your files that is really, really difficult to reproduce in another tool or another content model such as DITA, and vice versa. There’s things you can do in DITA that aren’t necessarily supported in your Legacy tools. So you run into this gamut issue where because you’ve never thought about doing a thing because you couldn’t, because it was impossible in your current tool set, you have to really think carefully about, do I want to implement that now as I move forward into the new tool set? Does it add value? And as you said, this is relatively easier if you’re starting from, “I have all this unstructured content in Word and I want to move it to structure.”

SO:                   That is actually relatively easier because you have a wide open sort of blue sky make some decisions situation. It gets really, really interesting if you have a, let’s say an existing DITA set of content, and now let’s say that your organization buys another company and they have Microsoft Word files and you’re going to move the Word files into DITA. Well, now you have a structure. Like, you’ve made some decisions about your content model, do you extend your structure to support what the other company did? Do you say, “Nope, you have to jam your content into the content model we created because we feel like that’s the best approach.” That starts to get really, really squeaky from a technical point of view and it’s also very political, right?

SO:                   Because you just acquired this company and they may or may not be happy about the acquisition and they may or may not be happy about reporting to you because you’re now leading this project, and so you might make some compromises that aren’t the best technical solution but that will keep the peace in these new organizations that you’re bringing together.

GK:                   Yeah, absolutely. I’ve definitely seen examples of that happen where I think a lot of the judgment calls that were made were not so much about what’s the value of the content and what’s worth keeping and what’s worth putting into a different structure and what structure are we going to use? It was more made based on those kinds of political decisions and keeping things running smoothly. And then what ends up happening down the road once you kind of get over that hurdle of a merger happening and things do settle out, then sometimes you might have an opportunity to look at things and go, “Okay, our content processes are still not as aligned as they should be and we need to start thinking about ways that we can make that happen.”

GK:                   And maybe then down the road you can start making decisions that are a little bit more logical and a little bit more based in the content itself. But yeah, when you first start to make that choice and you’ve got a situation where there’s an existing DITA structure and then there’s unstructured content coming in, that’s definitely a place where there can be a big clash over change resistance and coming into a new process that you’re unfamiliar with. So it’s really important to think about the balance of those things.

SO:                   Yeah. And I think that a lot of times, for me or for us, we’re so deep in the technology that we like to hide in the technology and look at that and say, “Well, this is just a pure technical decision.” But nothing is ever purely technical, there are always politics there and there are always considerations of, how is this going to affect the people that are working on the content if we choose a content model that looks like this and then the conversion is going to be super difficult because we chose this really complicated content model? Well, who takes on that pain and that expense of doing the conversion? Are we inflicting that on the newly arrived merged company employees? Because that will almost certainly make them cranky.

SO:                   So there are those kinds of considerations which are really interesting to me that go above and beyond the hard enough already question of, what’s the best information architecture for this content from a markup point of view?

GK:                   Definitely. So do you have any other final thoughts, final advice for how to make an information architecture development process go as smoothly as possible especially in a situation where you might be combining DITA and non-DITA content?

SO:                   Oh, sure. Yeah. I mean, is that all? I would say that, define your terms, make sure that when you talk about information architecture everybody is talking about the same thing, or you agree that, “For this meeting, we’re talking about this kind of IA,” that sort of thing. But, define your terms, and I think it’s useful and important to get the entire team up to speed on what that entire IA piece looks like from DITA markup into storage, into rendering and delivery, and whatever else might be happening downstream so that we’re not all just looking at it through our own little lens or our own little peep hole and only focused on our piece of it. If you’re a backend IA person, the better your understanding of the front-end IA, the easier it’s going to be or the better your results will be when you bring those into alignment.

GK:                   Yeah. And I think just to add to that, my advice would be that you can never do enough planning, and so… especially if you do not have major deadline pressure, which I know that’s not the case for most of us. But if you’ve got time to plan, take advantage of it and definitely do as much of that planning that you can even in and around your other deadlines and your other work before you ever start actually encoding and that will save a lot of trouble and a lot of headaches down the road.

SO:                   Yeah. That’s excellent advice, and I’m afraid hard-earned.

GK:                   I think at this point we’ll go ahead and wrap things up. So thank you so much, Sarah.

SO:                   Thank you.

GK:                   And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Information architecture in DITA XML (podcast) appeared first on Scriptorium.

The true cost of quick fixes (podcast, part 2)
Mon, 20 Jul 2020

In episode 79 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow continue their discussion and talk about solutions to quick fixes.

“A big part of your content strategy should be how requests come in, how the timelines are built, and what you’re responding to and how you’re responding to them in the first place.”

—Bill Swallow

Transcript:

Gretyl Kinsey:     Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’ll be continuing our discussion on quick fixes, this time focusing on solutions. How can you undo quick fixes or better yet avoid them in the first place? This is part two of a two-part podcast. Hello and welcome everyone. I’m Gretyl Kinsey.

Bill Swallow:     Hi, and I’m Bill Swallow.

GK:     And today we’re going to be revisiting our previous discussion on quick fixes, but this time with a bit more of a positive spin. Just to recap a little bit from last time, what we mean when we talk about quick fixes are when you take a one off or bandaid approach to your content strategy, you do some sort of a work around to get content out the door, usually on a tight deadline or under a constrained budget, and then that later can cascade into lots of problems down the road if you have done a quick fix instead of planning and doing things the right way. And where I want to start things off today, talking about how you can undo or avoid quick fixes, if your company decided to use a quick fix in the past, what are some reasons that you might need to change that now?

BS:     Well, I think one of the first things that you should be looking at is the amount of time your team is spending on overall tasks, to see exactly how much time is being spent fighting with, or otherwise futzing with, their content development tools. Are they going in and constantly having to reformat things? Are they constantly having to retag things? Are they fighting with the tool to get it to work the way they need it to? And looking at these types of things to figure out, do I have a problem with quick fixes? Did we implement things correctly? Are we using the tool the way we should be using the tool, and is the tool right in the first place?

GK:     Yeah, absolutely. And I think this kind of touches on the flip side of the scenario that we talked about in the previous episode, where we mentioned things like template abuse and tag abuse, and people going outside those parameters that you have defined in your structure or in your template and doing these one off quick fixes for formatting. So if you realize that you’re spending a whole lot of time on those kinds of things, then suddenly that’s not really a quick fix. That’s a very time consuming fix when you put all of those little individual quick fixes together. So if you realize that you’ve got a lot of writers doing that, then that can lead to something like a limitation down the road. If you realize, for example, “Hey, we really need to streamline templates that we have, or we need to introduce a new template or a new publishing output that is a lot more sleek and efficient than what we’ve already got,” and you’ve got writers all over the place breaking the existing templates, then suddenly they’re imposing a limitation unnecessarily on the tools that you have.

BS:     Yep. And we’ve been hearing a lot over the past several years about companies going through digital transformations and being able to essentially modernize their entire content set. And I want to say just putting it online because that’s not what digital transformation is all about. Yes, it’s a component. But one of the things that a lot of these companies are struggling with is that they’re looking to move to a more digital foothold on their content and where they need their content to go. And they’re taking a look at their entire legacy content set, and they’re finding out that they have millions of different Word files that are all using different formatting, different templates, if they’re using templates at all, several different content tools in play. They might have Word. They might have FrameMaker. They might have InDesign for some more higher designed outputs that they were producing.

BS:     They might have both RoboHelp and Flare in the mix because there were two different divisions of the company at the time and each one decided on their own tools to use, and they have different styles and templates and even different approaches to how they develop the content in the first place. So you start seeing all of these things where you have all of these different documents using a wide variety of conventions, and suddenly you need to be able to standardize this stuff so that you can start doing more intelligent things with your content and it makes it incredibly difficult to take that leap if everything’s a mess at the starting gate.

GK:     Yeah, of course. Absolutely. And that is a massive problem I think that I’ve seen in probably the majority of the projects I’ve worked on here at Scriptorium that… Especially when it’s factors outside of maybe the company’s overall control, if there has been something like a merger in the past, and you’ve had lots of disparate teams that suddenly are working together and they’ve all had their processes, then suddenly any of those teams who have employed a quick fix solution, that’s going to be multiplied when you’ve got all these different teams and all of their past histories of quick fixes working together. That’s when it becomes really important to look at what all these different teams are doing and streamline their processes and come up with a content strategy that brings everything together as it should be.

GK:     And I think that gets into the issue, not only of streamlining, but of scalability as well, if you need to scale your processes to a larger target audience, a larger market, or as you mentioned earlier, Bill, if you need to undergo a digital transformation and you need to deliver more intelligent content, content that is not only available online, but that is interactive or that’s personalized, then if you are hindered by all of these one off quick fixes that people have taken, it can be almost impossible to scale. And that’s when you’re looking at maybe a complete content overhaul at that point.

BS:     Yeah, and I do remember one client a while ago who decided that after looking at all the numbers and taking into account all the different documents they had in play, they needed to go ahead and rebrand: they renamed their company and rolled out a new logo, a new look, and a new feel for all their content. They did a lot of upfront analysis and came to the conclusion that it would be a lot easier to just fix it all, to basically press the pause button, fix it all, move it to… In this case, they moved to DITA, but move it to a single content format and then apply all of their branding changes using automated formatting. It was a lot cheaper and took a lot less time to do that than it would have been to go into every single document and update it by hand. And that speaks volumes.

GK:      And I’ve seen a few clients take a similar, but maybe not quite as quick approach where if they couldn’t press the pause button on everything, they at least did that one department at a time. So start in one place with DITA and then pull the next department in when they were ready and then so on and so forth. So kind of depending on the size of your company, your budget, your deadlines for different products and different content that comes from different departments, then that approach in phases or with a small starting point that expands outward might be a good idea to make it manageable as well. But it really all depends on how interconnected things are when you start, how interconnected they need to be by the end, and how that all interacts with your product release schedule.

BS:     And another consideration there is also if you happen to be merging teams or bringing on new teams, or if your team is growing, you’re bringing on new hires, it is very difficult for someone to figure out not only a new job or a new role, but also to figure out how to produce things when everything is formatted differently, when everything uses a different convention, when you have to know all these little details about how a particular deliverable comes together, because nothing is consistent and everything is done ad hoc. It becomes very difficult to get new people up and running in that environment.

GK:     Yeah. And that gets into some of the things we talked about on the previous episode with training and how I think that one of the things that we talked about is that a lack of training or a lack of documented knowledge can lead to this problem of these one off quick fixes just growing and growing. And then that perpetuates itself into this problem that any time a new hire comes on, it is very difficult to keep them trained if it was a lack of training that led to people making these mistakes before. So that’s where it becomes really imperative when you bring on new teams, whether it’s from a merger or whether it’s just expanding and hiring that you get all of your content systems streamlined and aligned across the organization and provide adequate training and ongoing training to prevent those ad hoc solutions that people were using before.

BS:     That’s great, and brings up another question here, which is types of approaches that you might take to start getting these quick fixes out of the way and start streamlining things.

GK:     Yeah, absolutely. One thing that you can do is just revisit your original content strategy if you had one, which hopefully you did. If you didn’t, then it’s time to start one. But if you had some content strategy and things maybe went off the rails, maybe there was some sort of major deadline pressure that prevented you from putting the solution in place that you really needed to, and you used a quick fix instead. Then once you get over that deadline, a question you can ask yourself is, “Okay, well now that we’re six months out or a year out from when we originally started planning and things went a different direction, which of our goals from back then are still relevant now, and how are these quick fix bandaid approaches that we took to get through this deadline impeding those original goals that we had?” And that can start to give you a path out of the weeds that you got yourself into.

BS:     Yeah, you definitely want to catch yourself before you start running too far in one direction and constantly look back and realign yourself with the goals of not only your content, but are they meeting business goals as well? This one-off thing, this screaming deadline that you were responding to, does it feed into those goals? And if it does, take a step back and see, “Okay, we had to do all of these quick fixes to get it out the door. Why did we have to make these changes? Were the decisions that we made when we started on this strategy sound, and do we need to revisit those as well?”

GK:     Yeah, absolutely. Another thing that you can do is look at the situation that you’re in now and do some evaluation and come up with an estimate for the effort it’s going to take to get out of the situation that you’re in with these quick fixes. So you’ll ask yourself questions like, “How much editing is it going to involve? Are we going to have to go in and make changes to a whole lot of documents? Are we going to need to do maybe an automated process to refactor them if it’s too much to do manually? Are there solutions that can make that process a little bit more efficient and more streamlined?” Because that’s the danger of going that quick fix route is a lot of times those fixes are introduced through manual processes. It’s through a single person making a one off judgment call here and there, and then those all add up.

GK:     So it’s really important to look at what people have done and where that’s left your content now, and then how big of a mess it is to clean up. And that can help you make some of the decisions that you need to make in terms of, do we need to focus more on a programmatic solution and getting an expert involved who can write a cleanup script to help with a lot of this, or is it going to be more worth our time and money to invest in actual human resources to clean this up? People who are going to go in and clean up every document. So that’s another thing that you can ask yourself to make sure that you get out of that mess as effectively as possible.
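
The evaluation Gretyl describes, sizing up how widespread the manual quick fixes are before choosing between a cleanup script and hand editing, can start with a simple audit. Here is a rough Python sketch under the assumption that quick fixes surface as inline formatting-override tags in the source files; the tag list, the `audit_quick_fixes` and `worst_offenders` names, and the threshold are all hypothetical and would need adapting to your actual markup:

```python
import re
from collections import Counter

# Assumed proxy for one-off formatting fixes: raw <b>, <i>, <u>, or <font>
# tags embedded in the content instead of semantic markup.
OVERRIDE_TAGS = re.compile(r"<(b|i|u|font)\b", re.IGNORECASE)

def audit_quick_fixes(doc_texts):
    """doc_texts: mapping of file name -> file content.
    Returns a Counter of override-tag occurrences per file."""
    counts = Counter()
    for name, text in doc_texts.items():
        counts[name] = len(OVERRIDE_TAGS.findall(text))
    return counts

def worst_offenders(counts, threshold=5):
    # Files at or above the threshold are candidates for scripted cleanup
    # rather than hand editing.
    return [name for name, n in counts.items() if n >= threshold]
```

Counts like these feed directly into the decision: a handful of offenders may be cheaper to fix by hand, while hundreds argue for investing in a cleanup script.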

BS:     And it’s also a good opportunity to take time to reassess just how widespread these quick fixes have become and how necessary a lot of the documents are to fix going forward. So if you have a case where you’ve been copying and pasting information all over the place, how many of these deliverables use the same content in a different way? And do you need to fix all of them? Let’s say you’re migrating to a different tool set. Do you need to migrate every single one of them? Or can you migrate one or a small handful of them and rebuild a lot of the other deliverables that stem from that content automatically?

GK:     Another thing that is really important to do while you’re evaluating the mess that you might’ve made with your content with these quick fixes is also look at what it’s going to take to get you into the solution or solutions that you should be using. So that might be things like new content development tools. It might just be improved processes with your existing tools. It might be some combination. And it’s important to look at that aspect and then everything that goes with it. So for example, what kind of training is going to be involved to make sure that you keep up those new processes and you don’t fall into the same traps that you fell into before with the quick fixes? There’s going to be a change management aspect to that as well, which I think goes hand in hand with training. Looking at why did people go to these quick fixes? What was it about that temptation or what was it about the necessity that may have led them down that path? And how do we put some kind of checks and balances in place and content governance in place to make sure that we don’t do that again?

BS:     So after all this evaluation and all this investigation, the next thing you want to do is plan, plan, plan, and make sure you get things nailed down that are causing the problems that lead to quick fixes, not just resolving the quick fixes themselves. A big part of your content strategy should be how requests come in, how the timelines are built, and what you’re responding to and how you’re responding to them in the first place. If a lot of your quick fixes are a result of someone in the organization coming to you with a screaming need, then that is something that needs to be addressed by your content strategy, even if the strategy basically is to get management involved and coming to some agreement on how those requests for content come in. The more you get your arms around how requests for content come in and how the content flows out, the better control you’re going to have over the content creation process itself.

GK:     Yeah, absolutely. And I think this is an interesting thing to me because a lot of the content strategies that we end up doing are the result of these quick fixes, and we get brought in to solve whatever those problems were that led to those quick fixes in the first place. So the silver lining to having done these quick fixes and gotten into a mess is that it really helps you see where you went wrong and where you need to go right when you’re going forward. You get a little bit of a template or a roadmap for avoiding those mistakes once you have made them. So it’s really important to take advantage of that and not to make those mistakes again.

BS:     Right. If you have the ability to collect any metrics on exactly how much time is spent dealing with quick fixes in your content workflows, that will go a long way also to helping you formulate a solution that will stick, because then you can get firm numbers to present to management to be able to enact some real change.

GK:     Yeah, exactly. We talked before in the previous episode about how much these quick fixes can really rack up costs over time. And if you collect the information and have the numbers to actually prove that that’s what’s happening, then there’s a much greater chance that somebody higher up in management or at the C level will realize that it’s a problem and do what needs to be done to stop it.

BS:     Right. I mean, if a lot of your time is spent essentially on churning rather than actually producing, then that is a productivity problem, and you can believe me that managers are very keen on identifying and solving productivity problems. And you want to make sure that those problems are solved the correct way, which is mitigating the need for these one off documents, mitigating the need for these last minute requests and being able to then focus on creating your content in a more structured way, whether you’re using structured authoring or not. So being able to use templates correctly, being able to use a proper workflow from content creation to review to publishing and so forth, and be able to use the tools the way you ideally need to use them.

GK:     Absolutely. So if you’re just starting out with a new content system or new content process, and you have not yet had the chance to fall into this pattern of using quick fixes, how do you avoid that?

BS:     Well, first I would take into account everything that was said before. Make sure that you have things documented, make sure the pain points are documented, and even for things that you aren’t currently doing incorrectly, make sure that you identify what not to do in a content plan as well. All of this information really does need to be funneled up to the managers or executives who are essentially owning this entire content development process.

GK:     Yeah, absolutely. And it’s really important to help people at that level who are not creating content and are not in the weeds of it, but they are the ones controlling your budget. They need to understand just how many problems these quick fixes can cause. How much cost it incurs over time, how many messes it creates that have to be cleaned up later. And they need to know that information so that they can weigh it against things like your deadlines and your schedules, because it’s all too tempting, I think, even for people at that management or executive level, since they aren’t the content creators, they can be easily swayed into saying, “Yeah, go ahead and do whatever needs to be done to get it out the door.”

GK:     But if you’ve made them understand that taking that approach is going to get you into a mess later, then they might be more likely to say, “No, let’s actually make sure we do this the right way, and if that means that I need to shift somebody’s responsibilities for a little while, so that you’ve got more resources for your content for this deadline, or that means if I need to bring in someone to help with training and get you up to speed to do things the right way, then that’s going to be worth putting those things in place.” So it’s just really important to make sure that the people who are in charge of the budget truly understand how it’s being spent so that they can help everybody else avoid those quick fix approaches.

BS:     Yep. And if they’re in charge of the budget, chances are they’re also in charge of a lot of the workflow within the higher level of the organization. So it might be that a lot of these screaming needs that come in at the last minute that are creating some of these ad hoc practices in your content development process, it might be that a lot of these deliverables were known high up at a very early stage, but for whatever reason, the information did not get down to the content development teams until someone from either sales or from tech support or someone else came running down saying, “Hey, we need this thing tomorrow. Can you stop what you’re doing and work on it? This is a high priority item.” So it’s to your advantage to make sure that you have management informed of not only where the quick fixes are happening and the problems that they’re causing, but also to discuss a lot of the workflow around them to clear the… Essentially be a linebacker and clear the path for you so you can hit the goal when you need to hit it.

GK:     Yeah, absolutely. Another thing that you can do to avoid these quick fixes, and we’ve touched on this a lot in this episode and the previous one, is provide adequate training. Don’t let your writers, your reviewers, anybody involved in content development get behind, because that ends up breeding resentment. And if you are introducing some sort of very different and very new content development process to your team, there is going to be a learning curve and there’s definitely a chance that people will be resistant to that learning curve, that they will say, “Why does my working life suddenly have to change so much and have to be so stressful?”

GK:     So support them through that learning curve. Make sure that they have the resources they need. That they don’t just have a one and done training session, but that they’ve got somebody they can continue to ask questions to whether it’s a consultant, whether it’s a dedicated resource in your organization, whether it is someone that works for the software vendor that makes your content tools. They need to have that open channel of communication where they can say, “I’ve been trained on this, but maybe I still don’t quite understand this one aspect or I’ve been through initial training but I think I need a little bit more robust training on this particular aspect of what I’m doing.” And make sure that they don’t fall through the cracks because that’s what’s going to lead them to say, “I don’t know how to do this, but I have to do this thing to get the document out the door, so I’m just going to use a quick fix.”

BS:     Yep. And it’s really important to make sure that this training is also targeted toward the type of work they’ll be doing and uses content that they’ll be developing. A lot of times we see teams that say, “Oh yeah, we were trained on using this particular tool.” And it turns out they’ve just gone through generic tool training. And as we all know, you can use, for example, Microsoft Word to produce anything from a letter to a full-blown manual and everything in between. It doesn’t necessarily help you if you’re only providing tool-level training. You have to be able to provide contextual, content-related training. So something that is tailored to the exact type of content that they’re going to be developing, perhaps even using their existing content in the training class so that they know exactly how they should be writing and when and where things should be applied a certain way. Which styles do you use in which instances? How do you structure a document? Which tags do you use in which cases? How does the publishing workflow work? Why don’t we select this one particular button or select this one particular option when we’re going to print something out or to convert it to HTML? It’s really important to have that targeted training, so it’s not just about the tool, but it’s actually relevant to the work they’ll be doing.

GK:     Yeah, absolutely. And I think it’s important too, along that same road, to think about whether there are going to eventually be content features or aspects of content development that you won’t use until later. So it’s important to think about training at different points in the content development journey that your writers are going through. One example I can think of is that one of the clients I worked with did basic authoring training when they first made their move to DITA, and they had not introduced any reusable content yet. They were still doing a lot of writing. They had not fully written out their documentation, but then as they went along and as they wrote that documentation, they had more and more content that they needed to reuse.

GK:     So they realized they needed additional training on DITA reuse mechanisms a couple of years down the road. We had gone through basics of things like what is a conref, what is a key, how do you set up reuse? But it’s a very different ball game to go through that generically and just touch on the highlights of it at an early stage where there’s no context for it, than it is to talk about down the road, “Okay, we have these pieces of content that we need to reuse in this way. How do we do it?” And that’s why it’s really important that you make your training ongoing and open to addressing new needs that pop up.

BS:     And that right there really speaks to how you roll out a content strategy or how you approach developing content with a content strategy in place. You want to have things staged, because you don’t want to try doing everything at once out of the gate because you’re going to get things wrong. You’re going to implement things incorrectly. You’re going to discover that what sounded like a good idea at the time doesn’t really work well. So you’re going to have to refactor a lot as you’re going along, and it really helps to have things buttoned up and streamlined so you can make these shifts as you hit these different milestones in your content strategy implementation, to be able to say, “Okay, we tried X, Y, and Z. X and Y worked great. Z was a catastrophic failure. We can’t allow that to happen again. Let’s stop, reassess, and let’s change things.”

BS:     And if your documents and your workflows are free of any ad hoc band-aid approaches, then it’s a lot easier to make that shift. If the content needs to be refactored, chances are you can probably do it programmatically at that point. If it turns out a particular tool isn’t working well, then it’s probably going to be a lot easier to up and move your content to a different tool or to implement a new tool in the tool chain that you have for publishing if everything is done consistently up to that tool’s point. The more you can get your arms around all of the pieces that go into your content creation and address each piece systematically in the process of implementing your content strategy, the easier it’s going to be to make these pivot points when you need to, when you find that a piece of the strategy just isn’t working.

GK:     Yeah, absolutely. And that’s why I think it’s really important when you are developing that strategy to, as you said, pace it out, have it in phases, have it in stages, and think about your short-term versus your long-term goals, and realize that those long-term goals might change over time, and almost certainly will change over time. I mean, you may have your overarching business goals stay the same, which is bring in more revenue, deliver content more quickly, and deliver better quality content to your customers, but the way you actually achieve that will almost certainly shift over time. And that’s because a lot of times there are unexpected things that happen. Emergencies, challenges, things that come up that you were not planning for. That’s why you build that flexibility into your strategy, saying here’s what we want to do in the short term, here’s what we want to do in the long term, and here’s the road to get there. We’ll probably take these steps, but it needs to be flexible, because you don’t know what kinds of things might come in and disrupt all of the plans that you had.

BS:     And let’s be honest, you’re going to have a need that is going to go outside of your established process. It’s almost a given that something’s going to come in, it’s going to be a high priority need in a very short period of time and you’re just going to need to get it done. At that point, you need to pivot. Don’t abandon your strategy, but take that one piece out of that stage and have a plan to put it back into whatever content workflow you have in place. So don’t just introduce ad hoc formatting and just assume that it’s going to be a one-off need, but actually plan for it to be an ad hoc process to get something out the door, and then have a plan for bringing it into the fold. Whether it’s six months out from delivery, whether it’s two years out from delivery, or whether it’s tomorrow, depending on how big of a need this is. But have that plan to essentially take a detour around the strategy while all the other content continues to follow the correct workflow.

GK:     Yeah, absolutely. And I think, to tie everything together, we’ve made this point with all of our other ones, but it’s really important to plan for those unexpected things, but also still keep all of your goals and your content life cycle in mind as you execute the strategy step by step. And that’s again why it’s so important to take this well-paced or well-phased approach. Start really small, maybe start with a proof of concept, a pilot project, something that’s low stakes to prove that what you are planning to do actually works, and then expand outward from there. That’s going to help you build in a lot more room for things to change and a lot more adaptability to those changes when they come up, if you keep things well paced, instead of trying to do a whole bunch of things at once.

GK:     And I think that aspect of biting off more than you can chew and trying to just go all the way into a new strategy with all of your content all at once can actually lead to more of those quick fixes, because you may get in the middle of transferring all of your content over from one system to another, or trying to scale way too quickly, and realize that you can’t do it on the deadlines that you have set, and then fall right back into that trap of quick fixes. So I think keeping that entire cycle of your content in mind and keeping that entire path of your strategy in mind, and really pacing it well, taking each step at a time, is a good way to not only avoid needing a quick fix, but if something unexpected does come up and you do have to have a quick fix, it makes it easier to address that and not let it get out of hand, and to bring it back into the fold of your content strategy without too many interruptions.

BS:     Yep. Slow and steady wins the race.

GK:     Absolutely. Well, I think we’re going to go ahead and wrap things up here, so thank you so much, Bill.

BS:     Thank you.

GK:     And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The true cost of quick fixes (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 33:32
The true cost of quick fixes (podcast, part 1) https://www.scriptorium.com/2020/07/the-true-cost-of-quick-fixes-podcast-part-1/ Mon, 13 Jul 2020 12:00:24 +0000 https://scriptorium.com/?p=19821 https://www.scriptorium.com/2020/07/the-true-cost-of-quick-fixes-podcast-part-1/#respond https://www.scriptorium.com/2020/07/the-true-cost-of-quick-fixes-podcast-part-1/feed/ 0 In episode 78 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow talk about the true cost of quick fixes in your content strategy.

“Even if a quick fix might save you some time or a little bit of upfront cost or upfront effort on planning, it’s almost always going to add costs in the long run.”

—Gretyl Kinsey

Related links: 

Twitter handles:

Transcript:

Gretyl Kinsey:     Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we’ll be talking about the true cost of quick fixes and common issues that might lead an organization to taking this kind of bandaid approach to content strategy. This is part one of a two-part podcast.

GK:     Hello and welcome everybody. I’m Gretyl Kinsey.

Bill Swallow:     Hi, I’m Bill Swallow.

GK:     And we’re going to be talking about quick fixes in your content strategy today, and how that can lead to all kinds of issues down the road. So I think the place to start is just talking about what we mean by quick fixes.

BS:     And that can be pretty much anything that doesn’t fit the greater plan. Doing things that kind of make things fit or responding to an immediate need with an ad-hoc approach to getting something done.

GK:     Yeah, absolutely. And I think we’ve both seen plenty of examples of this happen. Even if you’ve gotten a really solid plan together for your content strategy, there are oftentimes things that just pop up that do go outside of that plan. And so then it’s often really tempting to sort of apply this quick fix to get you through it. So what are some examples that you’ve seen of these kinds of quick fixes?

BS:     One that jumps right out is formatting abuse. So whether you’re working in structured content or not, ignoring any styles or any elements or whatever you’re using, and just kind of using whatever feels right to you in order to make things look or behave a certain way rather than following what the styles should be.

GK:     Right. And I’ve seen this, like you said, both in structured and unstructured content. And so from the structured side of things, that’s usually going to be a situation where you’ve got some sort of separation between your content itself and your formatting. But if you are used to working where you’ve got that control over the formatting, and then you suddenly don’t have that anymore when you go to structure, I’ve seen people do this tag abuse thing where they will use a tag in a way that is technically legal within the structure, but it is trying to control formatting. And then that can have all kinds of unintended consequences across your actual transformation processes that produce your output. But that’s just a very common thing that people will say, “Oh, I need a page break here, or I need a table to look like this here.” And they do something that’s just a tag abuse thing to get it out the door.
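To make the tag abuse idea concrete, here is a minimal illustrative sketch (the elements are standard DITA, but the snippet and its content are hypothetical, not from the discussion):

```xml
<!-- Tag abuse: technically valid DITA, used only to force formatting -->
<p><b>Warning:</b> Disconnect power before servicing.</p>

<!-- Semantic alternative: the markup says what the content is,
     and the stylesheet decides how a warning should look -->
<note type="warning">Disconnect power before servicing.</note>
```

In the first version, a bold run fakes the look of a warning; in the second, the output transform can style, number, or filter all warnings consistently.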

GK:     And from the unstructured point of view too, I’ve seen people do what I would call template abuse. And so that would be things like, for example, in InDesign, instead of using the predetermined styles, maybe applying one but then making a little tweak to it, like making the text italic or something, or making it a slightly different size, or not connecting text frames together when they should be connected together just because you’re trying to get something to lay out and look right on a page, but you’re actually making the whole document break. And same thing in FrameMaker as well, similar to what you might do in an InDesign template. If you’re in unstructured FrameMaker, I’ve seen people do that same thing where they’re ignoring the formatting that was built into that template and overriding it with lots of little formatting tweaks here and there just to get one page perfect. But then later, if somebody makes an update and it blows away all of these little formatting tweaks, then that person would have to go back and make them again. And so that’s when a quick fix stops being so quick.

BS:     Oh yeah. And there’s nothing quite like opening up a Word document and seeing everything tagged with “Normal” plus an asterisk. Especially when you’re trying to do a quick rebranding job and you realize that your four-page document is going to take you about eight hours to change. Especially if you’re changing fonts and sizes, colors, that type of thing. The more of these ad-hoc quick fixes you introduce, the more work you’re creating for yourself down the road.

GK:     Yeah, absolutely. Another thing that this is kind of more specific to structured content, and I think particularly to DITA, is something that we at Scriptorium call spaghetti reuse. And that is basically when instead of coming up with a reuse plan and putting all of your reusable content into warehouse topics and putting them all in one place where people know where to get them, instead what people will do is maybe conref in a piece of information ad-hoc from another topic. And then you suddenly get this tangled web of reuse that’s impossible to track. And that’s done as a quick fix because if you know that you should be reusing content, but you haven’t made a plan for it, and suddenly you need to get this content out, that might be a tempting thing to do, but then it’s, just like with the tag and template abuse, it’s just going to create a lot more problems later when somebody comes in and needs to fix it.

BS:     Oh yeah. If you’re working, especially creating new maps for new deliverables and you’re bringing topics in to build your map out and to build your document or your help system or whatever you’re producing out of DITA, once you start pulling those topics in, you’re going to find that you have all of these missing links all throughout your content to other topics that you didn’t want to include in your map file. And now you’re stuck either having to stop what you’re doing and create these warehouse topics so that you can do it correctly, or what I’ve also seen is that people just grab the topic that they need, dump it in the map, and set it to resource-only, just so they can resolve that conflict in a hurry. And that creates another issue down the road with more spaghetti reuse.

GK:     Absolutely. So that again, points to this idea that it’s always better to go ahead and build in that time up front to make that plan, instead of just going the quick fix route, because it’s not quick later when you have to go back and clean up the mess you’ve made. Something else that can also cause a problem like this, and this is more for if you’re working in an unstructured environment, is heavy use of copy and paste. And then if you do get into structure and you do have reuse capability, but you are in the habit of heavy copy and paste, still going ahead and doing that can be a real problem. That’s even worse than the spaghetti reuse if you’re in a structured content environment, like DITA, but you’re still copying and pasting everywhere and you’re not making use of that reuse potential that you now have.

BS:     Yep. And copying and pasting really creates two really fundamental and horrible problems. One is if you need to update that content, god knows how many places you’re going to have to go looking to fix that content. Even though it’s exactly the same in all places, you’re going to have to make the fix in all places. Likewise, on a localization angle, you’re just throwing money out the window if you constantly have stuff copied and pasted all around, especially if people are then modifying what’s been copied and pasted because they don’t particularly like the wording in this certain instance or they forgot to update it in one place. Then you start getting all of these fuzzy matches going into your localization workflow, and you’re throwing money at a problem that shouldn’t be there in the first place.

GK:     Absolutely. Another example of a quick fix is having multiple variants of the same output type to satisfy immediate needs. So for example, if you are generating PDF output, let’s say, from a collection of DITA topic files, maybe you realize, “Oh, I need a PDF with this particular cover variant and I need another PDF over here with this one for different audiences.” There are ways in your PDF transform to build things in where there’s a switch based on your audience or based on the product that that content goes with or what have you. But instead, maybe you decide that the quicker fix is to just basically copy over that transform and make that one little adjustment when you could have just had it all working in one single transform. And we’ve definitely seen that come up as an issue as well, and the problem there is that you are just kind of creating this ballooning effect of your outputs. So instead of having one output transformation that can do a variety of things, you’ve got multiple transforms for all these different variants when you didn’t really need it.
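The audience switch mentioned here is typically handled with conditional attributes plus a DITAVAL filter, so one transform serves every variant; a minimal sketch (attribute values and filenames are hypothetical):

```xml
<!-- In the map: mark the variant-specific cover material -->
<topicref href="cover-admin.dita" audience="admin"/>
<topicref href="cover-user.dita" audience="user"/>

<!-- admin.ditaval: applied at build time to produce the admin PDF;
     a second ditaval file flips the include/exclude values for users -->
<val>
  <prop att="audience" val="admin" action="include"/>
  <prop att="audience" val="user" action="exclude"/>
</val>
```

The transform itself stays single-sourced; only the small filter file varies per output.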

BS:     Oh yeah. And the same goes for on the unstructured side, especially with an InDesign and FrameMaker and other applications that allow you to use … Well, I’m going to use a FrameMaker term, but to use master pages for your layout.

GK:     Yes.

BS:     A lot of times I’ve seen people create multiple templates just to satisfy either different page sizes or different cover pages, as you mentioned Gretyl, and they ended up having to apply these templates over and over again to their files just to generate new output when they could go in and just select a different master page and allow the document to reflow into whatever that new layout needs to be.

GK:     Yeah. And again, this is just one of those things where a little bit of planning upfront could have gone a long way and saved you all of that trouble, and all of that ballooning effect of having all these different ways to produce your output when you could have gone with something much more efficient. And if you realized that that’s a problem and you need to clean it up, it’s kind of like all of these other quick fix examples, that you’ve made a mess that could have been avoidable, and now someone has to put in the time and the effort to go back in and clean that up.

BS:     Well now that we talked about some of these quick fixes that are out there, what are some common issues that might lead a company to implement a quick fix rather than do it the right way?

GK:     So one of the most common, and I think most obvious issues that we run into, is just pressure from deadlines and release schedules and things like that. You’ve got a product that has to go out the door by a certain date, or you’ve got an update to your product that has to go out by a certain date, you’ve got content that has to go out with that product. If you are localizing, you’ve got those deadlines too. So that is a big source of why people cave to the pressure of these quick fixes. Because if you are looking at a limited window of time where you’ve got a choice between, “Do I do things the right way or do I do things in a way that works, maybe not ideally, but still gets my product out the door on time?” Then that’s the one they’re going to pick. So it’s a tough situation. And again, it gets back to this idea of planning more up front, but sometimes there are scenarios where you just don’t have the time to do that properly. And so what generally will happen is a company will say, “Okay, well, it’s going to be just strictly from a cost benefit analysis perspective, better to do what we have to do to get it out the door and then go back and fix it later when we’re not under that deadline pressure.”

BS:     And you always have that extra time after a project to go back and rework and do it the right way, right?

GK:     Oh yeah, of course. No you don’t. That’s one of the pitfalls of this: you think, once we get over this one hurdle of this deadline, then we can go back and fix it, but there are usually other deadlines coming up. And even if you’ve got long cycles between your releases, if you’re not on that typical two-week sprint schedule that a lot of companies have, you still may have something unexpected that pops up. You may have somebody in a different department say, “Hey, we’re going to introduce a new project over here.” And guess what? All that time you thought you had to fix your mistake, your quick fix, is all gone. And so, this is really a tough problem that there’s not always a great solution to, but I think if you’ve got the time to plan up front, that’s really the best way to get around this, because once you kind of get in that quick fix mindset, it becomes a perpetuating cycle that’s very hard to break out of.

BS:     Yeah. And it snowballs to a point where once you do have to do something different with your content, whether it’s a rebranding effort, whether you’re switching tools, whether you’re doing whatever. If there’s something large scale that you need to change in your content, the more quick fixes that you have in there the tougher it’s going to be to work around them to get anything done on that larger effort.

GK:     Yeah. The tougher and the more expensive, because if you’re doing something like converting your content from one format to another, or rebranding or making some sort of large scale terminology update, anything like that that affects basically all of your content set, then the more consistent your content is, and the less full of these kinds of little quick fix tweaks that it has, then it’s going to be a lot more of a smooth process to convert it or change it, update it, whatever. If you are faced with deadline pressure and it’s absolutely imperative that you have to change the content or convert it or whatever, then you’re looking at a pretty big price tag because you’re not only making a lot of major updates and kind of getting all of those consequences of your quick fixes out of there. But you’re also having to do that under a tight deadline, which is probably what got you in the quick fix boat in the first place. So that’s something to keep in mind is that there is going to be … Doing anything on a quick deadline, that usually means there’s going to be a bigger cost for that turnaround.

BS:     Absolutely. And speaking of cost, cost comes down to two things: time and money. Either you don’t have the time, personnel-wise, to get something done right the first time, or you just don’t have the funding to go back and revisit. Or, let’s say you’re doing something ad-hoc in a tool that is not really well designed for your needs, and you don’t have the funding to actually buy the tool that you do need, the one that would eliminate all this rework that you’re doing constantly. It can be difficult to either get that budget or secure that time to do things in a more efficient manner.

GK:     Yeah, absolutely. So if you do have that limited funding or limited resources to really build out the ideal solution that you want, it makes it kind of hard to think about your planning. And I think this is an issue that I’ve seen with a few clients, or maybe more than a few, where one of the challenges they ran into was just a lack of long-term planning or the ability to do long-term planning.

GK:     Whenever we’ve come in to help companies develop a strategy, one of the things we do is to encourage them to not just focus on the present, but also the future, and think about what are your goals for right now versus what are your goals for two, three, five, 10 years down the line? But if they are working with a really limited budget, really limited resources, and even kind of an unpredictability factor in how much budget they’re going to get from year to year, that can make it really difficult to ask and answer that question of where might you be five years down the road, and what are your ultimate business goals?

GK:     So I think it’s important to be flexible in that area and still plan for it as best you can so that you can avoid falling into this rabbit hole of making quick fixes and just having that become your only strategy. But it does kind of become understandable when you look at this problem of limited funding and limited resources, why so many companies end up taking that quick fix route.

BS:     Another area we can get a lot of quick fixes creeping in are if your writers aren’t properly trained to use the tools that they need to use to get the job done, or if they are not well trained in the types of publishing that need to happen from those tools.

GK:     Yeah, absolutely. And I think, in some cases, it’s not always just a lack of training, but it can sometimes be an active avoidance of a steep learning curve, and that’s where I think it’s really important. And we’ve talked about this in some of our other podcasts and blog posts, but it is so important to customize your training to what your writers and other content creators need to make sure that nobody is getting left behind, to make sure that people are not feeling resentful about the changes that are now in their kind of day-to-day working life, because that is a really big concern, and when you’ve got this whole different way of creating content, it can be very difficult to learn.

GK:     So if you want to avoid people coming up with quick fixes with work arounds, with some sort of way of doing things that they find easy but that isn’t necessarily correct, then it’s really important to make sure that they get not just a one size fits all training, but custom training, maybe ongoing training or support to make sure that they can do their jobs correctly.

BS:     And in addition to that, if you find, either after training or otherwise, that there are enough writers that refuse to do it the right way for whatever reason, or that cannot do it the right way for whatever reason, you probably picked the wrong tools. In which case, you’re going to have to go back. That also speaks to the long-term planning: you chose a tool based on its capabilities and not necessarily the wants or needs of your staff.

GK:     Yeah. And that’s something as well that I’ve seen, that even the tool selection itself can sometimes have a quick fix approach that gets you in trouble later, and that has a lot of consequences down the road. So that’s where, kind of like we talked about previously with long-term planning, whatever limitations you may have with budget and resources, it is still really important to think about your future needs and to make sure that your tools and your strategy are going to truly serve your company’s business goals and make the actual work that your writers and content creators are doing more efficient.

GK:     And one other area that kind of ties into this idea of training and knowledge is that we’ve seen some people, specifically in DITA workflows, apply quick fixes just because they didn’t know about a DITA feature that would have let you avoid it. There is a lot of very particular weird twisty stuff out there in DITA that is kind of considered more advanced, and a lot of times if you just have that basic level of training in DITA, you wouldn’t know about it. And in particular, I think there are some of the reuse mechanisms that are available, that a lot of times people have come up with some sort of a workaround just because they didn’t know it existed. Things like conref push for instance, and in some cases, even just the use of keys or the use of conkeyrefs. People will come up with some sort of quick fix to solve a problem that there was already a DITA mechanism in place that would have just automatically solved it. And so again, that’s where getting into not just training, but looking at what kinds of things you need to do with your content, talking those through and making sure that you haven’t overlooked some aspect of DITA that could solve that problem can help you avoid those quick fixes.
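As a sketch of the key-based mechanisms mentioned here (map title, key name, filename, and element ID are all hypothetical):

```xml
<!-- In the map: a keydef makes the reuse source addressable by name -->
<map>
  <title>Product guide</title>
  <keydef keys="warehouse" href="warehouse-notes.dita"/>
</map>

<!-- In a topic: conkeyref resolves through the key, so the source file
     can move or change in one place without touching every reference -->
<note conkeyref="warehouse/power_warning"/>
```

This is the kind of built-in indirection that writers often work around with copy and paste simply because they never learned it exists.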

BS:     Oh yeah. I’ve seen a lot of cases where people even use tables for formatting in DITA in a specific way, which gives it a completely non-semantic markup, whereas they probably would have wanted to use maybe a definition list or maybe they’d use a glossentry or something like that that gives it a little bit more meaning as to the context of what your content is rather than just slapping it in a table and removing all borders.
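For example, the two approaches Bill contrasts might look like this (the content is illustrative):

```xml
<!-- Formatting-driven: a borderless table carries no meaning -->
<table frame="none">
  <tgroup cols="2">
    <tbody>
      <row><entry>conref</entry><entry>A reference to a reusable element.</entry></row>
    </tbody>
  </tgroup>
</table>

<!-- Semantic: a definition list says what the content actually is -->
<dl>
  <dlentry>
    <dt>conref</dt>
    <dd>A reference to a reusable element.</dd>
  </dlentry>
</dl>
```

Both render as a two-column layout by default, but only the second tells downstream processes that these are terms and definitions.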

GK:     Yeah. And that’s kind of a holdover, I think, from folks who have been trained in more desktop publishing oriented content development processes. Then they get into DITA, and they don’t realize that there are all of these features of structure that would let them accomplish something that previously they would have had to use maybe a table or some other kind of weird formatting to achieve. Now they don’t have to do it that way, but they may just not know, and they may be going back to things that they are familiar with if they haven’t gotten that proper training.

GK:     So I think to wrap up, we’ve already touched a little bit on this, but I want to talk about the consequences of relying on quick fixes. And I think the two major ones that we’ve touched on a few different times throughout this podcast, one is that it makes a huge mess, and eventually when there’s some sort of pressing need to have things set up the right way, then you have to go back in and clean up that mess if you have relied on quick fixes.

BS:     Oh yeah. And that can take ages depending on how much content you have.

GK:     Absolutely. And then of course, the other big consequence that we’ve mentioned as well is that it adds cost in the long run. So even if a quick fix might save you some time or might save you a little bit of upfront cost or upfront effort on planning, it’s almost always going to add costs in the long run to do a quick fix. Because even if you just start with one small, quick fix now, as that gets expanded and propagated across all of your content over time, that is going to just really blow up that one small, quick fix into an expansion of quick fixes everywhere. And so then if you have to clean that up later, that cost is going to be huge, and it’s something that could have been avoided upfront if you had not gone that quick fix route.

BS:     Absolutely. And to add one more thing in here, you could be doing everything right, you could be planning for the long term, you could have carefully chosen your tools, trained your team, everyone’s following exactly how they need to be working, and then a request comes in for a one off document or a one off deliverable that’s never going to be used again. And you decide to throw caution to the wind and just get it done quick and dirty. And then suddenly that one off document becomes something that you carry forward with you for years and years and years updating and so forth. And if you do it wrong the first time, or if you make those quick fixes and do some ad-hoc formatting or whatever else, you then have to carry that forward. And it becomes more and more difficult to get that one off document that somehow has turned into a sustained need, which they usually do. It’s harder to bring that back into the fold of the other things that you might be doing the way you need to.

GK:     Yeah, absolutely. And that problem can become even more of a headache when that one-off document not only has to be carried forward, but maybe becomes the basis for a whole lot of other documents. You develop new products and they go, “Oh, this document structure worked really well. Let’s take that to five, six, ten different products.” Now a mistake you made has really blown up and become the standard, and getting that mistake out is going to be a nightmare.

BS:     Or, “Oh, they’re just release notes. Don’t worry about it.”

GK:     So we’re going to wrap this up here. This is going to be a two-part podcast, and the next one will focus on the solutions to quick fixes. In this one we’ve talked a lot about the problems, but in the next one we want to focus on the positive side: how you can avoid quick fixes in the first place, and if you do end up making them, how you can solve them with as little of a headache as possible. So thank you, Bill, for joining me today.

BS:     Thank you.

GK:     And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit scriptorium.com, or check the show notes for relevant links.

 

The post The true cost of quick fixes (podcast, part 1) appeared first on Scriptorium.

Content reuse: different industries, same problems (podcast)
https://www.scriptorium.com/2020/06/content-reuse-across-industries-podcast/
Mon, 15 Jun 2020 12:00:12 +0000

In episode 77 of The Content Strategy Experts podcast, Alan Pringle talks with Chris Hill of DCL about content reuse and what it looks like across different industries.

“You really have to start seeing content creation as a collaboration and build trust between the people who create content.”

—Chris Hill

Transcript:

Alan Pringle:     Welcome to The Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we take a look at content reuse with special guest Chris Hill of DCL. Hi everybody, I am Alan Pringle, and today we have a guest on the podcast: Chris Hill from DCL. Hi Chris.

Chris Hill:     Hi Alan, good to talk to you.

AP:     Yeah, it’s good to talk to you as well. Today, we are going to talk about content reuse and what that looks like across different industries. And the first thing I want to ask you, Chris, is why should people even care about reuse from say the executive who has departments that create and distribute content to the content creators themselves?

CH:     Yeah, that’s a good question, and it’s one that’s evolved quite a lot over the last 20 years as we’ve moved more and more content to formats that support reuse. Really, the critical thing about content is that there’s a cost to managing it, regardless of how you do it, and you can think of every piece of content as an expense. As you build up more and more content, the expense rises because you have more cost to manage it, to find it, to dig through it, to decide what’s relevant. It slowly builds to the point where dealing with larger and larger volumes of content becomes daunting. Content reuse really came about to help control that.

CH:     And when we see documentation that has similar procedures or similar warnings or similar boilerplate text, whether it’s a copyright statement or something else, you need to keep these things consistent. Your users, the consumers of your content, benefit from reuse in that you create a consistency in the content that’s reliable and that won’t lead to confusion about what you’re trying to say. The creators themselves are often responsible for delivering that quality, consistent content to the users, and so a reuse-oriented approach does a great deal to help you control content and make sure it is consistent and accurate.

CH:     If you have a lot of duplicated content and I find out that there’s a problem with a piece of it, or something needs to be updated, I’m suddenly faced with a huge search task of digging through everything to find where that content was used. If I’m using a real reuse strategy, that content should only appear once. So if I need to update it, I can do it accurately by going to the single source and knowing the change is reflected in all of the places where that content appears. That’s from a user and maybe a creator level. Now, sometimes management might say to themselves, well, I don’t really care. I’ll pay someone to do that work. It costs a lot to move my content to a content management system, so why should I? I’ll just hire another person to do searches. And that is an approach that a lot of people take.

AP:     But that’s almost the inverse of death by a thousand cuts. It’s this cumulative effect of layer upon layer upon layer, where you just keep throwing people at something when maybe technology might be a better solution.

CH:     Exactly. And it might be fine to throw people at it for the first few years, but if you become successful, or your product family grows if you’re a product company, or you expand your services if you’re offering a service, then, like you said, it’s death by a thousand cuts. It slowly builds to a level where suddenly you’re overwhelmed by any kind of content update. And you can usually see that in organizations, because what you’ll find is that the content becomes a drag on the agility of the organization. You say, okay, we’re going to release a new product or a new version of our product, but when will the user guide be updated? If that’s always way down the line, always a drag, there’s a good chance something is going wrong in there that reuse might be part of the solution for.

AP:     You mentioned the word control a little bit earlier, and that kind of stuck in my head because I have heard in the past from content creators, something along the lines of, “Well, my version of this stuff is better. So I’m just going to use my stuff.” How do you deal with that kind of mindset when you’re talking about a bigger picture reuse strategy?

CH:     Yeah. That’s always a challenge. I think just about anyone has a lot of pride in their work, whether you’re a writer or a programmer. I used to be a programmer, and when somebody would say somebody’s already written this piece of code, my initial instinct was, “Well, I don’t know how good that code is. I think I’ll rewrite it.” Right?

AP:     Exactly. Yeah, exactly.

CH:     And I think content creators have similar pride in their work. What’s important there, I think, is that you’ve got a couple of things to address at the organizational level. You really have to start seeing content creation as a collaboration and build trust between the people who create content, and make sure they understand each other and what they can do for each other. Rewriting a piece of content that’s perfectly acceptable really doesn’t benefit the user in a meaningful way. A lot of times we might think we wrote it better the second time, but wouldn’t it be an even better solution, if there is a problem with the existing content, to rewrite or update that existing content so that all of the documents and all of the content we produce could reflect that improvement? Rewriting it myself for my own manual might make my manual a little better than someone else’s, but at the end of the day, it pays from an organizational perspective to make sure that everything is written to the best level we can.

AP:     Sure. Now, I know DCL works with a lot of different industries. Do you see similar or different pain points and struggles based on a particular industry type when it comes to reuse?

CH:     Really, a lot of it overlaps. I look at a lot of different industries’ content, and the errors are all the same in a general sense. For example, one of my customers makes conveyor belts for baggage handling and things like that. When I look at their manuals, I don’t know what half of it’s about, but I do see the same exact errors and the same exact inconsistencies in their content as, say, somebody who’s writing a journal, maybe a medical journal. You’ll see inconsistent phrases. Maybe somebody refers to their product in a certain way, and another writer refers to it in a slightly different way. Both of those ways may be valid, but they could lead to confusion on the part of the user when those pieces come together, whether that’s a product name or a disease. I’ve seen medical journals refer to a disease with two different names in the same article sometimes. You wonder about that, and those things are things you need to look for, because they’re areas where somebody less knowledgeable about the topic might be confused.

AP:     Yeah, regardless of industry or content type, consistency, I’m assuming, is something you really want to strive for, regardless of where the content is coming from.

CH:     Yeah, it’s all about clarity to the end user, whoever consumes the content. We always have to think: the person reading my content is generally not going to be as knowledgeable about the subject as I am. So as I write, I have to really think in terms of somebody who’s coming to this content or this subject for the first time. They need that consistency to help remove some of the hurdles to mastering the information. If you have a lot of inconsistency in the way you talk about or refer to things, that’s just one more hurdle in the way of me really understanding what you’re trying to tell me.

AP:     And I think your point about thinking about the person who’s consuming this content also addresses some of the ownership issues we were talking about earlier. I don’t know if selfishness is the right word, but there’s this idea that this content is mine. It’s really not yours. It belongs to the people who are reading it.

CH:     That’s a great attitude to take. I think it’s a tough one sometimes.

AP:     Oh absolutely, I agree.

CH:     But it’s a great attitude, and if you can get your organization there, it’s so much the better. I used to work in some more content-creation-focused jobs, and one of the things I always tried to do in a meeting, when we’d have a disagreement over content or how to write something or what to write about, was focus the discussion on the users. If you keep your focus there, I think that should be your North Star as you work through these issues.

AP:     Sure, and I can see it can also defuse some tension when you’re talking about content that will eventually be shared, or should be shared.

CH:     True. That can sort of be the bridge between you and somebody that you may have some disagreements with.

AP:     Well, speaking of disagreements, what are some of the horror stories you can share about reuse from going into organizations? You don’t have to name names, of course, but what are some of the really horrifying things you’ve seen that you were able to help your clients fix?

CH:     It’s really interesting when you go look at a lot of content, especially, I think, because I’m often coming into it without a great deal of subject matter expertise, because again, we work with so many different industries.

AP:     Sure.

CH:     I mean, what do I know about luggage conveyor belts? Or what do I know about medical procedures? Not a lot is the answer. But when I look at their content, I can really see things that they’ll often miss. They’re often surprised when I come in, look, and say, “Well, you obviously copied this manual from this manual, and then to this manual,” because I can almost tell from the changes that that’s how they’re operating. And that’s often the case, especially in a lot of manufacturing companies: they start by copying an existing, close thing, and then they just edit the parts they need to edit.

AP:     I hear that all the time, all the time.

CH:     Yeah, it seems to be an easy way to work. I totally understand why you want to do that.

AP:     Sure.

CH:     And in the old days, that’s all you could do. I mean, you didn’t have a lot of reuse options back in the eighties and nineties, unless you were IBM or something.

AP:     Right.

CH:     So it’s totally understandable that that’s how you’ll work. And that’s also how we learned to work in our personal lives. None of us have set up content management systems in our homes, as far as I know.

AP:     I hope not.

CH:     I don’t have my IT department downstairs, maintaining my files for me. So how do I work? Well, that’s literally how I work. I mean, I’m a reuse person and yet in my personal life, I’m not afraid to say that I will take a document that I’ve already written and revise it a little bit for some other purpose.

AP:     Sure.

CH:     But what happens if you do that at an organizational level is that those two duplicates then take on lives of their own. They’re really a fork in the road. Most of the content you and I deal with personally doesn’t matter too much if it’s out of sync a little bit; it’s like, “Okay, well, we’ll get over it.” But if I’m writing a space shuttle manual, or even a luggage conveyor belt manual, safety issues come in. If I find out there’s a safety problem and I’ve got to revise part of the documentation, and that documentation has been duplicated in dozens of places I don’t know about, and I’m not very good at doing an exhaustive search, I may continue to expose my users to incorrect or inconsistent information that could become a real liability.

AP:     A legal, costly, financial liability.

CH:     Absolutely. The other area where this often comes up, and I don’t know if this is a horror story, is translation. Companies will start maybe in the U.S., or sometimes a Canadian company will start in a couple of languages like French and English, but they’ll start with a very narrow band of their user base. If they achieve success, their user base expands. So if I take my company global, all of a sudden I’ve got all kinds of other issues with my content, and whether the content is in the product or in any documentation I provide, there are laws in every country about how that gets delivered and what languages it gets delivered in.

CH:     So I might be perfectly content using copy and paste and starting from an existing document for my English-language content, but suddenly I move into France and have to add French to the mix, or I move into Germany and now the German language is on the table. As I keep doing this, I think it quickly becomes evident that you can’t hope to manage not only copies of content, but also the language variations of that content, very easily using that copying process.

AP:     Sure. Because once again, you have layers that increase exponentially every time you add a new version of whatever it is, or a new language, to the mix. So very rapidly, I can see it getting out of control.

CH:     Yeah, it does, and that’s a big horror story at a lot of companies. A lot of companies come to us to talk reuse because they are going international, or they have gone international and suddenly they’ve got this nightmare of stuff. Translation is very expensive. The first time I get a manual translated, there’s just a fixed cost; all those words have to be turned into German or whatever. The next time I go back to that manual, if I have a way of doing reuse, I can break the manual up into parts and just keep track of which parts have changed, instead of retranslating the whole manual a second time.

CH:     And that can have a really dramatic effect on the cost and the velocity with which you can produce content internationally. If you have a reuse strategy where only the reused components that changed have to be retranslated, that can be very significant to an organization. This is where management starts to perk up their ears, because they’ll start asking, how much money can we save on translation? Or how much faster can we get those translations done if we use this approach? Those are often the real big pain points that an organization comes to us with.
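To make the translation economics Chris describes concrete, here is a minimal back-of-the-envelope sketch. All figures (manual size, change rate, per-word rate) are invented placeholders, not real industry rates:

```python
# Hypothetical comparison of retranslation costs with and without reuse.
# Word count, change rate, and per-word rate are made-up assumptions.

def retranslation_cost(total_words, changed_fraction, rate_per_word, reuse):
    """Cost of updating one translated manual for one target language."""
    words = total_words * changed_fraction if reuse else total_words
    return words * rate_per_word

manual_words = 50_000   # size of one manual
changed = 0.10          # 10% of the content changed this release
rate = 0.20             # cost per translated word

print(retranslation_cost(manual_words, changed, rate, reuse=False))  # 10000.0
print(retranslation_cost(manual_words, changed, rate, reuse=True))   # 1000.0
```

With reuse tracking, only the changed components go back to translation, so the update cost per language drops in proportion to how little actually changed.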

AP:     I know from past experience doing copying and pasting of my own (yes, I have done it; it’s been a long time, but I have done it) that it’s easy to end up with content that’s a near match: almost the same content, but a word or two is different. So from a reuse point of view, what kinds of matches are there? Because there’s got to be some variety in how you can identify and track them, all the way from absolutely identical to fuzzy, kind-of-the-same.

CH:     Yeah, and that’s really where it gets complicated if you’re using that copy-paste strategy. If I take an existing manual and maybe I don’t like the order of some of the phrases in the introduction, I might move a couple of sentences around. Maybe I’m not really changing the meaning; I haven’t really changed much. I’m just aesthetically making some modifications because I like it better that way.

AP:     Right.

CH:     Well, suddenly it’s very hard to do searches to find that stuff. If there was an error in, say, a paragraph, and I need to go look for that paragraph everywhere it’s been duplicated, it can be incredibly difficult to find. Fuzzy matching is something that is very hard to do in a traditional tool. You can do wildcard searches, say in Windows if you’re looking at a shared directory, or in most content management systems, but those tools really have a hard time when the meaning is mostly the same and a lot of the words are the same, but they might be in different orders. It’s almost impossible for a regular person to write a search for that. You have to really get into regular expression writing, and even the experts at that can’t really address those fuzzy matches very well, because there are just so many variations.
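To illustrate the limits of exact search that Chris mentions, here is a minimal sketch using Python’s standard-library difflib. The sentences are invented, and real duplicate-detection tools are far more sophisticated than a single similarity ratio:

```python
from difflib import SequenceMatcher

original = "Ensure the conveyor is powered off before servicing."
edited = "Make sure the conveyor is powered off before you service it."

# An exact or substring search misses the reworded sentence entirely.
print(original in edited)   # False

# A character-level similarity ratio still flags it as a near match.
ratio = SequenceMatcher(None, original.lower(), edited.lower()).ratio()
print(ratio > 0.6)          # True: likely the same content, reworded
```

This is why fuzzy matching needs dedicated algorithms rather than wildcards or regular expressions: the similarity is in the overlap, not in any fixed pattern.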

AP:     Right. And the people you’re talking about, a lot of the time, are content creators, not programmers. They may not have that in-depth knowledge of how to do regular expressions and other kinds of searches to really find that stuff.

CH:     Yeah. And a lot of times what you see then, to go back to our horror stories, is… I’m amazed at how many organizations rely on, I always say, one old guy, but it could be any one person who is intimately familiar with everything, the person everyone goes to and says, “We have to fix this.” And they’ll go, “Oh, this is in this, this, and this manual. Oh, did you look there? Because it’s probably in there.” That kind of reliance is very dangerous to an organization.

AP:     I have seen exactly what you were talking about, in a manufacturing firm in particular. Yes, I know exactly the type of person you’re talking about. He or she has been there forever and knows where everything is. It’s this huge, vast store of domain knowledge they’ve got tucked away in their heads, but they are usually approaching retirement age. Very dangerous indeed.

CH:     Yep.

AP:     So let’s move beyond that. We know the horror stories, and we’ve got some ideas of how to fix them. But once you know that you’ve got reuse and you’ve identified it, what kinds of things do you have to do to really get a return on investment? Because merely identifying that reuse is probably not enough.

CH:     Right. So in the steps that follow, generally, you’re going to have to find a framework or platform on which to build a reuse strategy. It generally is not possible, or sufficient, to just say, “I’m going to try to make reusable components on a file system.” There are just too many limitations. That’s when you start to get into the area of content management. And we’re kind of lucky today compared to, say, 15 years ago, in that there are lots and lots of content management solutions out there that support reuse. They’re better than they’ve ever been, and there are more options than ever. Some of them are cloud-based, and you can get into them at a pretty reasonable monthly fee to start with and then build your way up if you need to. Others are deployed content management systems that you bring into your organization and your IT department manages, if that’s your approach.

CH:     But usually, once you’ve identified the need for reuse, that’s the next stage of the conversation. And the reason why you generally need to do the reuse analysis first is that these content management systems are not free. Some of them are quite expensive, and depending on your needs, it may be worth making an investment like that. But to make that case, you really have to look at all of the ways it’s going to improve the organization, and in a lot of the work I run into, reuse tends to be at the heart of that. To be able to go to your management and say, “I want this much a month in licenses,” or, “I want this much to deploy some software solution,” you really have to come with some metrics. And knowing where all the duplication is in your existing content can really help you put together those metrics.

CH:     You can put some estimated hourly or dollar-figure costs on each piece of content and its changes. You can talk about the time it took you to produce the next version of a manual, or an updated version of a manual, and come up with some ballpark figures to work with as far as the cost savings and efficiency improvements these tools might bring.
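A back-of-the-envelope version of that metrics argument might look like the sketch below. Every number is a made-up placeholder; you would substitute figures from your own duplication analysis:

```python
# Hypothetical annual cost of maintaining duplicated content instead of reusing it.
duplicated_words = 120_000         # words flagged as duplicated across manuals
writing_cost_per_word = 0.30       # loaded cost to maintain a word of source
translation_cost_per_word = 0.20   # cost per word, per target language
languages = 4                      # number of translation targets

maintenance = duplicated_words * writing_cost_per_word                   # 36000.0
translation = duplicated_words * translation_cost_per_word * languages   # 96000.0

print(maintenance + translation)   # 132000.0
```

Even rough numbers like these turn “it’ll make our lives easier” into a dollar figure management can weigh against the cost of a content management system.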

AP:     I think what you’ve pointed out is very important: you are not going to have a lot of luck going up the management chain saying, “I need this,” without showing ROI. You’ve got to have something that shows how that system is going to be paid for. It may be over a period of time, but you have to show how your return on investment is going to pay for this new technology. Otherwise, what’s the point?

CH:     Exactly. Yeah. I’ve seen teams, and I’ve been part of teams, where we went to management to ask for something, and the only real thing we had at the end of the day, if you summed up our argument, was that it’ll make our lives easier. I learned very early in my career that management doesn’t always care about making my life easier. They might even say, “I’d rather just give you a raise than make your life easier,” or “I’ll hire you some help,” because these content management systems can seem daunting. It’s like, why should we change everything we’ve been doing for the last 40 years? We’ve been doing it this way; I don’t understand why we need to change it; we’ll just get you some help. So making that bigger case, talking about some of these reuse issues and about content cost and velocity, those are all things you can start to put numbers on, so that you’re not just going to management with “make my life easier.”

CH:     Right. You’ve got something fairly objective, instead of it being all about you and how it will make your life easier.

AP:     Yep. Yep. So for people who are thinking about reuse, are there some common places to start looking for duplicated information, some low-hanging fruit, if you will?

CH:     Well, almost always you have copyright statements, right? And how many times do you find a copyright statement that’s out of date or inconsistent? A lot. I can usually look at those and very quickly see, okay, there’s a reuse issue here. Then, depending on what industry you’re in, we see a lot of background information and introductory paragraphs; those kinds of things often have a lot of overlapping subject matter. And again, you’re not looking for exact duplication all the time; often you’re looking for the same objective for the piece of content. Maybe this content is to familiarize you with some process that our equipment performs, so the background information might show up in several places. Those are areas where reuse is really easy to find.

CH:     Another thing that’s easy to find is product variations. In the example of the conveyor belt company, there’s a straight conveyor belt, there’s a curved conveyor belt, there’s a conveyor belt that moves at a different speed. Those may have a lot of the same parts; they might be almost exactly the same, but they have different manuals. Usually, you know from your products where those things occur.

CH:     One of the things we did, and this is really my role at DCL, is I was hired on as product manager for one of the products we sell that looks for duplicated content. It wasn’t originally a product. Well before I joined, it was part of the conversion process: someone would come to DCL and say, we need to move our content out of Word, or out of FrameMaker, or out of whatever tool we’re using, into a format like DITA or XML that our content management system can use. One of the first steps is to ask, “Well, where are those reusable pieces?” And we used to do that analysis by hand and throw armies of people at it.

AP:     Wow.

CH:     But over time, they evolved a product to do that, and that’s the product I manage, called Harmonizer. That product has improved over the years because of all the breakthroughs in natural language processing, artificial intelligence, and machine learning. All those fields have given us a lot of algorithms and approaches for finding fuzzy matches, those near matches, or even matches of content that you’d never find by hand, that a person would never see. When we run a bunch of documents through this tool, I constantly see it find things where people have rearranged entire phrases into different orders and moved sentences around, and it still picks them up as a near match to something else.

CH:     And when you first look at it, if you were just scanning the page, you’d miss it, because it doesn’t look anything like the original at first glance. Then when you read it, you go, “Oh, those are the same.” Somebody really rewrote this, maybe in a better way, but didn’t rewrite the original one. So there are some tools now emerging that can help you do some of this stuff.
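One simple order-insensitive technique behind the idea Chris describes is comparing sentences as sets of words (Jaccard similarity). This is only a toy sketch with invented sentences; a tool like Harmonizer uses far more sophisticated NLP, but it shows why a rearranged sentence can still register as a match:

```python
import re

def jaccard(a: str, b: str) -> float:
    """Similarity of two sentences based on shared words, ignoring order."""
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / len(wa | wb)

s1 = "Remove the guard panel and inspect the drive belt for wear."
s2 = "Inspect the drive belt for wear and remove the guard panel."

print(jaccard(s1, s2))   # 1.0: same words, different order
```

Because the comparison ignores word order entirely, sentences that a page-by-page scan (or an exact search) would treat as unrelated score as identical here.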

AP:     Well, those sound very helpful, and we’ll be sure to include a link to the Harmonizer tool in the show notes so people can learn more about it. And with those recommendations, I think we’re going to leave it at that. So, Chris, thank you very much. This was a great conversation.

CH:     Yeah, I really enjoyed it. Thanks Alan.

AP:     Thank you. Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content reuse: different industries, same problems (podcast) appeared first on Scriptorium.

Moving to structured content: Expectations vs. reality (podcast)
https://www.scriptorium.com/2020/05/moving-to-structured-content-expectations-vs-reality-podcast/
Mon, 18 May 2020 13:30:18 +0000

In episode 76 of The Content Strategy Experts podcast, Elizabeth Patterson and Alan Pringle talk about expectations versus realities of tools when moving to smart structured content.

“You can have different people using different tools and still pour all of the content into the single content management system. People connect to it differently based on the authoring tool that they prefer, and what works best for them.”

—Alan Pringle

Transcript:

EP:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

EP:     In this episode, we talk about expectations versus realities of tools when moving to smart structured content. Hi, I’m Elizabeth Patterson.

AP:     And I’m Alan Pringle.

EP:     And I want to get things started by just having a brief definition of what structured content is.

AP:     Smart structured content is a content workflow that lets you define and enforce a very specific, consistent organization of your content. It also captures some intelligence about your content, for example, what the audience is or what product it’s for; you can embed that intelligence inside the structure of the content.
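As a small sketch of what that embedded intelligence can look like in practice, here is a DITA fragment using the standard audience and product profiling attributes (the attribute values and the text are invented examples):

```xml
<!-- This paragraph carries metadata about who it is for and which product it
     applies to; conditional publishing can include or exclude it per output. -->
<p audience="administrator" product="widgetPro">
  Only WidgetPro administrators need to complete this installation step.
</p>
```

At publish time, a filter can then produce an administrator edition and an end-user edition from the same source, which is exactly the kind of intelligence plain desktop-publishing files cannot carry.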

EP:     Okay, great. So when you decide to make that move to smart structured content, what are some questions that you need to ask yourself before you make that move?

AP:     Once you’ve established the business case that you do need to move to structured content, one thing you can do is take a look at what you’re doing right now with your tools. I’m going to assume you’re working in some kind of unstructured tool, some kind of desktop publishing or word processing tool. There are lots of them out there: Microsoft Word, InDesign, FrameMaker, any of those tools on the traditional desktop publishing and design side. Take a look at what you’re doing with those tools right now. Are you using a template? Or are you, or the people in your department, doing things, shall we say, in a Wild West way, where anything goes? Having a template is kind of a baby step toward structure, because you have very specific tagging that your content creators assign, and it gives an implied structure to your documents.

AP:     So that mindset is already there: yes, there are certain tags that I need to use, and it’s best if I use them in a certain order. That kind of mindset is very helpful when you move into structure, where there is actual enforcement under the covers by the software to be sure you are following that particular organization of content.

EP:     And then also, you should probably take a look at the profiles of the people who create and review that content because that’s going to look different across the board.

AP:     Yes, it will. You have different kinds of content contributors in an organization. For example, you may have people for whom, professionally, that’s all they do: write content. That could be marketing content, training content, product content, support content. And then you’re going to have people who review that content and either make comments on it or actually get into the content, making changes and adding small bits and pieces. These people are not going to be your professional, full-time content contributors. It’s more likely, for example, a product engineer or somebody like that, who has a deep understanding of the particular topic or thing you’re writing about. They’re going to offer input based on what you, the full-time content professional, have created.

EP:     So what are some of the expectations that people tend to have when moving into a structured environment?

AP:     Again, it really hearkens back to what we were just talking about: how are you using your tools now? If you’ve got a templatized system in place already on the unstructured side, it’s an easier adjustment, like we’ve already talked about. For those people, it’s going to be easier to come along, if you will. However, and this is something I have heard from many department and organization heads, you may have content creators who say, “We have to have creativity. We have to have free rein.” Dealing with that kind of scenario is more challenging if you’re the person leading this transition, because oftentimes that claim to need creativity really means: we don’t want any set rules, we want to be able to do whatever we want, how we want.

AP:     And so therefore, there is no consistency at all in the way tagging is applied or the way content is structured. So it really depends on the mindset of the people you’re dealing with. And then beyond those full-time content creators, what about your part-time people? What about the people who are just going to contribute a little bit here and there, or review content? What are they doing right now? Are they marking up a PDF and sending it to you via email? Are they getting into the files and actually adding comments? You’ve got to think about how they’re working in the current system, and be sure that the new environment you’re moving into can accommodate them as well.

AP:     And they’re probably not going to want a tool with all the bells and whistles, they’re going to want something a little more narrow that lets them address just what they need to do. Review, comment, or maybe add a little thing here and there within the content without getting bogged down in a bigger tool. So you’ve got to think about all the levels of people, what kind of contributions they need to make, the amount of those contributions and how those fit in the new tool system.

EP:     So could you touch a little bit more on the different levels of tools that you had mentioned?

AP:     Sure. There is an infrastructure of tools that supports a lot of the standards for structured content. Toolmakers know there is a large market for these standards, such as the Darwin Information Typing Architecture (DITA) and DocBook, an older one. A lot of people are using those standards, so a lot of the toolmakers out there will support them.

AP:     A lot of the content management systems that people use to manage their source, smarter, structured content, they even have built-in tools. So there’s a large choice out there of what you can use and not everybody has to use the same thing. For example, there may be a browser based tool that would be really great for your part-time contributors, for your reviewers. The interface is simplified. It’s stripped down. It doesn’t have all the bells and whistles, probably works a whole lot like Google Docs, for example. So it’s more basic, but it still gets the job done, especially for people who don’t have content production as their primary job responsibility. On the other side of that, you’ve got a lot of tools that offer really in-depth features that let you, for example, edit the direct XML code, the structured code, instead of seeing it with an interface, if you like to get in there and get your hands dirty. And it has a lot more features as far as guiding reuse and some other things.

AP:     That kind of industrial strength authoring tool, is gauge more time, your full-time content contributors. So it really depends on the level and the depth of how much you’re going to dive into that content, about what tool you’re going to use and like I said, it is not a one size fits all situation at all. You can have different people using different tools that still pour all of the content into the single repository, the single content management system. It’s just people connect to it differently based on the authoring tool that they prefer, and that works best for them.

EP:     Right. And this is where the different profiles of people come in. It’s really important to ask yourself that question so you can take into account all the different needs the people on your team are going to have.

AP:     Right. And the thing is, a move to smarter, structured content may start in one department, but if you have success there, it’s very likely to go enterprise-wide. That’s also really important to realize. Just within your group there may be needs for different authoring tools, and that need is probably going to expand even more when you start going out into different departments and groups, expanding the reach of smarter content across your organization.

AP:     So it’s just like, as you’re working now, the people in, for example, the product content department, they’re not probably using PowerPoint as much as the people in the training group. So just like on the unstructured side, you have different tool needs. The same thing is going to be true on the structured side. Don’t expect everybody to use the same exact tool, because frankly it’s not necessary. There are a lot of choices out there and they still will work together. Still put all that content in the same repository, the same content management system, your same single source of truth. People just have different ways of pouring content and reviewing it within that system.

EP:     Right. And this is why we’ve said in so many different podcasts and blog posts not to choose your tools first: make sure you really understand what your needs are going to be.

AP:     Right. And like we said at the very top, you have to have business requirements that drive your decision to do this. You need to do a little investigation of your return on investment, to be sure you’re going to be able to get that return and pay for what you’re doing, and continue to pay for it, through cost savings, more efficient localization, easier rebranding; there are all kinds of reasons and ways you can save money and boost efficiency with smart, structured content. And I’m guessing we’ve got a white paper or a blog post or two that can help people understand that return on investment, so we’ll put those in the show notes.

AP:     But once you have that business case and you understand your return on investment, that’s when you need to start really looking at the requirements of the different people. It’s not one size fits all, and I’ve said that many times, but I do think that’s a very common stumbling block: because this is structure, people assume it must be just this one way. Yes, the structure of your content is going to be enforced, and it is pretty tight, but in the ecosystem that surrounds it, there’s going to be some flexibility, and people need to realize that. I think people hear structure and run away thinking, oh no, we’re going to try to cram everybody into the same system. That’s not necessarily true when it comes to the authoring tools.

EP:     Right. So let’s say you really need highly designed content and you’re going to need to finesse that design. Are you still able to do that as you move to structured content?

AP:     Yes. You are, with caveats. And let me rewind a little bit here. This is also a very common challenge, misconception, whatever you want to call it. Generally in a structured authoring environment, the formatting is applied automatically. So you write your content and you basically tag it with the various elements to build in that intelligence. And then, you run a transformation process to create a website, to create a portal, to create a PDF, to create training materials, to create marketing slicks, whatever, the choices are endless.

AP:     Because that content is not formatted by hand, it is done automatically through a transformation process, a lot of times you’re not going to have super fine-grained control over page breaks and things like that. Now the good news is, within the programming of those transformation processes, you can add a lot of rules that say, yes, I always need to keep a caption with its table, or I always need the first three or four lines of a section to stick with the heading, things like that. There can be rules about how tables break across columns. You can build in a lot of intelligence and really get to a very good point without having to manually touch everything. Everything’s automatic.
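In an XSL-FO-based PDF transform, for example, the kinds of rules described here map to keep, orphan, and widow properties. This fragment is a simplified sketch, not taken from any particular stylesheet, with invented text:

```xml
<!-- Keep a table caption on the same page as the table it labels. -->
<fo:block keep-with-next.within-page="always">Table 3: Supported output formats</fo:block>

<!-- Keep a heading with the text that follows it, and require at least
     three lines of a paragraph on either side of a page break. -->
<fo:block keep-with-next.within-page="always" font-weight="bold">Installing the widget</fo:block>
<fo:block orphans="3" widows="3">The first paragraph of the section…</fo:block>
```

Because these rules live in the transformation, they apply consistently to every table and every heading, with no manual touch-ups.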

AP:     There are times when you have business requirements that say, “Yes, I do need the ability to really touch up the formatting.” A good example: suppose you have workbooks that you sell, and they are very highly designed. Your production staff spends a lot of time making sure the text flows across pages to really help readers’ comprehension, and they pay a lot of attention to the way images and tables are placed. In a case like that, you can create a scenario where, yes, you are using the structured smart content as the source. You transform it into a form of markup that one of your traditional desktop publishing tools can ingest, and then you do those last little formatting touch-ups in that tool.

AP:     And here’s an example of that: InDesign. You can take structured content and transform it into an InDesign-compatible XML format, and you can put that markup into an InDesign template. What happens is that the transform has basically matched styles in your InDesign template to elements in the structure, so when you put that InDesign-compatible XML into the template, it will automatically format the content for you.

AP:     What it’s probably not going to do though, is get those page breaks, get the placement of images and all that kind of really more highly designed stuff. It’s not going to do that automatically for you because in some cases that’s going to take the judgment of a human being. One of our consultants here, Jake Campbell, had a really good saying, “It’s the art of design versus the science of design.” Yes, you can program the science of design and get a lot of those rules in regard to how styles are applied built in automatically, but what you can’t do is really get that last, say 10 or 15% of those touch ups you want to do to make something look really, really good. That’s when you need the art of design, the human intervention.

AP:     So at that point you have an InDesign document that’s, say, 90% done as far as formatting goes. Then you go in and clean up that last little bit of formatting that needs tweaking, and then you make your PDF for print. So you can do that. It is possible. Same thing with training information: if you need to create slide decks and people need PowerPoint, you can set things up, because we’ve done it, where you take the structured content and pour it into a PowerPoint template, and it will apply the slide design, the correct formats for bullets, and all that kind of stuff. One thing I will say is that you’ve got to understand this is a one-way street. This is just for production; all the authoring of content, all the modifications to content, still need to happen in your structured authoring tool.

AP:     So you can’t go in and change words, for example, you don’t want to do that. You don’t want to change the content, once it’s ported into PowerPoint, once it’s ported into InDesign. What you want to do is make the changes in your single source of truth, that is, that’s your structured content and then re-import it if you need to make text changes. And all of that last minute finessing you did, as far as formatting goes, really the XML does not care. The structure content does not care because formatting is separated from the actual content. So it doesn’t see it. It doesn’t need to know it. That’s why I say it’s a one way street.

AP:     You transform your smarter structured content into an XML that your desktop publishing tool understands, open it up, and then do your last little bit of production work. So basically you were using the skills you already had in those unstructured tools, but the good news is, is most of the formatting, or the more manual labor is already done for you. All the assignment of the paragraphs for the title, the paragraph, or the styles for your paragraphs, all that’s done for you, so that’s something else to consider too.

AP:     If, if, if, if you have a very good business reason to continue to do this very specialized, highly designed content, there are ways to still create it, to still be able to manually intervene and touch things up when you need to. So it is possible, not everyone needs that level of format control, but you can do it.

EP:     Right. So as you make this change, you’re looking at a lot of new processes that your team will have to adopt, and I think that’s important to touch on. Is there anything people can do specifically to help with change management when moving to structured content?

AP:     I think the important thing to remember is that you can invest all the money in the world in new tools and new systems, but if you don’t train people on how to use them, and they don’t understand how to use them properly, that investment is a waste. You have to build training into your process of moving to the new system, to smarter structured content.

AP:     There are lots of things you can do: classroom training, hands-on training. It’s also good to set up mentoring programs, where you break off into small groups and have someone who’s a little more seasoned, someone who already gets it, perhaps someone you’ve hired who did this at another job, help work out the big-picture concerns people have. How do I do this? How do I map my knowledge from using this tool on the unstructured side and get the same result when I’m working with structured content? So classroom training, hands-on training, web training, mentoring, a lot of Q&A. And it’s always really good to have a continuous feedback loop, because as people start to work with new tools and systems, they may uncover things that aren’t working quite like you intended. So pay attention to that.

AP:     Yeah, there’s going to be complaints, “I don’t want to do this,” but don’t assume that every observation is necessarily a complaint. It may be a valid, constructive criticism.

EP:     Right. And I think that that is a good place to wrap up. And like we mentioned above, I will link some additional resources in the show notes. So thank you so much, Alan.

AP:     Thank you.

EP:     And thank you for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Moving to structured content: Expectations vs. reality (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 22:49
Saving localization costs with content reuse (podcast) https://www.scriptorium.com/2020/04/saving-localization-costs-with-content-reuse/ Mon, 20 Apr 2020 13:30:16 +0000 https://scriptorium.com/?p=19634 https://www.scriptorium.com/2020/04/saving-localization-costs-with-content-reuse/#respond https://www.scriptorium.com/2020/04/saving-localization-costs-with-content-reuse/feed/ 0 In episode 75 of The Content Strategy Experts podcast, Elizabeth Patterson and Bill Swallow talk about how content reuse can help you save on your localization costs.

“The savings you get from a reduced word count is all fine and good, but the translation is only as good as the quality of the translation itself.”

—Bill Swallow

Transcript:

Elizabeth Patterson:     Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about how content reuse can help you save on your localization costs. Hi, I’m Elizabeth Patterson.

Bill Swallow:     And I’m Bill Swallow.

EP:     And we’re going to dive in to talking about how content reuse can help you save on your localization costs. So I want to get started with just a really general question, and when we talk about reuse, what are we talking about?

BS:     That’s a very good place to start. When we talk about reuse, what we’re not talking about is copying and pasting of content. You could think of that in terms of reuse, but it’s not really what we’re talking about here. When you copy and paste content, you’re essentially duplicating it and then need to manage it in multiple places. What we’re talking about is more intelligent reuse of content, so writing it once and using it by reference wherever you need to use it. So this way it’s only written once, and it’s used multiple times as needed.

EP:     Great. And we have done a podcast and an additional blog post just solely on reuse, so I will link those in the show notes. But I want to dive into now looking more specifically at how content reuse, now that we’ve defined that, can help us save on localization costs.

BS:     Well, generally speaking, reuse reduces the overall number of unique words that you’re translating. By using intelligent reuse, writing content once and using it multiple times by reference, you have the opportunity to choose pieces of content that you author once and only once, and that content gets translated once and only once, regardless of how many times it’s being used. If you copy and paste, you can still see savings if the wording you’re using is one-for-one, absolutely exact every time.

BS:     For example, I know Microsoft Word has an auto text feature, so you can throw a basic reusable component like a caution statement or some other boilerplate text, and you can use that to insert it every single time. That may save you a bit of time on the offering side and ensure that the text that you’re inserting is exact every single time. The only problem with that is that it is inserted as normal text every single time you insert it, so it does still increase the total amount of words that you need to send to the translator. It might be a 100% match, but they still have to do a check against it to make sure everything is fine. And the systems that they use will still count those words and say, “Yes, this is a 100% percent match.” But it’s still being counted as part of your incurred cost, because there’s something that’s going to the translator for them to see, even though there’s a match.

BS:     And in some cases you may even get what they call an ICE match, or an in context exact match, on that text. So if you are using something like Microsoft’s Word’s feature, you can drop that text in every single time and you can get this, “Yes, it is a 100% match every single time it’s inserted.” And if it’s a full paragraph, it could be, “Yes, it’s a contextual perfect match. It’s a paragraph and it says the same exact thing.” But more times than not, when you talk about inserting strings of text that say the same thing over and over and over again, the context may shift depending on where you’re using that text. In which case then you get maybe a 100% match, which still requires some review, or you get what we call a fuzzy match, where if you happen to make an edit to that text that was inserted and copied and pasted it’s no longer 100%, and therefore the translator has more work to do.

BS:     And there may be questions. This one has two words that are different from this other block of text. They say roughly about the same thing, should they be translated the same way or is there a reason why they’re different? That just slows down your translation process, it injects confusion, it injects questions that need to be mitigated and answered or you can then suddenly have a divergence in the translation where you shouldn’t have. The translator might’ve translated it two different ways because it used two different structures.

BS:     So true reuse or intelligent reuse moves that out of the way by taking the text that is being reused every single time, putting it somewhere to the side, is translated separately, and then can be used as it needs to be used throughout whatever it is you’re writing. Your manual, your web content, whatever you need to. And there are plenty of tools that are out there that do this well. Two of which that come immediately to mind for desktop publishing based tools are FrameMaker and MadCap Flare.

BS:     FrameMaker uses a series of conventions where they store content in chapter files, and those chapter files are assembled within a book file. And you can easily reuse an entire chapter in multiple different books just by linking to that chapter file from the book. You don’t have to rewrite the information, it’s not being copied and pasted. It’s a dynamic link that goes right to that file and pulls it into the book.

BS:     FrameMaker also has text insets, which function a little bit in the same way: you have a separate file that contains a block of text, and you can say, “Hey, go to this file, grab that text, and place it here.” And the smart thing is that when you do that in FrameMaker, you are not creating an editable copy of that text. It is a reference to the file that contains the text; you can see it and read it within the context of whatever you’re writing, but it is uneditable. You can’t modify it there.

BS:     The same goes for MadCap Flare, where you’re building things in a similar fashion. Where you’re grabbing individual files, and you’re putting them together in an order to create some kind of document or website or what have you. And MadCap Flare also has something similar to text insets, they call them snippets, and you are able to insert these snippets throughout your content. And those, again, are managed in a separate place, they’re written only once, and they are non-editable in the context of where you’re using them. They’re only there as a reference point.

BS:     Now, these are great; however, you do have some concerns when you’re using these tools for localization purposes. They’re not inherently bad, but suppose you’re looking to do a lot more with your content, let’s say styling your content very differently for different outputs, or creating the same type of output with different styling. The text insets, and likewise the snippets, are going to carry, I believe, a lot of the formatting information over with them, however they’re formatted where they’re stored. So it’s not ideal, but it does reduce the total number of words you’re translating.

BS:     When you move to something like XML, you have a bit more available to you, because you have these same conventions, but they’re built into a format that does not have formatting applied to the content. It’s all text-based, and you can do quite a bit of organizing and reorganizing your content without having to worry about your headings being formatted one way or another. It’s all just plain text, and the formatting is applied at the point of publishing.

EP:     Right. So I think what we’re seeing here, obviously, is that there’s really one main way that you’re going to be saving money on your localization costs through reuse, and that’s just reducing that word count. But the way that you go about making that happen in your strategy is really going to vary depending on where you’re at as a company.

BS:     Right.

EP:     So I want to get into a few tips. So what are some tips that you have for reusing content, particularly when you are planning to localize that content?

BS:     Well, the knee-jerk response for anyone who’s doing localization for the first time, with all of this reuse potential in front of them, is to reuse as much as possible and apply conditional text or conditional formatting as much as possible. Even I was guilty of that many, many years ago, when we had a manual that went out in 19 or 20 different languages, and one of them was the English used over in Europe. And I figured, “Oh, well, for the English stuff we’ll just condition in and out the characters that differ between certain words. We’ll condition in or out a U in color, or we’ll condition a Z out for an S in localize.” These types of things. I thought I was being quite inventive, and it came back immediately that no, you cannot do this, because when you send something like that for translation, the translator gets a wall of garbage and wonders what you’re trying to do with these words.

BS:     So my first bit of advice is: do not go too granular with your reuse. Things like reusing individual words or phrases, I would limit as much as possible. You really want to reuse at a larger chunk level. If we’re talking DITA, or something like MadCap Flare, that means reusing at the topic level: here is a topic with a heading and a bunch of text, or a procedure, or what have you; reuse that whole piece. If you need to reuse it five, six, seven times, that’s great. You’ve written it once and leveraged it an additional four or five times. That’s fantastic. Reusing things like notes, cautions, and warnings also works well, because they tend to stand on their own. They’re used in context with other text, but you can write the warning itself to be very standalone, covering the thing you should not do and the outcome of doing it, within the context of that warning statement. You should be able to put that off to the side, write it once, and use it everywhere.

BS:     There are two benefits to that. One is the localization impact and the other one is that all your warning messages are exactly the same wording. And it will drill that information into your readers’ heads over time as they read it to say, “Oh yeah, I shouldn’t do this. I should not do this.” There are only so many ways that you really should say, “Don’t stick your hand in the machine while it’s working or you’ll lose it.” You really want to say it only once and repeat that statement multiple times until it’s drilled into your audience’s head to, “Hey, don’t stick your hand there.”

EP:     Yep. And you want to make sure that it’s being said in the same way so they don’t take a different meaning from that.

BS:     Exactly. Or have it translated differently even though you meant to say the same thing.

EP:     Right. And something I’m thinking about, as we talk about companies handing content to translators and leaving them to figure out what it means, is writing style. When you get into an organization that has many different writers, what are some things you need to be aware of when you’re planning to send content to translators?

BS:     The first thing you have to do is have your style guide nailed down and make sure that all of your writers are following that guidance. Sometimes in larger organizations, where you have too many authors and perhaps not enough editors to clean up after them, you might want to look into editing or language-checking software like Acrolinx or Congree to do a lot of the spot checks automatically, rather than relying on someone to catch issues in proofreading. Especially if you have tight timelines, quick turnarounds, and everyone’s too busy to proof each other’s work. The days of having a fleet of editors cleaning up after writers have kind of run their course. There are still many technical and editorial editors out there, but not to the degree there were in, let’s say, even the 1980s when, unfortunately, I started working.

EP:     Right. And content governance can help with that as well, right?

BS:     Oh, absolutely. The more you can nail things down and have a process for how you produce your content, the better off you’re going to be. And the one thing you absolutely must do, and I wanted to touch on this with the style guide as well, is include your localization people in that overall plan for governance and styling. You want to bring them in to help define the language style you’re going to present this information in: the way they’re going to write their translations, how you want those translations to read, which words they should use, which words they should not use, and why.

BS:     You really need to have a global style guide at that point and be able to provide glossaries of information to your translators, because you may have different translators for each language depending on when they’re available to take on the work. Unless you’re fortunate enough to have them all in house as employees, which is extremely rare. A lot of times that work is outsourced, whether it’s through a language service provider or you’re doing it direct with other freelance translators. So being able to have that global style guide in place, to have a global glossary in place. And what’s really critical is being able to, when you do have these reusable components, that you’re going to be giving them to translate not only the components, but the content where the component is lacking. Because when you’re reusing by reference that content does not exist in the file that they’re looking at.

BS:     So you want to be able to provide additional contextual information to the translator to say, “Oh, hey. When you get to this point there’s a bit of content that’s being inserted.” And maybe even provide them with the reasonable content to say, “This has already been translated, but this is what’s going in here.” So that way when they get to that point they’re not stumped and say, “Well, this doesn’t make sense because it goes from part A to part C, we’re missing part B. I don’t know what it says there.” That can certainly throw off the translation process. So being able to provide that additional context around what is going on in your content set is critical when you’re doing things with intelligent reuse.

EP:     Right, right. So I think one of our main takeaways today is that you certainly can save money on the translation side of localization, but you need to be prepared to really pay attention to your translators’ needs.

BS:     Absolutely. I mean, the savings you get from a reduced word count is all fine and good, but the translation is only as good as the quality of the translation itself. And if you’re tripping up the translator in any way, you’re not going to see that return on your localization investment.

EP:     Right. Absolutely. Well, I think that that’s a good place to wrap up. So thank you so much, Bill.

BS:     Thank you.

EP:     And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Saving localization costs with content reuse (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:01
The benefits of a taxonomy (podcast, part 2) https://www.scriptorium.com/2020/04/the-benefits-of-a-taxonomy-podcast-part-2/ Mon, 06 Apr 2020 13:30:06 +0000 https://scriptorium.com/?p=19560 https://www.scriptorium.com/2020/04/the-benefits-of-a-taxonomy-podcast-part-2/#respond https://www.scriptorium.com/2020/04/the-benefits-of-a-taxonomy-podcast-part-2/feed/ 0 In episode 74 of The Content Strategy Experts podcast, Gretyl Kinsey and Simon Bate continue their discussion about the benefits of establishing a taxonomy.

“Communicate with the stakeholders. Don’t just get their input and then go away. Communicate all along what you’re doing and identify your benefits.”

—Simon Bate

Related links: 

Twitter handles:

Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way.

GK:     In this episode, we continue our discussion about the benefits of establishing a taxonomy. This is part two of a two part podcast.

GK:     So if you are at an organization and you have never had any sort of taxonomy in place and you’re starting to realize that you need something to help categorize your information, how do you go about starting that process to build a taxonomy?

Simon Bate:     Well, the first thing, of course, is to meet with your project sponsor, the person who’s really asking for this thing, and get a sense of their purpose and rationale: why are you building out the taxonomy?

SB:     Then, once you get a sense of that, you can map the scope of the project, including the knowledge domains and both visible and invisible stakeholders in those domains. So in meeting with the sponsor, you find out what they need and who has a major stake in it.

GK:     Yeah, absolutely. And I think a lot of people skip that step of getting sponsor buy-in upfront. Especially if you’re not the one who has the power or the finances to sponsor that taxonomy yourself, it’s really important to make sure you have someone who does have that power to be your ally and really help understand what you need. If that person’s not the driving force behind it, but maybe you are, and you’re not in any management or leadership role where you have control over finances, it’s really important to talk to whoever does have that power and make sure that, between the two of you, you can get on the same page. Prove to them: here is the business advantage of establishing a taxonomy, here’s what we are losing if we don’t establish one, and here are all the customer frustrations with not being able to find this information. That will help you get over that first hurdle.

SB:     Yeah, absolutely. Having a justification, demonstrable return on investment or whatever, is really important before you can get started on any project like this.

SB:     So once you’ve gone past that first step and you’ve got buy-in there, the next thing to do is to go to those stakeholders that you identified and engage with them. You want to validate your map of the scope, and you need to understand their needs; that’s really, really important.

SB:     If you try and start building a taxonomy out and you don’t include all the stakeholders, you’re setting yourself up for problems later essentially.

GK:     Yeah, and we’ve seen lots of cases where that happened where maybe one department or one small group within a department started a taxonomy because they had an immediate need for it. But they didn’t go talk to anybody else who may have been impacted by it later. And so then, down the road, they realize, oh, we’ve got a taxonomy that started over here in the training department, but the marketing department really needs to be using it and to be consistent with it. But because there was no communication, maybe marketing started their own taxonomy and it’s very different. And so kind of getting that alignment is a lot easier on the front end than it is to try to bring things into alignment later. So the earlier that you can engage other stakeholders and other groups, the better off you’re going to be.

SB:     Absolutely.

SB:     So the third step in building the taxonomy is to then refine your project purpose and get the sponsor’s agreement. So get things together and then go back to your sponsor and just make sure that they also have buy-in on what you’re doing.

SB:     The fourth step is to design your approach, and then step five is to build your communication plan and identify the benefits. And really, one of the important things here, as in many projects like this, is communication. Communicate with the stakeholders. Don’t just get their input and then go away and do things. Communicate all along what you’re doing and identify your benefits.

GK:     Yeah, it should be a collaborative process. And that goes for step four, when you design that approach: it’s really important to have that collaboration going on during that phase too. Then if another group comes up with a concern and says, “Oh, we need the taxonomy to be able to do X thing for us,” but you also need it to do Y thing for you, it’s good to know that up front when you are designing how you’re going to put that taxonomy in place. The same is true, getting back to the point we talked about earlier when we were discussing confirmation bias and the other sorts of biases you may encounter: you want to make sure that no one group has too much bias in the taxonomy, and that if there is any customer or end user information that any of the groups has access to, it’s being shared across the board. So when you are going through those phases of designing your approach, figuring out the communication among everyone, and identifying how the taxonomy will benefit each group, it’s really important to collaborate throughout that whole process. And as Simon mentioned, not just have each group or one group go off and do their own thing. It really needs to be cohesive across the board.

SB:     That’s right. That’s right. Because eventually, getting your taxonomy back to the real world, when you present the terms that you’ve agreed upon in the taxonomy, they are presented in many ways. Your company expresses things publicly, so terms might appear in marketing brochures, on your website, or in the documentation. And that’s one of the advantages of the taxonomy: when these ideas, concepts, or terms are presented in all of these areas of your company, they all come out the same. You’re using the same language consistently, and that’s a major advantage for you.

SB:     The sixth step here in building your taxonomy is to start the process of taxonomy governance. A taxonomy isn’t a static thing. You don’t just build it, set it, and then go away. It’s going to evolve. It’s going to continually change. People are going to add to it, people are going to refine it, and people are going to take things away from it. You do need to set up some process, some way for people to remain engaged and continue helping to maintain your taxonomy.

GK:     Absolutely. And there are tools and systems out there that can help with this, but I think it really comes down to that agreement for everyone to continue collaborating on it in the end. I mean, you can put some kind of tool in place that’s designed to help maintain and update a taxonomy, but everybody has to agree to use it correctly and to do the work it takes to keep that taxonomy managed and maintained. So it’s kind of a culture shift. If you’ve never had a taxonomy at your organization before, it can really be a major change to realize: this is important, here’s why it’s important, here are the benefits it’s going to bring us, and therefore we need to dedicate X number of resources toward maintaining and improving it over time.

GK:     So how does taxonomy fit within a larger content strategy?

SB:     Well, there are several places within your content strategy that a taxonomy can help. So there is search, for instance, there’s targeted delivery and personalization. These are some pain points where taxonomies can help you.

GK:     Yeah, absolutely. And I think we’ve actually seen that with several of our clients where, if something like personalization is a goal and they don’t have a taxonomy in place, that needs to become part of their content strategy.

GK:     Same thing for search. Depending on how people need to search your content, and that’s both end users, but also content creators, subject matter experts, people like that, people in support, anyone that needs to search your content, the taxonomy really does a lot of the work on the backend of making sure they get the right results when they’re doing those searches. So all of these factors are a major part of your content strategy and the taxonomy can be the piece that gets recommended to help resolve those issues.

SB:     Yeah, that’s correct.

SB:     So another place is borders, or you may also want to think about silos; you want to prevent silos. You have a whole number of different groups within your organization, and they all have the same final goal. What you want to do is use your taxonomy to help you get around those borders, because your taxonomy, as I was mentioning a few minutes ago, helps maintain consistency across all of your groups.

SB:     And then of course, you can think vertically. There are your levels. So within your content strategy, you want to think about what’s happening at your corporate level, what’s happening at business group level, what happens at the department level. And in some cases, there are calls for different levels of taxonomy or different types of taxonomy within that. But essentially, in the end, it all boils down to the same thing. You have taxonomies that cover the needs of each of these different levels.

GK:     Yeah, absolutely. And that’s what we think of as an enterprise taxonomy where you are encompassing different groups, different levels, and the needs of each of those, and making sure that they are all in sync. But that then each individual group or level also has maybe some different taxonomies or different categories that apply specifically to them.

SB:     So one other place where we can fit a taxonomy within a content strategy is the level of your end user. You may have experts, you may have novices. For instance, if you’re dealing with medical or technical language, physicians searching for information may use very different language than lay people, people who are non-physicians, will use.

GK:     Yeah, that’s very true. And we’ve seen that not just with medical but, as you said, with technical as well; the same is the case with an end user versus maybe a software developer. The same is true if you get into something like manufacturing: someone who’s an engineer is going to use different language than an end user. And so it’s important to think about what levels are there. Maybe you have expert and novice, maybe you have more intermediate levels in between, especially if you are dealing with educational content. You may have different levels for different grade levels in school, or different course levels if you’re looking at a university. So it’s really important to think about that aspect of taxonomy as well.

SB:     Yeah, those are all great examples.

GK:     So we touched on this a little bit, I think when we were talking about taxonomy and how it fits with some of the aspects around search and personalization. But how does taxonomy relate to metadata?

SB:     Well, of course, metadata is information about your data. So metadata can be things like labels. You can add additional identifying information. There’s a lot of things you can do in metadata and the metadata in your content can be used for a number of different things.

SB:     And so there are four principal things that I like to think of when I’m dealing with metadata. You can have metadata that helps your managers track your content development: for instance, the current state of things, who modified something, or when it was last updated. The metadata can also be used by your authors, who may need to find appropriate content. They have an assignment to go make a modification for a product change or a new strategy or something, so they need to go find the appropriate content.
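As a rough sketch of that tracking metadata, here is what “who modified it, what state it’s in, when it was last updated” might look like attached to a piece of content. The field names and structure are assumptions for illustration, not any particular CCMS’s schema.

```python
from datetime import date

def stamp(topic, author, state="draft"):
    """Attach workflow-tracking metadata (author, state, last-updated date)
    alongside the content, as in the 'who modified something, or when it
    was last updated' example. Field names are invented for illustration."""
    topic["metadata"] = {
        "author": author,
        "state": state,
        "last_updated": date.today().isoformat(),
    }
    return topic

doc = stamp({"title": "User manual"}, author="simon")
print(doc["metadata"]["state"])  # -> draft
```

A manager’s dashboard or report would then simply query these fields rather than asking authors directly.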

SB:     Also, as they’re creating that content, they may need to reuse particular content, they have to find the content to reuse. They may also want to create cross-references. They need to find the content to cross-reference. Your metadata can also provide production information for your output generators. So for instance, when you’re creating a book or a PDF, you might have a copyright date and an owner. The cover might have part numbers on it. You might have branding, cover images, and so on. All of this can come from your metadata.

SB:     And finally, as we’ve been using as an example many times, the metadata in your content can be used by your users to search for information. Search turns out to be the prime example, but it’s not the only place where we use metadata. And of course, that same metadata that we’re talking about for search is information that comes from a taxonomy.

SB:     So, often, this metadata grows organically. It starts out with somebody who’s just creating a user manual or a piece of information about something; people add to it, and over time, people start adding more and more metadata.

SB:     The problem is, if you want to develop this metadata in an organized and thorough way, the correct starting point is actually to roll back a bit and start with your taxonomy. It’s always difficult and time consuming to go back and modify or even add metadata to existing content. So it’s much better if the metadata can be developed along with your content. The earlier you can create a taxonomy with buy-in from all your partners, the better.

GK:     Yeah, I think that is very solid advice and the way I would wrap up that advice too is just to say don’t leave taxonomy as a last resort. Make sure it’s a priority. Especially if you know you have these requirements around search, around content organization, around the way that both your content creators and your end users are going to need to find and use that information. Taxonomy needs to be a really important priority for you and you need to make sure, as we’ve talked about all along, that you have that buy-in, that you can prove that value, and that you collaborate across the organization with anyone who’s got a stake in that taxonomy to make sure that it’s going to best serve your organization’s needs.

GK:     And I think we can go ahead and wrap up there. So thank you so much for joining me, Simon.

SB:     You’re quite welcome.

GK:     And thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The benefits of a taxonomy (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:59
The benefits of a taxonomy (podcast, part 1) https://www.scriptorium.com/2020/03/the-benefits-of-a-taxonomy-podcast-part-1/ Mon, 30 Mar 2020 13:30:11 +0000 https://scriptorium.com/?p=19557 https://www.scriptorium.com/2020/03/the-benefits-of-a-taxonomy-podcast-part-1/#respond https://www.scriptorium.com/2020/03/the-benefits-of-a-taxonomy-podcast-part-1/feed/ 0 In episode 73 of The Content Strategy Experts Podcast, Gretyl Kinsey and Simon Bate talk about the benefits of establishing a taxonomy.

“Filtering is possible through the use of taxonomies. They have a real world benefit for people looking to find something.”

—Simon Bate

Related links: 

Twitter handles:

Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we talk about the benefits of establishing a taxonomy. This is part one of a two-part podcast.

GK:    Hello, and welcome everyone. I’m Gretyl Kinsey.

Simon Bate:     And I’m Simon Bate.

GK:     And we are going to be looking at taxonomy today, and talk about some of the benefits that you might encounter if you establish a taxonomy within your organization. So I think the logical first place to start is just by defining, what is a taxonomy?

SB:     That’s a great place to start. So a taxonomy is an organizing scheme that helps us make sense of stuff. Let me give a couple of examples there. One is, if you go into the library and you try and find a book, usually you’ve used something like the Dewey decimal system or the Library of Congress system. That’s an organizing scheme. Another organizing scheme that we’ve learned about in school is the way that plants and animals get placed into kingdom, genus, species, and so on. These are a couple of the most well known taxonomies. And also if you’ve shopped on Amazon, you’ve encountered taxonomies there.

GK:     Right, if you have ever used any of the tools that they have to help narrow down some of the products to what you want to buy, that’s definitely a great example. So continuing along that path, can you use multiple taxonomies simultaneously?

SB:     Yeah, you can. Let’s look at the Amazon example a bit more. Assume that you’re interested in buying a shirt. There are a number of characteristics of shirts that can be used to categorize or limit your search results. Do you want a red shirt? Do you want a green shirt? Color is one of the taxonomies. What size do you want? Small, medium, large, and so on. That’s another taxonomy. Do you want a long sleeve shirt? Short sleeve? Sleeveless? What material do you want: cotton, silk, rayon? Casual or formal? All of these things, which we call facets in taxonomies, can be used to narrow down the options so you can find just the shirt that you want. If you’ve ever used a used car finder, exactly that same kind of filtering is done there, and that was made popular quite a number of years ago. All this filtering is possible through the use of taxonomies. They have a real world benefit for people looking to find something.
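The kind of faceted narrowing Simon describes can be sketched in a few lines of Python. The product records and facet names below are invented for illustration, not taken from any real retail system.

```python
# Hypothetical product records, each carrying several facet values.
products = [
    {"name": "Oxford", "color": "red", "size": "M", "sleeve": "long"},
    {"name": "Tee", "color": "green", "size": "M", "sleeve": "short"},
    {"name": "Tank", "color": "red", "size": "L", "sleeve": "sleeveless"},
]

def filter_by_facets(items, **facets):
    """Keep only items whose values match every requested facet."""
    return [item for item in items
            if all(item.get(k) == v for k, v in facets.items())]

# Narrowing by two facets at once, like choosing color and size for a shirt.
print([p["name"] for p in filter_by_facets(products, color="red", size="M")])
# -> ['Oxford']
```

Each keyword argument acts as one facet; adding another facet simply narrows the result set further.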

GK:     Yeah, absolutely. And I know that’s something I think all of us have used in our day to day lives at some points, not just in maybe our careers in terms of content, but going more in that direction, how else might you use taxonomies, what else are taxonomies good for?

SB:     Well, one of them is standardizing data. Think about looking for a shirt in a medium: there’s a whole number of different ways that a vendor might describe something as being medium. They might just use a capital M, they might use Medium with a capital M, they might use medium all lower case or MEDIUM all upper case, or they may use a size range, say size 34 to 36. If you’re getting data such as shirt size from multiple vendors, and each vendor has a different standard for storing the data, and you blindly pass that along, your users are going to have a ridiculous set of choices to go through to find a medium shirt. So again, that’s not very user friendly.

SB:     So by correlating all of those into a single definition of medium, your taxonomy ensures that, regardless of how you receive the data, it fits into a single definition everywhere.
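That correlation step can be sketched as a simple synonym table that maps every vendor spelling onto one canonical taxonomy term. The vendor labels below are hypothetical examples of the variants Simon lists.

```python
# Map every known vendor spelling onto the canonical taxonomy term.
# The vendor strings here are hypothetical.
SIZE_SYNONYMS = {
    "m": "medium",
    "med": "medium",
    "medium": "medium",
    "34-36": "medium",
    "size 34 to 36": "medium",
}

def normalize_size(raw):
    """Return the canonical term for a vendor's size label, however written."""
    return SIZE_SYNONYMS.get(raw.strip().lower(), "unknown")

print(normalize_size("M"), normalize_size("Size 34 to 36"))  # -> medium medium
```

With this in place, users see a single “medium” choice no matter how each vendor stored the data.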

GK:     Can taxonomies be reused?

SB:     Yes. The same idea of sizing can also work for things other than shirts. We could use it for coats, pants, gloves, anything that has a size; we can reuse that same taxonomy with all of those things.

GK:     That’s really awesome. We’ve been looking at this example that started out with Amazon and went more specifically into clothing and the taxonomies that are built in for that. But I want to shift gears and talk about some other ways you can use taxonomy. So how might this be something that helps on a support site?

SB:     Well, the concept of taxonomy is the same there. In a support site we want our users to be able to retrieve information so they can perform their jobs. Taxonomies help with the users being able to locate the answers to their specific questions.

SB:     So in the retail site, we talked about clothing sizes. In a support site you can use the same ideas of taxonomy to help readers narrow down product type, product name, version, and so on.

GK:     When you’ve got this kind of a taxonomy built in, how do you know that your facets are correct?

SB:     That’s a good question. A taxonomy inherently reflects the person or group that created it, and diversity is key to ensuring that any biases are surfaced. That is, you can’t just create the taxonomy by yourself. You have to develop your taxonomy within a group of people, and the more diverse you can make that group, the more you can be assured that your taxonomy is as general as it can be, that it reflects all perspectives rather than simply your own. Confirmation bias, selection bias, and the like can limit your perspective on the facets.

SB:     Another problem that we have with taxonomies is that they do enforce a sort of top down approach. Humans naturally want to group things and then break those groups down further. How do you know the thing at the top is really at the top, and how far down do you go in subgroups? One hint that Patrick Lambe offers in his book is to forget our scientific traditions. Rather than trying to find a single, perfect, ideal spot for an object or a piece of information, put it where it’s most likely to be found. Don’t agonize over the perfect; the point is that people can find the information.

GK:     Absolutely. And I think that that’s one area where, with some of the clients we’ve worked with, that’s where getting into things like user testing and analytics and just really acquiring the information that they need about the real world cases of how users are going to make use of those facets and how they’re going to search for information, what kinds of information they’re trying to find, and how they’re going to go about it, can really help them if they’re coming up with a taxonomy and trying to figure out what those facets need to be. If you just come up with it from the perspective of how you think it makes sense from the way you’ve designed your products or the way your marketing team wants to emphasize things in the way that they are putting out that messaging, that may not actually serve what your users need. So it’s important to try to get that information from them as much as you can, and continue to use that to make your facets better.

SB:     Absolutely.

GK:     So how are taxonomies presented to the users, what are some different examples of ways that they might come across to the user?

SB:     Well, there are several different ways that your taxonomy can be presented to the users. One way might be lists: you might just have a simple list of items, for instance a list of sizes or a list of product names, a variety of things like that. Now, the simple list is the basis from which we go off into taxonomies, because as soon as you get more than about 12 items or so, it gets really hard to use. If you’ve ever been on a site where you’re presented with a dropdown list that goes off the screen, it starts to get really, really hard to find the thing that you want.

SB:     A couple of examples of these simple lists might be a shopping list, or a list of animals, say. So you could just have a list of animals: a lion, a cow, a dog, a cat, a rat, something like that.

SB:     The next way of looking at your information is with trees, or hierarchies. You divide your list into a set of related subgroups. For example, if you create a shopping list, you might want to divide it by food type: a sub-list for produce, a sub-list for things you want to find in dairy, a sub-list for groceries, and so on. If you’re looking at a list of animals, you might want to list them in a tree, say according to their habitat.

SB:     But of course, one problem here is that one person’s idea of how to organize these things might be different from another person’s. Trees then lead us to what we call hierarchies. A hierarchy is a tree with a very strict rule about the subdivisions: the tree is exhaustive, that is, it covers everything there is, and it is unambiguous. So for everything that you have on the list, there’s no way it can exist under two different categories.

SB:     One great example of this is the standard Linnaean classification of animals, where we break things down into kingdom, phylum, class, order, family, genus, and species (and above kingdom, they’ve now added domain). Each of those divisions has very specific rules about what separates one subclassification from another.
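That single-parent rule is what makes a hierarchy unambiguous: every term sits under exactly one chain of ancestors. A minimal sketch in Python, using a small, illustrative fragment of the Linnaean hierarchy:

```python
# A hierarchy as nested dicts: each term has exactly one parent, so no
# item can appear under two categories. The fragment is illustrative only.
taxonomy = {
    "Animalia": {
        "Chordata": {
            "Mammalia": {
                "Carnivora": {"Felidae": ["Panthera leo"]},
            },
        },
    },
}

def path_to(term, tree, trail=()):
    """Return the unique chain of ancestors leading to a term, or None."""
    for key, sub in tree.items():
        here = trail + (key,)
        if isinstance(sub, list):
            if term in sub:
                return here
        else:
            found = path_to(term, sub, here)
            if found:
                return found
    return None

print(path_to("Panthera leo", taxonomy))
# -> ('Animalia', 'Chordata', 'Mammalia', 'Carnivora', 'Felidae')
```

Because the structure is a strict tree, `path_to` can only ever find one path, which is exactly the unambiguity Simon describes.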

SB:     Now, talking about these ways, we have the idea of facets, which we brought up before. A facet is essentially an attribute that represents a piece of information. The facet itself can be a list, a tree, or a hierarchy.

SB:     Now, another way that this could be presented is in a matrix, or matrices. In a matrix, you can have two or three facets presented in a table. Let’s take the simplest facets, which of course are lists. A matrix could be a two-dimensional table: in the rows you have one facet represented, and in the columns you have another facet presented. An example of a matrix is a table in a catalog that associates a specific product number with two or more characteristics, such as capacity and operating environment. If I’m looking in my catalog, I know I need to find a piece of equipment. I know what the capacity of that piece of equipment needs to be, whether that’s the maximum voltage it can handle, maximum pressure, all sorts of things. And I also need to find a piece of equipment that works in a specific operating environment: does it have to operate in subzero temperatures, in normal temperatures, in a tropical climate, anything like that. So I can use that matrix of those two characteristics and find the specific product that I need.
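The catalog matrix Simon describes amounts to a lookup keyed on two facets at once: one facet along the rows, one along the columns, and a product number in each cell. A minimal sketch, with made-up part numbers and facet values:

```python
# A two-facet matrix as a dict keyed by (capacity, environment) pairs.
# Part numbers and facet values are invented for illustration.
catalog = {
    ("240V", "subzero"): "PN-1001",
    ("240V", "tropical"): "PN-1002",
    ("480V", "subzero"): "PN-2001",
}

def find_part(capacity, environment):
    """Look up the product number at the intersection of two facets."""
    return catalog.get((capacity, environment))

print(find_part("480V", "subzero"))  # -> PN-2001
```

A third facet would simply extend the key to a three-element tuple, which is the “two or three facets” case from the transcript.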

SB:     And finally, a little looser than a matrix, there’s a relationship map. And a relationship map is a way you show the proximity and relationships among the different entities in your taxonomy. A relationship map could be a physical map, so it could be actually a public transport system, or the human body showing something like the lymphatic system, the nerve system, those are relationship maps. Or it can be conceptual. A conceptual relationship map is something like a mind map.

SB:     So that’s a long answer to a very, very short question.

GK:     Yeah. And I think it’s really common to see these combinations of these different ways that taxonomies are presented to users, maybe even in the same interface or the same site or what have you, kind of like we talked about earlier with the Amazon example. When you go to search, a lot of times when you have over on the left, all these different ways that you can sort the thousands of results that you get from a search, there are multiple taxonomies and ways of presenting those taxonomies at work, because you can choose things from a list, you can choose things from ranges of lists, there are all sorts of different things you can do. And so I think that, again, it gets back to what your customer base needs and how they tend to look for information about your products that you would then say, okay, which of these different ways are going to be most effective to present this taxonomy to our users? And it might be more than one. It might be a combination that you find works best.

SB:     Yep. And there’s an interesting thought here: we’ve been talking about all the various specific kinds of taxonomies, but I think a lot of the people listening to this podcast are probably looking for specific solutions that usually relate to a computer interface, such as a help system, or trying to find a specific manual or something. And I have to say that while there are all these divisions, the lists, trees, hierarchies and so on, a lot of the time we’re going to find ourselves focusing primarily on lists, and perhaps then trees and hierarchies. But really, lists are the thing, mostly because of the interface of the computer. There’s no really effective way on a computer, with just a standard HTML interface, to show somebody a tree structure. And within the types of things we’re talking about, there’s no real need for it either. A tree structure works very well with animals; a hierarchy works very well with animals. But we really are interested in lists.

GK:     Right, and I think that’s a good place to wrap up part one of this podcast. We will be back next time with part two.

GK:     And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post The benefits of a taxonomy (podcast, part 1) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:14
Getting started with DITA (podcast, part 2) https://www.scriptorium.com/2020/03/getting-started-with-dita-podcast-part-2/ Mon, 16 Mar 2020 13:30:24 +0000 https://scriptorium.com/?p=19544 https://www.scriptorium.com/2020/03/getting-started-with-dita-podcast-part-2/#respond https://www.scriptorium.com/2020/03/getting-started-with-dita-podcast-part-2/feed/ 0 In episode 72 of The Content Strategy Experts Podcast, Gretyl Kinsey and Barbara Green of ACS Technologies continue their discussion about getting started with DITA.

“We experienced far more change than I anticipated from the time Scriptorium first came in to evaluate our situation. I remember you saying, “Expect change, expect resistance to change,” but reality is the great teacher of life.”

—Barbara Green

Related links:

Twitter handles:

Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we continue our discussion of getting started with DITA and taking the next steps forward with special guest Barbara Green of ACS Technologies. This is part two of a two part podcast.

GK:     So, we’ve covered where things stand right now and what you’re about to do by year end or the beginning of next year. Now I want to talk a little bit about how that fits into the fifth and final phase that Scriptorium recommended, which is interconnectivity among not just the R&D department that produces the Realm product and all of its content, but all of the other content-producing departments as well, such as e-learning and marketing, and how all of them can benefit from reusing content, from having content in the CCMS or connecting to the CCMS or the portal. I want to talk about that phase that’s on the horizon, what kinds of plans you have in mind, and how that all fits into what you’ve already been doing for delivering personalized content to your user base.

Barbara Green:     Yeah. Well, we know that even now we’re working on some features in our products that are going to increase the complexity of our variants again. If you’d asked me three years ago whether this was something I would really be passionate about, I would’ve probably said no, but I am growing quite passionate about it, and in a lot of ways it’s sort of like, oh, dare I dream. What I would really love to see personally is a top down corporate content strategy that carries us into the future: here’s how we’re going to set up our taxonomy, having all content-creating departments sitting at a table and agreeing on some things around voice and tone, other types of style guides, our taxonomy, and our reuse strategy.

BG:     We’ve identified with our development staff seven problem statements based upon what we know we could have done a better job with this year, or what we on the content side didn’t understand, because it’s been a learning experience. I’ve always worked with developers, but when you really get down into technical things, it’s been a learning experience for me. And I can’t say enough, ACS just has a phenomenal culture. Yes, we fuss and fight amongst ourselves, but we really do have a great culture of teamwork and we like each other. I think that’s been a huge plus in this project.

GK:     Absolutely.

BG:     Yeah. We’ve identified these things and we want to work on those with Scriptorium’s help, just to align ourselves on what we can expect from development resources, not just for the help, but for other groups too. We have e-learning with really great plans, and they stood up an LMS this year, and our marketing department, and even our risk management department is saying, “Hey, can I use this CCMS for policies?” And of course our vendor is going, “Absolutely you can,” and we have other customers that do that.

BG:     So we have a lot of interest coming from other areas, and I’m personally working with IT now to test getting content out of the CCMS into web front ends. We want to do all these things, but we are realizing we don’t have the resources, the human resources, and we don’t have documented processes in place for how we’re going to do it and how we’re going to make decisions. So I really would like to see us make big leaps and bounds in 2020 around those things.

BG:     And I know I’ve read lots of articles. Some people would say, “Well, you should’ve had all that in place,” and in a perfect world you should, but we didn’t. So you go with what you got, right?

GK:     Yeah. And I remember when Scriptorium first came to ACS and we talked about what are the problems that you’re facing, and this was about three years ago. One of the biggest ones that we heard over and over was that all of these different departments just work in independent silos and don’t communicate with each other. And then I remember I went and visited last year and it was a world of difference because we got a lot of different representatives from these different departments in a room together and just had a brainstorming session of how we could all collaborate. And just seeing the difference even over those couple of years was pretty incredible.

GK:     And I think it does speak to what you said: even though you’d been working in these little disconnected silos, you do have that great corporate culture where you all like each other and get along. So I think that goes a long way. There’s not as much of an issue of that sort of tension among departments, or of people saying, “Oh, I don’t want to work with this person over here.” It does seem like people are excited and on board to start moving in that direction of collaboration.

BG:     Right. I think each department here has its challenges. They also have their business goals. I do think business drivers were the key in our case. Even though each department has its own goals, our goals are big, and personalization is a key driver behind how fast we’ve moved with some of this, because it’s about providing context to the user no matter what… and also just user experience. Our UX designers do a really good job of communicating and helping us all think better about the user’s experience. No matter what door users come into our company through, we want to create the right user experience, and content is so important to that. We want to sound like one company, not five.

BG:     We all desire quality. We know we have some work to get on the same page, but I’m very optimistic that we’re headed in the right way and I’m really thrilled with the interest around collaborating more with each other, maybe even setting up teams and committees around strategies, a taxonomy committee or a team, whatever we decide to call it, somebody will come up with a fancy word here and we’ll call it that.

GK:     I think that having that in place already puts you miles ahead of a lot of the other companies that we’ve seen where they try and try to put that kind of initiative in place, but it’s just really, really hard to get out of the rut of all these disconnected silos. And so I think that over this next year or so, that leveraging that excitement is really going to help that last phase come together as you want it to and really help you achieve that goal of all these different content producing departments working together and truly delivering the best user experience possible.

GK:     So where I want to go next is lessons learned. We’ve gone through the overall strategy, how it’s played out, and how it’s going to play out in the future, but now I want to do a little bit of a look back. With 20/20 hindsight, if you could go back three years in time and tell yourself, “Here are some things I wish I had known before taking the plunge, going into DITA and changing all of these content processes,” what are some of those things?

BG:     Well, back to the very beginning, we know now that we should have educated stakeholders better, helping set expectations around timelines for things like conversions and development, and making sure that the various mini projects we needed to be successful were scoped accurately. We also learned that as a content team we needed to collaborate with development, especially our architects, and get their input early on. In the RFP process that we went through, having the head of R&D and a developer on that team was really one of the best things that we tried. It just worked great, because programmers can often help build your business case. They see things a different way than content people do. We haven’t always done that, and even since then, even though we learned that lesson, sometimes you continue to repeat mistakes of the past. But that’s a big lesson that we learned: collaborate with your development team.

BG:     And just to take that one step further, what I now know at the end of this year is that I need to set those development expectations. This is not a one and done. Content needs ongoing development resources, and actually one of the things that R&D is doing is standing up a content team in 2020. They’re not completely dedicated to the CCMS and the portal, but that is one of their projects. They’re also going to be supporting the LMS and a few other content initiatives. So that’s a really positive change and definitely is one of our lessons learned there, is get our devs involved.

BG:     We also learned that the unexpected is going to happen. Have a plan for regaining your focus, and know some alternative resources you can take advantage of. We learned that lesson last year, and this year it’s a lesson we actually did something about. We had a content issue come up with our legacy system, because we still have some legacy products in the Wiki. I took some time and wrote up a research project that needed to be done around that, and then we went to HR and said, “Hey, could you find us an intern, or could you make this a career development focus? Because we have a program here for that.” They weren’t able to locate a resource, but my manager was then able to find a resource that could fix our problem. So looking for those things you don’t think of as resources, just thinking outside the box, worked great for us this year.

BG:     For us, we have supplemented our strategy with training, and actually I think that was a Scriptorium lesson learned, if I remember correctly.

GK:     Yeah, absolutely. I wanted to touch on that a little bit, because I know that earlier you mentioned the learning curve that’s involved with something like this.

BG:     Yeah.

GK:     And I think where that learning curve really became clear to us is that the way we have typically done a lot of these projects, where we help a company move into DITA or any other kind of structured content, is that we get everything converted, we get it in the new system, and then at the end we deliver training. What we learned at ACS, especially with this kind of phased approach to the strategy, is that it really was easier on the writers and the people who had to use DITA to have training delivered in smaller chunks all along. After phases one and two, after we did the initial conversion and got things working in GitHub, we did a couple of days on site where we delivered some training to all of the writers. It was just sort of a DITA 101 basics: here’s how you create DITA topics, here’s how you put them together in maps and publish them, and then here are a few basic reuse mechanisms. I think that laid the groundwork.
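For readers who haven’t seen DITA, that 101-level material boils down to markup like the following. This is an illustrative sketch only; the file name and IDs are hypothetical, though the element names are standard DITA:

```xml
<!-- A minimal topic (file: creating-topics.dita; id is hypothetical).
     Each topic covers one idea and can be reused across deliverables. -->
<topic id="creating-topics">
  <title>Creating topics</title>
  <body>
    <p>Write each topic so it stands alone; maps decide where it appears.</p>
  </body>
</topic>

<!-- A map assembles topics into a publishable deliverable. -->
<map>
  <title>Realm help</title>
  <topicref href="creating-topics.dita"/>
</map>
```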

GK:     But the problem is that at the time it was sort of not real, so to speak, because the writers really needed to see all of that in action with their content, and because that conversion had just recently occurred, it was really difficult to connect us delivering basic 101-level training with what they actually needed to do. So what we realized was that it was really important to supplement that with additional training that went along with each phase. I think this was something that, like you said, Barbara, with the last piece you talked about, was a lesson learned that we actually did something about.

GK:     And we’ve learned that with ACS it makes sense to get through part of a phase and then, if there is anything the content creators are a little shaky on or have questions about, to do training in smaller chunks. I think the most recent example is that we had a lot of the writers, and even people in some of these other departments, saying, “We don’t fully understand reuse and we really need some more training in that.” Now that all the content is in place and you’re starting to set up reuse among that content, we were able to look at real examples, look at what ACS is doing with reuse, and then deliver more targeted, focused training based on that. And instead of doing one big info dump, we did several little one-hour sessions over the course of several weeks.

GK:     I think that approach makes it a little more digestible, and if you are facing a very steep learning curve, delivering the training in those smaller pieces really helps make it easier for people to get over that hump. That’s actually something that we have changed with other clients as well. If we are working with a team that has no DITA experience and no structured content experience, they’re coming into it the way the writers at ACS were: all of them had been working in a Wiki, they had never worked with anything like DITA or XML, and so it was a totally unfamiliar learning curve. We keep that in mind now. If we’re working with a team that has absolutely no background in this kind of thing, they’re probably going to need training delivered in those smaller, more bite-sized chunks, and they’re going to need it delivered all along the way instead of in one big chunk at the end.

GK:     And so that’s been a lesson learned for us is how we approach training with different teams based on the way that their DITA rollout is going and the experience that they had beforehand.

BG:     Yeah, I agree. When I first began to just outline our reuse strategy, it really hit me, “Okay, I know we’ve talked about reuse,” and I felt like I had a grasp of it but I could easily second guess myself and we found that others were doing the same thing. So I thought that was really very helpful to go back and just zero in on various reuse strategies with our examples in hand. I think that was a big win for us.
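For context, one of the basic DITA reuse mechanisms that this kind of training covers is the content reference (conref), where one element pulls in another by ID instead of copying it. A minimal sketch, with hypothetical file names and IDs:

```xml
<!-- warehouse.dita: a topic that holds reusable elements (names hypothetical) -->
<topic id="warehouse">
  <title>Shared content</title>
  <body>
    <note id="backup-warning">Back up your data before changing settings.</note>
  </body>
</topic>

<!-- Any other topic reuses the note by reference; edits to the warehouse
     copy propagate everywhere the conref is used. -->
<note conref="warehouse.dita#warehouse/backup-warning"/>
```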

BG:     Organizations are structured very differently, but our writers are embedded on agile teams and so their bandwidth is stretched depending on the two week sprint that they have. So doing those one hour trainings was definitely a win. It was an easy commitment and it gave them time to go back and reflect and get questions ready for the following week.

GK:     Yeah, absolutely. And I think that’s helped us better understand how to personalize our training approach for different companies, and this has been a real learning experience for us on that front as well.

GK:     And then I think something else we learned firsthand from this particular project is what to do in the face of external, unexpected changes. We talked a little bit about how there was a lot of reorganization at ACS, and obviously that was not something that had been planned at the time we initially put your content strategy together. That really hit home for us how important it is to have some flexibility in your strategy and in your timelines and to be able to accommodate those kinds of changes. In some ways it can be positive. It can present challenges, but after one of those first big rounds of reorganization, that was when you got into not only your CCMS but your portal pretty quickly.

GK:     So in some ways it can be really helpful. Even though I think that reorganization delayed your phase where you were just working in GitHub and stretched that out for a long time, I think it ultimately led to you getting into the next two phases more quickly.

GK:     And so what Scriptorium learned from that is just the adaptability that you have to have and then also being able to support a company through those changes when it starts to affect content, being able to advise a company on how to stick to your strategy or maybe adapt it a little bit as needed in the face of all of those changes and make sure that it doesn’t just throw you completely off track.

GK:     We talk about change management and dealing with change resistance a lot in our strategies, and we usually include some advice to our clients when we deliver a strategy to them about here’s how you might deal with resistance to change or how you might approach unexpected changes. But I think this project in particular hit home to us, how adaptable you truly have to be sometimes.

BG:     Yeah, I would agree. We experienced far more change than I anticipated from the time that you first came in to evaluate our situation. That was definitely a surprise, and I remember you guys saying, “Expect change, expect resistance to change,” but reality is the great teacher of life. It really hits you, and you learn. But it’s definitely been a growth opportunity, and again, we’ve had some very positive outcomes. I will add, too, that one of the problem statements I’ve documented with my developers is that while it was such an awesome surprise to hear that we were going to get the portal, what we now realize, and Gretyl, I have to give you credit for the right words, is that we sort of robbed ourselves of educating all parties on what a portal is, what it does, and what the benefits are.

BG:     So as a result of that, we really didn’t… We were lacking in some technologies that need to be in place to use a portal the way it’s meant to be used. Even now with our portal set up, we’ve got a solution for year end, but we know it can’t be our final solution.

BG:     Again, I can’t stress enough the importance of walking through the discovery phase. I’ll be honest, I had never in my life written an RFP. I could have probably not even gotten started without you guys. I also had a coworker here that used to review RFPs in New York and she was a big help with advice, but it was… Just looking back, it was just such an important part of this, and I’m asked all the time what tool did you pick? I just want to take and pick a tool. And I’m like, “No. You need to [crosstalk 00:22:46]”

GK:     Yeah, exactly. And I’m really glad to hear you say that, because one of the things we’ve stressed time and time again on the podcast and on our blog is that tools should be the very last thing you choose. You should first determine what your business goals are, what things are blocking you from achieving them at the moment, and then what needs to happen to get past those blocks and achieve those goals.

GK:     And then once you do that, then it’s time to say, “Okay, what are some options of tools that might fit?” And then really screen all of those very heavily. Ask as many questions as you can think of from not just a content perspective but an IT perspective, and also maybe you get into like a marketing perspective or training perspective if they might ever use that tool. Anybody that’s going to have a stake in it will have questions about how it works, how it needs to be supported, and how it’s going to support you. Get very, very specific, get very, very thorough answers and have them demonstrate everything that they say that their tools could do so that you can get a firsthand idea of how that’s going to work.

GK:     And I think that going through that vetting process as thoroughly as possible is extremely important. We’ve seen a lot of companies where what basically happened was they got pulled in by the shiny marketing aspect of a tool saying, “Hey, look at this fancy tool that can solve all your problems.” And they say, “Okay,” they buy it and they haven’t really evaluated whether it actually can solve all their problems and then they get stuck using it.

BG:     Right.

GK:     Then they come to us for help saying, “Can you create a content strategy to get us out of this one tool and into some other workflow, and this time we’ll choose it more carefully.” And so that’s why we always advise companies to make that the last thing you do and to be very, very careful. Of course, part of our strategy is that we can help do things like write those RFPs or attend the demos and point out different things that maybe the customer wouldn’t necessarily see. It’s really important to take your time during that discovery phase and really just evaluate every single angle that you can.

BG:     I agree. And it also helps you know what you’re going to need from your own company as you go about implementing. I mean, believe it or not, I think had we really worked that discovery phase with the dynamic portal, we might have had that implemented a little faster because we would have known, “Oh, we’ve got to have this piece of technology in place.” And just to reiterate, when I was writing that RFP, I pulled out a list of every department in the company, and we’re about 400 employees. We’re not real small, we’re not real huge, but I pulled out a list of every department and just mentally went down that list and said, “Is there anything about this department that if my dreams came true, this system could affect them?”

BG:     I basically sat down with almost every department in the company. Even if they weren’t directly affected, sometimes there was an indirect benefit or something that they thought of that I didn’t, and that was a very helpful discovery process for me to see… just to hear what they thought it was and how they thought they could use it. That was great. It was a great process.

GK:     Absolutely. So just, I want to wrap up with one final thing and that is if you could give one piece of advice to another company that’s in the same boat that you were in a few years ago, what would that be?

BG:     Do your discovery process. I really think that’s high on the list, but a very, very close second, if you’ll let me add one, is to really work hard at educating your stakeholders and your contributors, like development. At the end of every year, I think I worked really hard, and I always see places I overlooked, where I could have done a better job in my communication. But you want to have those conversations with your design group, your developers, and your stakeholders, so that you have a well-rounded understanding of the business objectives from different viewpoints.

GK:     Absolutely. That’s fantastic and solid advice. So thank you so much, Barbara, for joining us on the podcast.

BG:     Well, thank you. It’s been a pleasure.

GK:     And thank you all for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Getting started with DITA (podcast, part 2) appeared first on Scriptorium.

Getting started with DITA (podcast, part 1) https://www.scriptorium.com/2020/03/getting-started-with-dita-podcast/ Mon, 09 Mar 2020 13:30:57 +0000 In episode 71 of The Content Strategy Experts Podcast, Gretyl Kinsey and Barbara Green of ACS Technologies talk about getting started with DITA.

“We ran the conversion and got the content in DITA. It wasn’t structured the way it would be if you had started writing in DITA from the beginning. If I ever had another project, I would know to really take that into consideration.”

—Barbara Green


Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about getting started with DITA and taking the next steps forward with special guest Barbara Green of ACS Technologies. This is part one of a two-part podcast.

GK:     Hello and welcome to the podcast. I’m Gretyl Kinsey.

Barbara Green:     And I’m Barbara Green.

GK:     And today we’re going to talk about a case study with the project that Scriptorium did with ACS Technologies, started a few years ago and it’s still ongoing, about getting the company started with DITA. So first thing that I want to ask you, Barbara, is to just give us a brief overview of the company. Tell us what ACS Technologies does.

BG:     Okay, well, ACS Technologies has been in the business for about 40 years. We develop software solutions primarily for faith-based organizations. Our corporate offices are in Florence, South Carolina, but we have distributed teams throughout the country and offices in Greenville and Phoenix as well.

GK:     All right, perfect. And when it came to moving into DITA, what were some of the reasons that you wanted to start looking into changing the way that you were developing content? What were the business drivers behind this decision?

BG:     Well, we were developing our flagship product at the time, which is called Realm, and it began to grow more complex even though we were still in the early phases. It wasn’t developed as a core product with modules that plugged in depending on the features our customers wanted; instead, features were turned on and off based on the packages or experiences that customers required.

BG:     And so I guess about three, three and a half years ago, I realized we couldn’t keep documenting the way we were doing it. In the early stages of that development, writers could add notes here and there to help customers find their paths. But we knew this was not the user experience that we wanted to create, and we also knew that the product offering was growing more complex and personalization was on the horizon. We also spent many hours formatting content.

BG:     So, right away we had four problems identified: we needed to target custom content, we needed to integrate content within the product, we needed better findability for sure (search was a struggle), and we had multiple output types; while we had tried very hard to move to online only, many of our customers still requested PDFs. We also were seeing content reused across various departments more and more, and we really could not prove our value because we lacked a cohesive set of content metrics.
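Targeting custom content per package is the kind of problem DITA addresses with conditional processing: elements are flagged with attributes, and a DITAVAL filter file includes or excludes them at publish time. A hedged sketch, with hypothetical attribute values and file name:

```xml
<!-- In a topic, content is flagged with a conditional attribute. -->
<p audience="premium">This report is included in the premium package.</p>

<!-- premium.ditaval: a filter applied when building the premium variant. -->
<val>
  <prop att="audience" val="premium" action="include"/>
  <prop att="audience" val="basic" action="exclude"/>
</val>
```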

GK:     Yeah, and I remember when Scriptorium went in and helped assess all of these issues, the root cause of all of those problems was that all of the content was being offered in a Wiki. All of the Realm help content was stuck in this silo that made it really difficult to achieve all of those things, especially search and reuse and personalization. And I remember back when we were initially talking about this, we were looking at all of these problems, and DITA seemed like absolutely the logical solution to help solve all of them over time.

BG:     Yes, it did. When we had software products that were more modular in orientation, the Wiki worked okay. I’ll say that years ago the Wiki got us online where our help had not been online. So it had a value at the time. But we really outgrew it very fast.

GK:     Right. And I think that’s something that we’ve seen in a lot of different organizations where some solution that does help you at one time isn’t scalable in the way that DITA is. And so making that transition makes a lot of sense. So I want to get into talking some about how we actually went about getting everything up and running with DITA and the strategy that we put in place to make that happen.

GK:     And the approach that Scriptorium ended up taking with ACS Technologies was what we call a phased approach. So this was determined by a lot of different things including timelines, schedules, budgets, and it also gave us the opportunity to start small at a pilot level and then expand outward.

GK:     So, we set up content strategy in phases where each one built off of the previous one and we have pretty much stuck to those phases. The timeline of those has gotten a little off track from what we’d initially planned. Some phases happen more slowly, others have happened more quickly. But we initially outlined these phases and then just started tackling the plan in that order. And so I wanted to talk to you about how that’s gone and we can get into a little bit what those phases involved and how that played out in reality versus what we had initially planned.

BG:     Right. Yes, I think no one is more surprised than I that we’ve made it through four of the five phases.

GK:     Yes.

BG:     It’s like a dream come true, right?

GK:     Yeah, absolutely. And so, we really just started with phase one. The big push there was getting the content out of the Wiki and into DITA, and that involved a process of conversion. So I wanted to get your take on how that process went and what kinds of things you wish you had known in hindsight.

BG:     Yeah. So, I guess the conversion itself after running several test iterations went very well considering the product we were converting from. The Wiki that we used puts a lot of junk code in the background. So, Lord bless the developer that had to write that for us. One of the big surprises there that we found is every time we had uploaded an image, there was a version of that image in the database. So, that was a lot of fun to try to figure out.

GK:     Oh, wow.

BG:     Yeah. But we ran the conversion and got the content in DITA. Now, it wasn’t structured the way it would be if you had started writing in DITA from the beginning, and if I ever had another project, I would know to really take that into consideration. We don’t feel we made a bad decision converting content, but we have sat around the water cooler, so to speak, and talked about, “Hmm, would it have been easier to just start over?” Because we didn’t have a very large set of content at that point.

GK:     Yeah. And that’s something that I think all companies have to take into consideration. Is it easier to rewrite or restructure or reorganize your content on the front end before you convert or do it after you convert? And it’s a difficult question, especially when you’ve got such a small set of content. Because the good thing about that is it doesn’t take as much time either way compared to if you had hundreds of thousands of topics. But it’s still a big thing to consider to try and make sure that you take whatever approach is going to be the least amount of stress and time and effort on the people that have to do that work.

BG:     Right. And the driving factor for us to convert too was we had been given a timeline and so we felt like if we didn’t convert, there was no way we could meet that timeline. I guess one of my lessons just personally, as an information manager at the time, is push back on timelines.

GK:     Yeah. And I think that as content strategists here at Scriptorium, that’s an important thing too, is to be realistic about timelines, because we see that a lot where you’ll have executive pressure to get something done by a certain date, but then you often have to compromise. Do you get it done by this date and maybe it’s not done quite as well? Or in the same way that you would have done it if you had unlimited time? So, you have to find that sweet spot of what’s the right amount of time to do something correctly but still try to meet your deadlines or meet a schedule or not get things behind. And that’s always the challenge that I think companies face with something like this.

GK:     But as we know, we did get through that phase. And so then phase two was basically an interim phase: using the content in DITA, managing it under source control with Git, and starting to deliver HTML output, particularly a couple of different variants for different customers. The main goal of that phase was basically to stay in it until you reached a critical point of needing a component content management system to manage things like workflow and, especially, publishing all these different content variants.

GK:     And I think this was the phase that we stayed in a little bit longer than we had initially planned because I think we had planned for that to maybe be six months to a year and that ended up going on longer than we thought. So, I wanted to get your perspective on that phase of the project and how things went.

BG:     Yeah, so it did go on longer than we thought it would. It also went on probably longer than it should have from a technical standpoint, but again we’ve gotten through it. One of the lessons learned, and we’ll talk about this more later, is making sure that you have development resources in place.

BG:     Our designer lead at the time and our developers came up with a front end, a single-page app for us to publish to, and I think they refer to it now as a homegrown system. But our version control was in GitHub, and that was a very steep learning curve that occurred at the end of the year, with the holidays and everything. It’s not that writers can’t learn; they do. Writers learn to use GitHub every day. But we did not have a pretty front end for GitHub.

BG:     We had to learn through command prompts and memorize command lines. I think we didn’t know any better. And so that was very technical. It was a steep learning curve and we were not all there all the time learning at the same rate. So we made a lot of mistakes with GitHub and we’d have to grab a developer and get them to help us figure out what we had done wrong.
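To give a sense of what that looked like, here is a rough sketch of the kind of raw command-line Git workflow writers had to memorize. It is illustrative only, shown against a fresh local repository; the file name, commit message, and identity settings are hypothetical:

```shell
# Set up a fresh local repository (in a real project this would be a clone).
mkdir realm-help
cd realm-help
git init
git config user.name "Doc Writer"
git config user.email "writer@example.com"

# Edit a DITA topic, then stage and commit it from the command line.
printf '<topic id="giving"><title>Giving</title></topic>\n' > giving.dita
git add giving.dita
git commit -m "Add giving setup topic"

# Check what happened; this is where mistakes usually surfaced.
git status
git log --oneline
```

Without a graphical front end, every one of these steps, plus pulls, merges, and conflict resolution, had to be typed and remembered, which is the learning curve described above.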

BG:     Over time that evened out into the next year. But our homegrown system didn’t accommodate the complexity that we were adding on a very regular basis to our content. So, our company began to go through reorganization at that time. We had lots of change and change management. Technologies were changing, dev resources were stretched, writing resources were stretched.

BG:     And so sometimes we would make a change, commit it to Git, and we couldn't publish. And we spent lots of time troubleshooting, would have to pull in development resources, and often those issues would get escalated. We would be putting SOS signals out to Scriptorium about what we had done wrong. So, it definitely had its hills and valleys. There were weeks that were extremely frustrating and then there were weeks when it went along pretty well. But it was a rough patch. I personally couldn't wait to get into a CCMS.

GK:     Yeah. And I think what you just described with those peaks and valleys, and with your homegrown system not being able to accommodate the complexity of the product and its content, really embodies that critical point that I mentioned about when it's the right time to move to a CCMS. And I think that one of the big challenges was that you reached that critical point but then still had to wait a little bit longer to get into a CCMS. And of course with that process, you always have to go through evaluating different options and figure out which is the right system for you, which is the best fit for your business goals and your content.

GK:     And so once we finally got the green light to do that, that’s when we moved into that phase. And so now I want to just get into talking about that. So phase three was getting into a CCMS and getting set up where you could have all of the workflow in place and where you could start to deliver more content variants, more of those personalized variants to different segments of your customer base.

BG:     Yeah, so we did go through a formal RFP process and that was really a great experience in hindsight. If anyone asked me the single most important piece of advice, I would say, don’t skip it, do it. So we picked our CCMS and for me that was my highest priority in my role, was to do whatever I needed to do to get that stood up, to get workflows in place, working with our vendor. All the little things that you have to do to make it ready for writers to move into. And we talked about moving day, what would be our moving day? We were ready to move into it, but we did have to wait on development resources to make some changes.

BG:     With our product, if a user's in the product and they click the question mark, it takes them straight to the help page that they need to view. And also our content was being … They needed to get to the right content for the package and version that they were using.

BG:     And again, our complexity was growing. Scriptorium had recommended we have no more than five versions, or filter variants. We were approaching … It would get to 20 and then 25, and I think we ended up at 36 or 37 variants. So, we had to wait for development to make that switch, and when they made that switch, what we were able to do was begin authoring with version control and workflow handled in our CCMS, and they pulled our source files down and continued to run them through the DITA Open Toolkit to produce the various help pages. It did take a little bit of development work to get there.

GK:     Yeah. And I think that what you've mentioned about all those different variants leads into where that phase with the CCMS bled early into the next phase that was planned, which was phase four. We had recommended that once you get to a certain number of variants, it's too much to keep publishing all those different outputs. Whether you're still in Git or in a CCMS, once you get that many variants, we recommended that the best way to deliver content was through a dynamic delivery portal.

GK:     And what was really interesting was that, for ACS, that came right on the heels of getting the CCMS. You got them both back-to-back, and it was an overlap where basically you chose your CCMS and then chose your portal right on the heels of that, instead of having a longer phase in the CCMS. And so I want to talk to you about that overlap between those two phases, what led to that decision, and how that's gone so far.

BG:     Yes. That old phrase, be careful what you wish for, right? One of the best things that we did was our dev resources. We had some dedicated dev resources that did walk through the RFP process of the CCMS. And during that process, the concept of portals was introduced by more than one of the vendors. So there was a lot of excitement from development to get behind that for our sakes. And I really appreciated that.

BG:     And so that led to … Somebody started calculating the number of dev hours they were spending on the current front end we were publishing to, and the writing hours that we were wasting troubleshooting the front end. The business case just got made much faster than I anticipated, and we purchased the portal.

BG:     When I stand here today and think, “Wow, we stood up a CCMS and a portal in the same year,” I can't even believe it. But we did. And it was a lot of work. But I'm glad in the long run that we did that. Now again, what we ran into, I believe, is that we had underestimated the technology that we needed to have in place in order for the portal to do the best job. And I actually had anticipated design resources thanks to … Right now, I can't remember her name, but someone had said, “Don't underestimate design resources.” And so we had anticipated that, and our UX team just did a fantastic job on the designs for our portal. It's beautiful.

BG:     And so our design was in place, but we just didn't have everything. We didn't have the dedicated resources we needed from development, or the priority. We did eventually get to a place where those things were put in place, and our portal was up several months ago. Technically we could publish to it, but we were also still locked into a situation where we had to publish the old-fashioned way, as we call it now.

BG:     And we will be doing user testing on it. We can already tell one of the big wins for us right now: builds take an hour and 10 minutes in the old system. In the portal, they're published within a minute; you can go out and see the new content that you wrote in the CCMS, pushed through APIs to the portal. It's so fast. It's such a time saver.

GK:     Yeah. And that makes a really big difference in terms of your time to delivery. So, that’s a really big accomplishment.

BG:     Yeah. It’s great. There’ll be some other tweaks and things that we want to make obviously, and we’re sort of now going, “Oh, could we do this? Could we do that?” But yes, it’s going to be great to turn that final switch and do away with the old system.

GK:     Yeah. And I think it’s really a good thing to show something like this, which is the ultimate point of your content strategy coming to fruition and you being able to deliver through that portal. Because that was … It’s not the final phase, but it’s a major delivery end point and I think achieving that goal in the timeline that you did, especially considering a lot of the challenges that you faced with reorganization in your company with resources being moved around and things being changed, it’s really I think quite uncommon to see a company stick to the plan that well and achieve the goals within that reasonable of a timeline.

GK:     I know that it’s very common for a lot of unexpected things to crop up. Sometimes you have to adapt your strategy and go in a different direction based on circumstances outside of your control. But I think it’s really impressive that ACS Technologies managed to really stick to the plan and has been able to prove the success of it phase by phase even with all of those external challenges.

BG:     Yeah, I think it is too. And I would definitely agree with you, and caution anyone else, that it is a big challenge to do both of those things in the same fiscal year.

GK:     And with that, I think we’re going to go ahead and wrap up part one. We will be back with part two in our next episode.

GK:     Thank you all for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post Getting started with DITA (podcast, part 1) appeared first on Scriptorium.

DITA projects with a scaled approach
https://www.scriptorium.com/2020/02/dita-projects-with-a-scaled-approach/
Mon, 03 Feb 2020

In episode 69 of The Content Strategy Experts podcast, Bill Swallow and Stephani Clark of Jorsek talk about using a scaled approach with DITA projects.

“The desktop publishing and single user tools are always going to have a much lower price tag than a DITA CCMS will, but there’s a trade off for what you’re getting.”

—Stephani Clark

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

Transcript:

Bill Swallow:     Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode we talk with Stephani Clark of Jorsek about using a scaled approach with DITA projects.

BS:     Hi, everyone, I’m Bill Swallow.

Stephani Clark:     And hi, I'm Stephani Clark.

BS:     And we’re going to talk a bit about a scaled approach to DITA projects. So Stephani, what would you say is the best way to get started with a DITA project without a huge investment upfront?

SC:     Well, I think there are lots of ways that you can get started with a DITA project without a huge investment up front. And I think there's kind of a misconception that DITA is for these large enterprises, and if you're anything smaller than that, then you probably can't benefit from it. But the benefits are there regardless of the size of your organization; it's just deciding how you're going to invest if you want to move into a DITA environment. And so I think one thing to understand is that there's always some investment, but there is an opportunity to decide if that investment is going to be purely monetary or if you want to invest some time.

SC:     And there are a lot of ways now to get started with DITA without the monetary investment: you can use best practices, reasonable tools, do-it-yourself approaches to content conversion or publishing, and self-education. There are a lot of resources out there. And so that's something that I want to explore a little more in our conversation today: what can an organization do if they don't want to lay out a lot of money to implement DITA? So we can look at each of these items, I guess. But what are your thoughts on the best way to get started overall?

BS:     You mentioned right off the bat that regardless of what your approach is going to be, there's still going to be a cost associated with it. Do you want to speak a little bit to that?

SC:     Yeah. So let's look at an example, which would be content conversion. Oftentimes when you're implementing DITA, one of the first steps that you have to take is looking at how you're going to move your content into the DITA structure and get it into a DITA environment. And a lot of companies will do an engineered conversion, and that's great. I mean, those typically come out really well; it's engineered to your needs and your information model, and that's all fantastic. However, doing an engineered conversion can cost quite a bit of money, and some organizations look at that and see it as an immediate barrier to moving into DITA. But if you look at that trade-off, it's going to cost something, either time or money. You can look at easier, do-it-yourself approaches, whether that's using a more generic conversion and doing the cleanup, or, as I've seen companies with smaller sets of content do, a lot of copying and pasting to move into a DITA environment.

SC:     So I think that would be an example of: you don't have to go spend $10,000 or more on conversion; you could spend the hours to get the content cleaned up and in really good shape and get the same results.

BS:     Right. So there’s a mindfulness there between monetary budget and I guess time budget and the amount of resources you have available to get things done.

SC:     Yeah, you do have to have the resources. If you don't want to spend the money, you need to take into account the time that your team, or you yourself, are probably going to spend on some of the DITA implementation. But once you get rid of that huge price tag that some people see and get scared away by, and you look at it and plan it, I think that can be a really good approach for smaller teams or smaller organizations that want to start making that move.

BS:     So speaking of price tags, a lot of the tools out there generally come with some degree of sticker shock when you start looking at enterprise content management systems and so forth. Do you have thoughts around those?

SC:     Yeah, I think that is one of the big barriers as well, and one of the reasons a lot of organizations maybe don't adopt DITA and decide, “Hey, we're going to use these desktop publishing tools that are already available” or “I can produce a PDF out of Word. I don't need this more elaborate system.” And so I think it depends on what system you're looking for. The desktop publishing and single user tools are always going to have a much lower price tag than a DITA CCMS will, but there's the trade-off of what you're getting.

SC:     But I will say, I don't want to make this too much about my organization, but one thing we've done at Jorsek is we just introduced this year some really low tier options. So people can get started for as little as $100 a month in a DITA CCMS. So there are tools out there that don't have a six figure price tag, which makes it a lot more accessible to people, and I'm sure there are others out there as well that have options available at kind of a lower price point.


BS:     There's always the option to really go bare metal and use a source repository such as Git or something like that to at least get yourself started.

SC:     Absolutely. And a lot of organizations are doing that. We're a DITA CCMS, and we've seen a lot of prospects coming to us that are already in DITA, having gotten themselves started completely on their own using Oxygen plus Git. And then maybe in a few years they decide, “Hey, we could probably benefit from a content management system,” and it might make more sense at that point.

BS:     So we talked about tools, we talked about content conversion, what about the publishing side?

SC:     So I think publishing is another one of those barriers to entry for DITA. And that is because most DITA publishing is done using the DITA Open Toolkit, which is an open source publishing engine. It gives you a ton of options; you can publish to any number of different formats. But the caveat there is that there's usually some initial setup required. You have to develop a publishing plugin to apply your styling and all of your rules for how you want the output to look. The bonus is that later on you have consistent output, but the barrier is that, yes, there's an upfront investment, again, whether it's time or money, whether you're doing it yourself or paying someone to develop it, and so you have to look at how you can do that on a reasonable budget.
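
For context, a DITA-OT publishing plugin is declared in a small descriptor file. A minimal sketch of what one might look like follows; the plugin id, transtype name, and stylesheet file are hypothetical, not a real plugin:

```xml
<!-- Hypothetical DITA-OT plugin descriptor (plugin.xml).
     The id, transtype name, and XSLT file are illustrative only. -->
<plugin id="com.example.html5">
  <!-- Register a transformation type that extends the built-in html5 one -->
  <transtype name="example-html5" extends="html5" desc="Branded HTML5 output"/>
  <!-- Hook a customized XSLT stylesheet into the HTML5 pipeline -->
  <feature extension="dita.xsl.html5" file="xsl/custom.xsl"/>
</plugin>
```

After installing the plugin with `dita install`, the new transtype can then be invoked at build time, for example with `dita --input=guide.ditamap --format=example-html5`.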

SC:     But one thing I'll say about DITA publishing, from what I've seen, and Bill, maybe you've seen this as well, is that there are a lot of open source options available to start from, and I'm seeing more options available that aren't just the DITA Open Toolkit and that are maybe easier for people to use. Have you seen the same thing in the industry, Bill?

BS:     I've started seeing, yes, some of these, I would say, more polished starting points popping up out there. I mean, the Open Toolkit is great in that it gives you some initial publishing targets that you can configure. But the catch is that you have to be able to configure them. So, you know, if you're doing PDF, you have to know FO or cascading style sheets for print; for HTML, you need to be able to develop cascading style sheets. But a lot of tools are starting to come with some bare-bones ones, with some more, I don't want to call them visual editors, but they're a good starting point for being able to lay out your output format and then tweak things from there by going into the CSS and fixing things. So there are some options that are starting to creep out there, but they do still require a bit of tweaking to get it just right. Even if it's just a matter of changing colors and fonts and dropping a logo in, it takes a little bit of time to get up and running, but certainly not as much as trying to configure a bare-bones OT plugin on your own.

SC:     Yeah, I think you make a good point that not everyone has the skill set to do it. And so it is important to know that there are some good tools available out there that don't require that advanced level of skill set, such that even the average person could probably get in and do some basic CSS work. We've started using Prince XML somewhat, which is CSS for PDF, and I know nothing about CSS, and yet I can somehow manage to go in there and still change colors and drop in the logo and make it look pretty. So it's a lower entry point, I think, than maybe some of the traditional DITA OT publishing options.
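
The kind of small CSS-for-PDF tweaks described here might look something like the following sketch; the color value and logo file name are placeholders:

```css
/* Hypothetical Prince (CSS Paged Media) tweaks: a brand color on headings,
   a page margin, and a logo dropped into the running page header. */
h1 { color: #005a9c; }

@page {
  margin: 2cm;
  @top-right { content: url("logo.svg"); }
}
```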

BS:     Absolutely. So with regard to getting started, a lot of companies seem to think that it’s going to be a massive undertaking to get things rolling within a company. I know that we’ve seen a lot of companies start doing more of a proof of concept on that end to kind of get the ball rolling. And I was wondering if you’ve seen that as well and what types of, I guess, startup projects you see people implementing.

SC:     We have seen a lot of POCs too, and I think a proof of concept, or POC, is a really fantastic way to get started. It helps you build your business case, it helps you validate any assumptions or ideas that you have, and you can make it really focused around your core goals for why you're moving into DITA. So we have a few different POC options that we provide. For example, a two-book option: say you want to look at reuse, so you get two pieces of content that are similar into the system and start working on them, to see how much you can reuse, how much easier it is to author and maintain, and to prove out the points that you want to see. So I love a POC because it's a great way to prove that a solution will work for you before you make any larger investments in it. What do you typically see with POCs that you guys have been working on?

BS:     Well, a lot of times we do see companies start looking at at least producing one complete deliverable of some kind. So they don't go head first into converting everything over; they focus on making sure that everything is hooked up and working properly before they start outputting content. So they'll pick a pet project, usually. If there's a product development initiative going, especially if there's a brand new product that's coming out, usually they'll align their proof of concept to that. This way, they're not dealing with legacy content, so they don't have to deal with conversion as much, and they can get in and start authoring the correct way for DITA in the tools for their proof of concept, and be able to design the primary (and when I say primary, I mean one) transform or publishing target for that particular deliverable. And most of the time we see that being some flavor of HTML.

BS:     This way it can be either served up on the web or provided in a lighter format with a product or what have you. But the key there is to not focus again on everything. If your proof of concept requires you to convert thousands of topics or thousands of documents or thousands of pages of content into DITA first, that’s going to delay getting that proof of concept out in front of people who need to see it to approve a larger investment. So we usually try to help companies identify a small manageable target that they can hit within a reasonable timeframe.

SC:     Yeah, I like that approach of starting from scratch and having a small reasonable project. I’ve also seen with POCs and one of the things I like about it is it’s a really good opportunity for at least like a core team of users to kind of gain experience with DITA, with the tools that they’re going to use and maybe learn some lessons early on in a low risk environment as opposed to trying to do like a full fledged implementation where there’s a lot more risk involved if you start making any mistakes or you have those learning points along the way.

BS:     Absolutely. And that actually brings up another good point and that’s to not try to inject too many bells and whistles into the design of your content upfront. And by that I mean introducing heavy amounts of DITA specialization, which is a customization of the model or using a lot of what I would say more advanced features because usually those require a bit more thought and a bit more set up before you can truly begin authoring your content. Things like using keys to change the context of your content and using a lot of conditional processing. I would shy away from using too much and focus on one goal.
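
The conditional processing mentioned here is typically driven by a DITAVAL filter file. A minimal sketch, assuming a hypothetical audience attribute with admin and end-user values, might look like:

```xml
<!-- Hypothetical DITAVAL filter: include admin content, exclude end-user content -->
<val>
  <prop att="audience" val="admin" action="include"/>
  <prop att="audience" val="end-user" action="exclude"/>
</val>
```

The file is passed to the DITA Open Toolkit with the --filter parameter at publish time; keeping the number of such conditions small, as suggested above, keeps a proof of concept manageable.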

BS:     If that goal is to produce, as you mentioned, two different manuals for a particular set of content, then focus only on that and using those conditions and not all the other bells and whistles that you might be able to use. Keep everything in mind that you want to use going forward, but focus on the key elements that are going to show the people who really will allow you to grow your implementation that “Hey, this thing is going to work for us.”

SC:     Yeah, I think that’s really great advice and it gets you started thinking about how your larger implementation might look and work and what you want to do, but again, keeping it focused for the POC on some simple goals. You kind of brought up one other point though, which is, when you’re getting into DITA, there’s all of these options. You know you can customize it to anything that you want it to be really. What do you suggest for people that are just starting to kind of learn about DITA, in terms of resources to educate themselves on some of these options?

BS:     That's a good question, actually. We do have LearningDITA.com, which is available if you head over there. It's a 100% free resource for learning about DITA. Up there, I believe there are close to 10 or so courses that you can take, and there are also several recordings available from past LearningDITA conferences. We do an online conference every February. That's a great way to get started and learn about it.

BS:     The other thing that I think is really important is to start taking into account everything you might need going forward. Even if you do have that DITA expertise, take a strong look at your content and start thinking about how all the bits and pieces need to work in this new environment. Because the goal of really moving to DITA is not so much changing tools and changing the format in which you're offering content; it's changing everything about how you're offering it in order to deliver something better and to produce something faster. So look at where the inefficiencies are and start thinking about how you want to resolve them, or at least identify what you want to resolve before moving forward. Because the last thing you want to do when you have an investment in changing tools, regardless of whether you're going to DITA or anything else, is reinvent the same problems in a new tool set.

SC:     Yeah, I think that's a great starting point. I recommend LearningDITA to a lot of people, and that's how I got started in DITA, surprisingly enough: through your training courses. So, great.

BS:     Glad to hear it’s working.

SC:     And for someone that’s just starting to look at developing a content strategy or what they may or may not need that DITA has to offer, do you have any suggestions for them to get started?

BS:     Well, of course the default answer is please contact us, but no, the best way to go about this is again to look at your content and also understand what the best practices are for authoring in DITA. Generally you want to keep things topic oriented and you want to identify your reusable pieces of information and make sure that you are separating those. Generally you want to do an audit over your entire content set and figure out what needs to be moved over and why it needs to be moved over and which pieces are going to be reused and how, and kind of getting your arms around everything that you had in your content before and what you wish you could have done better with it because chances are there’s a mechanism in DITA that will allow you to do something better with that content.

SC:     Yeah, that's a great point. And I would just maybe double down on that if you're doing it yourself. I mean, if you're not using experienced content strategists like the Scriptorium folks, I would say always follow best practices. Don't get too carried away. Try to be a little minimalist, doing just what you need to meet your goals, and use the best practices for the industry. And there are a lot of resources available, whether it's DITA forums or other options.

BS:     Absolutely. I mean DITA affords you a lot of bells and whistles to do some really smart and interesting things with your content, but you have to be mindful to not try to use them all.

SC:     Yes, if you use them all, it can get a little confusing and complicated quite easily, so.

BS:     Absolutely.

SC:      Alrighty.

BS:     Alright, well thank you Stephani. I think this has been a great little talk.

SC:     Awesome. Thanks for having me, Bill. Always nice chatting with you.

BS:     And you.

BS:     And thank you for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.


The post DITA projects with a scaled approach appeared first on Scriptorium.

Unusual DITA outputs
https://www.scriptorium.com/2020/01/unusual-dita-outputs/
Mon, 20 Jan 2020

In episode 68 of The Content Strategy Experts podcast, Gretyl Kinsey and Simon Bate talk about unusual outputs from DITA sources.

“With DITA, it’s incredibly flexible. We can generate almost any type of output that we want to with it.”

—Simon Bate

Transcript:

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about unusual outputs from DITA sources. Hello and welcome to the podcast. I'm Gretyl Kinsey.

Simon Bate:     And I’m Simon Bate.

GK:     Today, we’re going to take a look at some different outputs from DITA that aren’t very common or widely used. I think the best place to kick off here is to talk about what is commonly used. What are the sort of more typical outputs that you see from those sources?

SB:     Yeah. We can actually divide this into two areas: one is the output formats, and the other is the output type itself. Among the usual DITA outputs, we have things like manuals and guides, essentially anything that's paged; we're predominantly talking about PDFs. Then there are other outputs, which are more collections of HTML pages, whether websites or whatever. Then there are the formats themselves. Of course, for the two groups that I've listed here, there's PDF output and HTML output.

GK:     Those are ones that are delivered standard with the DITA Open Toolkit. One thing that we see a lot at Scriptorium is companies that ask us to come in and build customized versions of these outputs for their DITA content. We've had a lot of companies that want one or both of these output types, and sometimes multiple versions. They might have one PDF transform that handles their manuals, they might have one that handles their data sheets or some other smaller document type, and then they might have HTML for all of their content as well, so that they can deliver everything across the board in different ways.

SB:     Data sheets themselves are an interesting jumping off point for a discussion about unusual outputs, because while I consider manuals and guides to be fairly standard output, data sheets often are an odd duck. Often you have a mapping where one DITA topic equals one data sheet. That's not necessarily true, but that's what we see a lot of the time. But data sheets, because of the density of the information in them, often require a specialization or a lot of output class usage, and with that comes a great deal of author training or buy-in. Anybody writing a DITA topic that's going to be converted to a data sheet has to know right from the start that the data sheet is a possible output for this content.

GK:     Absolutely. I think, like you said, that is a good jumping off point into talking about some more unusual or not so typical outputs that you might get from your DITA sources. I want to start off that discussion by talking about some of the benefits of these less typical outputs. What might make a company say, okay, we’ve got a real case here to go from DITA to something a little bit more unusual than PDF or HTML?

SB:     Well, often we find that the clients want to do this because they’re using their DITA already to create what we consider a usual output, but in addition for one reason or another they have a requirement for generating some other kind of output. Part of the desire is to use the same DITA sources to generate both the standard output and to go to some specialized or unusual output format.

GK:     Absolutely. Of the examples that we're going to get into and talk about more in depth, one is something that we've actually covered quite a bit on the podcast, on our blog, and even in our LearningDITA Live presentations, and that is going from DITA to InDesign. One thing that we'll do is include in the show notes for this episode some links to all of the different content that we've produced around that. One of our consultants, Jake Campbell, has done a lot of work on DITA to InDesign, and that's definitely one of those unusual output formats from DITA. But the use case there is that for the most part, the DITA content was going to those usual outputs like PDF or HTML, but there were a few types of documents.

GK:     Maybe you’ve got data sheets or maybe you’ve got a marketing slick or something that needs to be a little bit more highly formatted, highly designed and customized before it’s actually sent to the printer or posted on the website or what have you. In that case, taking your DITA source into InDesign and doing some of those really specific tweaks to the formatting that you can’t get from something more standardized like a PDF transform is a really good way to do that and not compromise your design.

GK:     That’s kind of one of the possible use cases for going to a sort of less typical output format is if you for the most part want to have a standard templatized design for your PDF output, but maybe you’ve got this one set of data sheets or something that does need that extra finessing in InDesign, then you have that transform that takes your DITA sources to InDesign. Then that way you still have all of your DITA in a single source and you don’t have sort of disconnected content being done over here in InDesign and then all the rest of it in a different repository in DITA. You still have that shared repository single source. That’s a really big benefit there. I want to get into now talking about some examples of unusual outputs from DITA.

GK:     Simon, I know you’ve done a lot of work on transforms for these, and so I wanted to just ask about some of the different ones you’ve done and what that kind of process has involved.

SB:     Well, one of them that we can talk about right away is sort of usual and that is EPUB. EPUB, of course, this is standard. Now this is what, version of three of the standard. Essentially what it means is taking HTML output and then packaging it together with a number of other XML files, the document that describes the structure of your EPUB. In there, as well as with a lot of things based in HTML, most of the work is actually in building the files that describe the thing. We’ve already gotten the transforms prepared for doing the HTML transform. Usually it requires not much change for going to an EPUB. Sometimes some CSS work. But for the most part, the actual work is in doing, as I say, the packaging.

SB:     With EPUB, that gets to be one of the problems, because I’ve found in working with EPUB that it’s a very frustrating standard to work with.
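The packaging work Simon describes can be sketched in a few lines of Python. This is a minimal illustration, not a production transform: the file names and metadata are invented, and a valid EPUB 3 package needs more (a navigation document, a dcterms:modified date), but it shows the required pieces: a stored-uncompressed mimetype entry, the container.xml pointer, and the OPF package document that describes the structure.

```python
# Minimal sketch of EPUB packaging: the HTML content already exists; the work
# is wrapping it in the container files the EPUB standard requires.
# File names (content.opf, topic1.xhtml) are illustrative only.
import zipfile

def build_epub(path, title, xhtml_body):
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="EPUB/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="uid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="uid">urn:example:book</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="t1" href="topic1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="t1"/>
  </spine>
</package>"""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and must be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("EPUB/content.opf", opf)
        z.writestr("EPUB/topic1.xhtml", xhtml_body)
```

The rule that trips people up is the first one in the function: the mimetype entry has to be the very first file in the zip and must not be compressed, or readers will reject the package.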

GK:     You mentioned that EPUB is a little bit of a difficult output type to work with. What are some of the challenges that are involved with developing a DITA to EPUB output?

SB:     A lot of them are actually in the sequencing. There’s a particular XML file that describes the order in which things come. It’s been a little while since I’ve touched it, so I can’t remember exactly where the problems lay. But there were issues particularly with dealing with the front matter of the EPUB: trying to get a title page in, trying to make sure the table of contents fit in, and other pagination things around that. That was the really hard part. Flowing the text, most of the text actually, is very straightforward. Some of the problems come with things like titles of the content. For a normal content structure with various nested topics with titles in them, those will fall out okay.

SB:     But when you start introducing things like a topic head in a map, there’s not much provision within the EPUB standard for a title to exist without any content below it. You have a title, then you go straight to the title of the next thing down, that’s rather difficult to deal with an EPUB.

GK:     It sounds like there are difficulties with regard to how EPUB renders the DITA structure, but then one thing that I can remember from testing EPUB output as well is that there’s a bit of a challenge for making sure the EPUB displays consistently across different mobile devices as well. I know that that’s a big consideration if you’re thinking about EPUB output is how much control do you want over how it displays on an iPad versus a Kindle versus any other sort of e-reader or mobile device or tablet because it’s really, really difficult to ensure it looks the same and I would say probably impossible to make sure it looks the same.

SB:     Not just across different devices. There are also a number of different readers out there on some platforms. On Macintosh and on PC, there are a number of different readers. On some less restrictive tablets, say Android, there are a number of readers you can find. For Apple, there are a handful of readers. The Apple Reader itself has its own quirks. When you test it, you have to look out for all of those things. Kindle actually brings up a whole different set of problems because the Kindle format is not quite the same as the EPUB 3 standard or EPUB 2 even. You have to make additional changes, additional modifications to go to Kindle.

GK:     Yeah. I think those are all really important things to think about. Sort of with all of these unusual outputs that we’re talking about, there are sort of different risks and different considerations to make sure that you think about before you start building those outputs.

SB:     That’s right. It’s not just the transform. It’s the testing. For some of these formats, that can consume a great amount of resources.

GK:     Absolutely. What’s another unusual output that you’ve worked on?

SB:     I think through this discussion we’ll be diving deeper and deeper into weirder and weirder outputs. The next one again can be expected to be a normal output in some sense, and that is LMS, or learning management systems. Often people want to go either from normal DITA (that is, topic, concept, task, and reference) or even the Learning and Training Specialization into content that’s consumed by a learning management system. Of course, there are dozens and dozens of learning management systems. There’s a wealth of experience to be had there, and we haven’t even touched much of it at all really. One thing that’s used a lot in learning management systems is the SCORM standard.

SB:     The SCORM essentially allows you to build a package, which is transportable supposedly across learning management systems. Although our experience with SCORM is the implementation or actually putting the content out into SCORM, actually you have to have the learning management system or the JavaScript that’s driving it in mind while you’re building the SCORM.

GK:     Yeah. One use case that I wanted to bring up with regard to learning content going into a learning management system is actually LearningDITA.com.

SB:     That’s right.

GK:     That is, as most of you probably know, Scriptorium’s free e-learning resource for DITA training. We have actually, or I should say Simon has, developed the process that takes content from DITA into the LMS that we use for that.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

SB:     That’s correct. For LearningDITA, we used the Learning and Training Specialization for all the sources. In fact, if you want to, you can go into Git and access the Learning and Training sources yourselves and see what we did with it. Now, moving it into the learning management system was an interesting thing, because first we had to find a learning management system. We found one that’s actually a plugin for WordPress. WordPress itself brings up its own issues. The transforms themselves, we had to do several things. One is we had to figure out how the learning management system fit into WordPress and what the files looked like for that.

SB:     Now, when you’re looking for learning management systems, if you’re going to be doing anything like this, one important consideration is to think about the import: the import limitations, or whatever facilities there are for import in the LMS. It turned out that what we needed to do was to craft some files in a particular form and then be able to import them into WordPress. A lot of the work there really was reverse engineering. We took a look at WordPress import and export files and found the important parts: the pieces that we needed to preserve, what we could pull in from the topics’ metadata, and what we actually had to specify when we were doing the import.

SB:     Then we created the transform to take our DITA and transform it to the XML, which we can then import into WordPress. Now, in addition to the actual topics themselves, the learning management system managed the questions. I’m sure many of you have been in LearningDITA and you’ve experienced the quizzes at the end of each of the sections; those quizzes are managed by the learning management system. There’s an entirely separate file format that we had to come up with for that. We had to, again, reverse engineer how the learning management system needed its questions.

SB:     Then there’s also a complex process that we go through to first import the topics themselves into WordPress, then a separate process for importing the questions into the learning management system, and then tying the whole thing up with a bow.
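The "transform DITA to XML you can import" step might look roughly like this in Python. The import element names (item, title, content) are purely illustrative; they are not the real WordPress import schema, which had to be reverse engineered for the actual project.

```python
# Sketch of a DITA-topic-to-import-XML transform: parse the topic, pull out
# the title, and serialize the body's children as the item content. The
# target element names are hypothetical stand-ins for a reverse-engineered
# import format.
import xml.etree.ElementTree as ET

DITA_TOPIC = """<topic id="intro">
  <title>Introduction to DITA</title>
  <body><p>DITA is an XML architecture.</p></body>
</topic>"""

def topic_to_import_item(topic_xml):
    topic = ET.fromstring(topic_xml)
    item = ET.Element("item")
    ET.SubElement(item, "title").text = topic.findtext("title")
    # Serialize the body's child elements as the post content; the markup is
    # escaped into the text node, the way many import formats expect it.
    body = topic.find("body")
    content = "".join(ET.tostring(child, encoding="unicode") for child in body)
    ET.SubElement(item, "content").text = content
    return ET.tostring(item, encoding="unicode")
```

A real transform would of course run over a whole map of topics and carry metadata through, but the shape is the same: read one XML vocabulary, write another.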

GK:     Let’s say you’ve got a situation where you have content creators in the training department working in DITA, and they need to create output that goes to a learning management system. Let’s say your technical content team is sharing that same DITA source, and maybe some other departments have content in that same DITA repository too. What are some of the considerations that the training team would need to keep in mind when it comes to choosing an LMS, so that that output can be as efficient as possible?

SB:     That’s kind of hard. I think a lot of it gets back to my initial statement that the import facility has to be there. Much of the issue with the learning management system itself is just mapping from the DITA into what you can move into the learning management system. DITA is incredibly flexible; we can generate almost any type of output that we want with it. I can’t think of any limitations actually in the authoring, because almost everything has to be done in the transform itself. Now, once you’ve selected the learning management system, that selection may come with certain limitations: certain things that are possible to do in learning management systems, and some things that are not. That then is going to feed back into what the writers, or your content creators, can do.

GK:     Yeah, and that’s why it’s so important I think to keep up that communication amongst everybody that’s going to be using your DITA sources and contributing to it and making sure that what one team does doesn’t affect something that another team’s going to do in a negative way and that everything’s working together as sort of this DITA ecosystem. Speaking of training materials and training content, you’ve also developed another output type, which is DITA to slides. I wanted to talk about that a little bit.

SB:     Yeah. That actually falls into two different groups. There was the initial attempt that I made a number of years ago. As part of my work here, I do a lot of training. I thought, well, the training content itself ought to be in DITA. That’s fine for putting together the sheets that I work from when I’m doing training, but then we’d also want those same materials to be presented in slides on the screen while I’m doing the training. It occurred to me that I could write a transform. HTML seemed to be the obvious choice. It was fairly flexible and it could be used almost anywhere. We can take this content and transform it, and I can generate my slides and I can generate my handouts and other training materials all from the same content.

SB:     There were some things that I had to do, and this actually gets into the second aspect of doing training or slide material: there has to be some system of indicating what you do want to have on the slides and what you don’t. With my first slide transform, what I was able to do was make certain rules about where things appeared in bulleted lists, whether it was in a paragraph within the list item or not, and then add some output classes to say, this is not for the slides, this is not for the printed output. Then, using those rules, I could generate materials for both. The second effort at doing slides is a little bit more complex. This was at a client request.

SB:     They had a bunch of training materials, and they needed to have them not just as handouts; they wanted to use PowerPoint. We will talk about going to Word a little later, but we’ve had some previous experience in trying to go to Word or Office packages. This time around, it occurred to me there were two things. One is that in my experience, when dealing with almost anything in Office, hierarchy is mostly ignored. You have to throw out the hierarchies; that is, you have to flatten your structure. But the other thing was that in our other effort, we went directly to the XML, the Office XML format. That turned out to be a really, really hard thing to do.

SB:     This time around it occurred to me, well, Microsoft Office has a great Visual Basic library for loading things into PowerPoint files. What I did was create something that’s a two-step process. The first step is to take the DITA and flatten the structure. While I’m flattening, I can do a lot of pre-processing; I can identify things. The output of the pre-process is essentially built with slides in mind. As I’m building this out, I can build out decks of slides from the content and tag things accordingly. This output format, by the way, is not XML, and I’ll get into that a little bit later. With the output format I can put out all the content that’s going to go into the slides, and then I take that output format and run a Visual Basic script on that output file, on that flat file.

SB:     The Visual Basic then actually finds the PowerPoint template, opens the template as a new document, and then starts to load content into that template slide by slide, based on the content of the flat file. Because it’s based on the content of the flat file, and because I found the parsing facilities in Visual Basic very, very restrictive, I just used a plain text file that has some simple delimiters. It would be much, much nicer if I could have used XML, but unfortunately I couldn’t. I looked into a number of different ways of using XML in Visual Basic, and it just wasn’t workable. I can parse the file with my simple rules in Visual Basic and load it all into the slides.
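The episode doesn’t show Simon’s actual delimiter format, so this sketch invents one: each line of the flat file is a record type and a value separated by a pipe, and the parser groups bullets under the most recent slide. This is the kind of simple, line-oriented parsing that stays practical even where XML parsing isn’t.

```python
# Hedged sketch of a flat slide file and its parser. The "SLIDE|" / "BULLET|"
# record format is invented for illustration; the real project used its own
# reverse-engineered delimiters.
def parse_slide_file(text):
    slides = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        kind, _, value = line.partition("|")
        if kind == "SLIDE":
            # Start a new slide deck entry.
            slides.append({"title": value, "bullets": []})
        elif kind == "BULLET" and slides:
            # Attach the bullet to the most recent slide.
            slides[-1]["bullets"].append(value)
    return slides
```

A script on the other end (Visual Basic, in Simon’s case) walks the same records in order and drops each one into the PowerPoint template slide by slide.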

SB:     One of the other things that I found as I was working in Visual Basic was that there are actual differences between how Visual Basic behaves in Windows and how it behaves on Macintosh. I do a lot of my development work on Macintosh, but the client was on Windows, and we knew that was going to be an important target for them. We started testing in Windows and found that things I had developed on Macintosh just did not work under Windows. Interestingly, while I was trying to develop some other things in the process, I learned the lesson the other way around as well. I was doing Google searches, trying to see how in Visual Basic I could create a file selection dialog, say, to find the file that we’re going to be loading into the template.

SB:     I could find lots of things about how to do it on windows. I thought, well, it should work just the same on Macintosh. It turned out it didn’t. On Macintosh, actually I had to write a whole separate routine for locating the file and loading that file into the script.

GK:     Yeah, and I think that really gets back to some of the points that we made earlier when we were talking about EPUB and testing across different platforms and different readers, and then the same thing with going to something like SCORM and testing across different LMSs. It’s going to be different across different systems, and operating systems as well. That’s something to keep in mind if you have to build one of these types of outputs: consider, are you just using Windows, or just using Macintosh, or do you maybe have a use case for both? That’s all going to play an important role in how much time and how many resources are going to be involved in developing an output like this. Earlier you mentioned that you had done some work for not just PowerPoint, but for Word as well.

GK:     Tell us a little bit about that and kind of how a DITA to Word transform works.

SB:     Right. To recap what I was saying initially was that we had gone from DITA straight to the Word DOCX XML format, which turned out to be very, very difficult to work with. It’s very, very difficult to test, very difficult to get things right. It expects things in a very particular order, and it expects all the content to be flattened out. We were successful. We managed to complete the project going to Word. But if we were to do it again, we would certainly use the Office libraries and again use Visual Basic. The nice thing is now that we’ve got a format that we can use for flattening the file, the text file that I’ve developed for PowerPoint will actually work very well for Word. In the future if we need to go to Word, we’re all set and ready to go with that.

GK:     That’s really great, because I think it’s pretty… I don’t know if common is the right word, but I think it’s pretty smart, if you’ve got a lot of people using Microsoft Office products, to have an output that goes to Word and an output that goes to PowerPoint that both use that Visual Basic starting point. I think that makes a lot of sense if that’s a need at your company, if you know that you’ve got a lot of people who need to take that DITA content into various Microsoft Office programs, then having that Visual Basic beginning point is really a solid plan.

SB:     There’s actually a third Office product, which leads into the next area of things that I was going to discuss, and that is Excel. Because an Excel spreadsheet, of course, is nothing more than a database in a matrix. We’ve done a number of things converting our DITA content to database formats of one kind or another. The other formats that we’ve gone to for databases are all fairly much the same, and because they’re all text formats, fairly easy to go to. That includes comma-separated values files. There we’ve often had people who say, well, we need a table converted to a comma-separated values file so that we can load it into a database, or load it into Excel. We’ve also done a number of things using JSON as our output format.

SB:     JSON is very nice because it’s a nicely structured format. It’s a little bit more forgiving than, say, comma-separated values are, particularly when you’ve got content that might have commas in it. Also, JSON is readily interpreted by a number of tools, including JavaScript. In fact, JSON was based around JavaScript, and for that reason it is a very, very useful format to create data in. In almost all of these cases, when we need to go to a database, we’re starting with DITA content in tables. It may be a pricing table that’s in a data sheet in DITA that also needs to go into some database or other. Often it’s lists of standards, lists of product availability, what the serial numbers associated with a product with particular specifications are: those types of very catalog-like things.
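The comma problem Simon mentions is easy to demonstrate. In this sketch (the pricing rows are made up), the CSV writer has to quote any value that contains a comma, while JSON needs no special casing at all.

```python
# Emitting the same tabular data, e.g. a pricing table pulled from DITA, as
# both CSV and JSON. Row contents are invented for illustration.
import csv
import io
import json

rows = [
    {"part": "A-100", "description": "Widget, large", "price": "19.99"},
    {"part": "A-200", "description": "Widget, small", "price": "9.99"},
]

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["part", "description", "price"])
    writer.writeheader()
    writer.writerows(rows)  # values with embedded commas get quoted
    return buf.getvalue()

def to_json(rows):
    return json.dumps(rows, indent=2)
```

The CSV output is only safe because the csv module quotes "Widget, large" for you; hand-rolled string joins are where the comma trouble starts. The JSON round-trips cleanly through any JSON-aware tool.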

GK:     We’ve talked a lot about taking DITA content into some of these Microsoft-based products like PowerPoint and Word and Excel, but what about going in a different and I guess more visual direction and taking DITA into SVGs or Scalable Vector Graphics?

SB:     That’s actually an interesting thing, and for me a very fun thing to do. I like playing with graphics. I like playing with SVG. SVG itself is nice because it’s an XML format, so we’re going from DITA, which is XML, to another XML format, which is always a whole lot easier than trying to go to something else. We’ve gone from DITA to SVG for a number of different output types. Some of them are things like diagrams of registers in chips. We have content in a table, and we can take that tabular content, which specifies a bit offset position, a width for the field, and what the content of that register is, what the register’s name is, or actually the field’s name, and then lay those things out into an image that looks vaguely descriptive of the way that register appears.

SB:     This was incredibly useful to one of our clients because they had thousands and thousands of these things. The information was extracted first from a database and then moved into XML in DITA and then we pulled it out and were able to format this.
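A register diagram transform of the kind Simon describes might look roughly like this: take (bit offset, field width, field name) rows from the table and lay them out as SVG rectangles. The field data and pixel dimensions here are invented; the real project read these values out of DITA tables.

```python
# Hedged sketch: render bit-field rows as an SVG register diagram.
# Field tuples are (bit offset, width in bits, field name); all sizes are
# illustrative defaults.
def register_svg(fields, reg_width_bits=16, px_per_bit=24, height=40):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{reg_width_bits * px_per_bit}" height="{height}">']
    for offset, width, name in fields:
        # Bit 0 conventionally sits at the right-hand edge of the diagram.
        x = (reg_width_bits - offset - width) * px_per_bit
        w = width * px_per_bit
        parts.append(f'<rect x="{x}" y="0" width="{w}" height="{height}" '
                     f'fill="none" stroke="black"/>')
        parts.append(f'<text x="{x + w / 2}" y="{height / 2}" '
                     f'text-anchor="middle">{name}</text>')
    parts.append("</svg>")
    return "".join(parts)
```

Because each field name lands in its own text element, the labels stay separate from the geometry, which is exactly the text-in-one-layer property that makes SVG friendly to localization.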

GK:     Yeah. I’ve seen a lot of cases too where you have parts diagrams where different pieces have to be labeled. For localization purposes, they wanted the text to be in one layer and the image to be in another. That’s where SVGs were really, really helpful. We’ve seen that as a major use case for going from DITA to SVG. We’ve also seen things like with training content, if you’ve got a hotspot style question or something where you’re matching up pieces of text to pieces of an image, then that’s where SVGs can be really helpful as well to again have that separation where your text is in one layer, your image is in another. That works for both that and for localization purposes. There’s a whole lot of benefit that you can get out of having SVGs as an output format.

SB:     That’s correct. It works not just in the SVG, but actually in the DITA sources themselves because we have one client where they have some massive tables that describe in detail how you put together a particular part number that describes a particular thing. Again, there are fields where there are values, so an A represents a yellow one and a B represents a green one, things like that. We can take that information from the DITA content and create a diagram that shows again how a person making an order would put together the part number for their appropriate piece of equipment. One of the things we can do at the same time is we can generate a list on the side of what are the actual names of these things.

SB:     Now, this information comes from DITA and the DITA can start out in English as the primary language, but also the DITA then can be translated. We can take in that translated DITA and then convert it into just the same table, but only in German or Swedish or Spanish or whatever we want to choose at that time.

GK:     We’ve talked about SVGs as something where you’re going from DITA, which is one XML format, to another. One thing that I wanted to address is going from XML to something else, instead of necessarily DITA to something. Are there any cases where you’ve just gone from XML to another format, or maybe XML to XML, in a similar way to what you’ve done with SVGs?

SB:     Yes. Getting back to some of our earlier examples, we have gone from DITA to XML when dealing with training materials, because again, we’re dealing with content in an LMS. The LMS’s input isn’t necessarily going to be DITA. In fact, it usually isn’t DITA, but often the LMS will take its input content in an XML file. We have to go to that XML file to do that.

GK:     I want to attempt to wrap things up with a final set of questions, or considerations, around unusual outputs. What advice would you give a company that already has PDF or HTML or something more typical, and is thinking about adding maybe DITA to PowerPoint or DITA to InDesign, something a little bit less common? What advice would you give them regarding the time and resources involved, and some of the challenges that they might come up against that they didn’t encounter when they did their more typical outputs?

SB:     Well, there’s a great deal of crystal ball time, of course. The real problems you’re going to find are when you hit a brick wall: you work on something, and then you find that actually there’s no way to do it, or it’s going to require something different. Often that something different in DITA translates back into either using an output class or creating a specialization. If you can, look at the formats, look at where you’re going and what some of the requirements of that format are, and whether there are going to be things that may be difficult to get to from DITA. You can do some of that work early on, but a lot of that experience, a lot of that learning, is going to occur when you’re actually trying to go into whatever format you’re targeting.

SB:     I would say in general, pad your estimates. Build in a lot of extra time to allow for dead ends, to allow for the cases where you thought your implementation was going to go in one direction, but you found out eventually that you had to do something different.

GK:     Yeah. I would agree 100%. I think that going to a less typical output does require a whole lot more time and resources for testing, and not just testing, but testing the limits of what’s possible. It’s important to think about that and not say, oh, you know, it took this many hours to develop PDF, so it’ll be about the same for InDesign. That’s absolutely not the case at all. You really have to think about: what are you trying to do? What are the possible limitations that you’re going to run into? What are the compromises that you’re willing to make when you do run into a limitation, because it’s pretty much inevitable? And how much budget or time do you have to dedicate to developing that output?

GK:     Those are all really important things to think about when you’re still in the planning stage before you get too deep into it.

SB:     That brings to mind another thing: part of your work is going to be training your authors, because there are often going to be things, whether it’s an output class or a specialization, where the authors are going to have to know about particular decisions you had to make, things they have to do, things they have to do in a particular way in order to get it to work. You’d like it to be perfect, where you can author anything in DITA and convert it into whatever your target format is, but the truth is you will find limitations, and you will have to work around those limitations, and then you have to communicate how to work around those limitations to your writers.

GK:     I think that gets to a point too about the importance of your different outputs, about what the priority is for you. Because if you have a very, very strong business need to go from, let’s say, DITA to Word, and that’s a much more atypical output than DITA to PDF, but it’s something that’s very, very important for you, then that has a lot of impact on how you’re writing and structuring your DITA content. It cuts both ways, and you can’t just take one particular method or standard of writing your DITA content and then say this is going to work across the board for PDF and HTML and Word, PowerPoint, InDesign, whatever. You have to think about which outputs are the most important to us, and then what needs to be in our DITA content model to support that.

SB:     Yeah. On top of that, I would say the last thing on testing, or on trying to come up with your estimates, and we’ve hit on this a number of times already, is just that there are differences across platforms and differences across tools. If you’re going to be using a number of different ones, if you’re going to be using several different platforms, you have to make sure that’s part of your testing plan. You also have to plan for that in your schedule, to know that you’ll have to add extra time to build in those accommodations for other platforms.

GK:     Absolutely. I think, to wrap things up, our final parting words of advice would be something along the lines of: these unusual outputs can do a lot of really cool and interesting things for you, and they might satisfy some really important business requirements, but that comes with the caution of plan ahead. Really, really think about the considerations, as you would with anything content-wise, before you go ahead with those types of outputs.

SB:     Yeah.

GK:     Well, thank you so much, Simon, for joining me today. And thank you for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Unusual DITA outputs appeared first on Scriptorium.

Content accounting: Measuring content value (podcast, part 2)
https://www.scriptorium.com/2020/01/content-accounting-measuring-content-value-podcast-part-2/
Mon, 06 Jan 2020

In episode 67 of The Content Strategy Experts podcast, Kaitlyn Heath and Sarah O’Keefe continue their discussion on measuring content value based on accounting principles.

“Language evolves. Your content actually needs maintenance, just like your house.”

—Sarah O’Keefe


Transcript:

Kaitlyn Heath:     Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In part two of the content accounting podcast, we focus on how to apply the concept of a balance sheet to content. Hi, I’m Kaitlyn Heath.

Sarah O’Keefe:     Hi, Kaitlyn, I’m Sarah O’Keefe.

KH:     And today we’re continuing our conversation about balance sheets and content accounting. So, tell us what a balance sheet is, as applied to content accounting.

SO:     So a balance sheet, at least for me, was the thing in accounting that took the longest to understand, because they make my head hurt. But basically, for a balance sheet, if you take something… let’s start with a house before we move on to content.

KH:     Sounds good.

SO:     Yes. So, if you have a house, you own a house, it’s worth $1 million but you have a mortgage on the house for $900,000. And so you have an asset that’s worth $1 million but you have a mortgage, a liability, which is $900,000, which then implies that your equity in the house is $100,000.

KH:     Okay.

SO:     So, the balance sheet is called that because it always has to balance. Your assets always have to balance with, or equal, your liabilities (your debts) plus your equity.

KH:     Okay.

SO:     That’s the concept of a balance sheet. So, when you do this in regular accounting, you have your bank accounts under assets, you have your loans under liabilities, and then the equity is the difference between the two, basically. And it all kind of works out.

SO:     Now, we think about that from a content point of view. Right? Okay. Well if you have a content balance sheet, what’s your asset?

KH:     The content.

SO:     The content. Except is it really? What if your content is really bad?

KH:     Okay. So maybe that’s a liability.

SO:     So maybe it’s a liability. So broadly, yes, you have your asset, which is your content. We hope it has a positive value.

KH:     Right.

SO:     And then you have your liabilities, whatever those may be, we’ll talk about that. And then sort of the difference between the two is your overall content equity.

KH:     Alright.

SO:     Alright, so on the balance sheet you’re going to… Oh, and by the way, an asset is defined as something that has long-term value to the business. So I would argue that for example, a tweet…

KH:     Is not long-term.

SO:     Probably not an asset, right?

KH:     Right.

SO:     But maybe your process or your system of extracting tweets and putting… You schedule them, you put them somewhere, you have a whole strategy for how you do that. That might be a long-term asset, just not necessarily the individual tweets. And then of course, if a single tweet goes viral, then all bets are off. So let’s just set that aside. Rarely an issue for those of us that live on the technical content side, the viral tweets. So we’ll just move on.

SO:     Content, product information, product overviews, product descriptions, technical documentation, knowledge-base articles. All those things are content, white papers, and they have value, we hope. You have the actual systems that you use to produce the content, a content management system, a delivery system, a content portal, the branding you’ve implemented on the system, the work that you’ve done to make your website look nice or behave properly. You have supporting assets like glossaries. You wrote a definition of a particular term where you put a lot of work into that, you use it in a lot of places, it’s an asset. Taxonomies are an asset or could be. Content models, you have a standardized way of writing knowledge base articles. You have a standardized way of writing white papers. You have a standardized way of writing how-to information. Those are all potentially assets.

SO:     And then on the localization side, most of those things but also translation memory, your translation management system, whatever that kind of pipeline looks like. Translation memory is the big one, right? All those pairs where you have the original, let’s say, English sentence and the target sentence in German, you can reuse it, it’s awesome. So those are all assets.

KH:     Okay.

SO:     Okay? But assets tend to depreciate. So if you think about your house, you have to do maintenance on it or it’ll eventually fall down.

KH:     Yes.

SO:     Okay. Well, it turns out the same thing is true for content, which is kind of horrifying and we don’t think about it that way. But think about, in the olden days, when we used to explain to people, if you go look at old technical documentation, there’ll be 20 pages up front that explain how to use a mouse.

KH:     Right.

SO:     This is how to single-click, this is how to double-click, this is how to right-click. Because the assumption was people didn’t know how to do it. You had to include it in your documents. Well, these days, you produce something like that, people are going to look at it and go, “Oh, hello, 1990s.” And that’s bad. So, your content has to be refreshed and updated periodically or you run into trouble, especially in countries or in languages that are newer to technology. Language evolves. So, the term that was used for computer 10 years ago might not be the term you use any more.

KH:     Interesting.

SO:     Or you see a lot of reference to cellular devices and now everybody talks about mobile phones, that kind of thing. So, you have to be careful because your terminology can actually become outdated over time.

KH:     Your content might need maintenance.

SO:     Your content actually needs maintenance, just like your house.

KH:     Right.

SO:     So that’s something to consider, right? That it might depreciate. If you want to make content valuable, it needs to be accurate.


KH:     Right?

SO:     It should not be wrong. Wrong is bad. It should be relevant. Again, cellular phones and how to double-click, I mean, it might be accurate but it’s kind of like, Ugh.

KH:     A little bit useless.

SO:     Little bit useless. Targeted to the right audience, useful to that audience. So you’ve thought about your audience and you’re actually writing stuff for them that makes sense to them. Now, you want to be careful with this because there’s an awful lot of like, “Ooh, let’s pander to a particular audience and we’re going to be all hip and cool and whatever.” It never works, don’t do that.

KH:     That’s potentially isolating.

SO:     Oh, it’s terrible. It’s like, “Oh, look, we’re going to sell to millennials.” It’s like, “But you sound like idiots.” Okay, so, useful to the target audience, doesn’t make them laugh at you.

SO:     Purpose. It has a purpose and it accomplishes that purpose. This KB article is going to explain how to do a thing. And when you get to the end of the article, you’ve actually done the thing.

KH:     Ideal.

SO:     Yeah. I mean, if you get to the end of the article and you’re like, “I don’t know what I was supposed to do.”

KH:     That’s not good content.

SO:     I wrote an article on how to do it. So, what are you complaining about? And you’re like, “Your article makes no sense. It’s right but I couldn’t do it.”

SO:     Longevity, if you write a white paper or if you write, again, a how-to, those are typically going to have more longevity than a tweet.

KH:     A tweet.

SO:     Tweet-tweet. And if you write it in a way that’s localization friendly, that’s helpful because… if you don’t, it’s more expensive to translate it. So, there’s downstream impact, right?

SO:     And then you want to think a little bit about, can you reuse it? The canonical example of this is a product description.

KH:     Right.

SO:     You write it once, you use it everywhere where you’re talking about that product. But in addition to that, glossary terms? You really don’t need to define standard deviation more than once.

KH:     Right.

SO:     You define it once, you use it everywhere in your company, assuming you’re doing things related to math and statistics, in which case, I’m really sorry.

SO:     Variants, we see this a lot in technical content where you have two very closely related products. You can write a how-to but the how-to is 95% the same for product A and B. There’s just one little step that’s different. Okay. Split out that step. Put it in some sort of a variant label. And that way, you can produce both product A and product B from the same content. So you reuse it.

KH:     And this might be applicable to the audience topic earlier, if you might have two audiences.

SO:     You could have two audiences.

KH:     You can do the millennials separately.

SO:     Okay. You can write for the millennials and I will write for the not-millennials. But no, you’re right, you can potentially write for different audiences but kind of embed them all in a single document.

KH:     Right.

SO:     Or maybe your beginner-level audience gets additional contextual information and your advanced-level audience gets just the steps, but could expand them and get more information.

SO:     Multichannel output is the other one and localization. Those are all kind of the multipliers that you might be able to address to make your content more valuable. So, a tightly written piece of content targeted at multiple audiences with that information labeled, potentially with variants that is ready for localization, long-term, useful piece of content is very valuable.

KH:     Okay. So then what’s the next part of the balance sheet?

SO:     So now we have liability. We had fun with assets. We’re like, “Yay. Assets, our content is so great.” But then we have liabilities. And there’s a bunch of stuff here. But really, all of this boils down to the concept, and I so wish I’d come up with this but it wasn’t me, of content debt. In the same way that you can have technical debt, which essentially is, “Hey, we need to do this but we haven’t gotten around to it.” You can have content debt. We should be doing this but we haven’t. Your content is hard to use, bad experience. If it’s inaccessible, that means there’s an entire audience you’re not reaching because they can’t consume your content. This podcast is audio but we also provide a transcript. And the transcript is screen reader-accessible, right? So we’re trying to cover a couple of different ways of accessing this information and not saying, “If you can’t listen to the audio, that’s it.”

KH:     Right.

SO:     Now, and there’s a lot of people that like looking at the transcript who can potentially hear, they just don’t want to spend…

KH:     They don’t want to spend.

SO:     20 or 30 or 50 minutes on this podcast. So, bad experience, right? The information is unattractive. It’s hard to consume on the page because it’s hard to understand because the layout is terrible. You’re using terrible colors that don’t have nice contrast. The font is tiny and not readable by anybody over the age of 35, that kind of thing.

SO:     Okay. Information is wrong or just out of date. It used to be right but then there was a product update, you didn’t get around to it. That’s canonical content debt.

SO:     Wrong audience. It’s too difficult to understand or it’s actually wrong. And shout out to Char James-Tanny who had a great example of this where, there was information that she was given, medical information that she was given that said, “These are the things that you need to do.” And she read it and she said, “This is wrong for me.” She knew they had given her sort of the generic version and she needed the specific version. And because she had educated herself on what was going on, she knew that, “Nope, Nope, this is not what I should be doing and in fact, these things will end very badly for me.”

KH:     Oh, that’s a terrible time to have the wrong information.

SO:     And it was kind of a high-stakes situation. She knew better but they just gave her the generic information instead of giving her the, “Oh, you’re this kind of patient so we’re going to give you very specific information.” So that one.

SO:     Voice and tone, it’s probably not good to be cute about something that’s life-threatening. Depending on your audience and their demographics, you might want to think pretty carefully about your voice and tone. Also, I worry a lot about people who are, let’s say we have English content, non-native English speakers who are reading something that is so cutesy and has all this like, “Hey y’all, what’s up.”

KH:     Impossible to translate.

SO:     Impossible to translate and maybe not easy to understand if your grasp of English is not perfect.

KH:     Okay, right.

SO:     So that’s something to consider but also, if you’re documenting a game, fine. If you’re documenting a medical device, not fine.

KH:     Okay.

SO:     Right?

KH:     Right.

SO:     I mean, you don’t have to be totally stuffy. Well actually, you probably do for the medical device but it’s just not appropriate to be funny in the context of “here’s how to use the defibrillator.”

KH:     I’ve seen some pretty funny pictures.

SO:     Come on. Right? Then there’s some other obvious stuff like it’s offensive, it’s problematic, it’s in the wrong format. I’m looking at it on my phone and your 27 megabyte PDF is useless to me, because you laid it out in an 11 by 17 tabloid and I’m trying to look at it on a tiny screen and now I hate you.

KH:     That has happened to all of us. That is not fun.

SO:     This morning. And then finally, translation. Well, it’s not been translated. It’s not available in my preferred language. That’s bad. Or you translated it but your translation is crappy. And I’m looking at it saying, “Well, obviously, you’re not serious about being in this market because you can’t even use my language properly.”

KH:     Absolutely.

SO:     So, those are all content debt or liabilities, right?

KH:     Right.

SO:     So you’re going to add this all up. You’re going to add up your balance sheet, your assets, your systems, and then you’re going to subtract out your liabilities and then you’re going to really, really, really hope that you get a positive equity number.

KH:     So, how do we quantify these liabilities? And assets?

SO:     That’s a really good question. And the answer is, I don’t know. We took a stab at it in this white paper. We put some stuff in. I think it’s useful to think about what’s the worst thing that could happen. So, if you’re documenting a game and you leave something out, then what’s the worst thing? If people get frustrated, they go on the forum, they argue and they yell, and your game gets a bad rating.

KH:     Okay.

SO:     If you’re documenting a product that can affect health and safety, life…

KH:     Probably talking about lawsuits.

SO:     Or people dying, people getting injured, or people being killed by the product because they used it incorrectly. Because either you told them to, the instructions were wrong, or you told them the right thing to do but they didn’t find your instructions. So they did it the wrong way because they didn’t find what they were looking for. So I suppose, at least here in the U.S., you could quantify this on the basis of how big is the lawsuit going to be? But that can lead you into trouble because what happens then is people say, “Oh, well, we’ll just set aside $5 million for lawsuits and not fix the thing.”

KH:     Oh, yeah.

SO:     We don’t really advocate that at all. So something to consider there. But I think that’s really, literally the million-dollar question is, how do we quantify the liability of bad content, of missing content, of badly translated content? I think you can do some workaround. Let’s say you’re trying to sell into China and you decide you need Chinese content in order to reach your Chinese audience. Well, first of all, you know that if you don’t translate into Chinese, the percentage of people in China who speak enough English to use your product, that’s a quantifiable number. How many people in China speak English or can read English well enough to use your product?

KH:     So, we can talk about that in terms of revenue.

SO:     And are willing to use a product that’s only in English. So, that’s a percentage.

KH:     Right.

SO:     And then you can say, “Okay, step two, what if I translate my content into Chinese but I do it really badly?” Presumably, you get a higher number than the English-only kind of group.

KH:     Right.

SO:     But it seems like if you wanted to maximize your potential revenue, you would do a really good Chinese translation. And you would think about what is my potential reasonable market share with a good translation and how much am I going to get if I do nothing? So the spread between those two is your liability or your value.
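The spread Sarah describes can be sketched in a few lines. All figures here are hypothetical placeholders, not numbers from the conversation:

```python
# Estimate addressable revenue at different levels of translation quality.
# The "spread" between doing it well and doing nothing is the value at stake.
def addressable_revenue(market_revenue: float, reach: float) -> float:
    """Revenue you can realistically compete for at a given reach (0.0-1.0)."""
    return market_revenue * reach

market = 10_000_000           # total potential revenue in the target market
english_only_reach = 0.05     # buyers willing/able to use English-only content
good_translation_reach = 0.60 # reasonable market share with a good translation

baseline = addressable_revenue(market, english_only_reach)
best = addressable_revenue(market, good_translation_reach)

value_at_stake = best - baseline
print(value_at_stake)  # 5500000.0
```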

KH:     Sure. And I can think of one other way that we know to qualify, or quantify rather, missing content or incorrect content and that’s tech support costs.

SO:     Ah, yes. So they call tech support, which costs you something like $30 to $50 per call.

KH:     Right.

SO:     And if you had provided the content with good search…

KH:     Right?

SO:     Right.

KH:     Critical.

SO:     Which means make a good taxonomy, which means have a good search engine, then maybe they wouldn’t have called, maybe. So I’m really interested in getting some feedback on all of this because we put this document together and we said, “Okay, well, we’re going to put a stake in the ground and this is what we’ve come up with.” But for those of you listening to this, I’d be really interested in hearing about what you’ve done with content accounting and what kinds of things you’ve done to try and quantify your content overall and is this a framework that makes sense to you?
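The support-deflection math mentioned above is another quantifiable piece. A minimal sketch, with hypothetical call volumes and deflection rates:

```python
# If findable content prevents some support calls, the avoided cost is
# calls * deflection_rate * cost_per_call. All inputs are hypothetical.
def support_savings(monthly_calls: int, deflection_rate: float,
                    cost_per_call: float) -> float:
    return monthly_calls * deflection_rate * cost_per_call

# 2,000 calls/month, 25% deflected by better content and search, $40/call
print(support_savings(2_000, 0.25, 40.0))  # 20000.0
```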

KH:     Absolutely. Okay, well thank you, Sarah.

SO:     Thank you.

KH:     Thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post Content accounting: Measuring content value (podcast, part 2) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:47
Content accounting: Measuring content value (podcast, part 1) https://www.scriptorium.com/2019/12/content-accounting-measuring-content-value-podcast-part-1/ Mon, 16 Dec 2019 14:30:56 +0000 https://scriptorium.com/?p=19399 https://www.scriptorium.com/2019/12/content-accounting-measuring-content-value-podcast-part-1/#respond https://www.scriptorium.com/2019/12/content-accounting-measuring-content-value-podcast-part-1/feed/ 0 In episode 66 of The Content Strategy Experts podcast, Kaitlyn Heath and Sarah O’Keefe discuss measuring content value based on accounting principles.

Related links:

Twitter handles:

Transcript:

Kaitlyn Heath:     Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In part one of the content accounting podcast, we talk about measuring content value based on accounting principles. Hi, I’m Kaitlyn Heath.

Sarah O’Keefe:     And I’m Sarah O’Keefe.

KH:     And today we’re going to look at measuring your company’s content value. So I want to start by identifying what type of content we’re talking about when we’re talking about content value.

SO:     Well in my mind we’re talking about customer-facing content, whether it’s technical product content, high-value content like technical reports or membership information that you present to your customers, and/or marketing and sales content.

KH:     So what type of value does this content generally have for your company?

SO:     Well, that is the question.

KH:     That is the question.

SO:     And it actually… it turns out to be a really hard question because there have been actually a lot of attempts at this. How do we calculate content value? And you’ll see a lot of stuff around, “Oh well we got this many impressions,” or “Our tweet got this many retweets,” or “We got this many hits on our website,” or “This many people clicked the ‘yes, this piece of technical information in my knowledge base was helpful’ button,” right? But value, what’s the value of that content? We have some ideas about metrics.

KH:     Right, so user feedback type of stuff.

SO:     Yeah, or sheer volume, the volume of people that are looking at something. And so-

KH:     But does that necessarily translate to value?

SO:     …right. We don’t really know. And so that was… I wrote this white paper talking about the concept of content accounting and I’m not the first one to kind of touch on this. There’ve been a couple of other papers out there where people have tried to kind of address content value, but in this specific white paper, what I did was I tried to take accounting itself as a framework for thinking about content. So for those of you that are not accountants, which I suppose is…

KH:     Not me.

SO:     …perhaps a lot of our listening audience, you’re probably familiar with the idea of a profit and loss statement and maybe less familiar with the concept of a balance sheet. But those are kind of the two basic documents that you see in accounting used to calculate the value of things, the value of a company, the performance of your company really. So I thought, “Well, all right, can we do content-based accounting? Can we do accounting that uses a profit and loss, a P&L, and also a balance sheet but looks at content-related aspects to try and figure out how to value information?”

KH:     Right, so what might that look like? What does this profit and loss sheet look like for content accounting?

SO:     So a P&L for content accounting. If you’re somebody like Netflix, then it’s pretty straightforward, right? Because as Netflix, you know that you have your streaming income and you know that your subscribers are paying you $8 or $10 or $15 or whatever they’ve bumped it to these days for access to your content, right? Because your content is directly your product. So in a very simplified way your income is your streaming money or the money people are paying you for your content. And if you think about a book publisher or something, same thing, people pay the money to get their hands on books or movies or other kinds of content. On the L, profit and loss, on the L side, on the expense side, you have the cost of creating or perhaps licensing that content. How much does it cost to make a Netflix series? How much does it cost to license a series from an existing TV network?

SO:     How much does it cost to get an author to write a technical book about computer subjects that you’re then selling in the bookstore online or whatever? How much does it cost to produce an eLearning course that you then sell people? So if you’re a publishing company, if you’re a content company, then you know that this is all pretty straightforward, right? Because you have your cost of producing content and you have your income from content.

KH:     Absolutely. So then how does this change when we start talking about marketing content and then technical content?

SO:     So now it gets obnoxious. So you set aside the content companies and you start thinking about, “Okay, well I’m a software company and I produce content that is important to my customers,” or “I make consumer electronics,” or “I make anything else in the world that’s not directly selling a book or a movie or a piece of content, a piece of intellectual property, to your customers.” What you have now is a product, like a piece of software or a roof rack, a car, whatever. Alright, how do you pick which car to buy, right? You probably do some research and you say, “Hey, I like this one. Well I can’t afford it, moving on.” You sort of go through this process of I want a vehicle that has these features or you know, “Hey, I could get to my office by moped, ’cause it’s pretty close, except there’s a really narrow road so I’d probably die, so maybe not that.”

SO:     But you kind of work through what you need from the product that you’re going to buy and then you go looking for the product that meets those aspects. And that’s where the customer facing content from the company starts to come in. If you’re in the market to buy something, you’re looking for information on which product meets my requirements. Is it fuel efficient enough? Is it electric or not, as the case may be. Does it have enough cargo space for your dogs and cats and gerbils and kittens and giraffe or whatever that you have as a pet?

SO:     Those are the kinds of things that you need to think about. What features do you need in that? And the marketing material can be kind of persuasive and aspirational. People like you buy these kinds of cars and, “Ooh, don’t you want to be one of the cool kids that has this kind of a car?” I’m oversimplifying marketing, but to a certain extent marketing is about persuasion and saying this is the one you want to choose and here’s why, features and benefits. Your technical content though, if I know that I need to haul around a 150-pound dog, I get very interested in…

KH:     You might be looking for specifications that are in the technical documentation.

SO:     I might want a bigger car. Yes. So you’re looking for how much cargo space is there? Is there a roof rack? Not for the dog. We don’t do that in this state. But you look for those kinds of things. What are the exact specifications because you might decide, “Well I’m not even going to consider this car unless it’s like a hybrid.” Okay, well you can rule out a whole pile of stuff based on it not being a hybrid. And maybe you know a little bit more about that, and you’re pretty specific about what you want. You want batteries that are more easily replaceable or have been built in a way that’s more ecologically sound. So you can think-

KH:     That stuff’s not usually in the marketing content.

SO:     It’s usually not… Exactly, right? So you end up in the technical content. The research says that when buying, now this is consumer electronics, not cars, but when buying consumer electronics, something like 80% of people will look at the technical content before they buy. Because they’re looking for some little spec that’s in there. Okay, so back to your question, how do you quantify that?

KH:     How do you quantify that?

SO:     What is the value of a piece of content that says this is the cargo space and these are the specs for the battery that then leads somebody to say, “Oh wait, I want that car.”

KH:     Yeah, how can we measure that?

SO:     How do you measure that?

KH:    How do you measure that?

SO:     So that was the question I tried to tackle with perhaps varying degrees of success. And what I basically landed on was that you have these five aspects of income that content contributes to, and there’s a lovely pyramid drawing in the white paper, right?

KH:     Which we’ll link to.

SO:     Which we’ll link to. Item one, which we haven’t actually talked about yet, is compliance. If you produce a product that is regulated, you must comply with the regulations or you don’t get to sell it. If you’re doing pharmaceuticals, you have to meet certain standards about drug labeling. If you’re selling a car, you have to meet certain standards around safety and discussing the safety equipment that you’re required to have in various markets. So compliance is one of these things. It’s very hard to quantify except that if you don’t do it…

KH:     You can’t sell your product.

SO:     You get zero revenue. So in a way it’s like that old ad from the credit card company: it’s priceless, right? Okay. So that’s one. Now the second one, kind of moving up the pyramid, is cost avoidance. How do you make things cheaper and more efficient? If you look at your compliance content as this horrific cost of doing business, well, how do I do compliance as efficiently, as inexpensively, as fast as possible?

KH:     So what are some costs that you might be avoiding?

SO:     Usually what we’re looking at here is efficiency. So don’t duplicate and triplicate your content and then have to change it in three places. Don’t make dumb mistakes because you copied and pasted out of the database and missed a number and then your numbers are wrong and now you’re in trouble with the FDA or some other regulatory body. So cost avoidance usually… And the interesting thing is this is where the focus has been for the last 10 or 15 years. Let’s automate the formatting. Let’s do a lot of reuse. Let’s automate our localization as much as we can and create these really efficient workflows that are better than sort of doing things by hand and that are more scalable.

SO:     But cost avoidance, you have to be really careful because you don’t want to cost avoidance yourself out of a job or a mission, right? And so there’s a bunch of other stuff that you need to look at. So we have compliance and cost avoidance, which are kind of baseline, prereq, foundational, whatever. Revenue growth. If your content is really good, people might choose your stuff over the one that they looked at and they’re like, “I don’t know what these people are writing about, but I don’t understand it.”

KH:     And I think that’s something that isn’t often talked about with technical documentation necessarily, about gaining revenue from your technical documentation.

SO:     If you do a search on a particular feature that you’re looking for and you find it in product A but not competitor product B, you’re probably going to buy product A.

KH:     Absolutely.

SO:     Which implies that you need to pay attention to SEO and those kinds of things. So revenue growth, arguably a really great piece of marketing could drive your revenue because people read it or they see the ad or they read the white paper and they say, “Wow, this product sounds great. I should look into it some more,” and then they end up buying it. And conversely, really bad marketing, you put it out there and you pay to get it out there to everybody, and they read it and they’re like, “I don’t think so.” So reach is not everything. Just reaching a lot of people isn’t necessarily going to help you with your sales if your message isn’t the right message. So revenue growth.

SO:     Then we move up to competitive advantage. And this is sort of the idea of… Let’s say you have two products that are pretty comparable, but my product has this one extra feature that your product doesn’t have. You have some other feature. But what I want to do is I want to highlight my product’s extra feature and make sure that that is everywhere. And everybody knows about this extra special feature because why would you ever buy a product that doesn’t have my special feature? And so if you do a really good job with content and a really good job with providing technical information, people might understand more about your product and be willing to pay for that cool feature that they didn’t know they needed.

KH:     And that sort of ties in nicely to the apex of the pyramid here.

SO:     Yeah. So the top of it is branding. And you think about companies that have done a really good job with branding, companies that are known for having really great design, really great industrial design or software design, UX, UI experience. People are willing to pay a premium to get those products. Some people are willing to pay a premium to get those products, but your branding helps sell the product, right? It helps you get that sort of halo of goodness and people grab your product.

KH:     Absolutely. So then what type of expenses are we talking about here?

SO:     So on the expense side, you’re looking at the cost of producing the information largely. So what does that look like? You’ve got some staff that need to produce the information and you’ve got systems, whether it’s workflow or anything like that. And you’ve got localization in order to get everything rolled out to your markets, wherever those may be. So basically you’ve got the staff that actually creates, authors, and delivers the content. You’ve got the staff that does things like social media amplification, distribution, that kind of thing. You’ve got the software itself that you’re using to kind of produce the thing, and then you’ve got some other, some ancillary things, they tend to be smaller potentially, like overhead, facilities, that kind of thing.

KH:     Right. Okay. And then so when talking about expenses, how do we compare those expenses to the benefits that you’re getting in your content? So for example, how can we say, this piece of software that we’re buying, how is that going to then benefit our content and how is that going to add to the value of our content?

SO:     So if you’re making an argument to invest in a piece of software or really anything, you have to prove that we’re going to spend X dollars and we’re going to get Y value and preferably Y value is greater than X dollars. That’s how you do a business case. I know the accountants right now are crying and I’m really sorry.

KH:     It’s not me.

SO:     Not you. But basically we’re going to invest X, we’re going to get Y where Y is greater than X. We can squeeze a lot out of efficiency, and we’ve done that because it’s easy. It’s the low-hanging fruit to a certain extent. But you also have to look at things like, well, if I put this information in a better system, in a better set of files, in a content management system as opposed to just managing a pile of files somewhere on my laptop, what does that buy me? I’m going to be able to produce the content better, be more accurate, do all these things, maybe produce it faster. I can be more consistent with my corporate identity branding. I can rebrand when the company rebrands weekly, monthly, whatever. Or you get acquired, it’s not your fault, but you get acquired and then they’re like, “Hey, you have to use our new branding.” And it’s like, “Ah, rebrand again.”

SO:     So those are the kinds of things that you kind of look at. If I invest some time in writing a better product description, right? I mean I can write a really bad one in five minutes or I can take two hours and write one that’s actually really good. And maybe I’ve thought a little bit about search engines and keywords and those kinds of things. Well how much more valuable is that two hour description than the five minute description?
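The invest-X-to-get-Y business case Sarah describes can be sketched for the product-description example. The rates and conversion numbers here are hypothetical, purely for illustration:

```python
# Business-case sketch for the two-hour vs. five-minute product description.
# Invest X (writer time), expect Y (incremental revenue); you want Y > X.
def business_case(hours: float, hourly_rate: float,
                  extra_conversions: int, revenue_per_sale: float):
    cost = hours * hourly_rate                       # X: what you invest
    benefit = extra_conversions * revenue_per_sale   # Y: what you expect back
    return cost, benefit, benefit - cost

# 2 hours at $75/hr yielding 10 extra sales at $200 each
cost, benefit, net = business_case(2, 75.0, 10, 200.0)
print(cost, benefit, net)  # 150.0 2000.0 1850.0
```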

KH:     And so I think that goes back to how are we measuring the benefits?

SO:     How are we measuring the benefits? Right and so you have to… Essentially, you have to be able to prove that somewhere on that pyramid you’re adding value, whether it’s through revenue growth or branding or way down in efficiency, cost avoidance, compliance. If it’s something like writing a better product description, then you’re probably focused on revenue growth, right? Because you’re saying I’m going to write a better description. More people will read it, and then more people will buy.

KH:     So that pretty much wraps up the profit and loss statement, right? Okay. So we’re just about out of time.

SO:     Who knew you could talk about P&Ls for this long?

KH:     Right. But the other important part of this is…

SO:     The balance sheet.

KH:     The balance sheet, right? Okay. So we’ll talk about that on the next podcast.

SO:     In part two. Come back for more accounting concepts.

KH:     Lovely. Well thank you Sarah.

SO:     Thank you.

KH:     Thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content accounting: Measuring content value (podcast, part 1) appeared first on Scriptorium.

The need for a localization strategy (podcast) https://www.scriptorium.com/2019/12/the-need-for-a-localization-strategy-podcast/ Mon, 02 Dec 2019 14:30:19 +0000

In episode 65 of The Content Strategy Experts podcast, Elizabeth Patterson and Bill Swallow talk about the need for a localization strategy.

“There may be things you’re writing in your source content that you don’t want literally translated. In many cases, there are stark cultural differences between one location and another. Writing something at all may be inappropriate for another audience.”

—Bill Swallow

Transcript:

Elizabeth Patterson:     Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In this episode, we talk about the need for a localization strategy. Hi, I’m Elizabeth Patterson.

Bill Swallow:     And I’m Bill Swallow.

EP:     And today we are going to talk about the need for a localization strategy. So I’m going to start with a really general question here, Bill. Why might companies need a localization strategy?

BS:     Well, before we dive into why a company might need a localization strategy, I think it’s important to dispel one of the more common myths out there. Despite a growing need for multiple languages and content going out to multiple regions, translation is still seen as a bit of a commodity or a commoditized service. In that sense, you would write content, throw it over the wall to the translators, wait, and then you get it back and you’re all good. If only it were that simple. You spent a lot of time, effort, and money developing your source content, and the last thing that you really want to do is just throw it over the wall and hope that someone’s going to understand what you are doing and why, translate it appropriately, and send it back to you in a format that’s usable.

BS:     Ideally, you want to have some kind of prep behind that, making sure that the translators know what your intent was with the content, whether it’s technical information, marketing and so forth, how they should be translating it, what their specific audience is, not just the language but who are the people who are going to be reading this in that language. Where are they located? Because that also has a high impact on the success rate of your translated content. But if you just throw it over the wall and expect a translator to understand all of these things, you’re really doing yourself a disservice and you’re not taking advantage of all the value that you’ve put into developing the content from the beginning.

EP:     Right. So I guess my question here is, say you are considering a localization strategy, but you’ve got some time before that’s actually going to happen, and in the meantime you need something translated quickly. What are some options in that case, while you’re working on putting a localization strategy in place?

BS:     Probably the best option is to at least meet with your translator ahead of time, give them information that gives them the context that they need in order to understand the purpose of the content. If you don’t have a localization strategy in place yet, it may not necessarily be written in the best way for them to translate, but they should be able to understand with that context you provided how best to translate that content and send it back. Now, that’s just speaking in terms of voice, tone, audience appropriateness, that type of thing. So the translator at least has that information, but there’s a whole layer of other things that really ideally should be done before you send something out for translation.

EP:     So how might you prepare for that translation then, and what might your localization strategy look like?

BS:     Sure. There are a lot of factors that go into developing a localization strategy. The first thing is knowing exactly where your content is going: what languages do the people speak, where in the world is the content going, and what are the cultural implications of sending content there. That way, you can start collecting a body of knowledge that you can share with the translators and say, “These are the people that we ideally want to be targeting with this information. We understand that there might be cultural concerns above and beyond just the language concerns and the local idioms and whatnot that you need to be mindful of,” and work with the translator at that point to develop a plan for that content. There may be things that you’re writing in your source content that you might not want to have literally translated, or translated at all. In many cases, there are stark cultural differences between one location and another, so writing something at all may be inappropriate for another audience.

EP:     Right.

BS:     And then there’s the whole technical side of things. How is your content written? What tools are you using to develop this content? Are you leveraging and maximizing the efficiencies in those tools that can then help the translation process move along more quickly?

EP:     And even in addition to writing, I mean, you have to think a lot about the images and things that you’re using within your documentation because that can have cultural implications as well.

BS:     Oh, images are a huge one. There’s the subject matter of the image. I used to work in translation, and I remember receiving feedback from a particular translator. A client of my company at the time had sent over an image of a woman holding a baby, and for that particular language, the page direction had to flip. So to make everything flip, they just mirrored the image, transposing it from right to left. Suddenly, you have this mirror image of a woman holding a baby, and the wedding ring that was on her finger is now on the wrong hand in the published version. Having an unwed mother in that particular locale was pretty much a taboo subject, so things like that you need to be mindful of. I mean, it’s something that you normally wouldn’t think of, but fortunately the translator caught it when they saw the image before it went out to the public.

BS:     In other cases, with images, if you’re using any kind of call-outs or things like that on the image, it’s important to remember that if the text is embedded in the image file itself, it’s going to be a lot more difficult, and could be impossible, to translate that particular file. The translator would have to recreate it and either impose the translated text on top of the source language text or create a brand new image with that translated text in there. And then of course, anytime you change that text, the same process needs to happen. So that’s a lot of rework that you can generally avoid by using a different system. A lot of people choose to use numbered call-outs, where they just have numbers in the image and then put the text below the image. That way, you’re not translating the image at all. It’s just a pass-through at that point.

BS:     There are other considerations with imagery, such as colors. Colors have very different meanings in different cultures that you need to be aware of. Same thing with hand gestures. If you’re using hand gestures in photos or in icons, those can be problematic as well because not everyone … Well to be blunt, not everyone uses the same rude gesture.

EP:     Right. That’s definitely true. I remember when I was going through my graduate school program in technical communication, part of one of our lessons, especially when we were doing visual communication and design, was looking at images and colors and gestures that had different meanings in different countries, and that was really eye-opening to me, because I felt like I was aware of those things, but I learned so much more about them. And so when we talk about this localization strategy and how you can’t just throw content at a translator, it makes a lot of sense, because there are so many things that you really have to be aware of, and it varies based on the industry, too.

BS:     Oh, absolutely. And it’s not to say that you can’t use these things, but you have to be mindful that they’re going to have a different meaning in a different culture, so you need to plan ahead and have an alternate set for that particular group. Now, if it’s something that’s built into the product, let’s say you have an icon in the product with a particular … for whatever reason you have a hand doing a gesture, you may want to rethink that in the original product design and change that to something that’s more universal. That way you don’t have to change the product UI and the screenshots and any documentation that goes along with it. You can just use the same information or the same icon throughout the entire process from the product to the documentation.

EP:     Right. So I think this kind of leads us into my next question. What are some common roadblocks that companies might run into when employing a localization strategy? Obviously, some of these challenges with imagery and gestures and that sort of thing can pose a roadblock, but what else might companies run into, and how can companies best prepare for these?

BS:     The biggest one is having the sudden realization that you didn’t do your homework upfront, and that’s a tough one to get around. But really the only way to do it is to start. Usually that discovery comes at a very inconvenient time: either things are just about to go out for translation and someone raises an issue, or worse, the translator comes back and says, “I can’t translate this,” or, “I can’t position this for the audience that you’re intending.” And even worse still, you hear from a customer in that location who says, “What does this mean? I don’t understand,” or, “How dare you use this image?” That is probably the worst case scenario.

BS:     But some of the roadblocks there, again, are first of all understanding that you’re going to need a strategy, the timing of that, and being able to allocate resources to building that plan, to kind of walk back the reason why you’re translating and what you’re translating, and being able to incorporate those changes that would facilitate translation. A big part of translation, especially in the software world, involves internationalization, which is basically the separation of all of the UI text and icons and so forth that are used within the user interface, and having them in a place outside of the code that can be modified, so that you’re not sending code files to your translators and expecting them to weed through all of the code to get to the strings that need to be translated. You have all of those strings and imagery and everything else in a separate set of files that can be modified and then brought back into the application. That’s critical.

BS:     And by the same token, you can internationalize a lot of your documentation and other content infrastructure as well through the use of templates. If you’re using any form of XML, you can certainly do that, using separate strings files and separate resource files, but basically being mindful of anything that’s going to be used and reused over and over and over again. Get it out of the meat of what you’re sending the translator and build it into some kind of automated workflow where it’s applied to the translation after the fact rather than having the translator translate the same label every single time they see it. That way they replace it once or they translate it once, you replace it everywhere.
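The string externalization Bill describes can be sketched roughly like this. This is a minimal illustration, assuming simple per-language JSON resource files; the file names, keys, and German strings are hypothetical examples, not any particular product’s conventions:

```python
import json
from pathlib import Path

# Hypothetical per-language resource files: the UI strings live outside
# the application code, so translators only ever touch these files.
RESOURCES = {
    "en": {"greeting": "Welcome", "save_button": "Save"},
    "de": {"greeting": "Willkommen", "save_button": "Speichern"},
}

def write_resource_files(base_dir):
    """Write one strings file per language (what you'd hand to translators)."""
    base = Path(base_dir)
    base.mkdir(exist_ok=True)
    for lang, strings in RESOURCES.items():
        (base / f"strings_{lang}.json").write_text(
            json.dumps(strings, ensure_ascii=False, indent=2), encoding="utf-8"
        )

def load_strings(base_dir, lang, fallback="en"):
    """Load the UI strings for a locale, falling back to the source language."""
    path = Path(base_dir) / f"strings_{lang}.json"
    if not path.exists():
        path = Path(base_dir) / f"strings_{fallback}.json"
    return json.loads(path.read_text(encoding="utf-8"))

write_resource_files("locales")
ui = load_strings("locales", "de")
print(ui["save_button"])  # the application looks up strings by key, never by literal text
```

Because translators work only on the resource files, the application code itself never goes out for translation, and adding a new language is just adding a new strings file.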

EP:     Right, because it really doesn’t make sense to pay for them to translate the same thing over and over again.

BS:     Right. I mean, there’s definitely a cost there, particularly with a lot of the different … If you’re writing information that has a lot of warnings in it, chances are those warnings appear more than once. It doesn’t make sense to write it more than once and it doesn’t make sense to translate it more than once if it says the same exact thing every time. So being able to externalize that from the content and then be able to drop back in saves a ton of money and it saves a lot of time on translation as well.

EP:     Right. Speaking of saving money, how exactly can localization and employing a localization strategy help a company to maximize their return on investment? Because I think that’s an important thing to mention because in any company management is going to want to see the money. How are we saving the money? How are we making the money?

BS:     Mm-hmm. And really, you just hit on the two key points. There are two factors in the return on investment in any kind of content or localization strategy: cost savings and additional sales, so being able to grow money and save money. With a localization strategy, on the save-money side, you can spend the time upfront to do things, quote unquote, “the right way”, to minimize the total number of unique words that need to be translated by a translator.

BS:     Secondly, or really additionally to that, being able to leverage your translation memory from one translation to the next, obviously for the same language, but being able to leverage that to make sure that you are using the same wording and phrasing when you add new content, and that when you modify existing content you’re very careful about only modifying what absolutely needs to be changed and not making subjective changes to the content to say, “I really think that we should have written this phrase this way. It’s not wrong the way it is, but I like it better this other way.” If you can avoid edits like that, unless they’re absolutely necessary or they add additional value to the content, leave it alone, because otherwise you’re just adding cost to the translation process.
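The translation memory leverage described above works, in essence, as a lookup of previously translated segments. This is a much-simplified sketch (real TM tools segment text automatically and also find fuzzy, near-match segments); the English and German segments here are made up for the example:

```python
# A translation memory is essentially a lookup of previously translated
# segments. Unchanged segments are reused for free; any edit to the
# source, even a purely stylistic one, breaks the exact match and must
# be retranslated (and paid for) again.
tm = {
    "Press the power button.": "Drücken Sie den Netzschalter.",
    "Do not open the housing.": "Öffnen Sie nicht das Gehäuse.",
}

def leverage(segments, memory):
    """Split segments into TM hits (reused) and misses (sent to the translator)."""
    hits = {s: memory[s] for s in segments if s in memory}
    misses = [s for s in segments if s not in memory]
    return hits, misses

new_doc = [
    "Press the power button.",          # unchanged: reused from memory
    "Do not open the device housing.",  # stylistic edit: exact match lost
]
hits, misses = leverage(new_doc, tm)
print(len(hits), len(misses))  # → 1 1
```

This is why the subjective rewording Bill warns about costs real money: the edited segment falls out of the memory and is billed as a brand-new translation.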

BS:     And as far as being able to grow money, a localization strategy should keep in mind not only the languages you need to translate into and the locations that you’re sending content to, but where you’re going to be sending content and translating content in five years, let’s say, so in the future and being able to plan for that upfront and be able to really target who’s going to be getting this content and why and planning your process accordingly.

BS:     By doing all of these things, you’ll start to streamline your content development process and your translation process, which will significantly reduce the total amount of time it takes to produce your content. That means faster time to market, and you almost can’t put a price on that, because you can enter a market quicker. Let’s say you’re going up against a competitor in a particular market. Neither one of you has targeted that market before. If you have all your ducks in a row up front, chances are you’ll be able to beat them to the marketplace.

EP:     Right. And so localization, or employing a localization strategy, is really essential for a business to be successful if it wants to grow.

BS:     Exactly.

EP:     Right. Well, I think that that’s a good place to end, so thank you, Bill.

BS:     Thank you.

EP:     And thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post The need for a localization strategy (podcast) appeared first on Scriptorium.

Small scope content strategies (podcast) https://www.scriptorium.com/2019/11/small-scope-content-strategies-podcast/ Mon, 18 Nov 2019 14:30:01 +0000

In episode 64 of The Content Strategy Experts podcast, Gretyl Kinsey and Alan Pringle talk about content strategies that have a limited or smaller scope.

“When you are limited it may slow you down, but at least you’re moving forward. It’s baby steps. It’s increments. It’s important to realize, yes it’s limiting, but you can take that and make it an advantage.”

—Alan Pringle

Transcript:

Gretyl Kinsey:     Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In episode 64 we talk about content strategies with limited scope. Hello and welcome to the podcast. I am Gretyl Kinsey.

Alan Pringle:     And I am Alan Pringle.

GK:     And today we’re going to be talking about content strategies that have a limited or smaller scope. So what we mean by this is a content strategy that sort of addresses just one piece of the overall content puzzle. It may look something like working around an established tool chain or tool set or set of processes and just making improvements in one specific area instead of addressing the entire process. It may also look like doing a smaller scale project such as a pilot or proof of concept. So Alan, do you have anything to add to that or maybe some examples?

AP:     A lot of times you may already have a tool in place, for example, and that tool has been licensed and purchased and there is no getting around it, so you have to figure out how to optimize use of that tool and then work the rest of the strategy around it. That’s one case I can think of immediately.

GK:     Yes, and I also want to talk a little bit about how this is different from sort of what Scriptorium I think more typically does, which is an end-to-end content strategy.

AP:     Right. It is more limited in scope, as you’ve already mentioned, and you’re talking about many more moving pieces and parts when you’re doing end-to-end. In this case, you may be focusing on one type of content, one tool and its basic ecosystem, for lack of a better word. Or there is one particular problem that you need to solve and look at, and you’ve already mentioned a pilot project. A lot of times you need to prove … Listen, this particular process, tool chain, whatever, could work, but we really need to get some support for it by showing that it actually can work in one instance or for one particular department. So that’s one way to do it. Basically it’s a way to build consensus and to get more people to buy into your content strategy.

GK:     Absolutely. And oftentimes we see that as kind of the first step to an end-to-end content strategy, especially in that case where you’re doing a pilot or proof of concept. So you’ll start small and then kind of keeping those other requirements for a larger scale content strategy in mind, you do the smaller piece first, get that approval and that buy in and then kind of expand outward from there.

AP:     Yeah, and one thing about doing the smaller project: there are generally fewer people involved. I hate to say it’s easier, because implementing any content strategy is not that simple, but you do have fewer people to deal with, and in some cases that can be a pretty big asset for getting one little thing done, basically.

GK:     Yes. One other big difference that I want to kind of emphasize between a sort of small scale or limited scope content strategy versus an end-to-end content strategy is the idea of looking at the big picture and looking at future goals. That’s something that you see a lot more with an end-to-end content strategy, because you’re not just addressing sort of one problem. You’re looking at the entire content life cycle and saying how do we improve this not only in the short term but in the long term. And so it encompasses really looking at the big picture, looking at all of the different departments, the different types of content, the different tools and processes that can affect that strategy.

GK:     But when you are doing this kind of more limited scope type of strategy, you’re more likely to just focus on one more immediate or short-term goal. I think it’s important to still keep the big picture in mind so that you don’t lock yourself in, but that’s one of the big differences in scope or in scale between those two types of strategies.

AP:     I do think with a bigger strategy, you’re often looking at future proofing. You are trying to come up with a system that will not lock you out of future requirements. That is always an important goal whenever you’re changing anything, and that’s not just content strategy. Anything in a company, you do need to be thinking long term. On the flip side of that, though, as you just mentioned, with the smaller, more limited scope type of content strategy, those issues may not be at the forefront of your mind or your requirements while you work on it.

GK:     Absolutely, and I think that brings us into talking about some examples of these types of engagements. One is one that you already mentioned, Alan, and I want to expand on it a little: a case where there’s already one tool or one piece of a tool chain in place that’s locked down for whatever reason. It may be just due to licensing, or it may be due to the fact that it works well but other pieces of the tool chain don’t. So when you are in a situation like that, and you’re working around an established tool instead of having free rein to do whatever you want, one thing that can help is looking at why that tool was chosen. Does it truly work? Does it really serve the business goals that were evaluated before it was put in place? And if the answers to all of those questions are yes, and that tool stays non-negotiable so you’re locked into working around it, what kinds of things would you recommend to make sure that things go as smoothly as they can with the content strategy?

AP:     Well first, take a look and see if you’re using that tool in the optimal way. It can be very hard, especially if it’s a tool that you’ve worked with for a very long time, to take a step back, look at it very objectively, and say, yes, we are using this tool correctly. We are using all of the things that make it more efficient. Often in the case of content creation, you’re talking about using templates, macros, or other kinds of things that speed up repetitive tasks. Take a look at those kinds of angles. Put on your consultant glasses, if you will, your consultant hat, take a look, and say, “Are we really using this the best that it can be used?” And I wouldn’t be surprised if there are some cases where you realize you were not using that tool to its full potential.

GK:     Absolutely. I think we’ve seen that plenty of times, where a company buys a tool, maybe motivated by that tool seller’s marketing, and they didn’t really evaluate carefully all the things the tool could do for them, and we’ve come in and said, “The tool offers all of these features, why are you not using them?” So a good starting point for that sort of more limited scope strategy can be to look at what you’ve already got, use it more effectively, and then from there look outward and ask: for everything else that connects to or interacts with that tool, what kinds of improvements can be made there as well?

AP:     Right, because if you were not using that particular tool well on its own, there’s a good chance when you try to connect it to something else, it’s not going to be any better. Maybe even worse.

GK:     Absolutely. So another example of a limited scope or limited scale content strategy is one where you’re working with a small or limited selection of content. And this kind of gets into what we mentioned earlier about using a pilot project. So, some examples of that might be if you just have one department that has content that you want to start with, maybe just one type of content. So maybe you just do data sheets first and then eventually you move on to user manuals and training guides …

AP:     Training or whatever. Exactly.

GK:     Maybe marketing materials. Maybe if you are focusing on localization strategy, you start with one language or one group of languages. So those are some examples of that limited scope, with a small subset of your content instead of addressing all of it at once.

AP:     And I still see that kind of as a litmus test. Basically, is this really going to work in the real world? And while you may want to jump all the way into the pool, you may not be able to, simply because budgets may be constraining you. This may be a big part of the reason why you have a limited scope. Or there may be some management or organizational issues where there is only one group that is really willing and able to get into that right now. And you have to make the best of what you’ve got, basically, and if that means constraining it down to the different things that you just talked about, so be it. But the good thing about succeeding in one of these smaller projects is that it provides you with a real proof of concept. Look, this worked for this group. Let’s figure out ways to adapt it for these other groups. And then as you do that, that’s when you really start talking about the reuse and sharing across the enterprise.

GK:     Yeah, absolutely. And I think I’ve seen this in quite a few projects where they’ve started very small, with maybe one collection of documents and proved that creating those documents in a different way or delivering them, publishing in a different way can really help improve their overall time for creating content, can improve efficiency, can improve the content quality. And then they show that to other groups and other departments and they say, “Oh we should be doing this too. We should be connected to what you’re doing.” So it really is a good way to kind of get started with a content strategy that can later sort of expand outward and become one of those more full scale end-to-end ones.

AP:     Absolutely, and a big part of this, and we’ve already touched on this, is culture. It’s not always a financial issue. There may be one group that is much more open to change, so you need to take advantage of that. Now, this smaller scale cuts both ways. It’s really important to think, as you’ve already mentioned, at an enterprise level, across the organization. The future proofing. Years down the road, what are things going to look like? Those are all really important things, and you’ve got to keep them in mind. The flip side is that these smaller things are much more doable. They’re much more realistic, and you can pick the group that’s willing to jump in and do that and to prove that it can be done. So when you are limited, yes, it may slow you down from doing the cross-organization enterprise thing, but at least you’re moving forward.

AP:     It’s baby steps. It’s increments. So I think that’s important to realize, yes it’s limiting, but you can take that and make it an advantage actually.

GK:     Yes, and I think having a starting point is really important, because if you try to go too big at the beginning and you try to maybe start at the enterprise level without doing this kind of proof of concept first, a lot of times it may not even get off the ground because of these things that you mentioned. Change management is a huge issue that we see and kind of risk management as well. I know that risk tends to be a big factor in a lot of organizations not wanting to start and take that first step, but if you can even get buy-in from one small part of the organization, one department or even one writer or two, or one manager within the organization that says this is a good idea, let’s pursue it, then I think taking that first step is really the most important thing to get the ball rolling.

AP:     Yeah, and you mentioned risk management, and that is a really big part of any kind of work in content strategy, or any kind of corporate change. On the risk management side, though, I would say that when picking the thing you want to do for your small-scale pilot, whatever you want to call it, you don’t want to make it so easy that it shows no impact, but you also don’t want to bite off more than you can chew.

GK:     Yes.

AP:     So there is this balancing act that you have to think about, and that ties in with what we’ve talked about: the culture, the finances, the politics. All of those things should come into play when you’re picking this thing that you want to do. You don’t want to make it so easy that it doesn’t really show any result, but you also don’t want to pick something that’s so big and so expansive that it’s risky.

GK:     Yeah. In that case, it’s really not a pilot anymore. It’s getting into the realm of a larger scale engagement, and so it’s very much a fine line as far as choosing what subset of content you want to work with, or maybe what department, and making sure that it’s manageable and that it’s the right type of content as well to show that it’s going to be the strategy you need.

AP:     There’s also the issue when you start working in these bigger engagements that people get what’s called analysis paralysis.

GK:     Yes.

AP:     They get so hung up by all the things that have to be done, all the choices that have to be made, they basically freeze in place and nothing gets done.

GK:     Yeah. And I think that the more risks that’s involved, the more that could happen. If there is a very major change that’s involved in the strategy, then that’s something that really happens very easily. Another type of limited scope engagement I want to touch on is the idea of developing or sort of refining one piece of a larger content strategy. And so for us at Scriptorium, that’s looked like things such as maybe a company bringing us in just to build a content model or bringing us in to work on a training plan. Develop a localization strategy. So just sort of one piece of the larger puzzle instead of doing the entire thing. So I wanted to talk about how you work around that and maybe there are some cases where there are different people working on an end-to-end strategy together, but each person is doing a different part of it. How do you make something like that work?

AP:     You talk to each other.

GK:     Yes.

AP:     It’s that simple. You have to. It is very tempting when you’re doing these smaller scale things to just go head down and not talk beyond the group. You’ve got to strike … Again, that balancing act. You still have to talk to people outside to see about the potential connections, overlaps and you also do not want to repeat work people have already done and you do not want to stomp on any accomplishments they already have. You need to pay attention and kind of put out feelers to figure out what’s going on around you and how what you’re doing can flow into and out of that.

GK:     Yeah. And I think having that communication be as open as possible is good, because if you don’t talk to each other, the overall strategy can get locked down. For example, say one group is brought in to develop a training plan, and while putting together the materials for it, they look at some other piece of the strategy and think, “Hold on. Maybe this could be done in a better way before it’s too late.” If you don’t have that open channel of communication, then nobody brings up that area where an improvement could have happened, it doesn’t happen, and all of the other groups affected by that decision are locked into it. So keep the channels of communication open, and have everyone keep an open mind, so that if somebody points out a problem, you don’t immediately take offense and put up a wall when they say, “Hey, maybe you could improve this piece over here.” I think it’s really important to think of the overall strategy as a moving entity and to keep an open mind and open communication around how to improve it.

AP:     You cannot use this idea of doing a pilot or smaller scale thing as an excuse to lock reality out.

GK:     Yes.

AP:     It’s very tempting to go heads down and ignore everything going on around you. And there is something to be said for that in some cases, but when you’re going to treat this as a piece of a larger enterprise strategy, you really cannot do that. Yes, you have to focus and get that work done, but you still have to realize that there are tentacles that connect everything. So don’t preclude those possibilities when you’re coming up with your strategies. And don’t have every little department doing their own thing and then try to just throw everything together and assume it’s going to work, because I guarantee you it will not.

GK:     Absolutely. And this is where I think having some kind of a plan for governance in place is important. Even if you’ve got different people or different groups working on different parts of a strategy based on their expertise, which I think is very smart, it’s still good to have some kind of a plan for how you’re going to manage each of those pieces working together and sort of your overall governance of the strategy to make sure that nothing gets stuck. We’ve seen plenty of cases where the more groups or people that have to work together, the easier it can be for things to stall out or for arguments to pop up, and if there’s not a plan in place for how to solve some of those issues or how to work through them, then it just kind of delays the strategy even more.

AP:     Yeah. Once again, communication. It sounds so tired, such a chestnut, but it’s true. You have to talk amongst yourselves.

GK:     Yeah. It sounds like a basic common sense thing, but you would be surprised how often that doesn’t happen and how difficult it is to make sure it happens. So if it …

AP:     Oh, I can vouch for the fact, it often does not happen.

GK:     Yeah. So if you think about that from the get go and you really prioritize that communication, I think that’s a really good way to make sure that these types of strategies, where you’ve got different pieces happening with different groups actually succeed.

AP:     I agree.

GK:     I want to kind of close out by talking about some advantages and disadvantages of taking this limited scope approach. So, and we’ve already touched on these but I think it’s a good way to kind of just wrap everything up.

AP:     Wrap it up.

GK:     So the advantages we’ve talked about: it may reduce your risk, especially if that limited scope is something like a small-scale or pilot project that can be used to prove success in one area. In that same vein, it may reduce the budget at first, which shows that if you’ve got budgetary constraints, you can start small. Another advantage that’s kind of interesting is that if you are limited by something like tool lock-in, it can make it easier to rule out tools and processes that connect to that tool. Say you’ve got your publishing end of things already figured out, but you need new authoring tools. If you already have one piece of the puzzle in place, it helps you rule out options that are not going to work with that piece when you’re looking at new authoring tools. Whereas if someone comes in and says redo the entire thing, then you have a lot more options to look at, and in some ways that can be overwhelming.

AP:     And sometimes the reality can be very difficult when you are locked in. You just have to make it work. And once again, it goes back to what we talked about at the front of this: be sure that you’re using those tools as effectively as you can.

GK:     Yeah. That one cuts into both the advantages and the disadvantages, because if you are working in that limited scope, then you might not be able to suggest an improvement or an alternative to a tool that’s locked down. So those kinds of things are really important to keep in mind. That does go back to what we’ve said about making sure that what you do have, you’re using as efficiently and effectively as possible. One other disadvantage is that if you are working in a limited scope and you don’t keep in mind the big picture or the future requirements, and you don’t keep communication open, then it can lead to more lock-in down the road, or to a strategy getting put in place that’s maybe not the best. So that again goes back to our advice: make sure you keep all of that in mind, make sure that you talk to each other, and future-proof your strategy no matter how small it starts.

AP:     Yeah, resist the temptation to put on blinders to focus just on the small part you’re working on. That’s a dangerous thing to do.

GK:     Yes, even if you are only doing something with one piece of content or you’re only doing one part of the strategy, don’t forget all of the other pieces and make sure that what you’re doing is not going to have to be redone somewhere down the road. That is going to really overall help the strategy that you’ve got in place.

AP:     Yeah. It needs to be adaptable. It needs to be extensible.

GK:     So do you have any other final words of advice?

AP:     It cuts both ways. It can be to your advantage to start smaller, but as we have already said, don’t let it constrain you in a way where you’re going to make things difficult for yourselves a few years down the road.

GK:     Absolutely. So we’re going to go ahead and wrap things up here. Thank you so much Alan, for being on the podcast with me.

AP:     Thank you.

GK:     And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Small scope content strategies (podcast) appeared first on Scriptorium.

Subject matter experts as authors and reviewers (podcast) https://www.scriptorium.com/2019/11/subject-matter-experts-as-authors-and-reviewers-podcast/ Mon, 04 Nov 2019 14:30:34 +0000 https://scriptorium.com/?p=19312 https://www.scriptorium.com/2019/11/subject-matter-experts-as-authors-and-reviewers-podcast/#respond https://www.scriptorium.com/2019/11/subject-matter-experts-as-authors-and-reviewers-podcast/feed/ 0 In episode 63 of The Content Strategy Experts podcast, Sarah O’Keefe and Chip Gettinger of SDL chat about subject matter experts and their role as authors and as reviewers of content.

“One of the most important things about working with SMEs is to meet them where they are. It’s important to understand where they’re coming from and their perspective. Understand what issues matter to them.”

—Chip Gettinger

Transcript:

Sarah O’Keefe:     Welcome to The Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997 Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 63 we talk about subject matter experts, and their role as authors, and as reviewers of content. Hi everyone, I’m Sarah O’Keefe, and I’m delighted to be here with Chip Gettinger of SDL.

Chip Gettinger:     Hi Sarah, hi everybody. It’s great to be on the podcast today, looking forward to it.

SO:     Yes, we are delighted to have you here, glad we get a chance to chat because we don’t as much as we might want to. Chip is over at SDL, where he manages the global solutions team focused on structured content management, working directly with customers and partners. And for someone like me maybe more importantly, has been in this industry for a while, and knows everybody, and perhaps everything. For those of you that have also been in the industry for a while, you should know that he was an actual typesetter, so he comes by his interest in content honestly.

CG:     Yes Sarah, I fondly remember the days of teaching typesetting picas and points to my students, it was really fun.

SO:     Alright, well now that we’ve lost all the millennials we can move on to our actual topic. So, a little bit about subject matter experts and their relationship to content. I guess traditionally, a subject matter expert would be somebody who, who’s what? What is a subject matter expert?

CG:     It’s a great question Sarah, and I think it does vary by industry, but let’s start with high tech manufacturing organizations. I think one of the first things I see frequently are software, hardware developers, engineers. These are experts in the company, who are developing products, writing software, developing hardware, and they have so much knowledge, and so much expertise. But, they’re really driving the production, the development of the products. And this information is so critical that they have about how those products work, how they operate, and how they can get that information out.

SO:     So you’re talking about somebody who’s an expert in the product, but not necessarily somebody who is an expert writer?

CG:     Correct, correct. Unfortunately we’ve all read content that’s written by somebody who really is not a good writer. The professional writing industry has grown up over many decades of skills and so forth, working very closely with SMEs to draw out that information, and then professionally write it and present it, typically for customers or internal use for their organizations and products.

SO:     Yeah, so I mean traditionally is kind of a loaded term. But, it seems like what we have had with the rise of professional technical writing, typically is that your subject matter expert reviews content, right? So, I as the writer create the content, and then I send it to you, the product domain subject matter expert and say, “Did I get it right?”

CG:     Absolutely. Those workflows are still very much in use today, and actually quite, quite beneficial for organizations. What we’re also starting to see, however, is real pressure on time to market. Organizations are investing in technology so that perhaps they can capture information directly, and I was seeing this especially in semiconductor manufacturing industries. We have very technical products, and you have SMEs that can write about, let’s say, a chip layout or a manufacturing device. Then that information can get captured; it doesn’t have to go through a writer.

SO:     In the same way that we’re losing in a lot of ways, the gatekeepers to publishing, and you and I have both talked about that a lot.

CG:     Yes.

SO:     Now the writers are no longer the SME’s gatekeepers in some of these scenarios.

CG:     In some of them they are, and in some organizations we’re finding that with SMEs, there never really has been a centralized documentation team with professional writers. What I’ve been working on is how we help those organizations understand that we need things like consistent content and reuse, and other aspects that are important for organizations, but that perhaps now fall to an audience of people less technically skilled at doing some of that.

SO:     So we have what I would describe as sort of the rise of subject matter experts, or having them more integrated in content authoring, or having them contribute more to content authoring. And as you said, there’s some benefits to that. Are there risks, are there downsides?

CG:     There are risks and downsides. I mean, we’ve already talked about content quality and terminology consistency, you’ve probably had podcasts around that, and there’s risk about that. An emerging area that I really like is where technical doc teams work with SMEs who are doing the authoring. So, when you have a centralized content management system, SMEs can perhaps write structured content, contribute it, and put it in a draft review mode. Then the professional writers can come in and use the tools they have to ensure consistency, perhaps create some reuse around terminology and so forth. But it shortens the professional writers’ time to get that information out, because they were able to capture it right from the SME.

SO:     What’s the implication of this on structured content? I mean, you and I live in a world where we’re structuring content, and we’re enforcing content structure, and there’s a lot of pretty heavy technologies sitting in and around that. What’s the implication when I’m dealing with a person who’s a physician or something, but not necessarily interested in developing that level of expertise in content?

CG:     It’s a great question, and really I look to you Sarah, and the skills your team brings around content strategy and information architecture. I think gone are the days when we leave information architecture only to the experts who understand all the tagging, metadata, attributes, and so forth. The content strategy now gets driven by, perhaps, how can we simplify this? And secondly, how can we use some automation downstream to do things that maybe professional writers would have done before?

CG:     An example of that might be indexing and auto-tagging. I’ve seen technology now that’s starting to embrace AI to do some of that. It doesn’t replace the quality of the writing, but what we’re starting to see is some automation and better tools. And then secondly, all we have to do is look at wikis and other example products. Many organizations, especially a lot of the SMEs, are using wikis to capture content. But, you know what? It’s internal customer … or, I’m sorry, it’s internal content only, it’s not customer facing.
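The downstream automation Chip describes, such as indexing and auto-tagging, can be sketched as a simple keyword-to-metadata mapping. This is a hypothetical illustration (the tag names and keyword table are invented), not a description of any shipping product; real tools increasingly use AI/NLP rather than a fixed lookup.

```python
# Minimal keyword-based auto-tagger: assigns metadata tags to a topic
# based on terms found in its text. Hypothetical illustration only;
# production systems use far more sophisticated (often AI-driven) methods.
TAG_KEYWORDS = {
    "installation": ["install", "setup"],
    "troubleshooting": ["error", "fail", "diagnose"],
}

def auto_tags(text: str) -> list:
    """Return sorted tags whose keywords appear anywhere in the text."""
    lowered = text.lower()
    return sorted(tag for tag, words in TAG_KEYWORDS.items()
                  if any(w in lowered for w in words))
```

For example, `auto_tags("Run setup, then install the driver.")` yields `["installation"]`, metadata that a writer would otherwise have applied by hand.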

SO:     Is it reasonable given the tool sets that we have now to expect subject matter experts to write in XML, is that a thing that’s happening, for them to create structured content?

CG:     Yes, and at SDL we have several customers doing that now. I will say it’s been early stages, and the most successful customers have picked their projects very carefully. One example is cloud-based products, which tend to be easier, being newer products. A second area is medical and healthcare information. We’re seeing early stages where a lot of this information needs to be structured to fit into regulatory-type information, and we’re seeing some good early-stage work going on there for letting SMEs contribute their work, but in structure.

SO:     Yeah, and so since you’re doing an excellent job avoiding the actual plug for SDL tools, I think it’s worth noting that SDL, and others-

CG:     Yeah.

SO:     … Do have tool sets where the professional writer might be using one set of tools, the subject matter expert is using a different and more lightweight set of tools, but they’re working on the same content.

CG:     Exactly.

SO:     I’ve had some very bad experiences with wikis, and the inability to manage or pull content out of wikis. So, hearing wiki always strikes fear in my heart if it’s supposed to be customer facing content, because-

CG:     Yes.

SO:     … That really is a terrible, terrible challenge. I will also say that it’s been our experience that you can look at subject matter experts along a couple of different kinds of axes. One is their level of expertise, and by that I mean if we’re talking about literal rocket scientists and there’s only a few of them in the world, that presents a challenge. If you’re talking about somebody who has some product expertise, but there are lots of people like him or her, that’s kind of okay. But, the more specialized the knowledge, and the more unique that person is, the worse off we are in terms of getting them to cooperate, right?

CG:     Right.

SO:     We don’t have a lot of leverage. The other axis that can be very, very problematic is whether or not they are in fact an employee of the organization for which they are SMEing. In other words, when you’re dealing with volunteers, all bets are off, you know?

CG:     Yeah.

SO:     If it’s an employee within the organization you can appeal to their sense of, the organization needs you to help us with this.

CG:     Exactly, exactly.

SO:     But, if they’re a volunteer, that is just not very fun. Where do you see this going? I mean, is the pendulum going to swing from lots of professional writers, all the way over to just SMEs, or where are we going to land with this?

CG:     You know Sarah, I think about that myself. If I look at traditional structured content industries, it’s happening. This is one of those changes that we need to accept and think about. Our most successful customers are ones that think about which products they can document that way, and so forth. Another example might be if your company has a suite of solutions that comprise different products, it could be several SMEs. And so, you still need professional writers who can collate and combine all the various aspects of your product.

CG:     But secondly, I think for our industry there’s an exciting opportunity of new users that are going to come on, that are in regulated industries who traditionally have used unstructured tools, don’t know really much about structured authoring. So, I feel that there’s a larger audience out there that we could capture and get in, if we make it moderately easy for them.

SO:     Right, because I mean the great advantage of structured content is that you’re not going to forget, right? You’re not going to forget to put in that mandatory chunk of content, because the structure itself will say, “Uh, Chip? This requires an abstract, and you haven’t done one.”

CG:     Exactly, exactly. And, the other aspects that are required for digital deliveries, you know? We for years, have promoted single sourcing concepts. And Sarah, you said something great about a centralized, you know, having a centralized CMS manage this. And, the variety of tools that could be used depending on your skill sets, or level of education. I’m a professional writer, I’ll use the power tools versus an SME that might use lighter weight tools, but we’re all single sourcing off the same content.
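The structural enforcement Sarah describes, where the content itself flags a missing mandatory chunk, can be sketched with a short validation check. This is a hypothetical example, not any vendor’s tool; it assumes a DITA-like topic where `title` and `shortdesc` children are mandatory, whereas real editors and CCMSs enforce this through schema validation.

```python
# Minimal sketch of structural validation for a DITA-like topic.
# Hypothetical example; real authoring tools validate against a schema.
import xml.etree.ElementTree as ET

REQUIRED_CHILDREN = ["title", "shortdesc"]  # assumed mandatory elements

def missing_elements(topic_xml: str) -> list:
    """Return names of required child elements absent from the topic."""
    root = ET.fromstring(topic_xml)
    return [name for name in REQUIRED_CHILDREN if root.find(name) is None]

complete = "<topic><title>Install</title><shortdesc>How to install.</shortdesc></topic>"
draft = "<topic><title>Install</title></topic>"  # the short description was forgotten
```

Here `missing_elements(draft)` reports `["shortdesc"]`, the structure’s way of telling the SME that a required piece is still missing before the topic can be published.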

SO:     What are some of the best practices? I mean if you’re talking to an organization and they’re going to have SMEs contributing content, and potentially interacting with some sort of structured content, what’s the advice that you give people? What are some of the best practices, what are some of the things that they should do or not do to make sure that this thing succeeds?

CG:     I think Sarah, one of the most important things about working with SMEs is to meet them where they are. A lot of times these are organizations that you don’t have direct responsibility for, and in many companies they can go off and do their own thing. I think it’s important to understand where they’re coming from and their perspective, and really get to know your SMEs, you know? Understand what issues matter to them. I feel it’s also important for, let’s say, our professional writers to educate them about customer needs. And by the way, the customers could be fairly technical in all the other aspects, so really getting to know them matters. Some of the techniques I’ve seen Scriptorium use are things like conducting interviews, and also identifying the superstars of the organization.

CG:     We always know that there are going to be the laggards and the superstars. Identify those people that say, “Oh, this looks kind of interesting,” and so forth. That gives the less confident people nudges, saying, “Okay, if so-and-so’s going to do this, maybe I should move forward.” And then finally, measure and reward.

CG:     If you have brown bag lunches, or better yet, if you have social groups where you can socialize this new progress in your company, measure and give rewards out to people that are successful.

SO:     What are the worst practices, or put another way, what are the risk factors? You go into a customer or a potential customer and they start saying, “Well, we’re going to do this, and this, and this.” What are those things that strike fear in your heart when it comes to SME content and reviewing?

CG:     Yeah, boy, great question. I think what strikes fear in my heart is lack of a strategy, you know? A real strategy around how we’re going to do this, and a big part of that of course is the content strategy information architecture. I think secondly, there does need to be some training. Now, it can’t be days and weeks, it needs to be measured in hours perhaps. But, there needs to be some structure. And finally, I also like to see, I think of it as mentoring. I’ve really been … and, a lot of organizations do this, where they’ll team, let’s say a newer person in the organization, with someone more experienced and so forth. So, having sort of social networks, having places they can post questions, share information, and so forth. The SMEs become part of the process, but ultimately somebody is helping to control and make sure that it’s going to work for them without chaos ruling.

SO:     Yeah, I mean that seems like a good list because I think I would agree that we’ve seen a lot of that as well. Which, I almost feel like we could just, we could just rename this podcast to, “You have to do change management.” You know?

CG:     Yeah, yeah.

SO:     The end. Every podcast, every document we put out basically says, “If you don’t do change management, nothing else matters. This project will fail.”

CG:     Right?

SO:     That’s what I’m hearing from you, right?

CG:     Yeah.

SO:     You have to think about what you’re doing before you do it.

CG:     And suddenly Sarah, we have a larger audience with people interested, and participating with us. What I also see, one of the negative things is I’ve seen tech doc groups get ignored. And, engineering and other development groups just go off and do their own thing, and they can publish it out to the web, and they can do all that. If you don’t meet them in the middle, if you don’t really interact with them, they’ll bypass you if it’s too onerous or too difficult. And, back to your earlier conversations about tools, I think some of the early mistakes are making the tools too complicated. Now, the idea is to keep the content structure quality there without having to have the SME jump through hoops to make it work.

SO:     Yeah, and I think it’s certainly fair to say that 10 years ago we didn’t really have tools that allowed us to achieve both things, right? That allowed us to have structured, flexible content that we could manipulate, and an authoring environment that was easy enough for a person who was not focused entirely on writing.

CG:     Yeah, and Sarah I think a conversation you and I’ve had in the past is, we’re seeing organizations adopting second or third generation CMSs.

CG:     They’re moving from, let’s say document based content into component based. And we’re seeing this, I’m seeing this across industries, not just your traditional tech companies, and so forth.

CG:     The exciting thing for me I think is our industry in structured content, content strategies. As we mature, we have an opportunity to get our best practices, our governance, and all that out to a larger audience of people. We just suddenly can’t measure it in years, we’re going to have to measure it in weeks and months now.

SO:     So as a final question since SDL is mostly focused on localization, right? As a global company. Are there any particular concerns, or considerations that you have in dealing with SMEs in an environment that’s heavily localized, or perhaps multilingual. Have you run into anything along those lines?

CG:     Yes. I’m working with a customer right now who has traditionally published English only content, and a number of their customers are based in Asia. What they found is, the number of English speaking engineers are being hired away, they’re being recruited away. So, they’re going to have to start doing their first translation projects to Vietnamese, simplified Chinese, Japanese, and so forth. The concern I have then is back to the basics of, we know that for example, if you have people writing in English and it’s not their primary language, we need to have tools available. Quality checks and so forth, to check terminology, phrasing, and so forth.

CG:     The second fear I have is that terminology leaks in that’s very cultural, you know? An American term that doesn’t make any sense to a Brit, or somebody in Australia, or other types of things. Again, professional authors have a knowledge of that; SMEs may not know that what they’re writing has that. That has a direct impact on translation. As you know, the centralization of translation memories has greatly simplified translation for each language, but it also requires consistency. One of the benefits of moving into structured authoring for SMEs is that at least the structure of the content can be more uniform, which will reduce translation costs. But boy, we have to make sure the content written matches as well.
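The cost effect of source consistency that Chip mentions can be illustrated with a toy translation-memory lookup. This is an illustrative sketch under simplifying assumptions, not SDL’s actual TM technology: exact segment matches are treated as free reuse, and every new variant of a sentence becomes a new segment to pay for.

```python
# Toy translation-memory model: identical source segments are reused
# at no cost; each new wording of a sentence is a new paid segment.
def segments_to_translate(segments, memory):
    """Return source segments not already present in translation memory."""
    return [s for s in segments if s not in memory]

memory = {"Press the power button.": "Appuyez sur le bouton d'alimentation."}
consistent = ["Press the power button.", "Press the power button."]
inconsistent = ["Press the power button.", "Hit the power switch."]
```

With consistent wording, `segments_to_translate(consistent, memory)` is empty, while the inconsistent variant adds a brand-new segment to translate, even though both sentences mean the same thing.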

SO:     And interestingly, we’re also seeing that this, let’s call it prioritization of subject matter experts, is leading to multilingual source authoring: “Our entire engineering team is in Korea, so we’re going to source the documents in Korean.”

CG:     Yes.

SO:     Now, they’ll then translate and do some other things, but the logic becomes that we’re going to get better quality content if we start in the engineers’ preferred language, and then we’ll worry about translation downstream. And I think as the subject matter experts potentially become more and more critical to the content process, that’s actually going to drive a need to do this, because companies are global, and they have engineering and product development operations all over the world. So now all of a sudden we’re talking about the need to support the engineers in Germany, the engineers in Korea, the engineers in China, wherever they may be, in their preferred language.

CG:     Right, and that’s the exciting thing about my job at SDL. I really get to work with global organizations, and I’ve got team members in Europe, Asia, and here in North America. I think that the exciting customers I work with, our CMS, or tools can support those kind of environments. They’re not easy to manage, I’m not going to pretend. But, it’s possible to be authoring in multiple languages, and it does take really strong governance.

CG:     One of the exciting things I see also is, I mentioned earlier, is the teaming up. You may have a new person in Eastern Europe coming on, and they pair them up with somebody in North America, in California, who’s more of an expert. And, there’s real skills being transferred, and you can do things with video, and recordings that don’t get rid of the time difference, and so forth.

CG:     I think all of those kinds of things are really exciting for me, working with global organizations on managing this. And then finally, if I look again back in the regulated industries, financial, medical, pharmaceutical, that’s the real growth area for this. That’s the area I think I’m learning a lot about some of their challenges, it kind of feels almost like 20 years ago Sarah, when we first really started getting into structured content.

SO:     And, I think that might be a good place to leave it. There’s a lot of exciting stuff happening. Chip, thank you for this, it was really interesting, I learned a few things. We will look forward to seeing you downstream at whatever conference we might next bump into each other at, and there will be chocolate. With that, thank you for listening to The Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com, or check the show notes for relevant links.

 

The post Subject matter experts as authors and reviewers (podcast) appeared first on Scriptorium.

Content strategy pitfalls: best practices (podcast, part 2) https://www.scriptorium.com/2019/10/content-strategy-pitfalls-best-practices-podcast-part-2/ Mon, 21 Oct 2019 13:30:12 +0000 https://scriptorium.com/?p=19291 https://www.scriptorium.com/2019/10/content-strategy-pitfalls-best-practices-podcast-part-2/#respond https://www.scriptorium.com/2019/10/content-strategy-pitfalls-best-practices-podcast-part-2/feed/ 0 In episode 62 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow continue their discussion from episode 61 and talk about best practices for planning.

“You need to be mindful about how what you’re doing is going to impact other groups. You can’t just assume they’re going to play ball when you start rolling out a new strategy. Make sure they’re not only on board in theory, but that they are pretty much committed to the success of the project because they should have a stake in it in some form as well.”

— Bill Swallow

 

Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997 Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In Episode 62 we continue our discussion from Episode 61 around planning.

GK:     Hello and welcome, I’m Gretyl Kinsey here with Bill Swallow again and we are picking back up where we left off in the previous episode with content strategy planning. So now that we’ve talked about all of the pitfalls that you can encounter when it comes to not planning, how do you do it correctly, what really should go into the planning when it comes to putting together a content strategy and then also figuring out how you’re going to execute that.

Bill Swallow:     I think the first and foremost one is being able to tie everything back to your business goals, and that means you need to chase down what the goals really are. I know that a lot of companies have charters, mission statements, vision statements, and so forth, but you really need to dig into: given these statements that are out there, what are the goals from the business side as far as how we’re going to make that mission a reality, make that vision come true? You need to grab those, keep them in sight, and make sure that everything you’re doing ultimately aligns with meeting the business goal you’re trying to achieve.

GK:     Absolutely. I think everything has to hinge on those business goals, or else, like we talked about earlier, things can easily get sidetracked and end up not following the strategy that you’ve set out to achieve. I think it’s really important, when you’re coming up with your strategy and planning out the implementation side of it, to think about how your strategy is going to get you both short-term and long-term wins, and address business goals that are more immediate versus ones that are further out in the future but still really important. An example of that might be a pressing delivery need that’s a short-term goal. That might look like coming up with a certain output transformation scenario, or what have you, to meet that immediate need.

GK:     But then maybe you’ve got a long-term plan to deliver content into other markets to a larger customer base, and that might involve something like localization. So it’s important to think about what the short-term business goals are, maybe within the next six months to a year, versus ones that are for five years down the road. There can even be mid-range goals, two to three years down the road. It’s important to think about not just what’s going to happen immediately when you put this plan in place, but what’s going to happen down the road too, so that your strategy can encompass all of that.

BS:     Right. And it helps to make all of this common knowledge, not mass public, but within your company, so people know these are some of the wins you’re looking for in the short term and long term. Address them and treat them as milestones within your project plan. That way you know that within six weeks, let’s say, you’re supposed to have a new localization management system chosen and a test environment set up so that you can start playing with it and seeing how your content needs to feed into it, or how it needs to be modified to handle some of the content decisions that you’ve made.

GK:     Absolutely. One other piece I want to bring up and talk about a little bit is budgeting and figuring out what your return on investment is when you’re coming up with your strategy. A lot of times when I’ve initially talked to companies, this is something we’ve had to come in as outsiders and bring that perspective on. Because unless whoever’s driving the change is an executive, the budget is usually not their number one goal. Their number one goal is usually something like, how do we make our working lives easier by changing this process or fixing this one thing? But I think looking at the larger picture of the budget is really important, so I wanted to get your take on that as well.

BS:     Oh yeah. You have three different areas of budgeting to really wrap your head around. One, of course, is the money: making sure that you have the money you’re going to need to get this thing done. A lot of people make the mistake of looking only at the raw costs of tools and technologies without considering all the soft costs. It could be, oh, here’s X amount of dollars for a new system, Y amount of dollars for new authoring tools, and Z for the new publishing tools we need to get the published content out to our customers or our readers. That’s great. But then you have all of the costs associated with any external vendors that you need to solicit for support.

BS:     You may have other costs that come in the form of licensing you weren’t aware of at the time, or other hard costs. Maybe everything looked good on paper, but IT came back and said, well, we need to buy a new server for this, and since it’s for your initiative, you need to cover the cost of that purchase. These are all little hidden costs that can pop up. In addition to that, you also have to budget for your time. That’s not only the project timeline, but the time of every single person devoting effort to the project. The third piece of budgeting, of course, is the effort, the actual resource allocation: being able to say, we essentially need to use up 80 person-weeks to get this piece done or to get this implementation done. Now how are we going to find 80 person-weeks when we’re all busy?

GK:     Yes.

BS:     There’s budgeting like that. So you have your time, you have your availability of resources, and then of course you have your money.

GK:     Yes. And like you said, the time and resources pieces of that budget often get overlooked, or they don’t really get accounted for. If you haven’t been tracking the way you’ve been using your time and resources, it can be difficult to even estimate how much might be involved in that project. But it’s really important to consider that and put it into your plan as accurately as possible so that you don’t run into nasty surprises when it comes to actually putting that new strategy in place.

BS:     Mm-hmm. No one likes those nasty surprises.

GK:     Oh, not at all. That kind of leads me into the next point, which is that your content strategy alone isn’t the full scope of a plan. Once you’ve got that strategy in place, you also need a plan for how the process is going to go as you implement each part of it: the logistics of that strategy, and making sure that everything goes as smoothly as it possibly can. So what kinds of things do you need to think about there?

BS:     Oh, there’s plenty. That’s really the move from a strategic point of view to a tactical point of view. Because strategy is all well and good, but it’s not going to get anything done on its own. It might identify and even reserve the funds that you need, the people that you need, and what have you. But once you have all that, that’s where your strategy pretty much ends. I mean, it serves as a roadmap during all of this other tactical work that needs to get done. But your strategy is there to get everything rolling, to get approval, and to say, yes, we’re going to do this, this is how we’re going to do it, these are all the things that we need. And now that we have it, we have to actually do the stuff.

BS:     Those things can be a myriad of activities, large and small. It could be content conversion. It could be looking into all of the new systems you need to assess: maybe you have to pick a new CCMS, maybe you have to pick a new asset management system, what have you. You have to look at them and evaluate them, then go through the entire purchasing arrangement, and then set up a test environment to poke at it. There are lots of moving pieces for each and every one of these little fragments of the strategy that need to be implemented in order for you to move on to the next few pieces.

GK:     Absolutely. I think that having a plan in place for keeping all of those different pieces moving in a way that makes sense, where one piece is not going to be holding something else up or getting in the way of something, is really, really important. That’s a challenging thing to coordinate, but it’s essential to try to control it on the front end as much as possible, instead of just taking off and approaching these pieces willy-nilly without really thinking: what makes sense to do first? What’s the best sequence? How does this fit into a schedule? Because otherwise, like we talked about up front, if you don’t plan out how all these different moving pieces and parts are going to come together, then they can easily just stall out.

BS:     Right. Yeah, you need to look at prioritization at that point and say, okay, what are the three big things that need to happen to keep this moving forward? One might be identifying and purchasing new systems. Another could be training or document conversion. Yet a third could be authoring tools: finding the right tools that integrate with the systems you’re looking to implement. Then you look at all of these pieces and say, okay, which ones are going to get us the biggest bang for the buck and make sure we have what we need to focus on the next few pieces? And it’s going to be different for every single implementation.

BS:     There’s no right or wrong as to which one you do first. The only wrong is the one that’s the wrong choice for you. So you need to start looking at that and saying, okay, in order for us to move forward, we may have X and Y, but we don’t have Z, and Z is a deal breaker. So we have to focus on Z first.

GK:     All this prioritization speaks back to what we talked about just a few minutes ago with short-term versus long-term business goals. That’s really a way you can say, here’s what’s most important and most essential to get those short-term goals off the ground and up and moving, and then think about what’s more long-term. You don’t want to start with something that’s really more essential for a long-term goal while ignoring the things that are more relevant for your short-term goals.

BS:     Exactly. Yeah. That really speaks to the scheduling aspect: putting together that timeline and identifying not only what the big pieces are that need to happen, but also the order in which they need to happen. Then you can start scoping each one individually. This one might take 12 weeks to do. This other one can happen concurrently and might take only eight weeks. Another one must be done after these other stages, so we know that’s going to take about another 16 weeks, but it can’t get done until the 12-week piece is done. It’s being able to put together that schedule and that roadmap.

GK:     Absolutely. That’s where it really becomes important to have deadlines and a schedule set, and to try to stick to them as best as possible. Then if things slip off schedule, like we said earlier, really have that open communication so you can let other people who may be affected know: hey, this didn’t quite go how we thought it would; instead of taking 12 weeks, maybe now it’s going to take 15. Or maybe we finished this one piece early, so that can get something else moving. But if you don’t set deadlines for yourself and have those goals in place, then a lot of times things will just drag on forever. I think that speaks back to what we talked about previously with the resource allocation aspect as well.

GK:     You really need to not only have those deadlines in place, but know who’s going to work on each piece, when they’re going to be available, and how that factors into other projects they may be working on. It’s a lot of moving pieces and parts for the content strategy itself, but also for all the people who are going to be working on it alongside their other work. So that’s where that planning really becomes essential.

GK:     One other piece I want to briefly mention, because I think it tends to get ignored a lot, is building in time for quality assurance. Things like taking the time for a content inventory or audit before you build a content model or convert your content if you’re going into structure. Things like setting up test environments and allowing time for user feedback. That’s where I’ve seen a lot of projects get hung up, because when people were doing their resource allocation or their planning, they oftentimes didn’t think about how much time really is involved in quality assurance and making sure it’s done in a way that doesn’t leave anything out and isn’t rushed. So that’s one thing I really wanted to bring up as an important part to include in your planning.

BS:     Oh, absolutely. You have to make sure that everyone who is responsible for that level of testing or quality assurance is aware that something is coming. Don’t just wait until whatever you’re working on is done to throw it at them and say, okay, we did a thing, now go test it. That is definitely the wrong approach. Make sure your testers are involved in, or at least knowledgeable about, what you’re doing along the way, so they know what to expect when it comes time for them to start kicking the tires.

GK:     Absolutely. So are there any other considerations or things that people need to think about when they’re planning out their content strategy and how they’re going to execute it?

BS:     I think the only other thing to mention is that you need to be mindful about how what you’re doing is going to impact other groups. You can’t just assume they’re going to play ball when you start rolling out a new strategy. Make sure they’re not only on board in theory, but pretty much committed to the success of the project, because they should have a stake in it in some form as well.

BS:     Let’s say the technical communication group is implementing a strategy. They should be reaching out to their trainers, their tech support people, their salespeople, their marketing people to say, “Hey, we have this thing that we’re doing. It’s going to impact you all in some way, and you can benefit from it. So let’s come together and make sure that this is going to be a solution that works for everybody.”

GK:     Absolutely. I think that’s especially important if you’re already dealing with a less than ideal situation between groups. For example, I’ve seen one case where there was a large amount of reuse between a tech pubs department and a training department, but it was all copy and paste. So when the tech pubs department decided to pursue a content strategy just to help them publish more efficiently, one of the things they had to consider was that reuse factor with the training group: really getting them on board and thinking about, okay, a year or two down the road when everything is in a new system, structured, and much more shareable than it was before, how is that going to affect not just our group and our publishing abilities, but reuse among these different groups in the company?

GK:     It ended up that the largest amount of reuse was between tech pubs and training, but there were also groups like marketing who were just copying and pasting content from the technical manuals without telling anyone. So like you said, it’s important to think about how all of these groups work together: whether they’re working together in a way that’s efficient and ideal right now, and how things need to be several years down the road.

GK:     I think all of that is an absolutely essential consideration, and it really needs to be baked heavily into the planning phase of your content strategy and into all of the information you come up with about how you’re going to execute and implement it.

BS:     Couldn’t agree more.

GK:     All right, well with that, I think we are going to wrap things up, so thank you Bill.

BS:     Thank you.

GK:     Thank you all for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Content strategy pitfalls: best practices (podcast, part 2) appeared first on Scriptorium.

Content strategy pitfalls: planning (podcast, part 1)
https://www.scriptorium.com/2019/10/content-strategy-pitfalls-planning-podcast-part-1/
Mon, 07 Oct 2019

In episode 61 of The Content Strategy Experts podcast, Gretyl Kinsey and Bill Swallow return to our content strategy pitfalls series with a discussion about planning.

“Another thing that is really helpful is doing a pilot project or proof of concept, because that can help you look at a small but essential piece of your strategy and see how that works, and what goes wrong or what goes in an unexpected direction during that pilot.”

— Gretyl Kinsey

Transcript: 

Gretyl Kinsey:     Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 61, we return to our content strategy pitfalls series with a discussion around planning.

GK:     Hello and welcome. I’m Gretyl Kinsey.

Bill Swallow:     And I’m Bill Swallow.

GK:     Today, we’re going to talk about what happens if you don’t plan a content strategy and don’t plan for implementing new systems. I think both of us have seen this happen quite a bit, so let’s just go ahead and start off with the question: what happens when you don’t plan properly?

BS:     All the bad things happen.

GK:     Yes.

BS:     Yeah. One of the things that I’ve seen, and that I’ve heard as a pain point from talking to other people, is when they look at a content strategy and plan it without considering all of the pieces that need to come together in order to reach the end goal. They look at where they are and where they need to be, and they put that end result as the highest priority, but they don’t consider all the little pieces in between.

GK:     Yeah, I’ve definitely seen that happen as well. I’ve also seen it happen where the end result they had in mind had nothing to do with their business goals and everything to do with, “Oh, we saw this really awesome tool that we think would be a good change.” Or maybe they have this one goal they’re focusing on, but it’s not really taking into account how that’s going to help save time and save costs. It’s not an overall picture, just one little piece that they’re focusing on, and that affects the entire planning process, or lack thereof.

BS:     Right. Because it then shuts out a lot of other opportunities and it shuts out a lot of other places that need to be addressed.

GK:     Yes. I think one really common case I’ve seen of this is where content strategy kind of just encompasses one department or one type of content and doesn’t really look at the organization’s content as a whole, and doesn’t kind of look at the future end goal of getting all of that content aligned and making sure that the company’s branding is a major part of that and that everyone is kind of consistent across their messaging. I think that that’s one big planning gap that I’ve seen happen a lot of times. A lot of it boils down to just different groups kind of working in silos and not collaborating with each other. And so when that happens, then of course they don’t plan together and make sure that they’re coming up with one kind of overarching strategy.

BS:     Right. You did mention focusing around a particular tool, and that’s a fairly myopic view in general. When you start targeting a tool and relying on its own capabilities, you tend to forget to look at all of the other pieces that need to come together to support it, or other places where that content needs to be used. Or if you’re looking at a particular goal of getting everything into a new CCMS, or a new CMS for that matter, that ends up being strictly an implementation problem, and not necessarily a strategic look at how the content needs to come together in order to do that.

BS:     So a lot of the tasks that fall out of the strategy you’re putting together generally focus on that one single goal and disregard a lot of tangential pieces. Other people might be relying on your content and will no longer be getting it in the format they expect. Or it might limit the sharing of content, or it might lead to oversharing of content, in which case you might have the problem of duplicate or redundant content being produced by other groups leveraging your content without actually following some kind of systematic reuse.

GK:     Absolutely. Like I said, that’s something that I have seen happen so many times. Sometimes we’ve been brought into a situation as consultants where that was what happened and then they have had to bring in an outsider like us to fix it. That’s definitely something I think to really consider in the planning phase so that you can avoid that and avoid getting stuck in that hole and then having to kind of crawl your way back out.

BS:     That’s a good analogy.

GK:     What are some other examples that you’ve seen of what happens when you don’t plan or when you kind of fail to do so correctly?

BS:     Well, speaking of crawling, you can end up having portions of projects stuck in a bit of a churn cycle, so to speak, where a lot of effort is being made on a particular piece of the overall strategy, but nothing else is coming together. Then when things do come together, a lot of rework needs to happen on that same piece. In one way, you can boil it down to not hardening any particular portion of your strategic approach until you know all the pieces are going to fall together. Because if a change does happen somewhere down the line, you have to back up and start redoing a lot of the same work over and over again. Likewise, if you don’t get buy-in from particular groups, or on using particular technologies in a certain way, you might have to revisit how you’ve approached your entire content model. That’s no small feat, and it’s not easy to do when 85% of your implementation is complete and you have to go back and basically redo a huge portion of it.

GK:     Yes. A couple of examples where I’ve seen this happen. There was one case where metadata got caught in a churn, because prior to moving things over into a structured environment, nobody had really thought about metadata and how it would be used with structured content. That almost became too much of a focus, and it was a new focus that had not been considered before. People got caught up in the churn of constantly going back and forth and saying, “What metadata do we need? How are we going to use it?” It basically drew all of their focus away from the other pieces of the project so that nothing was really moving forward. I think that’s a good example of the kind of churn cycle that can happen.

GK:     I’ve also seen it happen with conversion processes sometimes or with the development of output transforms. Basically like you said, any one piece, if it kind of gets all of the focus or most of the focus and you’re really working so hard on making it super perfect, then oftentimes what happens is other considerations happen, other things come into play. Then what you thought was so perfect is actually not perfect after all, but maybe you’ve already used all of your resources. Maybe you had budgeted a certain amount of time or money to work on that piece and you’ve already blown through all of that before you had a chance to see how it fit into your larger strategy. Then you’re kind of in a really bad place at that point because what you had developed to perfection is no longer perfect, but then maybe you’re stuck without the means of taking it where it needs to be.

BS:     Right. And you know, that speaks a lot to being able to keep your eyes on the prize, so to speak. It’s nice to have that end goal in view, but you can’t ignore it once you start working on a particular piece of your implementation, or a particular piece of your strategic drill-down into figuring out what the tasks are to complete something, and start working on some of the ground-level bits of your implementation. If you keep your focus away from the end goal and all the pieces that need to come together to make it happen, a lot can change. Other departments and so forth can have their own projects going on, which could impact some of the infrastructure you planned on interfacing with or using directly, and that can have a huge impact on all the work you’re doing. It might be good work, but it would have to be reworked to fit whatever new approach or new systems these other groups have decided to implement while waiting for you.

GK:     Yes, absolutely. What other examples have you encountered of this?

BS:     Another good basic one is the problem of stagnation, where you have a charter to go ahead and implement something, to develop a strategy to get teams aligned and so forth. Then for whatever reason, things come to a crawl. Maybe executive leadership has a different high priority they’re chasing, and suddenly your group’s needs or your project’s needs don’t get the focus they need to move forward.

BS:     It’s really difficult to get out of that stuck mentality of not being able to move forward, because you’re not getting the budget you need, because you’re not getting access to the people you need to talk to. It can be really frustrating. So it’s important to make sure you have a means of communicating up to someone who can make an executive decision and say, “We know that we have a focus on these three other things, but this is still a priority project, so let’s make sure we have some time and resources allocated to keeping this moving forward.” Your content strategy might not be the be-all end-all project that the company is worried about, but you can at least make sure you have some ability to keep it moving forward and not let it stagnate.

GK:     Absolutely, and I think what you’ve said about allocating resources is one of the biggest issues that can lead to stagnation. It can also lead to another example, something we’ve called hurry-up-and-wait mode. A lot of projects end up in that cycle of stagnation and churn as well. I think a lot of it boils down to the allocation of resources, like you said. If you don’t plan for that up front, and you just make the project something people work on as they can, rather than setting aside a certain amount of time that people are going to work on it, then what often happens is it just never gets done.

GK:     Something else always comes up that’s more important or more pressing. And if you haven’t thought about what resources you want to put toward this, then it’s never going to happen. I think that’s also true if you don’t have hard deadlines or a schedule in your plan that you’re working toward. Those two things, not having a set schedule and not having resources allocated, can often just stop a project in its tracks and delay it months, if not years, at a time.

BS:     Mm-hmm. I guess what would you say to a situation where you’re doing everything right as far as you know, and you know you’re doing everything that you’re supposed to be doing to move your strategy along, but things just aren’t going to plan?

GK:     Well, I think the first thing I would say to that is that this is almost guaranteed to happen, so it’s really important to have backup plans when you make your initial plan. There’s always going to be that ideal of how you think your strategy should go, but there are always external factors that come up that you won’t be able to anticipate. You can at least think about, if this sort of thing happens, here is how we would handle it, on the front end. That way, when those things do come up, you’re not completely taken off track; you have a bit of a game plan in mind for how you’re going to handle them, and it doesn’t just completely derail everything.

BS:     Right. Yeah, having those backup plans is essential, and also being able to look at projects and say, “Okay, what can we isolate? What is something that isn’t tied specifically to a dependency down the road?” It might be the initial conversion of your content. It might be doing a content audit to at least get your arms around, “Okay, what do we have to deal with? Even if we don’t have the resources to do anything with it, what are we looking at here? What’s the total scope? What are the types of things that need to be changed? What types of files need to be massaged a bit to make conversion easier?” Something small on that scale can still keep the project going as you wait for other pieces to start moving.

GK:     Yeah, I agree. One thing that I’ve seen really help companies, especially where resources or budget might be very stretched, is to reduce that risk and plan things, like you said, in smaller chunks, in more reasonable pieces and phases. There’s one project I worked on where they decided to do their implementation in small phases. They said, “We know that we can do each piece one at a time and not feel like we’re biting off more than we can chew.”

GK:     Whereas if they had tried to implement every single part of their strategy at once, they knew they would have gotten caught up in the sort of churn that can happen whenever things go off the rails and don’t go according to plan. By taking that into account and knowing how things typically worked, they thought the smarter thing to do would be to take it in reasonable, approachable pieces, and do one thing at a time that they knew they could get their arms around, and keep their arms around, as they were going through it.

GK:     Another thing that I think is really helpful along those lines is the idea of doing a pilot project or proof of concept, because that can really help you look at sort of a small but essential piece of your strategy and see how that works, and then look at what does go wrong or what goes kind of in an unexpected direction during that pilot. Then use that to kind of help plan out your larger implementation more thoroughly, but without having invested everything up front. You can kind of see if something is going to go not quite according to plan, what kinds of things those might be by doing a pilot, and then you can say, “Oh, okay. Here’s how we need to course correct before we go forward to the rest of this and sort of expand it further.”

BS:     Yeah, those small proofs of concept can come in really handy, too. If you have particular groups that you’re waiting to work with, but for whatever reason they’re stalling their engagement, those little proofs of concept can be the spark that keeps the project going, or the spark that kindles some action on their side. It’s a lot easier to show someone, “This is what I’m thinking and this is how it works, and you can play with it and try to break it, or do whatever you need to do to see where we’re going with this.” Sometimes that approach is a lot easier than trying to get people in a room to talk theoretically about how something will be implemented. If they have something they can play with, use, and provide feedback on, it can sometimes move the project forward quicker.

GK:     Absolutely. One case where I saw this work quite well was in a situation where there had been a really large merger, a series of mergers, where this company had grown from having just the one content department to suddenly bringing in these other companies that had their own documentation teams. When they were doing this rebranding effort to bring everything together, they decided to start with the one documentation team that was pushing the project and was the most motivated. They said, “We’re just going to convert this department’s manuals into XML, prove that this works, and then we can use that to convince all of these other groups that have their own sets of documentation: this actually does work, it’s safe for you to do this, and there’s not the huge risk that you think there is.”

GK:     Whereas I think if they had tried to just go ahead and convert everything all at once up front, it would have been chaos, because all of these groups had just basically been bought out and were already under a lot of stress as a result, learning all of the new things they had to learn after being acquired by this larger company. Focusing on a major content overhaul at the same time, I think, would have been too much. But because they had this one group that wanted to go ahead and do it and just start small, that proof of concept was enough to start getting the other groups on board and having them one by one come through and say, “Okay, now we can do this piece of the content and go ahead and convert that.” That way, they were able to tackle the rebranding piece by piece in a way that was not overwhelming.

BS:     Another piece that goes along with that is just having regular, or sometimes even frequent, either meetings or at least communications going out to other groups that say, “Here is where we are. Here’s where things stand. Here are the next steps.” A lot of times, that really helps to one, keep people interested in what you’re doing, and also reminds them that they’re on the hook to share something at some point along the way.

GK:     Yes, absolutely, and I think that if you don’t have those open lines of communication, then that’s another way that things can really go off the rails and destroy whatever original plan that you had for your strategy. I think that’s extremely important. It kind of depends on your company culture, but whether it is face-to-face meetings that are more effective or if you’re a distributed team, whether it’s having web meetings or even if there is a forum that you all post on. Whatever system works best for your company, I think it’s really important to have that in place and have some sort of regularity to it so that you are all kept accountable.

BS:     Exactly.

GK:     We’re going to wrap things up here, but look out for our next podcast episode where we continue our discussion of content strategy pitfalls around planning, this time talking about some best practices of how you should do your planning. Thank you, Bill.

BS:     Thank you.

GK:     And thank you all for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post Content strategy pitfalls: planning (podcast, part 1) appeared first on Scriptorium.

Reuse in DITA and beyond (podcast) https://www.scriptorium.com/2019/09/reuse-in-dita-and-beyond-podcast/ Mon, 23 Sep 2019 In episode 60 of The Content Strategy Experts Podcast, Elizabeth Patterson and Gretyl Kinsey discuss content reuse, how it specifically applies to DITA, and how it can benefit your organization.

“So often we see companies wasting a lot of time copying and pasting. This idea of reuse saves time and money, and then it also helps to maintain that consistency across your organization.”

— Elizabeth Patterson

Transcript: 

Elizabeth Patterson:     Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In Episode 60 we look at content reuse and how it can benefit your organization.

EP:     Hi, I’m Elizabeth Patterson.

Gretyl Kinsey:     And I’m Gretyl Kinsey.

EP:     I think we should start by defining reuse. We bring it up a lot as a benefit of structured authoring and of migrating your content to DITA. What does reuse really mean, Gretyl?

GK:     At its core, reuse is all about writing content one time and then reusing it in multiple places. That’s opposed to something like keeping a bunch of different copies of the same exact information or even similar information. What that does by having just one piece of reusable content is it establishes a single source of truth. That means that your content is going to be more consistent if you just have it in that one place and it lets you do things like take your existing content and use parts of that to create new documentation. You can kind of create multiple documents that reference or reuse the same piece of content over and over.

GK:     An example that we can touch on a little bit, one common one, is something like safety warnings, and cautions, and things like that. You see that very commonly in technical documentation. Just kind of right off the bat is one very quick and easy way that you can see something that’s reusable. But it also might be things like a how to guide, a getting started guide, things like that that you see the same content over and over. That means that it’s probably something you should be reusing instead of copying, pasting, that sort of thing.

EP:     Right. And so often I think we see companies wasting a lot of time copying and pasting. This idea of reuse saves time and money, and it also helps to maintain that consistency across organizations, because you might have content in different locations, created at different times, and you might be pulling from different versions. If you don’t have all of that together and you’re not reusing it efficiently, you might have inconsistent information.

GK:     Absolutely. We’ve seen lots of cases where two different writers are basically writing the same content because there’s not that communication, and they’ve kind of written the same information in two slightly different ways. That introduces inconsistency. And as you said, if there’s not really a good method of version control in place, then when you go to reference some information, you might just copy it out of an older version of your documentation. Then you’ve got something incorrect in your documents.

EP:     Right.

GK:     It’s really important to have that single source of reusable truth.

EP:     So we’ve defined reusable content. What exactly does reusable content look like? Could you share some examples with us?

GK:     Sure. One of the ones I mentioned just a second ago was stuff like safety warnings, but I want to talk about different types of reuse that you can have and then some examples that would go with each. You can have reuse at the document level. This would be an entire publication, and this would look like maybe if you deliver packages of content with different products, but maybe with every single one they get the same kind of, “Here’s how to get started” sheet that goes with it. Or they get a little quick start guide booklet, but then the actual documentation is different product by product. But that one document is the same. That’s something that will be reused across those content sets.

GK:     When it comes specifically to DITA, you can have reuse at the topic level. A DITA topic can be referenced in different DITA maps. This might look like a common introductory topic that’s used in a lot of different publications, kind of like I mentioned before, a how to sort of thing. It might be a list of common terms, or warnings, or cautions that you’re going to see. But basically it’s an entire topic that would be reused at different points.

GK:     You can also have reuse at the element level, so that would be things like paragraphs, lists, notes, images, anything like that, tables. Something where an entire element can be reused in multiple DITA topics. That again goes back to that example I gave with safety warnings, that’s a very typical use case of that. Whatever that admonition is that contains your warning, your caution, whatever, might appear in multiple different places.

GK:     You can also have reuse at the phrase level. For example, your company name or another kind of specific branded term, that just one word or phrase can be reused. There is a caution to keep in mind there, which is if you are localizing, you have to think about how reusing one word or phrase would affect translations.

EP:     Right.

GK:     That’s why we don’t recommend just doing it all over the place. It’s really more if it is for a proper name or something that’s part of your branding. Your company name or something that you want to just make sure you never misspell, you only write it in the one place.

GK:     DITA has mechanisms that support all these different types of reuse. If you’re looking at reuse at the document or the topic level, that might look like you have a main DITA map and you have a reference to another map that’s reusing that document. Or you may have a reference to a topic and that same reference appears in different maps. That would be reuse at the topic level.

GK:     At the element level it might look something like using a content reference, or conref, to pull in that one reusable table, or warning, or note, or whatever that’s your one element you’re reusing.

GK:     Then for phrase level, your company name, that might be supported by a key.
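To make those mechanisms concrete, here is a rough sketch in DITA markup. All file names, IDs, and the key name are hypothetical examples, not from the episode:

```xml
<!-- main.ditamap (hypothetical file): document-, topic-, and phrase-level reuse -->
<map>
  <title>Product A User Guide</title>
  <!-- phrase-level reuse: bind a key to the reusable company name -->
  <keydef keys="company">
    <topicmeta>
      <keywords><keyword>Example Corp</keyword></keywords>
    </topicmeta>
  </keydef>
  <!-- document-level reuse: pull in an entire shared map -->
  <mapref href="shared/getting-started.ditamap"/>
  <!-- topic-level reuse: this same topic can also be referenced from other maps -->
  <topicref href="shared/safety-warnings.dita"/>
</map>
```

```xml
<!-- inside any topic body: element- and phrase-level reuse -->
<!-- element-level reuse: conref pulls in one shared warning -->
<note type="warning" conref="shared/safety-warnings.dita#warnings/shock-hazard"/>
<!-- phrase-level reuse: keyref resolves to the key defined in the map -->
<p><keyword keyref="company"/> recommends reading all warnings first.</p>
```

With a setup like this, changing the keyword text in the map updates the company name everywhere the key is referenced.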

GK:     DITA has all of these really great built-in mechanisms to support all of these different types of reuse, and there are some ways that you can identify what content is reusable so that you can tell which of these mechanisms might be best for it. Of course, one way is that you might sort of know off the top of your head, “I copy and paste this information all the time. I know it’s reusable.” But if you’ve got a lot of content, or a lot of different people working on it, or maybe a lot of legacy stuff built up, and that knowledge is not just right there, there are also tools that can scan your content and tell you, “Here’s where you’ve got an exact match appearing 20 or 30 times throughout your documentation set.” Or, “Here’s where you’ve got a very close or partial match,” and that’s where you can find if different writers have been writing the same thing in different ways over and over.

GK:     With technologies like that, then it can pinpoint here is how much reuse potential that we have and then the types of reuse that might be in the content. You can take a look at that and then determine how that is going to affect putting reuse in place and what your reuse strategy is going to be when you move over to DITA.

EP:     You gave a really good, clear picture or visual picture of what content reuse looks like. I want to cycle back around to talk a little bit more about the benefits of reuse. I mentioned a couple earlier, I talked about saving time and saving money, which are both huge, and then also maintaining that consistency across your company. Do you have anything else to add to that?

GK:     Sure. One thing I want to talk about is localization. That’s because those time and money savings that you get really get even bigger and multiply if you have localization as part of your content workflow. That’s because if you think about how translation works, if you are translating one piece of reusable content, you’re paying for that translation the one time and then reusing it. But if you are not doing proper reuse and you’ve got that content basically copied and pasted all over the place, then you’re having to translate that same piece of content however many times you’ve got it all over.

GK:     If you really maximize on your reuse potential and let’s say you analyze your content and you find out 25% of it is reusable, or even up to 40 or 50%, which is pretty typical, is reusable, then all of a sudden you’re looking at cutting way down on your localization costs. Especially if you look at a situation where the more languages that you’re translating into, the more those savings can really, really add up. That’s one of the big drivers that we have seen when it comes to developing content strategy is needing to get that benefit of reuse to help make localization more cost efficient. That’s a really big one.

GK:     As you mentioned on the consistency angle, one thing I want to talk about there was that reuse can help make content more consistent, not just across a documentation set, but across an entire company. This would be a case where maybe you start with your, let’s say tech pubs department, and you get all of the content there consistent, then what about expanding outward to other groups in the organization. Maybe your training group, your marketing group, if there are any other content producing departments in your organization. I think it’s really important for a brand overall to have that consistency across all of those different groups.

EP:     Definitely.

GK:     There are a lot of times cases where there is reusable content, so a marketing website, or marketing slick that’s handed out at a convention, or something like that. If your product is very technical, or if it’s software, or even some types of hardware where people need to know what the technical specifications are, that might be a case where you would go into the technical documentation and reuse that content in your marketing materials. With training there’s a lot of reuse potential in organizations because as you bring in new employees and you need to train them on the product, there’s a lot of the product documentation right there that could form the backbone of a training course. Then you might just start with that information and then add how to’s, and quizzes, and things like that. Really, a lot of the content that you need is already in your documentation.

GK:     I think that when it comes to helping make sure that consistency is there and really helps your entire brand look more consistent, more put together, that’s a big place for reuse to come into play.

EP:     Right. Because if your brand isn’t consistent, people are going to question your company.

GK:     Exactly. It doesn’t really make the customer feel very secure in your product when they see one thing on the marketing side when they’re ordering, and then when their product arrives the documentation looks like it came from a completely different company.

EP:     Right.

GK:     That’s really something important to consider. That’s another place where you may not think about the immediate financial benefits, but if you’ve got a more consistent brand messaging across all of your content, then that could really help you draw in more customers. Conversely, if you don’t have that consistency, it could lose customers and you may not even think about that that might be why.

EP:     Right, absolutely. Let’s take a look at some specific use cases for content reuse.

GK:     Sure. One that’s really, really interesting to me that we’ve worked on is reuse to deliver targeted content. We’ve done this for a few different companies. What this looks like is when you’ve got all of your customers needing one main set of content, but then there’s also custom content that’s based on things like what version of the product the customer owns, maybe what user role they have or what location they’re in. There could be all kinds of factors like that where, based on that information, they would need to get some additional content that is just for them. In this case, you’ve got a scenario where most of the content is reusable and then it’s just these little customizations. It’s an interesting thing to think about how you set that up and deliver it.

GK:     We’ve addressed this in a couple of different ways for different organizations. This might look like maybe using different DITA maps for different subsets or groups of customers. It may also look like using one main DITA map with different filters applied for different customers. In the one case, you’ve got these different maps and they’re all kind of pulling from the same source of topics. In another case, you’ve got just the one main map and then information is included or excluded based on the particular customer’s information. That’s a couple of different ways you can do it.

GK:     Then if you’re looking at reuse at the element level, it’s also possible to have common topics and then append that customer-specific content via mechanisms like conref push.
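As a rough illustration of conref push, assuming hypothetical file names and IDs: a customer-specific file marks an element in the shared topic and pushes new content in after it.

```xml
<!-- common-install.dita (shared topic, id="common-install"): the shared steps -->
<steps id="install">
  <step id="base"><cmd>Install the base package.</cmd></step>
</steps>

<!-- acme-additions.dita (customer-specific topic): the conref push -->
<steps>
  <!-- mark the target step in the shared topic -->
  <step conaction="mark"
        conref="common-install.dita#common-install/base"><cmd/></step>
  <!-- push a customer-specific step in after it -->
  <step conaction="pushafter">
    <cmd>Apply the ACME-specific license file.</cmd>
  </step>
</steps>
```

When both topics are processed together, the pushed step appears after the shared step in the output, without editing the shared topic itself.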

GK:     There are a lot of different ways that you can approach this type of reuse. That’s one scenario that I’ve seen that’s not the typical, “We need to save costs on localization.” Or, “We need to save costs on formatting.” Delivering the sort of targeted custom content to different groups of customers, but still having some material that’s the same across the board is a really interesting reuse case that we’ve seen a few times.

GK:     Another one is reuse for rebranding. This might be a scenario where you’ve got a new company logo, or maybe you’ve got a new company name, or a tagline, or whatever and that needs to be referenced in all of your documents. If you are in some sort of unstructured environment where reuse is maybe either impossible or just very, very difficult, you might be looking at a situation where someone would have to go in by hand and copy in that new logo, or name, or whatever into every single document, which is a huge waste of time and-

EP:     And very inconvenient.

GK:     Yes. And people don’t want to do that. This is a case where we’ve seen this be a really big driver for moving to DITA for some companies, because they don’t want to have to go through the pain of all of that manual copying and pasting. They just want to have that one reusable logo, or maybe a DITA key that’s referenced with the company name, and just have that be used in all their documents. Then if it ever changes again, if they go through another rebranding in two years, they just have to change out the logo or the name, and then all the documents automatically update, and they don’t have to go through and painstakingly change their branding individually across all of those documents.

EP:     Right. Which would save a lot of time.

GK:     Absolutely.

EP:     If any of our listeners are interested in learning more about reuse and DITA, you can visit LearningDITA.com. We actually have two reuse courses. One covers the basics and then one goes into more advanced reuse mechanisms.

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

GK:     Yes. I know that we talked through some of those on here. I mentioned using keys, conrefs, conref push, all that stuff; all of that is covered in really nice, detailed how-to information in that second advanced reuse course on LearningDITA. Then the first course just goes into the basics of how reuse works. If you want a nice expansion and some hands-on practice with reuse, that’s a good place to go.

EP:     Absolutely. We also have a lot on our blog on reuse and we’ll link some of that in the show notes. So with that, I think we’re going to go ahead and wrap up. Thank you, Gretyl.

GK:     Thank you.

EP:     And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Reuse in DITA and beyond (podcast) appeared first on Scriptorium.

Using the Learning and Training specialization for your content (podcast) https://www.scriptorium.com/2019/09/using-the-learning-and-training-specialization-for-your-content-podcast/ Mon, 09 Sep 2019 In episode 59 of the Content Strategy Experts Podcast, Alan Pringle and Kaitlyn Heath discuss how you can apply the Learning and Training specialization to your content.

“I think the conditional processing is a huge benefit as well. You can have a lot more interactivity built in without that human interference.”

— Kaitlyn Heath

Transcript: 

Alan Pringle:     Welcome to The Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In Episode 59, we look at the DITA learning and training specialization.

AP:     Hello, everyone. I’m Alan Pringle. Today I have Kaitlyn Heath here with me.

Kaitlyn Heath:     Hi.

AP:     Hey there. Let’s talk today a little bit about the learning and training specialization that is part of the DITA XML standard. Let’s start the conversation first with defining exactly what that specialization is. Tell us a little bit about it overall.

KH:     The learning and training specialization is designed for instructional content. You can put things like your learning plan. You can do your entire course. Then you can also have things like your assessments or your questions and however you design those.

AP:     Because it’s part of the bigger DITA ecosystem, how does it fit in with that?

KH:     It’s designed to fit in and be able to be reused along with your other DITA content. You’re doing things like you would normally do in DITA, like using small topics and putting individual questions in their own topics, and then you’re able to reuse those in different kinds of maps that are designed specifically for learning and training or your standard DITA maps.

AP:     If you can use a standard DITA map, does that mean that you can mix, say, a standard DITA topic with the learning and training content?

KH:     Yes, of course you can. You can use all of your normal DITA content, let’s say even your specialized DITA content, within these new learning and training specialization topics, but you will have to use the specific elements in most of those cases that are designed specifically to fit within them.

AP:     There is a really big mix and match kind of scenario here.

KH:     Yeah. Yeah, I think that’s exactly how it was designed to work.

AP:     Yeah, so if you have a task, for example, that someone in your tech comm department has written, for example, and you’re in the training department and you need to reference that, you could just pull that into your stuff.

KH:     Exactly. You might need to use the specific learning and training element, but then you would be able to reference that DITA topic. In the learning and training specialization, learning content topics, which is where the bulk of your instructional material is going to go, you are allowed to embed other DITA topics within them as well. If you are writing a task that is mostly going to go in your instructional content but you want to be able to reference that ID later, you can embed that topic directly within your learning content topic.
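A minimal sketch of what that embedding might look like, with hypothetical IDs and content (exact content models can vary by DITA version):

```xml
<learningContent id="lesson-install">
  <title>Lesson: Installing the product</title>
  <learningContentbody>
    <lcInstruction>This lesson walks through a basic installation.</lcInstruction>
  </learningContentbody>
  <!-- a standard DITA task embedded directly in the learning content topic,
       so its ID can be referenced later -->
  <task id="install-task">
    <title>Install the software</title>
    <taskbody>
      <steps>
        <step><cmd>Run the installer.</cmd></step>
      </steps>
    </taskbody>
  </task>
</learningContent>
```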

AP:     I’ve tinkered a little bit with the learning and training specialization, and I have to say I was overwhelmed by the sheer number of elements because there’s a lot.

KH:     There are a lot of elements, especially in things like the learning plan topic type. You’re not meant to use all of them. You’re meant to use and pick the ones that you need to use.

KH:     For example, in our LearningDITA.com courses, we eliminate a lot of the topic types that we don’t need, so we really only are using learning content topic types and then learning assessment topic types. We decided we don’t have enough content for a learning plan, and we don’t want an entire topic for a learning introduction or a learning summary, so we decided to just include those specific elements within our learning content topic type and then reference those in a normal DITA map.

AP:     Can you kind of go over briefly the hierarchy of… I guess is learning object the right word here, because there’s so many layers? It’s like an onion almost.

KH:     I think so. Learning objects are an element and a map type that are-

AP:     Oh, that’s confusing.

KH:     It’s a little bit confusing.

AP:     Yeah.

KH:     You can have this one main learning object element within a learning object map. The way that that’s designed to work is that you have one main learning object within your learning object map, and that is where you will define the smaller units or sections in your learning content. Then, from there, you will include your learning plan, learning overview, learning content, learning summary, learning assessments, if you would like, in that map. Then, on a higher level, maybe you would have units or chapters that would be included in your learning group map or your learning groups.

KH:     It gets a little confusing because there are a lot of different ways that you can nest these. It seems to be intended that you have units and learning groups at the higher level, with your learning objects embedded within them as the smaller sections, however you choose to define your units or sections in your learning content.
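One way that nesting might look in a map, using the learning map domain elements (the file names and unit titles here are invented):

```xml
<map>
  <title>Course: Product basics</title>
  <learningGroup navtitle="Unit 1">
    <learningObject navtitle="Lesson 1">
      <!-- these refs are specialized topicrefs from the learning map domain -->
      <learningOverviewRef href="lesson-1-overview.dita"/>
      <learningContentRef href="lesson-1-content.dita"/>
      <learningPostAssessmentRef href="lesson-1-quiz.dita"/>
      <learningSummaryRef href="lesson-1-summary.dita"/>
    </learningObject>
  </learningGroup>
</map>
```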

AP:     You’re not necessarily required to follow that whole hierarchy with this.

KH:     Right. Right. You can use them however you need to use them and-

AP:     Or not use them.

KH:     Or not use them at all, right, which we have not used them in our LearningDITA courses, but yeah, you can use them however you see fit. You may not have units and sections. You may really only have chapters, and then you can use learning objects or you can use whatever you need to. You can use a normal map.

KH:     The interesting thing about the learning objects and learning map elements, they’re based on topic refs, so you will have to use specific things like, in a learning object, you will define your learning content ref, and that will have to be the specific element that you use. You can’t use a topic ref element, but in the href that you use, you will be able to reference other DITA material as well, so it doesn’t have to necessarily be a learning object topic type. You can use a normal concept topic in that place.

AP:     That’s where your reuse really comes into play.

KH:     Exactly. Exactly. The reverse is also true, so in your normal DITA map, you can reference those learning topic types, and it’s not going to throw an error. It’s just a matter of how you process your content when you later turn it into a PDF or put it into SCORM or whatever to go on an interactive website.

AP:     You were just talking about various output types, and you mentioned SCORM. Let’s tell people what SCORM is out there.

KH:     It stands for Shareable Content Object Reference Model, and it’s basically a way to package your information so that your learning management system can process it.

AP:     It’s kind of like an interchangeable way that different LMSs can suck in a course, basically, more or less.

KH:     Yes. Yes.

AP:     The thing about DITA, even standard DITA, not just this specialization, is that it gives you tremendous amounts of flexibility in what you transform that content into.

KH:     Of course.

AP:     For training, I mean you could do a teacher guide. You could do a student guide. You could even do handouts. That’s on the print side alone.

KH:     Absolutely. The great thing about using DITA, and especially for assessments and things that you have teacher information for, is that you can author it at the same time and store it in the same place so that you can look at them at the same time.

KH:     For example, if you have a test, and you’ve got questions and then the teacher answer key, you can author those things and then view them at the same time, so you will have the answer options and then a special tag, lcCorrectResponse, that says, “This is the right answer.” It’s really nice not to have to have one Microsoft Word file with the student information and then a separate printout or Microsoft Word file for your teacher information. You can store and write them at the same time.
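As a sketch, a single-select question in a learning assessment topic might look like this; the question text is invented, and lcInteraction is the wrapper inside the assessment body:

```xml
<lcInteraction>
  <lcSingleSelect id="q1">
    <lcQuestion>Which mechanism enables element-level reuse?</lcQuestion>
    <lcAnswerOptionGroup>
      <lcAnswerOption>
        <lcAnswerContent>topicref</lcAnswerContent>
        <lcFeedback>Not quite: topicref works at the topic level.</lcFeedback>
      </lcAnswerOption>
      <lcAnswerOption>
        <lcAnswerContent>conref</lcAnswerContent>
        <!-- the "special tag" marking the right answer -->
        <lcCorrectResponse/>
        <lcFeedback>Correct!</lcFeedback>
      </lcAnswerOption>
    </lcAnswerOptionGroup>
  </lcSingleSelect>
</lcInteraction>
```

An output transform can suppress lcCorrectResponse and lcFeedback for the student version and show them in the teacher version, and an LMS can display the lcFeedback when a learner picks that option.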

AP:     In addition to the modularity that DITA enables, the whole conditional aspect of content plays into this too, so you’ve got this built in intelligence where you can create a question and an answer, and they’re together.

KH:     Yes, it’s a huge benefit.

AP:     Then you can output it showing the answer or not depending on the audience for that particular printout or whatever it is.

KH:     Right. Exactly. Exactly. Then, especially for learning management systems and interactive courses for students, you can then print those answers to the screen when they have selected the correct or incorrect answer. You can have different outputs for different inputs that they have, so if they pick one answer, you can output one thing, and if they pick another answer, you can output the other thing.

AP:     Basically, you’re creating feedback with it.

KH:     Exactly.

AP:     Based on if they answer a question incorrectly, it could provide guidance, “No. That’s not right, and here’s why.”

KH:     Exactly.

AP:     It gives them kind of in-depth context that you can include. We’ve actually done this on our LearningDITA.com site. It is based on the learning and training specialization. If you get a question wrong, a lot of the times it will tell you, “No. That’s not the right answer, and here’s why.”

KH:     Exactly.

AP:     We talked a little bit about print. We talked a little bit about online. Let’s talk a little more about the online ability in learning management systems and what you can do with this content.

KH:     You can include a lot of the media content that you could not include in print. If you have instructional videos and things like that, you can include them in the learning and training specialization. Also, I think the conditional processing is a huge benefit as well. You can have a lot more interactivity built in without that human interference.

AP:     Well, it’s a tremendous amount of overhead to maintain two separate versions, especially if you’re working in a desktop publishing tool like Microsoft Word-

KH:     Absolutely. Absolutely.

AP:     …and keeping those two things in sync. The amount of brain power alone that has to go into that: “Oh, I changed this, so I need to change it over here in this version.” I can only imagine if you went beyond student and teacher, if there was another audience in there, how hard it would be to maintain two versions, much less three.

KH:     Exactly, and that is a real possibility for some of the audiences that need to use the learning and training specialization. You might be training different groups that need to know different levels of things.

AP:     Exactly.

KH:     Then you have that conditional processing that says, “For this group, we need these topics and these lessons, but for this other one, we don’t necessarily need all of them. We just need the first three.”
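That kind of filtering is typically done with profiling attributes plus a DITAVAL file; here is a minimal sketch, assuming an invented audience split:

```xml
<!-- in the map: later lessons are flagged for one audience only -->
<topicref href="lesson-4.dita" audience="advanced"/>
<topicref href="lesson-5.dita" audience="advanced"/>
```

```xml
<!-- basic-group.ditaval: exclude the advanced-only lessons for this group -->
<val>
  <prop att="audience" val="advanced" action="exclude"/>
</val>
```

Building with this DITAVAL produces the three-lesson version for the basic group; building without it produces the full course.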

AP:     You still have all of that source to build off on-

KH:     Right. Exactly.

AP:     …and you’re not having to make a copy and paste it over and over and over.

KH:     Exactly. It’s saving all of that time that DITA normally saves but for this entire instructional content.

AP:      I know you’ve worked a whole lot on LearningDITA.com, and you’ve worked with a few clients and talked to them about the learning and training specialization. Based on your own experience and talking to clients, what do people find really challenging about using the learning and training specialization?

KH:     There is, just as in DITA, sort of a mindset shift in authoring. A lot of times, you have to be mindful of the format that your questions, for example, are going to be in. You might not want to have as many interactive drawing types of questions and things like that. Sometimes it’s difficult to move from a paper model to “this is going to be reused, and this has to fit in the DITA model.” Of course there’s specialization, you can specialize your question types, but it is a little bit difficult to go from “I have full control over what this question looks like” to “this is the structure that it has to fit in.”

KH:     I think the same thing is true for just authoring the courses. You start to think about that implied structure. You think, “Oh, what is my overview? What is my summary?” Which I think sometimes is helpful because students tend to crave that structure. They like having, “This is what the header looks like, and this is what I’m looking for,” but I think it’s difficult to make that shift.

AP:     In order for modularity to work, even in standard DITA content, not even the learning and training content, if you don’t have that shift in mindset and you’re still thinking about what this is going to look like on paper, that kind of paper-based paradigm, DITA, in general, is going to be difficult for you.

KH:     Right. I think that’s very true, but it saves a lot of time.

AP:     How does it save time?

KH:     Well, for example, in instructional content, you find that people will rewrite questions over and over again for different courses or, like I was talking about before, really similar courses with a little shift in content. In this way, you can reuse them, and also reuse content that you’ve already written, say, in your technical documentation for a product. You can then use that same content in your training, so you don’t have to rewrite it every time. Or, if you do, maybe you conref some in and then change the way that it’s framed.

AP:     What does conref mean, for those who don’t know?

KH:     Conreffing is a way to pull sections from your existing content into your new topic and your new content, whether it be paragraphs or whatever granularity you want it to be…

AP:     Yeah, so-

KH:     …with an ID or… Right.
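Mechanically, a conref is just an attribute that points at the ID of an element in another file; the processor pulls that element’s content in at build time. A minimal sketch, with invented file names and IDs:

```xml
<!-- Hypothetical source file install.dita (topic id="install"):
     the original paragraph carries an id so it can be referenced. -->
<p id="safety-note">Disconnect the power supply before servicing the unit.</p>

<!-- Training topic: pull that paragraph in by reference instead of copying it.
     The conref value is file.dita#topic-id/element-id. -->
<p conref="install.dita#install/safety-note"/>
```

If the warning changes in the technical documentation, the training content picks up the new wording automatically at the next build, which is the time savings being described here.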

AP:     Okay, so reuse is a huge part of it.

KH:     Yes, absolutely.

AP:     We’ve already touched on that. Then there’s also the formatting angle, which we’ve touched on a little bit too, because if content creators, your instructional designers, are not having to spend time focused on how this is going to format in print or what this is going to look like in the LMS, all of that’s handled automatically by the transforms that turn the DITA into the various output types. You don’t have to be thinking about that. You get to focus strictly on the content itself.

KH:     Right, which again is hard to accept for some people that are used to, like I said, drawing their own pictures, maybe, for learning content or setting it up in a specific way, but it does save a lot of that time.

AP:     You mentioned, earlier, video, and you’re talking about art. You can still get-

KH:     Yes, all of those things.

AP:     …multimedia stuff will still… You could still reference them as objects in this content.

KH:     Of course.

AP:     In a lot of LMSs, you can play a video, for example.

KH:     Right. Exactly. Yeah, so it’s all still possible. It’s just a different way of including it and thinking about where you’re going to include it.

AP:     If people need help with the learning and training specialization, what are your suggestions?

KH:     Well, there are not a lot of resources, but we do have a course on LearningDITA.com called The Learning and Training specialization.

AP:     Free.

KH:     Free course, right? That is the first place that I would go for an overview of all of the topic types and sort of what they all do and some of the different elements included in them.

AP:     Yeah. I think one of the more important things, and you’ve already touched on this, is you don’t have to use it all. If you go in there with the mindset that, “I have to use every single one of these elements in this hierarchy,” you’re going to make your life very unpleasant.

KH:     Let me just say we couldn’t even include all of them in the course. I mean you probably will never even need to know about all of them, but if you do, you can always visit the DITA 1.3 specification to look them up, but this gives a good overview of what those things are and how to use them.

AP:     Yeah. I think it’s important to realize the people that created this specialization, I’m sure they were thinking about all the different use cases. It’s not one size fits all.

KH:     Of course. Right.

AP:     That’s why there are so many elements and so many layers. It’s a matter of adapting all those layers to suit your particular purposes.

KH:      Right. Think of the entire specialization as, “This is probably everything that is possible, but what do I need and what maps to the content that I have and the needs that I have?”

AP:     That fits beautifully into the whole idea of DITA, which has the word Darwin in it, Darwin Information Typing Architecture. It is meant to be adaptable. That means you adapt it to what you need.

KH:     Yes, exactly.

AP:     I think, on that note, we will leave it there. Thank you so much, Kaitlyn, for your time.

KH:     Thank you.

AP:     Thank you for listening to The Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Using the Learning and Training specialization for your content (podcast) appeared first on Scriptorium.

Unifying content after a merger (podcast) https://www.scriptorium.com/2019/08/unifying-content-after-a-merger-podcast/ Mon, 26 Aug 2019 13:30:14 +0000 https://scriptorium.com/?p=19164 https://www.scriptorium.com/2019/08/unifying-content-after-a-merger-podcast/#respond https://www.scriptorium.com/2019/08/unifying-content-after-a-merger-podcast/feed/ 0 In episode 58 of the Content Strategy Experts Podcast, Elizabeth Patterson and Sarah O’Keefe discuss how to unify content after a merger.

In terms of pushback or in terms of change management, what we have to do is ask, “What does this other team do really well that potentially is going to be asked to change tools? How do you do this well?” And position the change as an opportunity.

— Sarah O’Keefe

Related links:

Twitter handles:

Transcript: 

Elizabeth Patterson:     Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In episode 58, we look at how to unify content after a merger.

EP:     Hi, I’m Elizabeth Patterson.

Sarah O’Keefe:    And I’m Sarah O’Keefe. Hello.

EP:    And we’re going to talk some today about unifying content after a merger. So I think the first question to really get into is, what are some of the biggest content challenges that you commonly see after a company merger?

SO:     Well, the biggest challenge in general is always change management. We should probably just start by putting that on the table and saying that overall, people hate change, and mergers mean change for everybody, whether you’re the acquiring company or part of the acquiree. As for the biggest content challenges that you face after a merger: you take company A and company B, and now we have a new shiny company C, or possibly just company A with company B included. From a customer point of view, you now have a single company, but if you go and look at a post-merger website, or actually, more likely, post-merger websites, plural, what you’re going to see is that the content itself is not consistent. It doesn’t present the same unified, merged perspective that the company wants you to see, right?

SO:    They put out a press release and say, “We’ve now joined together. A and B are now C sharp, and it is awesome.” But then you go to their website and it’s very easy to tell which of the pre-acquisition companies actually created a particular kind of content, so you have a lack of consistency. That means that you’re going to have search problems, you’re going to have delivery problems, you’re going to have terminology problems where the two previous companies are using words to mean different things. The branding’s not unified. The documents look different.

SO:     So you have all of those issues, which are all kind of customer facing, outward facing issues, and then on the inside, the big challenges you’re going to have on the inside are two or three or more teams that do things in different ways, so they have different content creation processes, different content approval processes, different production workflow. They might be putting out different formats, and I mean literally. Like, “This group over here does only PDF and this group over here does only HTML,” or they’re both putting out PDF but one team was out of Europe, so their paper’s all A4, and one team is out of the US or North America, so everything they’re doing is US letter.

SO:    Setting aside the sort of obvious delivery problems, that now you have all this stuff that just doesn’t quite match up and it makes the merged company look bad, it’s expensive. It’s really expensive to maintain all these different publishing pipelines for what is now a single or what is supposed to be a single team.

EP:    Right. And likely if you’re not presenting a unified brand you’re eventually going to start losing customers.

SO:     It will not help you with your customers. Yeah. And in many cases, these mergers happen because at a strategic level, the company wants to unify the products or be able to cross-sell the products or be able to expand their geographic reach, and you can’t do those things if you can’t present a unified, cohesive user experience.

EP:     When companies merge, they’re bringing different content to the table. So some may be structured, some may be unstructured, there might be large manuals, there might be small tech docs, there might be things broken into topics and things that aren’t. How do you go about actually unifying that content and creating that unified brand across the companies?

SO:     Well, it’s an opportunity, right? Because it’s an opportunity to look at where you are as an organization, or as a now-merged organization, and figure out what your best way forward is. In many cases, the bigger company, the acquiring company, will simply say to the company they acquired, “You need to fit into our workflow.” And clearly if you have 50 content creators and you bring on another five from a smaller organization, then it makes a lot of sense to just sort of move them into your existing workflow, whatever that may be.

SO:    But when you have what’s more of a merger of equals, you know, a team of 15 over here, and a team of 20 over there, and there’s not a clear, “We’re doing things better and you people are doing things badly,” then I think this is a big opportunity, because it’s an opportunity to revisit the entire content workflow and make some decisions about, “What are the best practices going forward? What is each team doing really well? Where can we improve?” And perhaps, “Should we just throw away the whole thing and come up with a new workflow entirely?” It’s also worth noting that after a merger, you may have a team that’s big enough to justify an investment that you could not justify separately. So if I have a team of 10 and you have a team of 10, and we merge, and so now we’re a team of 20, that opens up some possibilities that individually the investment for a team of 10 might have been too big, but for a team of 20 it might be a reasonable approach that we can now choose because we’ve gotten bigger.

SO:     So I think what you want to do is take a look at what everybody has. Do the traditional things, do a content audit, do the stakeholder interviews, identify what teams do really well. What are you really proud of? What have you done best? What do you think are the best things that you’ve done with your content? And once you gain some trust, ask the opposite question. What do you do worst? Where do you see the problems? What’s the number one thing you would like to fix? That question usually leads to some really, really interesting answers. Probably don’t want to start there. If I walk into a meeting and introduce myself and start asking, “What is the biggest problem that you see?” You know, let’s have some coffee, and do some icebreakers and some introductions, and talk about the good stuff before we dig into the bad stuff.

SO:     But the bad stuff question is far, far more valuable, right? Because if I ask a group of writers, “What do you hate doing?” they’ll say, “We spend hundreds of hours a year redrawing engineering content. We get engineering drawings, but they’re not in a usable format for what we’re trying to do with our content, and it is just soul sucking, and we want it to go away.” That’s an entry point, not just into “we can save you a bunch of time and money,” but into “let’s look at how we can fix your process to make the soul-sucking, braindead stuff go away and allow you to focus on the value-added work of creating really good content.” And I say writing, but whether it’s audio or video or text, how do you deliver this information best? So go talk to the people, go look at the existing content, go look at the legacy content, figure out what’s good and what’s bad, and based on all of that, working with the team or the teams, and of course this is from our perspective as consultants, we can then put together recommendations for going forward and some sort of a roadmap that says, “It’s going to take this long. It’s going to cost this much money. These are the kinds of resources that you need.”

EP:     Okay. So naturally, digging into all of the content that they have and looking at, “What are you not doing correctly? What are you doing correctly?” All of that’s going to mean that companies are going to have to make some big changes in what they’re doing, and there might be one team that has to make more changes than the other. So how do you avoid pushback in that type of scenario?

SO:     Well, there’s going to be pushback. I mean, I’m not sure it can be avoided, because as I said at the top, people hate change. Change is painful. And stepping back for a second, think about this from a merger point of view, right? You were in a nice little group of like 10 people and you were doing your thing and everything was great. And then along comes this monster company that has like 40 writers, and they say, “Hey, guess what Elizabeth? You are now part of our 40, now 50 person organization, and all that expertise that you’ve built up in tool A, B, and C is totally irrelevant. We don’t use those. Those are kindergarten tools. We’re going to be using these cool new tools. We’ve been using them forever. You have to learn them all. Oh, and by the way, our content is organized differently and approved differently, and basically everything you know is worthless.” So when you put it that way, people tend to push back. I mean, yeah. Structurally what you’re really saying is, “Your expertise coming from the mergee is no longer of value.” Right? “The things you know about this tool are not valued in the new organization because we’re not using that tool.”

SO:     In terms of pushback or in terms of change management, what we have to do is we have to look at, “What does this other team do really well that potentially is going to be asked to change tools? How do you do this well?” And position the change as an opportunity. So instead of saying, “Your expertise is now worthless,” it’s, “Hey, you’re going to have the opportunity to learn some new tools, and these are cutting edge, industry leading, et cetera, et cetera, and they’re going to make you more valuable. Your career, your resume is helped by learning this new stuff.” I mean, just as a general rule, learning a new tool is a good thing. It increases your skill set and all the rest of it. So instead of, “Change is bad and scary,” it’s, “You have an opportunity to learn some new stuff.”

SO:     And I have told people, especially on, again, the mergee side, “Look, just give it a chance. Try this tool, learn this tool, shift into the new workflow, kind of give it a shot, see how it goes. If you hate it,” and with mergers and acquisitions, very often there’s a lot of, “We hate this. We don’t want to do it,” and a lot of stress. Well, if you hate it, that’s fine. You can eventually go and change jobs. I mean, you’re not going to win, right? I mean, the acquisition has happened. You can’t stop it, so take this opportunity to learn the new tool, and if you find that you don’t want to work in this new environment, at least you have a new tool when you then decide, “I’m out of here. I’m going elsewhere.”

SO:     But I think that’s really the key, is to identify the things that we need to take from each organization as we merge them together. Try to avoid simply saying, “Hey, you people, you will now be subsumed into Big Mega Corp,” and identify some things. Sometimes those smaller teams are doing a much better job than the bigger teams. I’ve certainly seen cases where the smaller team was pretty cutting edge, and their approach, their technology stack, their way of doing things actually won out over the bigger company or the bigger team. Now, we are not yet today in a world where these kinds of mergers are driven by how the content teams are producing their content, but there is an opportunity there to look at the smaller team and see what they’re doing and see what we can take out of that.

EP:     How does training factor into all of this? Because we’re bringing on teams that have different levels of experience or introducing new tools. Some of them might be familiar with the tool, some of them might not. So how does training play into all of this?

SO:     One of the things to keep in mind is that we focus on training in the content stack, but when there is a merger, it’s actually quite common to have a lot of other training as well. For example, something like a new HR system, or, “Oh, everybody has to attend this mandatory training that we’ve always done in the big company, but the smaller company never did it.” So we can’t just look at training in a vacuum, because it’s really quite likely that the people we need to train on content-related things have actually been through piles and piles of mandatory training due to their acquisition. So I think it’s important to start there and recognize that that’s happening.

SO:     One of the worst training experiences I had in my life as a trainer was when I showed up and I just had this really cranky group of people, and I couldn’t figure out why they were so mad. They were just bitter and annoyed, and they weren’t quite pelting me with tomatoes, but they were just really, really sketchy. And I hadn’t been there long enough for them to get mad at me, so eventually I figured out it wasn’t me, and eventually, eventually, I got out of them that they had been required to travel to go to training for something like three of the past four weeks. So I was week three of mandatory training, and they had just had it. They were away from home. These were people that didn’t typically travel a lot, and they had been put on this three weeks of, “Just go to the mothership and get trained and assimilated.” They weren’t mad at me, but they were really mad.

EP:     That’s a pretty big commitment there.

SO:     Yeah. And it was unreasonable. They should’ve spread it out. The company should’ve spread it out better and not landed in the situation where they were so mad they weren’t going to hear a word I said. So we got past that eventually, but that’s one thing to keep in mind, is there are going to be a lot of training demands. So really pay attention to, “Are we making people travel more than they’re comfortable with? Are we stacking up training? Can we minimize it? Can we do video online instead of making people travel?” All those kinds of things.

SO:     Okay. Outside of that, when we start thinking about training, we have to look at, well, as a content creator, what level of expertise … What’s your skills gap? If we change systems, then what do you not know yet? What do you need to know? If you look at moving somebody out of a book based, PDF based content world, then you’re going to have to teach them not just things like tagging, how to do reuse, how to work in a content management system. But probably you’re going to have to start at the beginning, which is, “What is topic based authoring, and why should you care?” So we have to kind of look at who’s in the organization, in the group that’s going to change systems potentially, and how much do they need to know about those systems? Like, what level do we need to get them to, and what level are they currently at? And then figure out how to bring them up to the level they need potentially over time.

SO:     We don’t have to do it all in three days of training. We can do a lot. We have a lot of flexibility. You can do self study, you can do online, you can do classroom-based. At the end of the day, classroom-based training with a really good trainer is more effective than any other approach that is out there. And it’s expensive, right? I mean, A, you have to bring in the trainer. B, you probably have to bring in some of the trainees. So you have travel costs, which are hugely expensive. You have to have a training room. Many companies have one. Some don’t. But there is, especially after a merger, there’s great value in putting all the people, the newly potentially unified team, right? They’re supposed to be unified, but they might not be. Putting all of those people in a room together and having them do training together and kind of get to know each other, and maybe they go out to dinner that night or they socialize a little bit at lunch. There is enormous value to doing that, and it needs to be factored in when you think about training. So when we look at classroom training and say, “Oh, that is going to be so, so expensive,” absolutely true. Consider the value that classroom training brings you.

SO:     Now, I say that. At the same time, given the globally distributed teams that we have now, there are things you can do with online training that you may not be able to do with classroom training. Some people cannot travel for a variety of reasons. It could be medical issues of their own. It could be that they have caregiving responsibilities and they can’t pick up and fly halfway around the world. It could be that they are working in another country, and bringing them into the country where you want to have the training could be problematic. You might not be able to get a visa on time, that type of thing. Oh, and it’s easier to record online training. Certainly you can record classroom training, but it tends to be kind of suboptimal.

SO:     So live instructor led training online mitigates some of the travel issues. It’s still live, so you have questions and answers and that kind of thing. And then as you kind of move down the pipe to, let’s say, e-learning and asynchronous e-learning, those kinds of things, that’s where you have an opportunity to just simply record the training and have people do self study on their own time, which then addresses some of the time zone issues and things like that. So that was a long winded way of saying there are lots of different ways of doing training, and not any single one of them is going to solve every problem.

EP:     Right. So you could pick and choose, I guess, the situations in which, “Okay, this might be more effective for classroom based training, and then we can have additional training offered online to offer that flexibility.”

SO:     Yup, exactly. So I think there’s a place for all of them. We very often encourage people to do some self study for the introductory levels. If we’re talking about DITA training, then of course we have learningdita.com, and they can kind of work through that, and then the classroom training or the live instructor led training, we focus on the specific implementation that that customer has. So your specific content model, your specific tools, your specific content workflows. And I think there’s a lot of value in doing training that’s not just, “Hey, this is topic based authoring,” but in fact, “Here’s how you are going to work in your environment, in your organization.”

March 2025 update: We have moved LearningDITA to a new platform. The Introduction to DITA course is still free, and you can sign up for courses at store.scriptorium.com.

EP:     Well, we’ve talked about content challenges after a company merger, and what unifying that content will look like, and then the training that follows that. What makes all of this worth it?

SO:     Well, there are certainly companies that have not done this. There are companies that infamously merge a bunch of subsidiaries and then just let them do their thing, and they actually don’t unify them. But more often they do want to unify, and what makes it worth it is that the organization that’s doing the roll up or that’s doing the merger has a strategic vision for the merger. “We want to combine these companies because we will move forward and we will be stronger together as this unified company.” Therefore, if that’s your goal, then you need a unified customer experience, right? You need people to come to your acquired company website and feel as though they are looking at a single entity. So there’s a unified customer experience that you need. People expect you to speak with a single voice and not have this obvious, “Well, that clearly came from the old company, because it doesn’t look anything like what the new company produces.” So that’s one thing, so that’s the customer experience angle.

SO:     There’s a cost angle. There’s cost associated with managing, maintaining, licensing a content production process. The technology that goes into it, the processes, just the general sort of maintenance of that workflow. As a general rule, it would be cheaper to have one workflow for everybody than it is to have two workflows or three or 17.

EP:     Of course.

SO:     You’d be surprised. And then if you want your content creators to collaborate, work together, cross over their skill sets, work on each other’s documents, those kinds of things, then you need some cohesion. You need the team to kind of come together as a unified team. And this process will achieve that, right? Because if you get to a unified content process, you can typically get to a unified content team.

SO:      Now, I will say that the people are always, always, always the most difficult part of this process. Always. Technology is easy, people are hard, and mergers in particular, or actually acquisitions, they can be hard. When you put together two groups of 25 and then management from on high says, “Well, we know that you’re going to be more efficient as a bigger group, so get rid of 5% or 10% of your people,” and that inarguably happens. I mean, it just does. So people are very wary of mergers or acquisitions, and you can’t really blame them, but we need to work on that team cohesion and getting everybody sort of on the same page working together, and not, “Oh, I’m from the old group and you’re from the new group, and we don’t like each other much.”

SO:     I have seen cases where that has failed, in that there was no team cohesion, so you have two groups operating under the same umbrella. Structurally, they look as though they should be one team, but in fact they are two teams and they don’t talk to each other, and it is really bad. So as a manager or as a leader, I need to do whatever it takes to bring those teams together eventually. Not in week one, but bring them together, get them all kind of working together, and get them working as a team and not as the sort of company A and company B teams.

EP:     Right. Well with that, I think we’re going to go ahead and wrap up. So thank you, Sarah.

SO:     Thank you.

EP:     And thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Unifying content after a merger (podcast) appeared first on Scriptorium.

Enterprise content strategy: Putting the pieces together (podcast) https://www.scriptorium.com/2019/08/enterprise-content-strategy-putting-the-pieces-together-podcast/ Mon, 12 Aug 2019 10:00:13 +0000 https://scriptorium.com/?p=19130 https://www.scriptorium.com/2019/08/enterprise-content-strategy-putting-the-pieces-together-podcast/#respond https://www.scriptorium.com/2019/08/enterprise-content-strategy-putting-the-pieces-together-podcast/feed/ 0 In episode 57, Sarah O’Keefe and Bill Swallow look at content strategy across different disciplines and how an enterprise-level content strategy can grow from departmental efforts.

Related links:

Twitter handles:

Transcript: 

Sarah O’Keefe: Welcome to the Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 57, we look at content strategy across different disciplines and how an enterprise level content strategy can grow out of departmental efforts. Hi everyone. I’m Sarah O’Keefe. I’m here with Bill Swallow.

Bill Swallow: Hello.

Sarah O’Keefe: Bill is actually for once in the room here in North Carolina and has spent the last few days visiting with the so-called home office and complaining about the chewy and wearable air.

Bill Swallow: It’s gross. I don’t know how you do it.

Sarah O’Keefe: Well, we could return the favor in January and-

Bill Swallow: That’s true.

Sarah O’Keefe: … discuss how do you do that in the frozen North? But we’ve named his office the Arctic Scriptorium, although I guess now that we have an employee in Minneapolis, you’re only what, the sub-arctic Scriptorium?

Bill Swallow: Yeah. We’ll take apple country again.

Sarah O’Keefe: Apple country Scriptorium. So, today what we wanted to do was talk about enterprise content strategy and what that really means when you try to put the pieces together across a whole group of departments, and what their interpretations of content strategy are. When we talk about content strategy, every discipline seems to have a slightly different definition of what that is.

Sarah O’Keefe: If you talk to tech comm people, they talk about technical or tech comm content strategy, or product content strategy, or marketing content strategy, et cetera, et cetera, and each one of those departments or functions seems to have a slightly different idea of what their priorities are and what they need out of content strategy. So, starting with, for example, marketing: what’s a typical focus for content strategy in a marketing-focused area?

Bill Swallow: A lot of the marketing content strategy is focused on customer engagement and corporate brand. So, being able to put the best face for the company forward in making sure that your customers and your target audience are getting the experience that your company expects them to get. So, all of the content strategy pieces that make that happen will be very different from say looking at your technical communications group producing your documentation.

Sarah O’Keefe: So, on the marketing side you get things like voice and tone standards.

Bill Swallow: Voice and tone standards, delivery formats to some degree, and positioning.

Sarah O’Keefe: And editorial calendars and web content strategy and all the rest of it.

Bill Swallow: All that fun stuff.

Sarah O’Keefe: And then, what does tech comm focus on?

Bill Swallow: Well, with tech comm, you have a lot of similar things. You still have your voice, you still have your tone, but it usually takes a slightly different approach, at least traditionally.

Sarah O’Keefe: You typically hear style guide, right?

Bill Swallow: Well, you hear style guide but-

Sarah O’Keefe: Not so much voice and tone.

Bill Swallow: It’s a bit of both because the style guide really does have control over voice and tone as well. And your marketing content strategy is certainly going to have a style guide as well. But it may not be the same. What technical content does well is that whether or not you’re using any kind of structured environment or XML, tech comm is really good at structuring content at a conceptual level.

Bill Swallow: So, looking at things and breaking them down into topics, whether it’s long form or short form, having very specific compartmentalized approaches to producing content, and then wrapping it all up together to produce either in the classic sense a manual, or online help systems, or putting out just information that is well-organized and defined. And the real trick and the real difference is that there is a very strong focus on being technically accurate and correct in the tech comm side, which you don’t necessarily find in other departments.

Sarah O’Keefe: I think in addition to that, what you see is that… it’s not quite as simple as marketing being pre-sales and tech comm or product content being post-sales. But if you look at it that way very broadly, then the marketing teams tend to be very much focused on how do we increase revenue, how do we increase market share, how do we increase sales, right?

Sarah O’Keefe: They’re focused on, let’s invest and as a result have increased sales, have increased revenue, have increased customers, whatever. On the tech comm side, nearly always, tech comm is focused on efficiency and cost avoidance.

Bill Swallow: Yes.

Sarah O’Keefe: How can we… We have to produce all this content, whether it’s because of regulation or just so that people can use the product successfully. How do we produce all this information as efficiently as possible? Historically, that has not been a focus on the marketing side. Right? The question of how long does it take, how expensive is it to produce this piece?

Sarah O’Keefe: Marketing might look at the question of, oh well, three varnishes and eight metallic ink colors is going to make for a really expensive printed product, but you very rarely hear it costs too much to write this content or to manage this content. Tech comm, meanwhile, has been, I don’t know about exclusively, but largely focused on: we have so, so much content, we have to manage it efficiently.

Bill Swallow: Right. Because also a lot of the tech comm-based content is more evergreen than the marketing content will be. So therefore it’s going to stick around for a lot longer period of time. A lot of the marketing content is very short lived. I mean, you certainly would have marketing copy that could potentially be out there for a year or more. I’m not saying that there isn’t. But a lot of the marketing content, it’s very targeted and it’s very-

Sarah O’Keefe: Time [crosstalk 00:06:02].

Bill Swallow: Yes, very, very time-sensitive.

Sarah O’Keefe: Then of course there are other content creators that produce customer-facing information like tech support, which tends to for the most part align with the product content. And then something like training, which to me kind of goes across the two. If you look at it between marketing and tech comm, it kind of sits in a weird maybe third place, but kind of in-between.

Sarah O’Keefe: The question then I think, becomes how can we look at content strategy across an enterprise and do a better job of putting those pieces together, of working across the various functions and taking the things that they do well on the content strategy side, and putting that all together, how do we do that? I mean, how do we take that step into saying we’re not going to have marketing content strategy and product content strategy and training content strategy and I don’t even know what else content strategy. How do we put them together?

Bill Swallow: Well, I’m not sure we would say that you don’t have those things, but you would definitely have something, an umbrella strategy, that pulls them all together. The first thing you need to look at is: what are the strengths, the weaknesses, the commonalities, and the unique needs for each of these groups? What are they doing really well? What are they struggling with? What are very specific things that they need to worry about that other groups don’t? What are the things that they worry about that everybody worries about? Try to find all these pieces and see how they can fit together and complement each other from group to group.

Sarah O’Keefe: Then, what we found is that the, as you said, the umbrella or the overarching tool that can make it possible to really understand what you’re dealing with tends to be the customer journey.

Bill Swallow: Generally.

Sarah O’Keefe: Yeah, because you can look at a holistic customer journey and say, “Okay, well the customer goes from being a prospect to being a researcher, to being a buyer, to being a learner, to being a user, and then back around to being a prospect, because they might upgrade, or they might buy other products, or they might become an advocate or an evangelist for your product.” So, you have that-

Bill Swallow: And hopefully never a detractor.

Sarah O’Keefe: Hopefully. So, you have that process that they go through and sometimes it’s shown as a loop, sometimes it’s kind of an infinity figure eight. But what’s important is that it is not a straight line from A to B to C to D to E. There’s definitely some loop back in there. The customer, and this I think is really the crux of it. From the customer’s point of view, you the enterprise, you the organization are a single thing, right?

Sarah O’Keefe: I bought this product from Microsoft. I bought this product from Apple. I bought this product from whoever. They don’t know or care that in fact, inside that great big organization, you have a product content team and a marketing content team. Oh, and you have both sales and marketing, and marketing communications, and technical marketing, and tech support, and you have VIP tech support, and training, and… It just goes on and on and on.

Sarah O’Keefe: Well, I don’t care. You know? I bought a piece of software and I want you to give me the information that I need to use that software or that hardware or that tool successfully. I am entirely disinterested in the fact that the content I need is sitting on xyz.company.com and not on abc.company.com. That should not be my problem.

Sarah O’Keefe: What happens is, when you have these departmental content strategies, it becomes my problem because I have to find it, right? I have to find the right website or the right–God help me–PDF, and then I have to dig through it and find the right thing. Oh, and then it probably is contradicted by the other website.

Bill Swallow: Well yeah, it doesn’t help that how you navigate when researching a product differs from how you navigate when trying to research how to troubleshoot something in the product. Finding content across, like you mentioned, multiple different portals, having to use those portals in different ways, and the fact that they organize their information in potentially very different ways. It gets back to the detractor thing. I mean, are you fostering detractors of your company, your brand, or do you want them to be net promoters?

Sarah O’Keefe: So, working our way around to our point, which is enterprise content strategy is your friend, right? It’s not your friend because it’s cool and awesome and… It is your friend, because if you don’t use the same words for the same product across different departments, your customers are screwed. They can’t find what they’re looking for because they are using the wrong word.

Sarah O’Keefe: So, when I say you need to use the same words to describe things or the same words to allow people to search, well, you need an enterprise taxonomy. You need enterprise level metadata that is consistent across the enterprise, not just department by department. Because again, your customers are not interacting with your content department by department. They’re just not.

Bill Swallow: Right.

Sarah O’Keefe: Once they learn over in the land of marketing that you refer to a product as ABC123, if you then go over to the product content and all of a sudden it’s 123ABC, they are not going to be very happy with you.

Bill Swallow: Right, or XYZ-

Sarah O’Keefe: Or XYZ, or AlphaBetaGamma. Just because, oh well we do it differently over here and those people are morons, right? I mean, how many times have we heard that? We go into a company and we say, “Wait, you do it this way. And they do it that way.” And the canonical answer is, “Well, we’re right and they’re wrong.”

Bill Swallow: Yup.

Sarah O’Keefe: Well, that may be true, although I will tell you, we then go talk to the second group and they say, “Well, we’re right and they’re wrong.” But it’s actually irrelevant because what matters is that you’re producing content that doesn’t align, that makes it the customers’ problem and therefore you’re both wrong.

Bill Swallow: I had bought a chainsaw a while back and-

Sarah O’Keefe: Okay.

Bill Swallow: … I was actually just going through the maintenance procedure a couple of weeks ago because I need to clear some small trees from my property. I was looking in the print manual, which was stored with the chainsaw and was half-destroyed, to find out what type of oil I needed and such. It’s an electric chainsaw, so I had to figure out exactly what I needed to do to fix all the moving pieces. And I could only get to half the information.

Bill Swallow: So, then I went online, and I found that the information that was online for my very specific model of chainsaw was very different from the print manual that came with the chainsaw itself. So I had to call the support number just to ask them which one I should be following, because my print manual was destroyed and the online information was different from what I could read in the print manual.

Sarah O’Keefe: But it’s a chainsaw.

Bill Swallow: And they didn’t know. They didn’t know.

Sarah O’Keefe: It’s okay. What are the odds that you’ll get hurt if you use-

Bill Swallow: Oh, yeah.

Sarah O’Keefe: Yeah. So-

Bill Swallow: I still have all fingers, toes, and other limbs, so I’m good.

Sarah O’Keefe: But I mean, so this is actually… A, they used the wrong format, right? I mean, if you’re going to use paper, make it something laminated-

Bill Swallow: Yes.

Sarah O’Keefe: … or put it in a little Ziploc baggy or something.

Bill Swallow: Well, that was probably my fault-

Sarah O’Keefe: It’s a chainsaw, it’s going to be outside, or it’s going to be in your garage getting eaten by the mice. Do you have mice? Anyway. So then, that’s problem A. Problem B is they contradict each other, but is that because the one online got updated?

Bill Swallow: We don’t know because the person I called looked it up and they couldn’t tell me which one was right or wrong.

Sarah O’Keefe: Okay, fantastic.

Bill Swallow: And this goes back to why having this global content strategy is important: when you have very important information like this, you need to centralize it so that all groups are using the same information, always. What I assume happened here is that a particular spec sheet came down from their R&D group or their production group and was used at one point in time to create one deliverable, and a copy of it was somehow modified and used to create some different copy for maybe the online stuff.

Bill Swallow: That is absolutely what you do not want to have happen. Fortunately, a chainsaw only works a certain way, so if there’s something that goes wrong, generally if you’re used to using power tools, you can kind of avoid injury. But you know, having other things, especially automated pieces of machinery-

Sarah O’Keefe: I think I know what I’m doing, so I’ll avoid injury here. Now, I personally would say something along the lines of, “Well, this information contradicts, so you know what, I don’t need to clear that brush that badly.”

Bill Swallow: Yeah, I used that excuse long enough. Unfortunately, it has to be cleared.

Sarah O’Keefe: Okay, so now that we’ve established that enterprise content strategy is a good thing for your various limbs, how do you do this? I mean, you’re sitting inside a group, you know, maybe you’re in a product group, maybe you’re in a marketing group, maybe you’re in a training group, you know that you have these contradictions. You know that you have 18 websites out there that all have different information on a different technology stack produced by different people with different taxonomy in different terms and different everything. What do you do? What’s your first step? I mean other than call us.

Bill Swallow: Yeah, other than call us or-

Sarah O’Keefe: What’s your first step?

Bill Swallow: … other than sit at your desk crying.

Sarah O’Keefe: Crying.

Bill Swallow: Yeah, about it. The first thing is to start talking with people in your company across these different departments. Try to find your peers. So, if you are in charge of the technical communications team, you want to make sure that you have inroads with someone who’s in charge of the marketing team-

Sarah O’Keefe: And tech support.

Bill Swallow: And tech support.

Sarah O’Keefe: And training, and-

Bill Swallow: And training, and all these other people who feed into the content pipeline. You need to make those connections if you don’t have them. You need to understand who’s who, so have that org chart handy and go knock on some office doors, or call people up, or what have you.

Bill Swallow: Then, once you do have all of their ears, make sure that you don’t dive into the deep end. You want to kind of wade in slowly and start small. The first part really is to try to define, okay, you know, what is your strategy? Find out who’s doing what and why. What are you doing well? What are you not doing well? What are some processes that you’re following? And how might we be able to start sharing some of this stuff across? Find out where those common areas are. Make note of the things that absolutely have to be treated a certain way for a particular group and go from there.

Sarah O’Keefe: Yeah. If the problem’s big enough, and I would argue that your chainsaw potential massacre there is a big enough problem, you might try the Go Big strategy, which is basically to write it up and say, “Hey, over here it says this, and over here it says this, and this is really bad.” And take it up your management chain and try and get some immediate action. That can be highly effective and/or it could be not so effective and result in you being literally walked out the door for being a general pain in somebody’s tuchus.

Sarah O’Keefe: So, think about that. Think about how much risk you really want to take. But I do think that it’s always a good idea to network within the company, and finding your peers in the other organizations is always a good idea. Making those connections will help you even if this goes nowhere, because having friends in other parts of the organization that you talk to, work with, network with, and whose work you understand means that your career at that company will be a lot easier.

Sarah O’Keefe: Something happens, they call you or they email you and say, “Hey, did you hear about such and such? Maybe you should be in this meeting.” You know, those connections, you need to make those connections no matter what. But in this particular case, I think it’s a very good idea.

Bill Swallow: It’s very easy to start butting heads too, when you start making these connections. I mean some people have a very specific strategy for a very specific reason and they’re going to be loath to change the way they’re working. But again, you got to… Hopefully everyone involved can use the overall customer experience or the customer journey as a guideline to say, okay, we’re all in this together. We’re at different points in the journey. How can we make sure that it’s as seamless as possible when they go from group A to group B to group C to group D’s content and contribution to the customer?

Sarah O’Keefe: Yeah, and I think that’s a good summary. So, with that, I think I’ll wrap it up.

Bill Swallow: Sounds good.

Sarah O’Keefe: Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Enterprise content strategy: Putting the pieces together (podcast) appeared first on Scriptorium.

Calculating return on investment for your content strategy (podcast)
https://www.scriptorium.com/2019/07/calculating-return-on-investment-for-your-content-strategy-podcast/
Mon, 29 Jul 2019

In episode 56 of the Content Strategy Experts Podcast, Gretyl Kinsey and Alan Pringle discuss how to calculate return on investment for your content strategy even if you’ve never measured anything before.

Transcript: 

Gretyl Kinsey:     Welcome to the Content Strategy Experts podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 56, we talk about return on investment and how you can calculate it even if you’ve never measured anything before. Hello and welcome to the Content Strategy Experts podcast. I’m Gretyl Kinsey.

Alan Pringle:     And I’m Alan Pringle.

GK:     And today we’re going to be talking about return on investment or ROI. And I want to just get started with a little bit of context first. So when you’re coming up with your content strategy, why is it so important to make sure that you calculate ROI and be able to show that?

AP:     Generally, because the people that have the money want to know what they’re going to be getting for their money, that’s your return on investment.

GK:     Absolutely. And what are some of the typical pieces of information that go into coming up with that number?

AP:     Well, one way to look at that is: when you’re working on your content strategy and doing your analysis of your current situation, what’s messed up, what’s not working, where are the inefficiencies? A lot of the numbers and ideas in regard to ROI can come from there. Those are your pain points. If you fix them, what goodness are you going to get from that? That’s one way to figure out where to start looking for those kinds of numbers.

GK:     Absolutely. I know that when I’ve interviewed different people at companies whether they’re writers or whether it’s executives or anybody else with a stake in the content, they all kind of have different pain points and so every piece of information that you can get from those different people in those different groups can kind of give you a better understanding of what that overall ROI might be.

AP:     And I think it’s important to note too, some people’s pain points trump other people’s pain points. If an executive, and especially someone who has the money to fund your project, really wants problem B to get worked out but you’re more concerned about problem A, you might want to take a look at your perspective and realize that in order to get the money, you may need to focus on what the exec sees as the problem as part of your analysis work on your content strategy.

GK:   Absolutely, and I think that’s especially important if you’ve got pain points maybe that are in conflict with each other from different groups. Then when you’re looking at it from that ROI perspective and thinking, “Which pain points are costing us the most?” Then that kind of clearly shows you which one might be the most important one to consider when it comes to fixing those.

AP:     Exactly.

GK:    I want to talk a little bit more about that kind of conflict and some roadblocks that can come into play when you’re trying to figure out your ROI, and one that I want to talk about is lack of metrics. There have been several organizations I’ve worked with where this has been an issue. One pain point in particular that they mentioned was that they had not been gathering any sort of metrics on things like how much time writers were spending on formatting versus content creation, or how much time users, both internal and external, were spending searching for content and not being able to find it. Those were some good numbers that could’ve been used to calculate ROI, but they didn’t have any of that.

GK:     So what they had to do to prove it and sell it to executives was start with a small-scale pilot, and then with that pilot they could say, “Okay, now we’re finally collecting some metrics that can prove our ROI.” So I wanted to get your take. Have you seen similar situations? And what other things can be done besides a small pilot project to make up for that lack of metrics, if you’re trying to get numbers and you just don’t have anything so far?

AP:   Unfortunately, content people, content creators, often are not good with numbers. Yes, it’s a bit of a generalization, bordering on a stereotype, but there is truth to it. So sometimes you’re going to have to take a step back and, before you start saying “I need money for this,” really figure out those numbers. Now, getting them is often not an easy thing to do. You mentioned a really good one: if you can get funding for a small pilot and then extrapolate from that, that’s one way. Another way is to do some research on industry averages. There are certain things that are fairly consistent across the board in fields like technical communication. When people in tech comm are using desktop publishing tools, you’re often looking at 25 to 50% of a full-time content creator’s time being spent on formatting, not actually creating the content.

AP:     Now, there are ways to get those numbers down, templatizing, et cetera, but, in general, say 25%. With that metric in mind, take a look at the average cost of, for example, a technical communicator, and I’m talking about the loaded cost. By that I mean the overall cost, including all compensation, vacation, and whatever else; it’s called a “loaded cost.” If you know or can find out, probably from someone a little higher up the food chain, what that loaded cost is, and you take an even more conservative 10 to 15% of that time as being spent on formatting, you’re going to have an idea of how much it costs per year for one person to work on, say, formatting their content.

AP:   And if you go in the direction of a content strategy where you’re trying to automate formatting with smarter structured content, you are going to eliminate that 10 to 15%, whatever number that is, for every single person who’s creating content. So if you methodically go through, starting with “this is what it costs to employ someone for a year,” then “what percentage of their time is spent on x and y tasks,” and if your content strategy reduces the time spent on x and y, you can pinpoint fairly well what you’re going to save per person.
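The back-of-the-envelope math Alan walks through here can be sketched in a few lines of code. The loaded cost, formatting percentage, and headcount below are hypothetical placeholders, not industry figures:

```python
# Sketch of the per-person formatting-cost estimate described above.
# All inputs are hypothetical placeholders, not real industry data.

def annual_formatting_cost(loaded_cost, formatting_share):
    """Yearly cost of one writer's time spent on formatting."""
    return loaded_cost * formatting_share

def projected_savings(loaded_cost, formatting_share, writers, remaining_share=0.0):
    """Savings if automation cuts formatting time down to remaining_share."""
    per_writer = annual_formatting_cost(loaded_cost, formatting_share - remaining_share)
    return per_writer * writers

# Conservative estimate: $100,000 loaded cost, 15% of time on formatting,
# 10 writers, automation eliminates formatting work entirely.
print(projected_savings(100_000, 0.15, 10))  # 150000.0
```

Plugging in a $100,000 loaded cost, a conservative 15% formatting share, and ten writers yields roughly $150,000 per year in avoidable formatting cost, which is the kind of conservative, per-person figure Alan describes.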

GK:     Absolutely. I think it’s really good to note that even if you don’t have hard numbers that you’ve been calculating and that you’ve been collecting metrics on, industry averages can still give you at least some kind of an estimate to start with. And it doesn’t even have to just be with things like the time spent formatting. It can be on other things like localization.

AP:    Oh, anything.

GK:     You can kind of get some idea of reuse just as-

AP:    Copying and pasting.

GK:     … copying and pasting.

AP:     That’s a really good one.

GK:     So there are all sorts of ways that you can look at averages and use them to help calculate your ROI, at least approximately. And I think, as Alan said, if you do a conservative estimate, you could even make the case that this is what we think the lowest cost-savings scenario might be, but you could be saving a whole lot more if the metrics that you actually end up collecting later show that we were wasting more time than we thought.

AP:    You really always want to go with the conservative estimates, because what you don’t want to do is overpromise and underdeliver. You want to do the opposite. You want to say, “This is a very conservative estimate, and look at how much we’re gaining even with this conservative view.” It’s a lot more compelling and, frankly, safer for you to do it that way.

GK:    I want to talk a little bit about another issue now that might impede you from getting the numbers that you need, and that is organizational issues. We touched on one a little bit when we talked about pain points from different groups that are in conflict with each other, but there could also be situations like an acquisition or merger that might make it really hard to collect some of the numbers and metrics that you need across the organization. There could be things like company reorganizations, or there might be drama or political issues among departments. So there are all sorts of things, organizationally, that might make it difficult to get those numbers. What would you recommend when you’re faced with a situation like that, to make sure that you get all the information that you need?

AP:    This is where I think face-to-face communication is really, really important. It may be time to have an all-hands meeting in one location. I’m talking face-to-face. If you have to have some people come in because of time zones or conflicts or whatever, come in via a web conference it’s okay, but it’s not ideal. I think if you’re running into those problems, getting everybody in a room, talking face to face and figuring out what possible … I’m trying to find the right word here. What misconception are people having about someone else’s work that’s causing these problems? Where are the conflicts coming from? Getting everybody together face-to-face to talk about what they perceive and then maybe try to get everyone kind of to a level playing field about their understanding of the problems.

AP:    It sounds ridiculous, but talk it out. I mean, we laugh here at Scriptorium a lot of times when we call ourselves content therapists, because we get caught in these political battles and have to smooth things out a lot of times by being the middle person, the intermediary. But I do think that really the best way to solve these problems is to talk face-to-face. And I know it sounds facile, and it sounds ridiculous, but it can actually work.

GK:    Yes. And I’ve seen it work multiple times with organizations in these kinds of conflicts or with these kinds of situations that a lot of times what was causing those kinds of misunderstandings in the first place was just people not talking. So when you’re coming up with a new strategy and everyone really needs to get onboard and work together, just bringing them into a room together and kind of explaining what’s so important really helps push past a lot of those issues.

AP:   And it can help to have a third party in there. Hire a consultant to help you with that. And I’m not saying be the referee in a brawl, that’s not what I’m getting at, although I’ve seen some meetings that came close. Basically, help people think outside their own boxes, because when you have someone else asking the questions, it can help you focus in a way that you can’t when it’s just people inside the company.

GK:    Absolutely. And I think that’s especially important if, for example, a content strategy initiative is being led by one department, and so that department’s manager is the one gathering everything. They may not have the ability that a consultant would have to get honest numbers from everybody if there are conflicts going on. Whereas if you have a third-party outside perspective, then people might be a little more willing to be honest and to give up their information and their numbers. And it also takes the pressure off of you as that manager trying to put your strategy in place. If you’ve got someone backing you up and helping you collect all the information that you need to prove your case, then that can really get you the ROI numbers that you need.

AP:    And it’s important to build trust because when you start talking about what is the average loaded cost for an employee, you’re starting to talk about what people are getting paid. And that can be a rather prickly issue to deal with. So there has to be a level of trust and respect among all the people who are working on the content strategy analysis to understand and really not weaponize that information. They need to use it for good and they need to be careful about how they handle that information and to be respectful of sensitive information when you are dealing with numbers like that.

GK:     Absolutely. So when you’re in a situation where you might be working with bad or incomplete data, what do you think is the best way that you can kind of estimate the ROI, whether you’re dealing with lack of metrics or organizational issues or both?

AP:    We touched on it a little bit earlier, but see if you can do some research on industry averages, that’s one way to start having that kind of baseline information and you can kind of compare yourself, “Does this really apply to us or not?” That can help maybe spark some ideas about what you need to do to better gather the information that’s a little more specific to your particular case.

GK:     Yeah, and I think that can also be a starting point for if you’re not collecting the metrics you need, start now. Look at what you are missing and say, “Okay, as part of our strategy we need to go ahead and start gathering these numbers,” even if we haven’t been historically, we can at least kind of start establishing a baseline right now and say, “Okay, even just starting this week, this is how much time our writers are spending doing this task versus this other task. This is how much we’re spending on localization and how much time it’s taking the content to get back to us after being translated. This is how much content is being reused or has the potential to be reused versus copied and pasted.” Things like that.

GK:     Look at areas where that’s missing and start gathering those numbers. And I think we can kind of go back to something we talked about a little bit earlier too, which is pain points. Look at your pain points and say, “What information can we be gathering that we’re not? That can really help us explain why these are pain points to help solve the problem.”

AP:     To get you out of the pain.

GK:    Yes.

AP:    Yeah. So basically, use your pain as fuel to get yourself out of that. The more specific information you have, the more compelling those metrics are going to be to help you fix the problem that is causing you all those pain points.
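As a minimal sketch of the “start collecting a baseline now” idea from this exchange, a team could log even a handful of weekly numbers per writer and derive shares from them. The categories and hours here are illustrative examples, not a prescribed set of metrics:

```python
# Hypothetical weekly baseline log for one writer; the categories and
# hours are illustrative, not a prescribed or real data set.
from dataclasses import dataclass

@dataclass
class WeeklyMetrics:
    hours_writing: float      # creating new content
    hours_formatting: float   # fighting templates and layout
    hours_searching: float    # hunting for existing or reusable content

    def total(self):
        return self.hours_writing + self.hours_formatting + self.hours_searching

    def formatting_share(self):
        """Fraction of tracked time lost to formatting."""
        return self.hours_formatting / self.total()

week1 = WeeklyMetrics(hours_writing=24, hours_formatting=12, hours_searching=4)
print(round(week1.formatting_share(), 2))  # 0.3
```

Even a few weeks of numbers like these establish the baseline Gretyl describes, and they plug directly into a conservative savings estimate later on.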

GK:     And I think one other point that can be made about bringing in a consultant is that, along with things like industry averages, they’re going to have industry experience that maybe you wouldn’t have just from your individual organization. So even if you don’t have really good metrics and industry averages can only get you so far, they can come in and say, “Based on our experience and what we’ve seen across other companies in a similar situation, here are some estimates to help you start calculating your ROI.”

AP:    I agree with that. Absolutely. And I think it’s also important, when you’re looking at these numbers and these metrics, to consider some qualitative things beyond the quantitative numbers. For example, if you set up reuse well to ensure that approved safety information for your products is properly used across your entire document set, all of your content, you’re helping to reduce legal liability: information is approved and then used exactly as it was written across everything. Can you put a number on that? Maybe not, but it’s the kind of thing you need to be aware of too. There are going to be some things you’re not going to be able to give a precise calculation for. So a lawsuit may not be something that you can put into a report with a hard number, but you can say you are reducing legal liability by using a process that has a very institutionalized reuse and review program built into it.

GK:     Absolutely. And that’s a really important point, because I mentioned at the beginning that there was one company I worked with that was not getting the quantitative metrics they needed, but they also weren’t getting any qualitative metrics, and that was something that they wanted to build in. So being able to show, not really a number, but a kind of customer satisfaction statistic; being able to show, even if you can’t really quantify it with a number, that your industry reputation may be improved, that your customer satisfaction may be improved. And then, as Alan said, legal liability is another one. Also, kind of employee quality of work-

AP:    And morale.

GK:    … and morale, yeah. So there are a lot of things that you may not be able to calculate into your ROI, but you could still use them to make a strong business case and say, “Here’s our conservative estimated ROI, but then also here are these other things that you can’t really measure that make a strong case for going this route with your strategy.”

AP:     Having both of those things in your assessment is important. I agree.
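The conservative, quantitative side of the business case the speakers describe can be sketched as a back-of-the-envelope calculation. All figures and category names below are hypothetical, for illustration only; real inputs would come from the metrics discussed above (writer time, localization spend, reuse rates):

```python
# Hypothetical back-of-the-envelope ROI for a content strategy project.
# Every figure here is an illustrative assumption, not an industry benchmark.

def simple_roi(annual_savings, investment):
    """Return ROI as a percentage: (gain - cost) / cost * 100."""
    return (annual_savings - investment) / investment * 100

# Assumed annual savings from measurable sources:
localization_savings = 40_000   # reuse reduces words sent to translation
writer_time_savings = 25_000    # less copy-paste and manual reformatting
publishing_savings = 10_000     # automated formatting replaces manual DTP

annual_savings = localization_savings + writer_time_savings + publishing_savings
investment = 60_000             # tools, migration, and training (year one)

print(f"Year-one ROI: {simple_roi(annual_savings, investment):.0f}%")
# prints: Year-one ROI: 25%
```

As the speakers note, qualitative factors such as reduced legal liability, industry reputation, and morale don’t fit into a formula like this; they belong in the business case alongside the number.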

GK:    So I think with that, we can wrap up. So thank you, Alan, for talking with me today.

AP:    Thank you.

GK:    And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post Calculating return on investment for your content strategy (podcast) appeared first on Scriptorium.

The Scriptorium approach to content strategy (podcast)
https://www.scriptorium.com/2019/07/the-scriptorium-approach-to-content-strategy-podcast/ (Mon, 15 Jul 2019)

In episode 55 of the Content Strategy Experts Podcast, Elizabeth Patterson and Sarah O’Keefe discuss Scriptorium’s approach to content strategy.

Related links:

Twitter handles:

Transcript: 

Elizabeth Patterson:     Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In episode 55, we discuss Scriptorium’s approach to content strategy. Hi, I’m Elizabeth Patterson and I’m joined by Sarah O’Keefe.

Sarah O’Keefe:     Hello.

EP:     Sarah just recently finished writing a white paper and in this white paper, you mentioned that when you invest in content strategy you’re really committing to a major digital transformation effort. And with that, there are significant challenges, but that also brings great opportunity and I think that that’s something that’s important to note before we start going into some of these specific aspects of Scriptorium’s approach to content strategy.

SO:     Yeah, I think that’s right. I mean you’re making a big commitment, and you’re committing to what I’m afraid is going to be a lot of pain. So just be aware of that before you get started.

EP:     So in this white paper, the first piece that you really focus on is the current state analysis. Could you talk a little bit about what that looks like and some of the current states of content that you have seen along the way?

SO:     So current state analysis means that we go in and we figure out what exactly is going on right now in that organization with their content. How is it being created, how is it being delivered, what are some of the problems? And, of course, there are a lot of problems, right? Because if there weren’t a lot of problems we wouldn’t be there. So current state analysis is a matter of saying, “All right, what’s going on here? What is the problem that rose to a level where you are willing to bring us in? Where you were willing to acknowledge, we have a problem, we need help and we’re going to have Scriptorium come in and help us fix the problem?” I try hard not to make 12-step analogies, but the whole thing kind of heads that way.

SO:     So first we say “All right, our content has a problem,” and then we have to figure out how to fix it. So the kinds of things that we run into are content that’s being created that doesn’t have a purpose. It’s just, “We’ve always done that.” “But it looks terrible. Why do you do it that way? Why is it so ugly?” I have seen terrible, terrible things perpetrated in InDesign, which is known for allowing you to produce really attractive-looking print. But I’ve seen some terrible, terrible print coming out of InDesign, done in a way that takes enormous amounts of time and is completely unmaintainable. If you’re going to spend all this time in InDesign, at least make it look good. And yet what we have is an inefficient tool that’s not being used well, so that’s always kind of a little disturbing.

SO:     Huge amounts of content that is just wrong. Either technically inaccurate, or out of date, or it’s in the wrong language. It’s being written in English and the primary audience needs Chinese and it’s not being delivered in Chinese. Or you’re writing for a particular kind of audience, but the content isn’t appropriate for that audience. So if you’re writing for medical professionals, you can assume a certain level of knowledge, but if you’re writing for an orderly in a nursing home, then you have to make some different assumptions about their background knowledge. So you see content that’s not necessarily technically inaccurate, but it’s wrong for the audience.

SO:     And then we see systemic problems. So the content hasn’t been updated because it’s too hard. Nobody knows what the most current version is because they can’t figure out where it lives. They’re not translating because it’s “too expensive,” or they do translate but it takes a year to get the translations out the door, which in this global environment typically is not actually acceptable. Every time I say, “We’ve seen it all,” we see more things. But we’ve seen a lot of stuff, a lot of really just bad stuff.

EP:     So once you have gone through that current state analysis, you’re then able to start completing a gap analysis, which really spells out the problems with the current state. Could you touch a little on this and talk about some of the common problems that you find during this analysis in this stage?

SO:     Yeah. So the gap analysis basically says, “Gee, you really need this in 15 languages and you’re delivering in two,” so the gap appears to be 13 languages, that type of thing. So it’s kind of like, “Here’s your current state, here’s what you would like your state to be, and here’s the gap,” or abyss, or chasm, or ocean, between the two. The most common things we run into: I’ve talked a lot about languages, and localization issues are really common. People say things like, “We need more languages, but we can’t afford it in our current workflow because it’s too expensive,” or “It takes too long,” or, “We don’t have the talent to do it. We don’t know where to find the talent,” that type of thing. “We need to rebrand all our documents, but it’s too expensive to open them up and touch them up and make all of those changes.”

SO:     We see a lot of issues around search, especially when you have a huge puddle of monster PDFs. “Our content library is 8,000 PDF files and people can’t find what they’re looking for,” and each of those 8,000 files is 300 pages, so first you have to find the right file and then you have to search within the file. So search is a really, really big problem. On a more systemic level, again, we see a lot of content duplication. So information that’s been copied and pasted from place to place to place and, along the way, kind of like a game of telephone, it’s gotten changed or not changed, or the first copy got updated but the downstream copies didn’t get updated, that type of thing. So it’s out of sync. And in many cases, you’ll see content where document A contradicts document B.

SO:     You’ll see inconsistency with styles because writer A and writer B don’t write the same way. They don’t have the same common voice, so that can be a big problem. Other gaps with searchability: in addition to the PDF problem, which is a big one, we also see a lot of problems around websites, where a company has not a website but dozens of websites, and the information might be repeated on those websites. It might have been written by two different people in two different formats, put on two different websites, and the versions contradict each other. So now what’s the authoritative version? Right? Which one’s right? If I’m the end user and I’m trying to figure out how to use this content, well, who do I trust?

EP:     Right, I’ve run across that on several websites where I read something different.

SO:     Yeah, do you trust x.documents.com or do you trust y.documents.com, and I mean who knows? And one of them was PDF and one of them was HTML and a third one was something else. So you just see just everything, all sorts of problems. And they are causing issues with delivery, right? With the company doing what they want to do.

EP:     Right. So once you’ve identified those gaps, then, obviously, it’s time to identify the changes that are going to be needed in order to reach the desired state that the company has. So how do you go about conducting the needs analysis and then making recommendations that are appropriate for the customer because customer’s needs are different?

SO:     So it can be a little difficult to separate the gap from the needs. But for example, going back to my language problem, if the gap is, “We need to deliver in 26 languages and currently we’re delivering in two,” is that a needs analysis? The gap is the languages aren’t being delivered. The need is something like, “We need to deliver these languages and probably that means we need a better localization strategy. We might need some localization software. Maybe we need a professional external localization vendor.” So there are different kinds of needs that you might run into there. But broadly, we’re going to be looking at overall strategy. So what is your content strategy? What are you going to create and how are you going to deliver it? When, where, why, and what languages and what format and all of that.

SO:     Some sort of reuse strategy for that canonical information so that you’re not copying it around. Localization strategy: systems, workflows, processes to make sure that localization is happening in a professional kind of way. We have to think about the content model and information architecture. So not just how are you going to organize your website, if that’s your delivery model, but also how are you going to organize the content itself? To take an example of that, if you’re doing hardware documentation, like how to repair something, very typically you have prerequisites, which is something like, “Unplug the machine, make sure you have these tools and then set up everything and get ready.” So there’s this preliminary step, or process, before it says, “Okay, step one, open the casing, and step two, do some stuff.”

SO:     So you want to think about the content model of, “Are all of my repair tasks always going to have some sort of prerequisite or some sort of list of tools that you need, or some sort of you need these skills or you need these certifications to do this task.” Because sometimes you don’t want somebody unqualified trying to repair a thing that-

EP:     That wouldn’t go well.

SO:     … could hurt them, or even kill them. So some things to think about there. So you have sort of your systems and your tools and your software and your content model inside of all of that. And then you have to think about the content life cycle, but then, in addition to all of that, you really have to think about the company itself and what level of investment is appropriate for that company. Are they a global multinational company that needs to deliver in every country and in 50 or 60 or a hundred languages? Do they have enough market share in all those different countries to justify that kind of an investment? Or is it a smaller company where we don’t have that same volume of information? Are they updating or republishing content nightly or is it more monthly or every six months? What’s that cadence look like? What’s the velocity? So you can broadly say, “Well, in an ideal world, everybody would do it this very sophisticated way.” But that might not be appropriate for a company that has less content or fewer tentacles out into these different languages or fewer formats.

SO:     Some companies have very straightforward content delivery requirements, or a very straightforward content strategy model, because the content they’re producing is easy. So from a business point of view, we’re looking at what kind of investment we can justify based on the value that they’re going to get out of it.

EP:     Right, because you have to be able to make that case for the executives.

SO:     Right. And we can’t tell them, “Spend $1 million in order to save $50,000 on localization.” I mean, that’s just not right, and rarely will they sign off on that.

EP:     So we’ve really covered the basics. And what would working with Scriptorium look like? What would that experience be like?

SO:     Well, we hope it’s a positive experience. Generally, if we’re doing this type of analysis, this content strategy assessment work (and, for the record, even though nobody asks, we also do the technical implementation work, so I’ll just put that out there because that’s my job, putting it out there), we would come in and meet with all the key stakeholders that are involved in this. Now, you would think content people, and that’s definitely a key stakeholder group, but in addition to the content people we’re talking about all the people that touch on content: subject matter experts who review it, people who approve it, product managers very often. Somewhere there’s an executive champion. And IT, because they’re responsible for the infrastructure, or maybe the security, especially if it’s a cloud-based solution, around the external connectors that might be needed.

SO:     So we need to talk to all of those people and understand what the parameters are of what we’re dealing with. Put all of that together, do, as we said, the needs analysis and the gap analysis, and what comes out of that, in pretty close collaboration with the key stakeholders in the client company, is a plan that says, “Okay, this is what you need. Here’s your current state, here’s where you want to be. This is what it’s going to take to get there,” whether it’s software or process changes or training for your people or a new content model, or, very often, all of the above, “and this is what the roadmap looks like and this is how we can do this.”

SO:     We usually include a risk mitigation section. So we talk about risk, “What are the risks of doing this? What are the risks of not doing this?” Because at the end of the day, if I’m an executive looking at this kind of a project, I have to be convinced that the risk of undertaking a project like this, of changing our operations, changing how we create content, changing what kind of content we create and all the rest of it is worth it.

SO:     And not just the financial return, although that’s important, but that it’s worth the change. That it’s worth the risk of making the change and moving forward versus just staying in the status quo. Status quo is not as scary as making changes. And so very often we’re coming in and saying, “Look, you need to make these very significant and profound changes.” People hate change, right? Everybody hates change.

EP:     And adapting takes a while.

SO:     And we’re people coming in and demanding or recommending change and we’re no different from anybody else. You change anything in our office here and you should see the screaming that goes on. And yet at the same time, we’re going in and telling people, “You have to change your day-to-day, how you approach your job.” So there’s a huge change management component and from a leadership point of view, you have to decide that the transformation of your content or the digital transformation, to use the horrific buzzword, that we are recommending is appropriate and necessary for your organization.

EP:     Well, with that I think we are going to go ahead and wrap up. Thank you, Sarah.

SO:     Thank you.

EP:     And thank you for listening to the Content Strategy Experts podcast brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

 

The post The Scriptorium approach to content strategy (podcast) appeared first on Scriptorium.

Evolution of content (podcast)
https://www.scriptorium.com/2019/06/evolution-of-content-podcast/ (Mon, 24 Jun 2019)

In episode 54 of the Content Strategy Experts podcast, Elizabeth Patterson interviews Sarah O’Keefe and Alan Pringle about what’s changed and what hasn’t changed in content over the years.

Related links:

Twitter handles:

Transcript: 

Elizabeth Patterson:    Welcome to the Content Strategy Experts podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way.

EP:    In episode 54, we discuss what’s changed and what hasn’t changed in content over the years. Hi. I’m Elizabeth Patterson, and I’m joined today by Sarah O’Keefe and Alan Pringle.

Sarah O’Keefe:    Hello.

Alan Pringle:    Hi, there.

EP:    We’re going to talk some about the changes within content. I’m relatively new here and I just want to talk a little bit about how Scriptorium is different now from when it was founded back in 1997.

SO:    I love it.

AP:    No old people jokes, please. Thanks.

SO:    Too late, too late.

SO:    So when we got started with Scriptorium, we sort of had this… not sort of. When we got started with Scriptorium, we had this focus on content and technology and publishing and looking at how to combine those three things. How do you use technology to publish content efficiently? Something like that. And I think our focus from day one was always on efficiency as a priority, looking at how can we do this better? How can we take out the things that are not value added and get to focus on how to create better content?

SO:    So that focus hasn’t changed, we’re still very interested in the question of how content and technology and publishing go together to deliver whatever it is we’re trying to deliver. What’s interesting to me is that every one of those three things though, has changed. The technologies are different, publishing itself is different and even the content that we’re producing is different.

EP:    So what are some of the major changes that you have seen when it comes to writing and delivering content, specifically?

AP:    Well, on the tools and tech side, in regard to content creation, 20-plus years ago desktop publishing was the big thing. And it’s still in use today.

AP:    But we’re seeing, I think, more and more content creators, regardless of what kind of content they’re creating, whether it’s technical content, marketing content, or training content, moving toward what we’re calling smarter content. Usually structured authoring, XML-based, where you have this separation of the content and the formatting. As you’re authoring, you are not applying the formatting. That is done separately, later, and in many different ways: you could be publishing to PDF or print still, perhaps, publishing to the web, to a learning management system. The possibilities go on and on.

AP:    But at the core of all those different delivery platforms, you still have the same source content.

EP:    Right.

SO:    So then when you look at publishing and distribution, I mean, it seems almost trite to say we’re not printing anymore, but it’s true. And it’s a big deal. 20 years ago, I was doing press checks. I was worried about bound books being shipped and making sure that the binding worked, or, oh, this book is over 900 pages, we’re going to have to do something creative, because it won’t fit in a physical binding, because it’s too big. So either the paper has to get thinner, or we have to break it into multiple volumes, or there were these physical constraints around how you do print, and how you bind books.

SO:    Now, as Alan said, we’re still doing a lot of PDF, but everybody just prints the PDF themselves. It is so rare to run across a company that is not a book publisher, that is still doing any sort of actual printed and bound documentation or books.

SO:    And then related to that, instead of having to go through this process of send it to the printer and wait for the blue lines to come back and do press and color checks and all this other stuff, you push a button and it goes on the web and it’s live.

AP:    And I remember even dealing with tractor trailers that would back up to your facility with boxes that contained thousands of copies of whatever guide you had published, and those had to be manually hauled and then distributed to customers. What Sarah just described, all of that physical labor and transportation, is going away. And I’m not so sure that’s a bad thing, says a person whose back carried many, many of those cartons of books.

SO:    Yeah, and to your point about tractor trailers, I mean, I remember a couple of incidents where we had, back in the day when we were printing, very, very large files, which were first PostScript and then later PDF. And those needed to be delivered to the printer, so that the printer could process them and turn them into ink on paper.

SO:    And there were a couple of cases where we had to put it on a CD or some other physical medium, I don’t know how many of our listeners remember zip drives, but that was a thing, and hand deliver them to the printer. So you’d be either driving across town or getting a courier who would come and pick the stuff up and drive it across town, because it wasn’t possible to email the files.

AP:    Or put it on an airplane seat.

SO:    Yeah. And there was no Dropbox. And if you had a contract with your customer that said, the documentation needs to be delivered to customer X on June 1st, there were literally cases where on June 1st in the morning, we were putting people on a plane holding a CD, who were flying across the country, because they had to deliver this content to the customer on June 1st, or else there was some enormous penalty written into the contract.

SO:    And I mean, it just sounds ridiculous now.

EP:    There’s definitely that convenience in having things instantly published on the web.

AP:    And there’s a cost savings, enormous cost savings.

SO:    Less wear and tear, because we’re not physically hand carrying this stuff across the country, which actually happened. There were also numerous instances of having to drive out to the airport to catch the last FedEx plane. If you missed the office pickup at 5:00 or 6:00 or whatever it was, there was a last gasp drive out to the airport and get there before 8:45 and you could get it on the FedEx plane. I don’t miss that, actually.

AP:    Nor do I.

EP:    We’re talking here about the publishing and distribution of this content. So how specifically has content changed over the years?

AP:    Well, because we’ve moved away from this idea of a printed book, before when you were writing that content, you were thinking about the organization of that content, often in a more narrative, logical way that flowed from the start of the book to the end of the book. So there was this narrative thread, for lack of a better word.

AP:    But now, we have completely gotten away from that in several ways. First of all, there’s been this huge move to modularity. And that’s a big part of this whole smarter content push. You write these smaller pieces of content, and that makes them easier to reuse. You can mix and match to create what you need from these little smaller bits and pieces.

AP:    There’s also the fact that there’s not all this connective tissue that you had to write between all of these pieces in your narrative thread. Now, we’ve kind of discarded those, because those created a context where you expected module B to always come after module A, and you can’t really expect that A, B, C, D flow anymore. These are freestanding modules and you kind of stitch them together, which pushes you toward writing things in a more minimal way, without all those logical connections linking the pieces.

SO:    And then I think, in addition to that, we’ve got a bunch of new kinds of content. The one that I think is maybe the most stunning is actually the rise of e-learning. Because back in the day, it was classroom training, and if it wasn’t classroom training, there might be something like a terrible video that was captured of the training that you could watch after the fact. Or there might be some sort of a self-study guide.

SO:    But this idea of e-learning or of blended learning that you can deliver a video and some online stuff and a test and an assessment and all these different things, in an interactive online environment was not a thing. I spent years and years doing stand up classroom training. And certainly, there’s still a market for that. But this concept of e-learning really didn’t exist and the content needed to drive it didn’t really exist in the way that it does today, by which I mean, it’s widely available, lots of people do it.

SO:    It has not replaced classroom training, but because e-learning allows you to avoid the cost of the instructor, that one-on-one or one-on-10 cost, e-learning has become very, very popular. And that’s an entire content type that we really didn’t have 20 years ago. I mean, there was some stuff, but not a ton. So that’s one.

SO:    And the other one is things like videos and podcasts, these kinds of non-text methods of getting your point across, which then of course ties back into e-learning, which uses a lot of that stuff. And then, and this is really a big one, there’s the convergence of marcom and techcomm: the convergence of marketing content and technical content, and for that matter training content, into a broader thing that is just, “We are going to talk about our product and the content that goes with the product.” That, I think, is really critical.

EP:    Right. So pretty significant changes have happened within this field. So how have your jobs changed over the years?

AP:    Well, we already talked about the tech. And that means that as the tech evolves, you have to evolve, or you’re going to be basically discarded with the bad tech. You really do have to pay attention to what’s going on and be sure you’re up on the latest tools and technologies. I think that’s super important.

AP:    But there’s always been this trend or this, I think, a challenge might be a better word, actually than trend, of it is very hard to sometimes find people who are experts in these tools and know how to use them properly. And as these tools get more and more specialized for creating content, that challenge multiplies, it gets even more and more difficult to find people who know how to use these tools properly.

SO:    Yeah, I think use the tools properly is a really key point. So the tools we’re using have changed, but the idea that you need to master a tool, that you need to understand how it works, you need to understand how best to use it, what it can do, what it can’t do, and what kinds of problems it’s appropriate for and what kinds of problems it’s not appropriate for, that judgment is something that hasn’t changed at all. Every new tool that comes along, we have to look at it and say what is this good for? What is this not good for? Are there showstoppers in here?

SO:    The other one, big one I think, is from my point of view, that content creators and the content production process in general has no margin for error anymore. Again, in the olden days, you would send a book to press and then you had the opportunity to correct your terrible grievous errors in a press check or in a blue line or something like that. And it was expensive to fix things, but you could fix them.

SO:    Now, you push a button and it goes live. And okay, we can fix it after the fact and we can unpublish it or we can make a correction and republish. But that slack time isn’t there anymore. We shipped and now we’re done and now I’m going to go do something else for a couple of days, while I wait for the blue lines to come in. That kind of slack has entirely disappeared from the process.

AP:    I think it’s also worth noting, as we’re talking about all these changes, that there are a lot of cases where baseline technology has not changed. The whole idea of XML and basically writing content as code: when I was in college, and I’m not going to say when, and in my first job, I learned basic markup languages in both of those places. Those skills serve me today, working with these open standards for XML-based content.

AP:    So there are a lot of cases where you can rely on and even maybe recycle what you’ve already learned and use it as a building point to learn the latest and the greatest technology.

SO:    I think the most difficult part of a content creator’s job, and I’m thinking particularly of technical writers, but in general, is doing the research to get the information you need to write the content. And in many cases, that means you write a draft and then you run it by some sort of the subject matter expert. And I mean, people have written entire books on how hard it is to track down a subject matter expert and get them to give you a review. Get them to read what you wrote, get them to clarify, get them to answer questions.

SO:    So the elusive subject matter expert, I think, is still a problem. And we’ve formalized a lot of the review processes in electronic workflows, in ways that are helpful. But we still have huge, huge problems with review workflows, because you have to get people to prioritize doing that work. And they, like everybody else, have no slack time. And they see this, in many cases, as an optional part of their job.

SO:    I’m totally overwhelmed. What can I triage out of my job? And the answer is, oh, this stupid review, I don’t want to do that. I’m just going to ignore it and make it go away. So I mean, that was a problem 20 years ago, and I think it’s going to be a problem 100 years from now.

AP:    And it’s a symptom of a bigger problem too. We have all this blurring of distinctions now in content, and this is one of them. A lot of people who consider themselves not to be content creators, well, guess what? In truth, they are part-time content contributors, whether they want to admit it or not. And technology is really enabling that blurring. And I’m not so sure that’s a bad thing.

SO:    Yeah, that’s interesting, because if you go back even further, before the rise of all this word processing and capturing content directly, which then led into markup, you get to people writing content in longhand on legal pads and those kinds of things. I don’t know. I feel like maybe it was easier when there was somebody sitting at your doorstep with a legal pad saying, “Tell me more,” as opposed to getting an electronic notification in your inbox that says, “Please review this.” A person at your doorstep is much harder to ignore.

EP:    Well, with that, I think we’re going to go ahead and wrap up. Thank you, Sarah and Alan.

SO:    Thank you.

AP:    Thank you.

EP:    And thank you for listening to the Content Strategy Experts podcast, brought to you by Scriptorium. For more information visit scriptorium.com or check the show notes for relevant links.

The post Evolution of content (podcast) appeared first on Scriptorium.

]]>
Rebranding as a business case for smart content (podcast) https://www.scriptorium.com/2019/06/rebranding-as-a-business-case-for-smart-content/ Mon, 10 Jun 2019 13:30:01 +0000 https://scriptorium.com/?p=19019 https://www.scriptorium.com/2019/06/rebranding-as-a-business-case-for-smart-content/#respond https://www.scriptorium.com/2019/06/rebranding-as-a-business-case-for-smart-content/feed/ 0 In episode 53 of the Content Strategy Experts podcast, Elizabeth Patterson and Bill Swallow discuss rebranding as a business case for smart content. How can you make sweeping branding changes as quickly and as painlessly as possible?

Related links:

Twitter handles:

Transcript: 

Elizabeth Patterson:   Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In episode 53 we discuss rebranding as a business case for smart content. How can you make sweeping branding changes as quickly and as painlessly as possible? Hi. I’m Elizabeth Patterson, and I’m joined today by Bill Swallow.

Bill Swallow:   Hello.

EP:   We’re going to talk about smart content and rebranding. Rebranding is a business case for smart content. Bill, can you describe a little bit about what that business case is?

BS:    Sure. Rebranding happens when you have a merger or acquisition, when you’re taking your company in a slightly different direction or modernizing the look and feel, or when you just have a new content marketing officer who comes in and decides that sweeping changes need to be made. Then all of a sudden all of your content is in the wrong color, wrong size, wrong font, wrong logo, wrong taglines, what have you.

BS:   Usually what this means is that you have to go in and replace them all, but in a traditional content shop you generally have thousands of Word files, thousands of FrameMaker or InDesign files. This is all technical content that, while it’s probably not going to be as flashy as your marketing content, still needs to have a lot of this information applied. Whenever you do make these sweeping marketing changes, you also have to be mindful of all of the supporting documentation that your company has that’s going out to customers, going out to field people, and what have you.

BS:   How do you rebrand all of these things when you have all of these many, many, many files probably distributed across your company, being accessed and used by a variety of different people? It’s very, very time consuming and expensive to rework all of these static files: replacing logos in Word files, resizing them, applying new fonts, and so forth. You’re lucky if you have templates to drive this, but generally what we’ve seen is that a lot of people have these one-off files with ad hoc formatting in there, and everything needs to be redone by hand.

BS:   When we talk about smart content, we’re really talking about separating the formatting, all of that busy work, to get the content to look right, from the content itself. That way you can use the budget that you would have spent redoing all of this stuff to update tools and processes to make things flow a lot more smoothly.

EP:   Okay. With all of these files, you were kind of going into this some. But, rebranding can definitely be a very large undertaking, and you started touching on this a little bit, but what methods are companies and authors currently using to do that rebranding, and how can that be improved so that it is working most efficiently for them?

BS:   Well, a common method that actually shows a best practice is on the web side. All of your web content is generally stored in some kind of a content management system, and that system is powered by HTML. Some of them use other technologies, but generally speaking we can say that they’re powered by HTML. So, all of the content is in there as text, and it’s the CMS’s job to organize that information and present it in a certain way. If you need to go ahead and make those changes, you make the change on the CMS side via templates, style sheets, and what have you, and you generally don’t touch the content itself. So, if you need to change all of your colors from green to red, you go in and you make a style sheet change on the CSS side. Just a couple lines of code max to make that change, and then you republish your content, and it’s in that new color.
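As a minimal sketch, the style sheet change Bill describes might look something like this. The selector, color values, and font name here are illustrative, not taken from any particular CMS:

```css
/* One small edit in the site style sheet recolors every heading on republish.
   The content itself is never touched. */
.article-heading {
  color: #c62828;                         /* new brand red (was the old green) */
  font-family: "BrandSans", sans-serif;   /* hypothetical rebrand typeface */
}
```

Republishing then applies the new rule across every page that uses the style sheet.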

BS:   Same thing with applying logos, or redesigning the look and feel of what the published page looks like. All of that happens outside of the content within the system itself. That is generally the best practice you have when you’re looking at what these unstructured environments might look like. I know I might get flack for saying that a web content management system’s unstructured, because it really isn’t. It’s kind of a hybrid. But, generally that’s what we’re talking about, is being able to move to that type of environment. That is the best case scenario, if you have an unstructured situation.

BS:   But, everything else, it comes down to how that content has been written, how that content has been managed, if it’s been managed, and what type of changes you are making, and how extensive it’s going to be when you start trying to fix them. That’s where we get into more of the need for smart content.

EP:   Speaking of smart content, can you kind of go into a little bit more detail about what exactly smart content means, and why is that going to make something like rebranding easier for companies and authors?

BS:   Sure. Smart content very, very basically is structured content with tags and metadata. What that means is that, you have this content that has a lot of information in it that is not formatting based. When we talk about smart content, we’re talking about separating the formatting of the content from the content itself. This way we’re writing without formatting. We’re not applying fonts, we’re not applying colors, we’re not applying spaces in between, we’re not tweaking the alignment. We’re just focusing on what’s being written, and we’re tagging it in a way that your system, depending on what you’re using, will understand, “Oh, this is a list,” so we’re going to publish the list looking this certain way. Here’s a table, and this is a certain type of table, so we’re going to present it in a particular manner that has, let’s say alternate shading. And, this other table, we’re going to present it without alternate shading.

BS:   We’re not doing that formatting within the content itself, but we’re providing information that a publishing system can understand, and then render it a certain way on the output side.
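As a rough sketch, the kind of tagging Bill describes could look like this in a DITA-style XML. The element and attribute names are illustrative, and the idea that "striped" triggers alternate shading is an assumption about how the publishing layer is configured:

```xml
<!-- The content declares only what it is, never how it should look. -->
<ul>
  <li>Separate formatting from content</li>
  <li>Tag structure and semantics only</li>
</ul>

<!-- An attribute flags the table type; the publishing layer decides
     whether "striped" means alternate-row shading, and in what color. -->
<simpletable outputclass="striped">
  <strow><stentry>Part</stentry><stentry>Quantity</stentry></strow>
</simpletable>
```

Nothing in these tags says anything about fonts, colors, or spacing; those decisions live entirely in the publishing rules.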

EP:   You were touching on this a little bit when we were talking about the formatting, and writing content without that formatting. But, when we’re thinking about specific branding assets that a company has, what would a typical branding asset look like structured versus unstructured?

BS:   Sure. When we talk about branding assets, I want to step back a little bit and say there are the actual files and such that constitute your branding, and then there is the actual documentation, or the content, itself. In unstructured you might have Word files, FrameMaker files, InDesign files, and you might be fortunate enough to have templates that constitute those branding assets, where they contain all the correct fonts, all the correct colors, the correct use cases for them, the correct layouts with all the images that you need for the background, and so forth, intact and ready to go. They might have your copyright statements and other boilerplate information already presented in the templates, so all you have to do is open those files and update the template.

BS:   If your content is, I should say, well-formed and you are adhering to the templates, in theory all you would have to do is make those template changes once, and apply that template to all the files that use it. That would make the update as quick and painless as possible.

EP:   Yeah, save a lot of time.

BS:   It would, but at the same time you still need someone to go in there and, whether it’s via a script if you’re lucky, or whether it’s via brute force, open a file, apply a new template, see how it looks, make sure it was applied correctly, and then save the file with the new look and feel. With structured information, or smarter content, what you’re essentially doing is you are taking all of that busy work out of the equation.

BS:   You would basically have these text files, and they are void of all formatting, but they have specific semantic information built into them that says, “Hi, I’m a table.” Or, “Hi, I’m a list.” They might have specific properties around those that say, “I’m this type of list.” But, it doesn’t actually format the content itself. What you then have is a series of style sheets, just like you would have in a web CMS.

BS:   You’d have a series of style sheets, you’d have a series of instructions that say, “When I publish for this specific purpose, do these things with the content, and apply these style sheets appropriately, and then publish to this location.” That way all of your content doesn’t need to be touched, you just need to tweak the pieces that publish the content in order to set the rules to say, “Hey, we’re now changing the color over to blue. Hey, we have a new logo so let’s drop that in, and we want to place it at the bottom of the page instead of the top of the page,” for whatever reason.

BS:   All of those rules are applied at the publishing side, rather than someone going in and making that change to every single page of content, or every single topic that you have if you’re publishing to the web, let’s say, and making those changes by hand.

EP:   Oh, it sounds like structured content is definitely the way to go to save you some headaches.

BS:   It definitely is. We’ve worked with a lot of clients on rebranding for this very specific purpose. One in particular had such a strong business case from rebranding alone, they actually calculated out the amount of time it would take them to apply new fonts, new colors, new logos, new taglines to all of their content. They looked at that total cost of time and effort, and it spanned years to update all of their content.

EP:   Wow.

BS:   We’re talking a very big company here. They looked at it and said, “You know what? It’s going to be more cost effective, it’s going to be more time effective to go ahead and just move this stuff over to a smart content format, and do it all that way.”

BS:   Rather than taking a few years of time to get all of this done, and using many, many, many human resources that could have been used elsewhere, like updating content, making further improvements, documenting new products, they decided to outsource and say, “We’re going to convert all of our information, and we’re going to move from our current format to this smart content format,” and they had a third party just go ahead and convert all that stuff. Then, they sat down and they worked hard on how they want this branding to look in their new output.

BS:   We’re talking months of time versus years of time, in order to rebrand all of that content. Again, it was something ridiculous like a few hundred thousand pages worth of content.

EP:   That’s just crazy, definitely something to talk about with your company and look into, to make sure that you’re being efficient. Alright, you were talking a little bit about how structured content is void of all formatting. I wonder if you can paint this picture a little bit more of what content without formatting looks like.

BS:   Sure. In this situation you have a series of text files, and you can open them up in a text editor. They are not flashy, they’re not proprietary. They could be something that is structured, so some type of XML maybe. Or, they could be even something a little simpler. They could be something like Markdown. I wouldn’t necessarily call Markdown smart, but it does afford you a lot of the same capabilities; you’re just lacking a lot of metadata that you would get in an XML-based solution.

BS:   Generally all of these smart content formats are void of all of that formatting information. Instead of saying, “Here’s a heading one, and I’m going to select it and make it 16-point blue, with 24 points of space underneath it,” and saving that to a template to apply every single time, you’re just saying, “Hey, I’m a heading.” In a smart content situation, all of the understanding of how to format that heading is on the publishing side.

BS:   If you’re going to one particular format, you might have a series of rules that says, “Okay, this is blue, this is 24 points, it’s centered,” what have you. Then, you go to another target and it says, “Okay, this one is for a particular rebranding effort, or it’s for a separate product, or it’s for our partner company,” so this one needs to be red, and it needs to be in this different font, and it needs to be offset this way, and handled in a different manner.

BS:   You’re not making those changes to the content.

EP:   Mm-hmm.

BS:   You’re making those changes over on the publishing side based on those rules for where that content is going.

EP:   Okay. Companies right now that are thinking it’s time for them to rebrand, and they’re trying to make decisions. You’ve touched on this a little bit, and the time saver. But, why is migrating to smart content the right choice?

BS:   Well it really affords you the ability to manage a lot of the, again, the look and feel processing of your content separately from the content itself. You’re able to do a variety of different things, publish to a variety of different mediums, without having to change the content. If the specs for your published output change, then the content itself is unaffected. At that point all you need to do is make those changes on the publishing side, and then run the content through again. If you have different look and feel requirements based on where the content is going, whether it’s going to the web, whether it’s going to PDF, whether it’s going to a third party to be consumed by them, or to some OEM partner, you can go ahead and process that content differently without having to modify the content itself. You have all those rules built into the publishing side of things, and not in the content storage side.

BS:   The other benefit is that when you start thinking about smart content, things get a lot more modular. Your content is written in smaller chunks.

BS:   Those chunks can be assembled, reassembled, remixed, removed, and you can produce new things with that content without having to rewrite it, without having to go in and physically reorganize it from a source point of view. You’re just saying, “Hey, I have these five talking points and I’m going to arrange them in the output, in any variety of ways.”

BS:   Some other things you can do is that, you can externalize a lot of the branding elements in your smart content environment. Things can be swapped in and out as you need them. Not only do you have the look and feel, and the organization of the content. But, you can swap logos in and out, you can swap screenshots and other images in and out depending on what you need to show. This particularly comes in handy with localization, where you don’t need to go ahead and replace a bunch of images, you just reset the pointers.

BS:   Other things like company names, product names. If those things change, especially if they change often, those can be externalized and managed separate from the content itself, so all you have to do is swap in a new name, and republish. Swap a new name once, and republish in all of the areas where it’s being used. Things like taglines, copyright statements, and other boilerplate information that generally would require you to rework your content and then republish, you can just make that change once. It could be as easy as flipping a flag on the publishing side to say, “Okay, we’re going from publishing scenario A to publishing scenario B. Now, run the content through.” It saves you a lot of time by being able to externalize all these things, and be able to remove that formatting from the content itself.
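In a DITA-based setup, externalized names of the kind Bill mentions are often handled with key references. A sketch, where the key name and product name are hypothetical:

```xml
<!-- Defined once, in a key map: -->
<keydef keys="product-name">
  <topicmeta><keywords><keyword>WidgetPro</keyword></keywords></topicmeta>
</keydef>

<!-- Referenced everywhere in the content: -->
<p>Install <keyword keyref="product-name"/> before configuring the server.</p>
```

When the product is renamed, you change the key definition once and republish; every reference picks up the new name automatically.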

EP:   Which then allows you more time to work on the other things that your company needs done.

BS:   Exactly, exactly. Because the worst time sink that you can have is formatting content. It adds visual value, but the amount of time it takes someone to do it by hand is nearly excruciating. That time could be better spent doing other things, like developing new content, or working on new projects to further leverage the existing content that you have.

EP:   Alright, well I think we are going to go ahead and wrap up. Thank you, Bill.

BS:   Thank you.

EP:   And thank you, for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit Scriptorium.com, or check the show notes for relevant links.


The post Rebranding as a business case for smart content (podcast) appeared first on Scriptorium.

]]>
Content conversion (podcast) https://www.scriptorium.com/2019/05/content-conversion-podcast/ Tue, 28 May 2019 13:30:32 +0000 https://scriptorium.com/?p=18971 https://www.scriptorium.com/2019/05/content-conversion-podcast/#respond https://www.scriptorium.com/2019/05/content-conversion-podcast/feed/ 0 In episode 52 of the Content Strategy Experts podcast, Gretyl Kinsey talks with Mark Gross of DCL about content conversion. They explore some of the use cases they have seen and what partial conversion looks like.

Related links:

About special guest, Mark Gross:

Mark Gross, President of DCL, is a recognized authority on XML implementation and document conversion. Mark’s company, DCL, which stands for “Data Conversion Laboratory,” provides data and content transformation services and solutions. Using the latest innovations in artificial intelligence, including machine learning and natural language processing, DCL helps businesses organize and structure data and content for modern technologies and platforms.

Mark has a BS in Engineering from Columbia University and an MBA from New York University. He has also taught at the New York University Graduate School of Business, the New School, and Pace University. He is a frequent speaker on the topic of automated conversions to XML.

Twitter handles:

Transcript: 

Gretyl Kinsey:    Welcome to The Content Strategy Experts Podcast brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way. In episode 52, we talk about content conversion with special guest Mark Gross of DCL.

GK:    Welcome to The Content Strategy Experts Podcast. I’m Gretyl Kinsey, and I am here with Mark Gross. Mark, how are you?

Mark Gross:    I am good today. Thank you.

GK:    Thank you so much for joining us on the podcast. And I wanted to just start off by asking you to tell us a little bit about DCL and what you do, and what differentiates you from other companies in a similar space?

MG:    Well, I guess the name says Data Conversion Laboratory, and that’s what we do, but things have changed. We’ve been in business almost 40 years now. When we started, conversion meant mostly one thing. It meant, “I just got this new computer. I’ve got a new system, and I want to move old information from my old system onto the new system,” and that was a lot. When microcomputers first came along, people were coming off mainframes and minicomputers. But today everything runs on data, and that’s not so much a focus anymore, although it certainly is a focus.

MG:    We’re living in a world where data, information, analytics rule: the economy just doesn’t run well without well-curated, structured data. There’s so much content, so much data out there that you’re inundated. So today, 20% or 25% of our work is still moving things from one computer to another. But a lot of the rest of the work is really in structuring the information that’s already out there. And I guess much of the Scriptorium audience might be working with things like DITA and S1000D. Well, that’s very structured information, but most information out there isn’t. So what we do is take information and add that structure into it. And that’s essentially what we’re doing today.

GK:    And you’re definitely right that at Scriptorium a lot of the clients we work with are kind of either going into structure or already in some kind of structure. And so as far as their conversion needs, they may be looking at either something like moving from one structure to another. So for example, something like a homegrown XML to DITA, or they may be looking at something like going from just completely unstructured content into something more structured like XML. That’s kind of what we see as sort of the common needs that companies have for converting their content, especially if they have a large volume of it that’s completely unstructured and they need it to have that structure, then that’s when they would say, “Okay, we’ve got to convert all of this.” And I wanted to get your take on this as well and kind of ask you, what are some of the most common reasons that you see with the companies you’ve worked with? Why they need this kind of conversion?

MG:    Okay. So the first thing to talk about is moving computers. But more of the time today, they’ve got information, and they’ve been collecting it, and they’ve been structuring it … And it might be structured already, but the needs today are so much different, and the bar has been raised so much, that the information just isn’t in the form it needs to be for modern uses, for artificial intelligence, for transmitting information. It doesn’t have the right metadata. An example is the work we did for Elsevier and the Scopus database. The Scopus database is called the index to the world’s scientific literature. And for the last 10 or 15 years, everything has been structured very tightly. It’s bibliographic information, so you have authors’ names and publishers and the dates and all those things there.

MG:    But going back more than 15 years, the information wasn’t structured like that. It was just plain, linear information. The bibliography was just the way you see it in the back of a journal or on the back of a book. So that wasn’t good enough to be able to find things as quickly as they needed to be found. So what we did for them is we went back to all the material going back another 15 years before that, and then used artificial intelligence and a bunch of very sophisticated software to go in and add the structure that should have been there, had it been done that way in the first place. But 15 or 20 years ago, we didn’t know that we would need this. So we went back and restructured and ordered the information.

MG:    I think there’s lots of cases like that. A company may have documents going back 20, 30, 40 years, which are still valuable, but the older material is sitting, probably not on paper, though it might be, on microfilm, or in PDF files, which are not really structured files. So there’s a need to go back and upgrade all those materials to work with what’s needed today. There are billions of pages of information out there, so it’s a shame not to have access to them, and today we want access to all that information.

GK:    Yeah, exactly. And that’s really one of the biggest use cases that we see as well. We’ll see companies saying all of their stuff is locked into an older format like PDF, where it’s not as accessible. They can’t really put it on the web except as a PDF for download, and it really just restricts what they’re able to provide to their customers. And then they’ve got this demand for content to be able to be parceled up and reused in different ways, and they really just don’t have that flexibility with it when it’s stuck in an older and unstructured format.

MG:    Yeah. And I think just one more example that fits this is the work that we’re doing for the New York Public Library now. The New York Public Library has the complete collection of copyright records going back to the 1800s, when copyrights first started being put together in the United States, but all that was in books. So there are hundreds and hundreds of books on shelves, and all those were scanned. So now you have images of all those pages, and then they were OCRed automatically, so there’s an OCR layer, but that’s still not of much use because the data itself is not structured. When you look at a copyright record, it contains a lot of information that’s really fielded, separated by commas and semicolons and other things like that. So you really can’t find anything. A full-text search doesn’t do you much good.

MG:    So what we’ve done now is gone back to all that material that’s already been … Images already exist of everything, but we’ve gone back and we’ve now taken the content out of that, and built that and tagged it and structured it, and built that into a database that a public library can now use and put out on the web. So there’s lots of information out there that can use more structuring.
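The fielding Mark describes, turning a flat OCR line into searchable structure, might look something like this. The record and the element names are hypothetical illustrations, not the library’s actual schema:

```xml
<!-- OCR text: "SMITH, JOHN. The example book; New York: Acme Press, 1923." -->
<copyrightRecord>
  <claimant>Smith, John</claimant>
  <title>The example book</title>
  <place>New York</place>
  <publisher>Acme Press</publisher>
  <year>1923</year>
</copyrightRecord>
```

Once each comma- and semicolon-delimited fragment is tagged as a field, the records can be loaded into a database and queried by claimant, title, or year instead of by full-text search alone.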

GK:    Absolutely. I want to talk about some more specific use cases that Scriptorium has seen with our clients when it comes to conversion and kind of get your take on some of them. So the first one is the idea of a partial conversion. And a couple examples we’ve seen of this would be something like a company maybe does a content audit, and they realize that some of their content is unstructured, does need to be converted and provided in multiple different formats to their customers. But then maybe they’ve got some other content that’s just out of date. It’s never going to be updated again anyway, and they maybe decide it’s not worth it to convert that content. We’ve seen similar cases where maybe there’s a certain amount of the content that’s the most important, and that’s what they’re updating most frequently, and then maybe they start with that, and then convert the rest later on a schedule that works better for them.

GK:    So I wanted to ask you if you’ve seen cases like this where a company either does a partial conversion, or maybe starts with a partial conversion and kind of how common that is, and what kinds of use cases that you’ve seen there?

MG:    Right. So certainly there’s a cost to conversion. I think companies have a fiduciary responsibility to think about what the return on investment is going to be in anything they do. So I think we see a lot of cases where there’s partial conversions, either because as you said, they just have a lot of stuff and decide not to do it all because they don’t need it all. And the other is they do want it all, but maybe it doesn’t have to be converted with all the bells and whistles in order to reduce the cost. It all comes down to your return on investment. So I think very often this really starts with a content audit. I think people at the organization have to look at and see what the value of what the material they have is.

MG:    For example, an organization might find that only the … They want to convert the product manuals or their repair manuals for products that are 10 years old or less and go from there. So that might be good enough. And the rest of it, well, moving it into something like DITA is relatively expensive. It’s dollars per page, usually. So they’ll take the pieces that are more current and bring them up, while the rest of it can be left as just images and then done on a gradual basis as they go along.

MG:    In other cases, it may make sense … A client of ours is the Optical Society of America. For their hundredth birthday, which was a couple of years ago, they wanted to convert the entire corpus of material going back to 1917, which on one hand makes you think, “Why would a physicist want material that’s a hundred years old?” On the other hand, it turns out that they’re right. There’s very valuable information there that wasn’t at all available before. So they chose to turn everything into top-quality XML, everything in it. They had a three-year program, and they did it over that period.

MG:    In other cases, you know, there’s a cost to perfection. Sometimes you don’t need perfection. You just want to get 99% of the way there. So an example of that is the work we currently do for the US Patent Office. That’s not a one-time conversion. That’s continuing information, because they get five million pages a month of technical material coming into their facility. And until a few years ago, everything was imaged and everything was scanned, but they were just images. There wasn’t much you could do with them other than flip through them on a computer. But the cost of converting all that into XML the traditional way would have just been prohibitive.

MG:    We came up with a completely automated approach. We take documents coming in … OCR on technical documents by itself, without correction, usually doesn’t work very well. But we came up with a computer vision approach that would clean up a page before it ever got to the OCR engine. It took off all the math and the tables and all those things, so the page electronically had just text and white space. Then it went through an OCR engine, which produced better than 99% accuracy right out, without any correction. And then it got converted to XML by the automated tools. And then all the things we took out, the math that was left as images, got pulled back in, and that produced a document that was XML, completely automatically. But it was only … They ordered it at 99.6% accuracy. So their take on it was, “We want all these pages, and getting 99.6% at one-twentieth the cost is definitely worth it for us.” It may not be worth it in a publishing organization, but it definitely was worth it for them, because the documents were still going to be looked at by patent examiners at some point.

MG:    So I think every organization needs to think about what the return on investment is. It doesn’t have to be everything, and it doesn’t have to be zero. There’s some place in between that really is the right decision.

GK:    Absolutely. And another place we’ve seen clients evaluate that return on investment is the idea of rewriting or restructuring some of their content before they convert. We see this a lot where a company has written their user manual in a very book-like way that doesn’t convert cleanly over to topics, or where they’ve formatted their content so that it doesn’t have any sort of implied structure, or they’ve been misusing their template in Word or FrameMaker or InDesign or whatever they’ve been using. So when it comes to an automated conversion process, they realize that the results are going to be pretty messy and require a lot of cleanup on the output side. I wanted to get your take on that as well and ask if you’ve seen cases where it made more sense for a company to do some cleanup, restructuring, or maybe rewriting of their content before they tried to convert it?

MG:    Of course there are cases where it’s just such a mess that it’s not worth converting without doing some rewriting, but I think that’s less common than you might think, especially when you have large amounts of content. There are tools around that let you insert structure where it wasn’t there before. In many of the cases I’ve seen over time, we could apply some technology to it. I think there’s a triage that needs to take place beforehand: review the items, do an inventory of what’s there, and identify the 5%, 10%, or 20% that really is not going to transfer over.

MG:    But I think very often you can get 70 or 80% of the material moved over automatically, and there are a lot of benefits when you can do that. First of all, there’s a cost savings. But also, a lot of times you don’t have to recheck materials that have already been approved and have been used for a while, and you may not have to get recertified. The professionals in these organizations don’t have a lot of time for any of this. They can focus on just the pieces that need their attention, and the rest can be done by others, both by automation and by less specialized people who can do the things that need to be done. So I think there’s a lot that automation can do that we underestimate a lot of times. It’s worth taking the effort upfront to see what would happen.
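The triage MG describes — inventory the collection, route most of it to automated conversion, and flag the minority that needs human rewriting — can be sketched as a simple classifier. Everything here is illustrative: the structure score is a made-up heuristic (a real triage would look at template misuse, styling consistency, implied headings, and so on), and the file names are hypothetical.

```python
def triage(inventory, auto_threshold=0.8):
    """Split a content inventory into items that can be converted
    automatically and items that need human rewriting first, based on
    an estimated share of recognizable structure per document."""
    auto, rewrite = [], []
    for item in inventory:
        target = auto if item["structure_score"] >= auto_threshold else rewrite
        target.append(item["name"])
    return auto, rewrite

# Hypothetical inventory with pre-computed structure estimates.
docs = [
    {"name": "user-guide.fm", "structure_score": 0.92},
    {"name": "legacy-notes.doc", "structure_score": 0.35},
    {"name": "install-manual.fm", "structure_score": 0.81},
]
auto, rewrite = triage(docs)
print(auto)     # routed to automated conversion
print(rewrite)  # needs human attention before converting
```

The payoff is the division of labor in the transcript: scarce subject-matter experts touch only the `rewrite` pile, while the bulk goes through the machine.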

GK:    Absolutely. So I want to talk about another common situation we’ve seen that can pose some interesting challenges for conversion, and that’s when a company goes through a merger or acquisition. Suddenly they’ve got this collection of content that’s coming from maybe two, three, five, ten different sources, and none of it’s consistent, but all of a sudden all of it needs to be made consistent, rebranded, and brought into one corporate structure. So I wanted to ask if you’ve dealt with any cases like this and what sorts of challenges you’ve faced in converting content after a merger or acquisition?

MG:    Yeah, there’s more and more of that kind of activity going on, so that’s definitely good for business, but I think it’s just a more exaggerated case of the usual work of normalizing information. When you go into one company, you’re dealing with materials that were done over a period of 10, 15, 20, 30 years, and you’re trying to normalize them because people have done things differently over time. When you’re dealing with multiple companies, every company does things differently. Frequently, one of the companies has better practices than the other; that might be why they did the acquisition, or maybe that’s why they were acquired. But it’s a very similar process of identifying where you want to go and what the goal is, and that’s part of the specification process that takes place upfront: “This is what we want it to look like when it’s done,” and then mapping all that information over. Computers are very helpful there. There’s a lot of automation that can be applied, and today a lot of artificial intelligence software can be applied. I think it’s just an exaggerated case.

MG:    In this case it’s even more important to have a good planning process upfront with someone who’s familiar with the various data streams, the data formats, and the tools that can be used, because that can save years in the process. A lot of times you hear about companies that have merged, and it takes them three or five years to get their information together, which is a disaster. So it behooves the organization to look at that. One of the issues, I think, is that the IT groups are usually in charge of trying to merge everything together, and while they have a lot of experience with data streams, I don’t think IT groups usually have a lot of experience with document streams and how documents come together. So it’s even more important to bring in professionals who are familiar with those kinds of materials in order to speed up the process.

GK:    Definitely. So you mentioned that you’re seeing more and more cases of mergers and acquisitions. I wanted to ask if there are any other common patterns like that in the types of conversions you’re doing, whether there are unique challenges with each of those, and how you deal with them?

MG:    So I think today … Traditionally, we saw these conversions as projects with a start and an end; they might run six months, a year, or a few years. But today most of our work is continuing work, like what I described about the Patent Office: day-by-day work that needs to be done on a very timely basis. And the timeframes are really squeezed. Where traditionally you’d schedule a six-month process, today things might need to be delivered in 10 minutes, or an hour, or two hours. There’s no real time to go back. You’ve really got to describe things upfront and make sure you’ve got a machine that takes care of everything.

MG:    So I think more and more of what becomes important is having a process upfront to specify what needs supply. Define what’s going to need to be done. And that’s really very detailed work. A lot of times people think of it as an ad hoc process. We have a very formalized specification process when we start something where everything gets laid out, all the details are laid, and we walk a customer through all the steps and the decision points and record the decisions. So I think it becomes very important to do that upfront because of just how the large scale of what we’re doing and the time, the time parameters are there. There’s also, I think, a matter of prioritizing, which you spoke about already. But if you’re going to … you’re really getting in new systems, decide what’s really needed, what’s the return on investment of various materials. And inventory what’s there so that you make sure you’re doing the most important things first. A lot of times you hear things like, “Well, price is no object. We want everything to be moved over.” That’s never true. Price is always an object, and cost is always an object. So prioritization is very important.

MG:    And I think another area that’s become common is this idea of all content reuse as the data streams become large, especially like when you’re dealing with technical documentation, which is much of what we’ve talked about. Systems like DITA and S1000, they are intended to reduce the amount of duplicate content that you’re handling, and I think that’s true in many places, so content reuse and normalizing the information becomes very important. We spoke about if a company merges, those two companies might have similar information, but they’re slightly different or they’re very different. One of the things we’ve done is we’ve built tools that let us examine large collections of information. It’s called Harmonizer, which will go through a collection information and find all the similar paragraphs, not just that they’re identical, but somebody’s changed a few words. So we can identify those so that we can now pull them out and say, “Well, this thing has repeated a hundred times. Let’s make that a module and just refer to it a hundred times.” I think a lot of those kind of things are common just because you’re dealing with more information and it’s all gotten bigger and faster.

GK:    It really has. And we’ve noticed some similar trends and patterns with the clients we’ve worked with. I definitely agree that reuse continues to be a really large factor, especially with companies that are localizing content into other languages, because if they can’t reuse their content, that translation cost occurs many more times than it needs to. So I agree, it’s really important, as you said, to plan, to look at your reuse needs and your reuse potential, and to see how that can factor into your conversion process. That way you really make the most of what you do when you convert.
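The translation-cost argument above is easy to make concrete with back-of-the-envelope arithmetic. All the numbers here are illustrative assumptions (word counts, a 40% reuse share, a per-word rate, five target languages), and the model is deliberately simplified: it assumes reused words are fully leveraged from translation memory and billed once.

```python
def translation_cost(total_words, reuse_share, languages, rate=0.20):
    """Rough translation spend: only the unique (non-reused) words are
    billed per target language. Rate and shares are illustrative."""
    unique_words = total_words * (1 - reuse_share)
    return unique_words * rate * languages

# Hypothetical corpus: 100,000 words into 5 languages at $0.20/word.
no_reuse = translation_cost(100_000, 0.0, languages=5)
with_reuse = translation_cost(100_000, 0.4, languages=5)
print(no_reuse, with_reuse)
```

Because the savings multiply across every target language, even a modest reuse share compounds quickly, which is why reuse analysis belongs in the conversion plan rather than after it.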

MG:    Just one more point that I think is important in terms of the patterns.

MG:    An area where we’ve had a lot of focus and we spoke about a few times is just the focus on computer intelligence and artificial intelligence. And I think that’s been a major differentiator for DCL, and way before it was a buzzword. In 1982, already we built a conversion tool called Mindreader, which would take ASCII based files, just plain text files that were coming out of an ancient word processor called the Videk, to convert it to what was then a very modern word processor that had all tags, and it was automatically infering all the ideas and the architecture and putting in tags automatically. That’s been a fortunate beginning. It’s become more and more a focus over the last years because first of all, the data streams have become so much larger, and also labor costs are rising internationally, and that’s going to continue to happen. And so I think that focus on intelligence and using computers to do this has become more and more important as we go along. And having people really understand that is a really important part of all this.

GK:    Absolutely. Well, thank you again for joining us.

MG:    Okay. Well, it’s been a pleasure to be here. Thank you for this session; these were really good questions. Thank you very much.

GK:    And thank you all for listening to The Content Strategy Experts Podcast brought to you by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links.

 

The post Content conversion (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 27:40
CMS/DITA NA conference interviews: part two (podcast) https://www.scriptorium.com/2019/05/cms-dita-na-conference-interviews-part-two-podcast/ Mon, 06 May 2019 13:30:50 +0000 https://scriptorium.com/?p=18890 https://www.scriptorium.com/2019/05/cms-dita-na-conference-interviews-part-two-podcast/#respond https://www.scriptorium.com/2019/05/cms-dita-na-conference-interviews-part-two-podcast/feed/ 0 In episode 51 of the Content Strategy Experts podcast, Elizabeth Patterson and Gretyl Kinsey talk with vendors at the CMS/DITA North America conference about how they have seen DITA evolve during their time in the industry. This is part two of a two-part podcast.

Interviewees:

  • Ulrike Parson, Parson AG
  • Tulika Garg, Adobe
  • Divraj Singh, Adobe
  • Radu Coravu, SyncroSoft
  • Val Swisher, Content Rules

Related links:

Twitter handles:

The post CMS/DITA NA conference interviews: part two (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 23:47
CMS/DITA NA conference interviews: part one (podcast) https://www.scriptorium.com/2019/04/cms-dita-na-conference-interviews-part-one-podcast/ Mon, 29 Apr 2019 13:30:56 +0000 https://scriptorium.com/?p=18837 https://www.scriptorium.com/2019/04/cms-dita-na-conference-interviews-part-one-podcast/#respond https://www.scriptorium.com/2019/04/cms-dita-na-conference-interviews-part-one-podcast/feed/ 0 In episode 50 of the Content Strategy Experts podcast, Elizabeth Patterson and Gretyl Kinsey talk with attendees at the CMS/DITA North America conference about how they have used DITA in their career and the challenges they have overcome. This is part one of a two-part podcast.

Interviewees:

  • Greg Stauffeneker
  • Jacqui LaLiberte
  • James Clyburn
  • Larry Kunz

Twitter handles:

The post CMS/DITA NA conference interviews: part one (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 18:12
Building a business case for structured authoring (podcast) https://www.scriptorium.com/2019/04/building-a-business-case-for-structured-authoring-podcast/ Mon, 08 Apr 2019 13:00:58 +0000 https://scriptorium.com/?p=18507 https://www.scriptorium.com/2019/04/building-a-business-case-for-structured-authoring-podcast/#respond https://www.scriptorium.com/2019/04/building-a-business-case-for-structured-authoring-podcast/feed/ 0 In episode 49 of the Content Strategy Experts podcast, Bill Swallow of Scriptorium and Stephani Clark of Jorsek discuss the value of structured authoring and building a business case for it.


 
Related Links:

See Stephani Clark of Jorsek and Scriptorium consultants present at CIDM DITA North America (April 15-17):

 
Twitter handles:

The post Building a business case for structured authoring (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:23
Open file formats in your digital transformation (podcast) https://www.scriptorium.com/2019/04/open-file-formats-in-your-digital-transformation-podcast/ Tue, 02 Apr 2019 13:30:17 +0000 https://scriptorium.com/?p=18459 https://www.scriptorium.com/2019/04/open-file-formats-in-your-digital-transformation-podcast/#respond https://www.scriptorium.com/2019/04/open-file-formats-in-your-digital-transformation-podcast/feed/ 0 In episode 48 of the Content Strategy Experts podcast, Alan Pringle, Sarah O’Keefe, and Bill Swallow discuss the importance of using open file formats in your digital transformation. How does content strategy fit within digital transformation?

Twitter handles:

The post Open file formats in your digital transformation (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:43
Smart content in unexpected places (podcast) https://www.scriptorium.com/2019/02/smart-content-in-unexpected-places-podcast/ Mon, 18 Feb 2019 14:00:43 +0000 https://scriptorium.com/?p=18268 https://www.scriptorium.com/2019/02/smart-content-in-unexpected-places-podcast/#respond https://www.scriptorium.com/2019/02/smart-content-in-unexpected-places-podcast/feed/ 0 In episode 45 of the Content Strategy Experts podcast, Alan Pringle, Sarah O’Keefe, and Bill Swallow discover the unexpected places where organizations are using smart, structured content.

Related links:

Twitter handles:

 

The post Smart content in unexpected places (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 19:04
What is content strategy? (podcast) https://www.scriptorium.com/2019/02/what-is-content-strategy-podcast/ Mon, 04 Feb 2019 14:00:55 +0000 https://scriptorium.com/?p=18207 https://www.scriptorium.com/2019/02/what-is-content-strategy-podcast/#comments https://www.scriptorium.com/2019/02/what-is-content-strategy-podcast/feed/ 2 In episode 44 of The Content Strategy Experts podcast, Bill Swallow and Sarah O’Keefe take a look at several definitions of “content strategy”. Do they work? Are they accurate?

 

Content strategy definitions:

The purpose of content strategy is to create meaningful, cohesive, engaging, and sustainable content that attracts the company’s target customers.

– Kruse Control, Inc.

Great content is created for a specific purpose, and this purpose needs to be defined. Ask yourself if you are creating content to boost brand awareness, generate leads, convert users, attract past customers, improve search ranking results, or something else altogether.

Neil Patel

Content marketing strategy is your “why.” Why you are creating content, who you are helping, and how you will help them in a way no one else can. Organizations typically use content marketing to build an audience and to achieve at least one of these profitable results: increased revenue, lower costs, or better customers.

Content strategy delves deeper into (in Kristina Halvorson’s words) the “creation, publication, and governance of useful, usable content.” Note that content strategy often goes beyond the scope of a content marketing strategy, as it helps businesses manage all of the content they have.

Content Marketing Institute

Content strategy refers to the management of pretty much any tangible media that you create and own: written, visual, downloadable … you name it. It is the piece of your marketing plan that continuously demonstrates who you are and the expertise you bring to your industry.

Justin McGill, Hubspot

Content strategy has been described as planning for “the creation, publication, and governance of useful, usable content.”

Kristina Halvorson, via Wikipedia

A content strategy is the high-level vision that guides future content development to deliver against a specific business objective.

Hannah Smith, Distilled

The essence of content strategy is simple: make a plan for your content to achieve a specific result. Your strategy could be small in scope, such as targeting web copy to a specific audience or tailoring your authoring process to suit multiple delivery formats. Or your strategy could be large in scope, aligning all content with a new brand or updating content infrastructure and workflows changes to improve localization accuracy and time to market.

Bill Swallow, Scriptorium

Management consulting is the practice of helping organizations to improve their performance, operating primarily through the analysis of existing organizational problems and the development of plans for improvement.

Wikipedia

Content strategy is a subdiscipline of management consulting. Like management consultants, content strategists begin by identifying business problems. The key difference is that content strategists focus on business problems that the organization can solve with content.

– Excerpt from a Scriptorium article in an upcoming industry journal

Twitter handles:

The post What is content strategy? (podcast) appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 19:52
Tools selection podcast https://www.scriptorium.com/2019/01/tools-selection-podcast/ Mon, 21 Jan 2019 13:50:38 +0000 https://scriptorium.com/?p=18159 https://www.scriptorium.com/2019/01/tools-selection-podcast/#respond https://www.scriptorium.com/2019/01/tools-selection-podcast/feed/ 0 In episode 43 of the Content Strategy Experts podcast, Gretyl Kinsey and Alan Pringle discuss how and when to identify the proper tools for developing and managing content.

Related links:

Twitter handles:

The post Tools selection podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 24:56
Content strategy stakeholders podcast https://www.scriptorium.com/2018/12/content-strategy-stakeholders-podcast/ Mon, 17 Dec 2018 14:30:06 +0000 https://scriptorium.com/?p=18109 https://www.scriptorium.com/2018/12/content-strategy-stakeholders-podcast/#respond https://www.scriptorium.com/2018/12/content-strategy-stakeholders-podcast/feed/ 0 In episode 41 of the Content Strategy Experts podcast, Alan Pringle, Bill Swallow, and Sarah O’Keefe identify the obvious (and not-so-obvious) stakeholders of content strategy processes: who should have input on a project and why?

Related links:

Twitter handles:

The post Content strategy stakeholders podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:15
Potluck strategy podcast https://www.scriptorium.com/2018/11/potluck-strategy-podcast/ Fri, 30 Nov 2018 20:43:05 +0000 https://scriptorium.com/?p=18081 https://www.scriptorium.com/2018/11/potluck-strategy-podcast/#comments https://www.scriptorium.com/2018/11/potluck-strategy-podcast/feed/ 1 In episode 40 of the Content strategy experts podcast, Sarah O’Keefe, Bill Swallow, and Gretyl Kinsey create a food-themed facade for content strategy. Abundant potluck metaphors illustrate some of the most important content strategy decisions.

Related links:

Twitter handles:

The post Potluck strategy podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 19:35
LavaCon 2018 recap podcast https://www.scriptorium.com/2018/10/lavacon-2018-recap-podcast/ Mon, 29 Oct 2018 20:57:24 +0000 https://scriptorium.com/?p=17997 https://www.scriptorium.com/2018/10/lavacon-2018-recap-podcast/#respond https://www.scriptorium.com/2018/10/lavacon-2018-recap-podcast/feed/ 0 In episode 39 of the Content Strategy Experts podcast, Bill Swallow briefly interviews attendees and vendors at LavaCon 2018 about trends, challenges, and takeaways from the conference.

Thank you to Jack Molisani, Jodi Shimp, Christina Brunk, Alyssa Fox, George Bina, Dave Kerfoot, Caroline Couvrette, and Liz Fraley.

 

Related links:

Twitter handle:

 

The post LavaCon 2018 recap podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 7:42
Content strategy step one podcast https://www.scriptorium.com/2018/10/content-strategy-step-one-podcast/ Mon, 22 Oct 2018 13:00:08 +0000 https://scriptorium.com/?p=17984 https://www.scriptorium.com/2018/10/content-strategy-step-one-podcast/#respond https://www.scriptorium.com/2018/10/content-strategy-step-one-podcast/feed/ 0 In episode 38 of the Content strategy experts podcast, Kaitlyn Heath and Gretyl Kinsey discuss the beginning steps in implementing a new content strategy.

Read the Full transcript of Content strategy step one podcast.

Related links:

Twitter handles:

 

The post Content strategy step one podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 16:56
Content strategy failure podcast https://www.scriptorium.com/2018/10/content-strategy-failure-podcast/ Mon, 08 Oct 2018 13:00:23 +0000 https://scriptorium.com/?p=17925 https://www.scriptorium.com/2018/10/content-strategy-failure-podcast/#respond https://www.scriptorium.com/2018/10/content-strategy-failure-podcast/feed/ 0 In episode 37 of the Content Strategy Experts podcast, Alan Pringle and Sarah O’Keefe teach you how to ensure your content strategy fails. From experience, they give examples of exactly what makes a content strategy unsuccessful.

Related links:

Twitter handles:

The post Content strategy failure podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:59
Lightweight DITA podcast: part 2 with guests Carlos Evia and Michael Priestley https://www.scriptorium.com/2018/09/lightweight-dita-podcast-part-2-with-guests-carlos-evia-and-michael-priestley/ Mon, 10 Sep 2018 12:53:19 +0000 https://scriptorium.com/?p=17859 https://www.scriptorium.com/2018/09/lightweight-dita-podcast-part-2-with-guests-carlos-evia-and-michael-priestley/#respond https://www.scriptorium.com/2018/09/lightweight-dita-podcast-part-2-with-guests-carlos-evia-and-michael-priestley/feed/ 0 In part two of two, Lightweight DITA (LwDITA) committee co-chairs, Carlos Evia and Michael Priestley, elaborate on LwDITA’s development, including use scenarios and user testing.

Related links:

Twitter handles:

The post Lightweight DITA podcast: part 2 with guests Carlos Evia and Michael Priestley appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 24:12
Lightweight DITA podcast: part 1 with guests Carlos Evia and Michael Priestley https://www.scriptorium.com/2018/08/lwdita-podcast-part-1-with-guests-carlos-evia-and-michael-priestley/ Mon, 27 Aug 2018 12:50:32 +0000 https://scriptorium.com/?p=17766 https://www.scriptorium.com/2018/08/lwdita-podcast-part-1-with-guests-carlos-evia-and-michael-priestley/#respond https://www.scriptorium.com/2018/08/lwdita-podcast-part-1-with-guests-carlos-evia-and-michael-priestley/feed/ 0 In part one of this two-part podcast, Carlos Evia and Michael Priestley talk to Gretyl Kinsey about Lightweight DITA (LwDITA): how it was conceived, the target audience, and the problems they hope it will solve.

Look out for part two on September 10th.

Related links:

Twitter handles:

The post Lightweight DITA podcast: part 1 with guests Carlos Evia and Michael Priestley appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 14:46
Battling content strategy inertia podcast https://www.scriptorium.com/2018/08/battling-content-strategy-inertia-podcast/ Mon, 13 Aug 2018 12:50:27 +0000 https://scriptorium.com/?p=17714 https://www.scriptorium.com/2018/08/battling-content-strategy-inertia-podcast/#respond https://www.scriptorium.com/2018/08/battling-content-strategy-inertia-podcast/feed/ 0 In this podcast, Sarah O’Keefe and Alan Pringle discuss inertia. How do you build enough momentum for big content strategy changes?

Related links:

Twitter handles:

The post Battling content strategy inertia podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 16:27
Content strategy pitfalls podcast: migration https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-migration/ Mon, 30 Jul 2018 12:50:46 +0000 https://scriptorium.com/?p=17689 https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-migration/#respond https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-migration/feed/ 0 The next installment in an occasional series on how to avoid pitfalls in your content strategy.

In this podcast, Bill Swallow and Alan Pringle tackle the pitfalls that occur during content migration.

Related links:

Twitter handles:

The post Content strategy pitfalls podcast: migration appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 17:06
Content strategy pitfalls podcast: risk management https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-risk-management/ Mon, 16 Jul 2018 12:56:58 +0000 https://scriptorium.com/?p=17606 https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-risk-management/#respond https://www.scriptorium.com/2018/07/content-strategy-pitfalls-podcast-risk-management/feed/ 0 The next installment in an occasional series on how to avoid pitfalls in your content strategy.

In this podcast, Gretyl Kinsey and Jake Campbell talk about the risks involved with putting a new content strategy in place and what you can do to minimize them.

Related links:

Twitter handles:

The post Content strategy pitfalls podcast: risk management appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 22:33
DITA is everywhere podcast https://www.scriptorium.com/2018/07/podcast-dita-everywhere/ Mon, 02 Jul 2018 12:30:40 +0000 https://scriptorium.com/?p=17502 https://www.scriptorium.com/2018/07/podcast-dita-everywhere/#respond https://www.scriptorium.com/2018/07/podcast-dita-everywhere/feed/ 0 In this podcast, Alan Pringle and Bill Swallow discuss various industries adopting the DITA standard and what kinds of content they produce.

Related links:

Twitter handles:

The post DITA is everywhere podcast appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:18
Podcast: content (conference) strategy https://www.scriptorium.com/2018/06/podcast-content-conferences/ Mon, 18 Jun 2018 12:30:37 +0000 https://scriptorium.com/?p=17469 https://www.scriptorium.com/2018/06/podcast-content-conferences/#respond https://www.scriptorium.com/2018/06/podcast-content-conferences/feed/ 0 In this podcast, Gretyl Kinsey and Jack Molisani talk LavaCon: What goes into planning the conference and how can you get the most out of it?

Related links:

Twitter handles:

 

 

The post Podcast: content (conference) strategy appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 24:51
Content strategy pitfalls podcast: change management https://www.scriptorium.com/2018/06/content-strategy-pitfalls-podcast-change-management/ Mon, 04 Jun 2018 12:30:55 +0000 https://scriptorium.com/?p=17439 https://www.scriptorium.com/2018/06/content-strategy-pitfalls-podcast-change-management/#respond https://www.scriptorium.com/2018/06/content-strategy-pitfalls-podcast-change-management/feed/ 0 The next installment in an occasional series on how to avoid pitfalls in your content strategy.

In this podcast, Gretyl Kinsey and Bill Swallow discuss common change management problems and how to make sure your content strategy accounts for them.

Related links:

Twitter handles:

The post Content strategy pitfalls podcast: change management appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 27:42
Podcast: content strategy and management consulting https://www.scriptorium.com/2018/05/podcast-content-strategy-management-consulting/ Mon, 14 May 2018 06:00:41 +0000 https://scriptorium.com/?p=17383 https://www.scriptorium.com/2018/05/podcast-content-strategy-management-consulting/#comments https://www.scriptorium.com/2018/05/podcast-content-strategy-management-consulting/feed/ 1 Sarah O’Keefe, Bill Swallow, and Alan Pringle discuss the connections between content strategy and management consulting.

Twitter handles:

Links:

The post Podcast: content strategy and management consulting appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 15:27
Podcast: You are not unique https://www.scriptorium.com/2018/04/podcast-not-unique/ Mon, 30 Apr 2018 06:00:24 +0000 https://scriptorium.com/?p=17362 https://www.scriptorium.com/2018/04/podcast-not-unique/#respond https://www.scriptorium.com/2018/04/podcast-not-unique/feed/ 0 In this podcast, Bill and Alan discuss the myth of uniqueness; the unique needs of published content are not always the best criteria for determining content processes.

Twitter handles:

The post Podcast: You are not unique appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 16:24
Content strategy pitfalls podcast: silos https://www.scriptorium.com/2018/04/content-strategy-pitfalls-podcast-silos/ Tue, 17 Apr 2018 06:00:47 +0000 https://scriptorium.com/?p=17334 https://www.scriptorium.com/2018/04/content-strategy-pitfalls-podcast-silos/#respond https://www.scriptorium.com/2018/04/content-strategy-pitfalls-podcast-silos/feed/ 0 The next installment in an occasional series on how to avoid pitfalls in your content strategy.

In this podcast, Sarah O’Keefe, Bill Swallow, and Alan Pringle discuss content silos, how they come to be, and how to deal with them.

Related links:

The post Content strategy pitfalls podcast: silos appeared first on Scriptorium.

]]>
Scriptorium - The Content Strategy Experts full false 20:43
Content strategy pitfalls podcast: tools (Mon, 26 Mar 2018)
https://www.scriptorium.com/2018/03/content-strategy-pitfalls-podcast-tools/
The first in an occasional series on how to avoid pitfalls in your content strategy.

In this podcast, Sarah O’Keefe, Bill Swallow, and Alan Pringle discuss how to dodge tool-related traps in your content strategy.

Duration: 22:59

Content strategy across the enterprise podcast with guest Alyssa Fox (Mon, 12 Mar 2018)
https://www.scriptorium.com/2018/03/content-strategy-across-enterprise-podcast-guest-alyssa-fox/
In this podcast, we interview Alyssa Fox about aligning content strategy across marketing and technical communication and discuss her leadership role in the Society for Technical Communication.

Duration: 20:09

Podcasting strategy podcast (Thu, 01 Feb 2018)
https://www.scriptorium.com/2018/02/podcasting-strategy-podcast/
This podcast features special guest Ed Marsh of the Content Content podcast. Scriptorium’s guest appearances on the Content Content podcast inspired us to start our own. In this episode, Gretyl Kinsey and Ed Marsh discuss content strategy and how it applies to the world of podcasting, solutions to the ever-present problem of content silos, how to find your tribe at events like LavaCon, and more.

Duration: 25:20

Is Google Translate good enough (podcast) (Thu, 18 Jan 2018)
https://www.scriptorium.com/2018/01/google-translate-good-enough-podcast/
Bill Swallow discusses the possibilities and the limitations of machine translation tools such as Google Translate. When is Google Translate good enough, and when do you need professional (human) translation?

Featured image: ©jarretera/123RF.com

Duration: 20:41

New Year’s resolutions for content (podcast) (Thu, 04 Jan 2018)
https://www.scriptorium.com/2018/01/new-years-resolutions-content-podcast/
For the New Year, we have a set of content-related resolutions. Our content needs to lose weight, exercise, and much more. Sarah, Bill, and Alan discuss how to improve content in 2018.

Duration: 17:39

Best practices for localizing DITA content (podcast) (Thu, 14 Dec 2017)
https://www.scriptorium.com/2017/12/best-practices-localizing-dita/
There are special considerations when localizing DITA content. In this podcast, Bill Swallow and Simon Bate discuss the conventions available in DITA for localization and share anecdotes and advice to help you circumvent localization problems.

Duration: 21:51

Cancer staging information podcast (Thu, 30 Nov 2017)
https://www.scriptorium.com/2017/11/cancer-staging-information-podcast/
How can faster access to cancer staging information lead to better diagnoses and improved patient care? What if cancer staging content could be integrated into electronic medical records and accessed via API instead of only in a printed book? We discuss these questions and more with guest Laura Meyer Vega of the American Joint Committee on Cancer in this podcast, recorded at LavaCon 2017 in Portland, Oregon.

Duration: 24:27

The death of PDF (podcast) (Thu, 09 Nov 2017)
https://www.scriptorium.com/2017/11/death-pdf-podcast/
In this podcast, Alan Pringle and Sarah O’Keefe discuss the history—and health—of the PDF format. Is it still useful in today’s connected world? Are there business reasons to distribute PDF files—and not to?

Duration: 18:17

DITA specialization (podcast) (Thu, 26 Oct 2017)
https://www.scriptorium.com/2017/10/dita-specialization-podcast/
In this podcast, Gretyl Kinsey and Sarah O’Keefe discuss DITA specialization, the process of creating new structures from existing ones. What is involved in developing and testing a DITA specialization? What are some risks and benefits you should consider before specializing your DITA content?

Duration: 19:49

Finance for technical communication with guest Erin Vang (podcast) (Thu, 12 Oct 2017)
https://www.scriptorium.com/2017/10/finance-technical-communication-guest-erin-vang-podcast/
Erin Vang of Global Pragmatica LLC discusses the basics of finance for technical communication managers. What do you need to know about budgeting and corporate finance to make your department run smoothly?

This podcast is a preview of a presentation that Erin will deliver at tcworld 2017 in Stuttgart, Germany.

Duration: 28:07

DITA Learning and Training (podcast) (Thu, 28 Sep 2017)
https://www.scriptorium.com/2017/09/dita-learning-training-podcast/
In this podcast, Gretyl Kinsey and Simon Bate discuss the DITA Learning and Training specialization. How does this specialization work? What are some ways an organization might benefit from using Learning and Training to structure its educational content? What should you consider before implementing a DITA authoring environment with Learning and Training?

Duration: 22:22

Content strategy ROI (podcast) (Thu, 14 Sep 2017)
https://www.scriptorium.com/2017/09/content-strategy-roi-podcast/
In this podcast, Alan Pringle, Sarah O’Keefe, and Bill Swallow discuss ways of measuring the return on investment in a content strategy implementation. A content strategy is tied to specific business goals; it’s designed either to solve a business problem with content or to better position your company to meet current and future business goals. Like any business strategy, it needs to be measured over time to determine its effectiveness in achieving those goals.

Duration: 26:05

The death of training (podcast) (Thu, 31 Aug 2017)
https://www.scriptorium.com/2017/08/death-training-podcast/
In this podcast, Alan Pringle and Sarah O’Keefe discuss the shift in training priorities. In the past, companies would train employees to avoid dependence on Evil Outside Contractors. Today, companies don’t want to invest in training their employees (“They might leave!”), so instead they farm out specialized tasks to expert contractors.

Many thanks to Danielle Marie Villegas (@techcommgeekmom). A discussion on her Facebook page sparked the idea for this podcast.

Duration: 16:35

Transitioning from strategy to implementation (podcast) (Thu, 17 Aug 2017)
https://www.scriptorium.com/2017/08/transitioning-strategy-implementation-podcast/
In this podcast, Alan Pringle and Bill Swallow discuss the transition from a developed content strategy to implementation of that strategy. What’s involved in a content strategy implementation? What should you be mindful of? How should you handle change?

Duration: 18:38

The client-consultant relationship (podcast) (Thu, 03 Aug 2017)
https://www.scriptorium.com/2017/08/the-client-consultant-relationship-podcast/
In this podcast, Sarah O’Keefe and Bill Swallow discuss the client-consultant relationship. What is it like working with a content strategy consultant? How can consultants help? How deep do such engagements go?

Duration: 22:34

Highly designed content (podcast) (Thu, 20 Jul 2017)
https://www.scriptorium.com/2017/07/highly-designed-content-podcast/
Highly designed content uses presentation to call its audience’s attention to the most important information. This kind of content requires more attention to detail, and more exceptions to the standard layout, than content with a purely functional design. In this podcast, we discuss strategies for producing highly designed content and solutions for exerting more control over your design in a publishing environment with automated formatting.

Duration: 22:09

20 years of content strategy (podcast) (Thu, 06 Jul 2017)
https://www.scriptorium.com/2017/07/20_years_of_content_strategy/
2017 marks 20 years of business for Scriptorium. In this podcast, Sarah O’Keefe, Alan Pringle, and Bill Swallow reflect on changes in the content industry over the past 20 years and on how Scriptorium came to be. From content development approaches to professional development resources: what’s changed, what’s stayed the same, and what does the future look like?

Duration: 27:10

Accessibility podcast with Char James-Tanny (Thu, 08 Jun 2017)
https://www.scriptorium.com/2017/06/accessibility-podcast-char-james-tanny/
In this podcast, Sarah discusses content accessibility with Char James-Tanny. What makes content accessible? How can content creators include accessibility in their planning process? What happens if you do not provide accessible content?

Transcript:

This is the Content Strategy Experts podcast produced by Scriptorium. Since 1997 Scriptorium has helped companies manage, structure, organize and distribute content in an efficient way.

Sarah O’Keefe: Welcome to the Content Strategy Experts podcast, episode 6. Today I’m delighted to welcome a special guest. Char James-Tanny is a content strategist with over 35 years of experience as a technical communicator. She is currently a principal technical writer for Schneider Electric and based out of Boston. She’s also an advocate for accessibility and that’s our topic for today. Char, welcome.

Char James-Tanny: Thank you. It’s good to be here.

SO: It is great to talk to you. Let’s jump right in. How did you get started in accessibility?

CJ: When I was 10, I broke four toes on my right foot skateboarding, while being towed by a bike on Easter Sunday in Upstate New York. I hit a piece of ice, flew through the air, landed badly, and amazingly enough, in 50 years nothing has changed in how they can treat broken toes. But I couldn’t walk to and from school anymore, so my parents made arrangements for me to be picked up by this little shuttle bus, because we lived about 25 minutes from the New York State School for the Deaf. For six weeks I rode a bus with… Everybody else was deaf. We couldn’t communicate, and you know me and you know I like to talk. It wasn’t any different when I was 10, and so they taught me how to finger spell, so that we could at least carry on very basic communications for the eight minutes that I was with them.

CJ: Since then everything in my life has just tripped along and I just keep gaining experiences in accessibility, and then all of a sudden it was like, “Wow, there’s a hashtag for accessibility on Twitter. There are people talking about accessibility.” For the longest time it was like people didn’t know. Like with Tech Comm, for the longest time nobody knew other people who were doing WinHelp. It was the same with accessibility, and all of a sudden it opened up the floodgates and you could find people who knew more than you did, who could teach you. You brought things to the equation that they didn’t know, and that’s how I got started.

SO: So what is accessibility or what is your definition of it? How do you constrain that field, ’cause I know there’s a lot there?

CJ: There is a lot there, and it tends to scare people, just because it is such a wide field. For me accessibility ties into usability, because what you’re doing is not just making something usable for somebody who has all their senses and full mobility and everything else; you’re making products that work no matter what constraints the person might have to work with. They use…

SO: Go ahead.

CJ: I was just gonna say, so you’ve got people who use screen readers. They’re not always blind, but there are people who use screen readers, which read the stuff on the screen to them, and then they can talk back. They text through their iPhones by talking. They read Facebook by listening. They post by talking. People who are deaf are the ones who need captions and video description, so they can watch a video and, even though they can’t hear what’s going on, still perceive it. Then there are mobility issues: people with arthritis or cerebral palsy, or even just a broken arm, need keystrokes to be able to navigate a website. Typically we use the tab key. It helps if people add in links that let people jump to different sections. Some apps like screen readers have special key combinations that pull up anything styled as a heading: not anything that looks like a heading, but something that is styled as a heading.

SO: It seems like there’s an analogy here to the physical world, and you even touched on this with your broken foot example. We have curb cuts, and we have traffic lights or pedestrian lights that make noises in addition to having visual “Walk”/“Don’t Walk” signs. It sounds as if what you’re describing is that same kind of signposting, the same alternatives being provided in the online world, in the content world.

CJ: Yes. I actually did talk about trying to find the curb cut for documentation at one point.

[chuckle]

CJ: Curb cuts started after World War II in a little city in the Midwest. Basically, somebody was watching these vets who could not get up over curbs on crutches and in wheelchairs. Once curb cuts were adopted, they spread from there to San Francisco and then just everywhere. Now curb cuts are mandatory, but you’ll notice everybody uses the curb cut. People who have trouble bending their knees because of arthritis use a curb cut. Mothers who are pushing strollers use a curb cut. People pulling wheely suitcases use a curb cut. Even though it might have been originally designed for a specific accessibility purpose, it’s open to everyone and makes everybody’s life easier. The same thing happens when documents and websites and podcasts and videos are accessible. It not only accommodates the person who needs that specific thing, it actually makes things easier and better for everyone.

SO: We mentioned the physical, the curb cuts. I think for those of us listening in the United States we know that those are a legal requirement of the Americans with Disabilities Act. Is there a similar legal requirement for the content accessibility that we’re talking about?

CJ: Not officially in the United States, no. There is Section 508, which is for government. Anybody who does business with the government, and any government site, has to be accessible. It’s also a procurement law, which means that when the government is looking to hire a vendor and has to choose between vendor A, who has a really sucky interface or product but it’s accessible, and vendor B, who has this gorgeous product but it isn’t accessible, they have to go with vendor A.

SO: Okay. What about worldwide? Are there other countries or regions with accessibility requirements?

CJ: Yes, pretty much every place but here. WCAG 2.0 is the base standard, and the new Section 508, which is due out, I believe, next January (January 2018), will tie into WCAG. WCAG 1 went for very specific rules, like “You will use this font size” or whatever. Now they’ve made it much more generic, because our phones are really computers. Therefore, if you’re gonna make something accessible, it needs to be accessible not just on a computer but on a smartphone, on a tablet, on a watch, on anything that is a device with electronic capabilities. That’s what we’re aiming toward. Section 508, as far as I know, will still only apply to government. Accessibility is just a good thing to do if you’re in the States, even without a legal requirement.

SO: Okay. We’ll include links to all of these things in the show notes. WCAG is W-C-A-G by the way? [chuckle]

CJ: Yeah. Web Content Accessibility Guidelines 2.0, which is put out by…

SO: I’m glad you know what it stands for.

CJ: Which is put out by WAI, which is the Web Accessibility Initiative, which comes from the W3C, which is the World Wide Web Consortium.

SO: Well, I knew how to spell it, and then, there my knowledge ended.

[chuckle]

SO: Yeah. We’ll get those links in so that everybody has them. Okay, there’s some amount of legal requirement, and if you’re operating globally you’re going to be dealing with multiple jurisdictions. Now, if I’m a content person, a content creator, where do I get started with this? How do I even know whether my content is accessible or not? I have no idea, so where do I start?

CJ: One advantage for Tech Commers [technical writers] is that we typically like styles (not all of us, but most of us). If you’re doing something like structured FrameMaker, you’re already partway there, because you have to follow that structured authoring paradigm, which means heading 1, followed by content; heading 2, followed by content; heading 3, followed by content. That’s accessible. Jumping from heading 1 to heading 5 isn’t really accessible, because it’s missing several steps. [chuckle] Tech Commers in general tend to already be producing somewhat accessible documents even though they don’t realize it.
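Char’s heading rule, that headings should descend one level at a time, can be checked mechanically. A minimal sketch in Python (the heading levels are assumed to have already been extracted from the document; the function name is illustrative):

```python
def skipped_levels(levels: list[int]) -> list[int]:
    """Return positions where a heading skips levels on the way down.

    h1 -> h2 -> h3 is fine; h1 -> h5 is flagged, because screen-reader
    users who navigate by headings lose the document outline.
    """
    bad = []
    prev = 0  # treat the document start as "level 0"
    for i, level in enumerate(levels):
        if level > prev + 1:  # jumped more than one level deeper
            bad.append(i)
        prev = level
    return bad

print(skipped_levels([1, 2, 3, 2, 3]))  # [] -- a well-formed outline
print(skipped_levels([1, 5]))           # [1] -- the h5 skips h2-h4
```

Moving back up the hierarchy (h3 back to h2) is always allowed; only downward jumps break the outline.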

SO: That’s the tech side of things. What about other media, like graphics, video, audio?

CJ: Alright. When you get to graphics, video, and even text, you want to watch color contrast. Color contrast determines how easy something is to read. Obviously black on white is really good, because pretty much anybody who can see can easily read black on white. Some dyslexics actually find blue on yellow to be a better combination, and some people have said that if you tone the white down a bit it’s not so glary, which also makes it easier to read. Where you run into problems is when you do things… For example, I have a site up on my screen where the hyperlinks are in a lighter shade of blue. Now I can see it, but people who have some kind of color blindness might see the text more as grey. They might still be able to recognize it as a link, but because it’s just blue and it doesn’t underline until you mouse over it, they may or may not actually recognize it as a link. People who use tab, the mobility side, would actually tab to the link, so the color wouldn’t be an issue. The color is an issue for people who have low vision or some form of color blindness, and there are nine or 12 different kinds of color blindness.
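The color-contrast question Char raises is quantified in WCAG 2.0 as a contrast ratio between 1:1 and 21:1, computed from the relative luminance of the two colors. A small Python sketch of that formula (the constants come from the WCAG 2.0 definition; the function names are ours):

```python
# Contrast ratio as defined in WCAG 2.0: compute the relative luminance
# of each color, then (lighter + 0.05) / (darker + 0.05).

def _linear(channel: int) -> float:
    """Convert one 0-255 sRGB channel to linear light."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white hits the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG 2.0 level AA asks for at least 4.5:1 for normal body text, which is why a light grey link on a white background can fail even though it looks fine to many readers.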

SO: Okay. For graphics and any visual display we need to worry about color and color contrast?

CJ: Yes.

SO: Then what about audio, like this podcast?

CJ: The audio needs to be transcribed, because you want the words; it doesn’t need to be described, because there’s nothing to see. For an audio transcription, somebody sits there and… There are actually tools that’ll do that. They’ll listen to the recording and make their best guess, same as when you send a text message by talking into your phone, which sometimes works and sometimes gives you weird words, and then you just clean it up. That’s one way to make a podcast accessible for somebody who is deaf or who has some sort of hearing issue. For video, you want to caption it. This not only helps people who are deaf; it helps people who are sitting in bars at airports, or at the gate waiting for their flight, where there’s a huge amount of background noise and you can’t hear what they’re saying on the TV in the gate area. Captions make it so that everybody knows what’s going on.

CJ: Description is when they actually do things like, “A guy named Joe just walked across the living room and said…” and then Joe’s voice jumps in. They can audio describe TV shows and movies and things like that. A friend of mine in England has a setting on her TV that automatically enables audio description, so that when she’s watching TV she gets the full experience even though she can’t see it.

SO: It almost sounds as though they’re taking what would’ve been in the original script and reverse engineering it back in.

CJ: Basically, yeah. But a lot of people only get as far as adding captions, which is good, but it doesn’t provide the depth of detail.

SO: If I’m sitting here faced with a product that I need to document, or some technical content that I need to write, what are some of the best practices to think about as I’m thinking about “What’s gonna be my strategy for this content?” and then eventually I’m gonna localize it, but at what point in the process should I be thinking about accessibility?

CJ: From the beginning. At the very start. I did a project once. Somebody had created a multi-tabbed Excel spreadsheet for a class… I guess it was their homework. I’m trying to remember the project. And they got all done with it and somebody said, “By the way, it needs to be accessible.” And so I had to go back and pretty much redo it. I had to change the colors. I had to change the font sizes. I had to change the font families. I had to get headings into Excel, which is always a fun time.

[chuckle]

CJ: So that they could navigate around, all that kinda stuff. The fact is, if somebody had started from the very beginning and said “Okay look, we need… This is our color combination. It’s already verified. This is the font family we’re gonna use. This is the font size we’re gonna use. This is how we’re gonna differentiate headings.” Poof, it would be done, and when they got all done they’d have an accessible Excel spreadsheet. Instead, because they brought me in later, the end result was like an extra 15 grand for this 10-tab Excel spreadsheet, to make it accessible, ’cause I had to go back and redo everything but the content.

SO: This is just like anything else: if you plan it upfront it’ll be fine, and if you glue it on after the fact it won’t be as good and it’ll cost a ton of money.

CJ: Yes. Not only that, but if you glue it on after the fact, typically what happens is you start project A and it’s not set up for accessibility, and so project A is working its way through its little product cycle, and halfway through product A you start product B. Now you get to the end of product A and you go, “Oh, we should’ve made it accessible,” but now product B is already three quarters of the way through its cycle and you’ve gotta paste it on there; whereas if you start from the beginning by making it accessible, by setting up templates that are accessible, and by having everybody on the same page, you avoid all of that. ’Cause we’re talking more than just the styles and the format and the colors; we’re also talking words, especially for us. We’re Tech Commers, so words matter.

SO: It does seem as though we have a responsibility to think about this, in the same way that nobody else thinks about style guides or anything like that. This is one of those things that we, as content producers, are responsible for.

CJ: Correct. I was working on something the other day, editing something, I think, for a friend of mine, and there was a sentence that said, “Once you do this step once.” They’d used “once” two different ways: “once” meaning “after” and “once” meaning “one time.” That’ll cause general understandability issues for almost anybody in the world, but it’ll also cause problems with translation. That specific example doesn’t matter so much for SEO, search engine optimization, but other examples do. If you make things accessible, you automatically make your translations better, your localizations better, your search engine optimization better, and everybody benefits. Another good word that’s been coming up a lot lately is “follow,” as in “Follow the following steps,” meaning “Proceed in order through the steps that come after this paragraph.” When you’re setting up your style guide, you need to indicate which words are and are not allowed, or how they should be used, especially if you’re going to translation.
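A style-guide rule like Char’s “once” example can be enforced with a simple word check. A hypothetical sketch in Python (the flagged-word list and the messages are invented for illustration; a real style guide would carry its own list):

```python
import re

# Hypothetical style-guide rules in the spirit of Char's examples.
FLAGGED = {
    r"\bonce\b": 'ambiguous: can mean "after" or "one time"',
    r"\bfollow the following\b": "redundant; name the steps directly",
}

def lint(sentence: str) -> list[str]:
    """Return one note per flagged pattern found in the sentence."""
    return [note for pattern, note in FLAGGED.items()
            if re.search(pattern, sentence, re.IGNORECASE)]

print(lint("Once you do this step once, continue."))  # flags the ambiguous "once"
print(lint("Click Save."))                            # nothing to flag
```

Terminology checkers in authoring tools do essentially this, with much larger rule sets, which is how the same pass can serve accessibility, translation, and SEO at once.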

SO: So it sounds as though potentially this can basically pay for itself, because [A], if you do it upfront it’s not that expensive. [B], it’s gonna help with translation or localization cost efficiency and all of that. And [C], you’re expanding your market. You’re expanding your market in the sense that, for example, the US government will look more kindly on your product if it’s accessible, but also anyone that cares about accessibility that’s a customer will be more likely to buy your product.

CJ: Yeah. One of the most common things I heard before I started working for Schneider Electric when I was still a consultant, would be… I’d say something about accessibility and the answer would almost always be, “We don’t have any disabled users.” It’s like, “How do you know that?”

SO: Well, that’s interesting, because years and years and years ago, the first time I actually ran across accessibility was because we had a customer that called us up and said, “We need some form of help.” It was a long time ago. “And our help absolutely has to be accessible.” This had to have been maybe 15 years ago; it was a while back. I looked into it a little bit, and of course out of the box what they were doing was not accessible, so we did some more digging, and we built them accessible help and everything was fine. But later we asked them, “Why was this such a concern?” Because it’s quite unusual to have somebody lead with that, to have a client show up and say, “Accessibility is our number one concern.” It turned out, they made something related to networking, and they said, “Well, our major client over here has a CIO who is blind, and we can’t sell to him unless we produce accessible help.”

CJ: That’s the way a lot of people end up getting into it. There’s a specific reason: “I know somebody who…” “I needed this customer who…” When somebody says, “We don’t have any users with disabilities,” they can’t actually know that unless they’ve been able to send everybody a survey asking, “Does this have to work with a screen reader?” “Does this have to work with TTY?” “Does this have to have captions?” The thing is, even if you try, the list is so long that you’ll never know. What typically happens is, a few people with disabilities might contact the company and say, “Look, I really wanna use your product but I can’t.” Most just go find somebody else, which is the easy way out. They’re no different than everybody else, ’cause they are everybody else. You don’t wanna spend hours trying to get somebody to make something that works for what you need; you’ll just go try and find something that already does what you need.

SO: This actually sounds exactly like the argument against localization, which is basically, “Oh, all of our customers speak English.”

CJ: Yes, it’s very similar.

SO: And it’s true, all of your current customers speak English because you’re not providing anybody that doesn’t speak English with the option of using your product.

CJ: Right.

SO: So the real question…

CJ: It’s like going back to the mid ’90s, when the browser wars were going on, and I always used… what was it? Netscape Navigator was my primary browser. Every time I would talk to somebody about it, they’d just say, “If it works on IE, that’s fine.” It’s like, “People use other browsers.” “No, none of our customers use other browsers.” “Well, I’m one of your customers, and I use Netscape Navigator, and I can’t see your site.” It takes me back 20-something… wow, 30-something… wow, a really long time. It’s the same fight, just a different battle, if that makes sense.

SO: I think that you’ve got some statistics around, not just the legal requirements but in fact the market, in terms of how large the market is of people that have some sort of limitation in accessing content.

CJ: The World Health Organization estimates that 20% of people globally, a billion people, have at least one disability.

SO: That could be anything, low vision.

CJ: That could be anything. Somebody said to me “That’s an oxymoron,” but it’s the world’s largest minority group.

[chuckle]

CJ: It’s the only one that everyone will join at some point; not you might join it, you will join it. You will end up on crutches because you sprained your ankle. You will end up in a cast because you fell down. The statistics basically say that people will spend 11% of their lives with some kind of disability, and my joke when I give presentations, showing pictures of my son, was always, “My son tried to fit in his 11% before he finished high school.”

[laughter]

CJ: The thing is, that 11% includes people who were born with some kind of a disability, as well as those who just get it because they got old.

[overlapping conversation]

CJ: 11%… Right. And aging, right now… [chuckle] I’ve been laughing. We read. You read, I read, we all read, but you and I especially, I know we both read a lot. I’m getting so tired of all these novels with the 50-year-old, gray-haired, stooped women. It’s like, “Oh, come on. There are 50-year-olds who don’t look like that anymore. There are 60-year-olds who don’t look like that.” But it doesn’t mean we don’t have other issues. We have arthritis, which means it’s hard to mouse, so I’ll switch to the keyboard. Or instead of tapping to type on my phone, I swipe, because it’s so much easier on my hand. I just have to hold a finger in place and move around, or I can use the microphone.

SO: I’m sure it’s not me, but I’ve noticed that the type on my computer is getting smaller and smaller and smaller.

CJ: Two things. One is that 40 is the average age when people’s vision starts to change. I’m one of the weird ones, I am farsighted, but most people over 40 become nearsighted, so it makes a difference. But about 10 years ago, some nifty young 20-year-old designer, nobody will ever know who it is, and whoever it is, is certainly never going to admit it, decided that medium-gray type, at about what would be 8-point when printed, makes the most professional-looking website. That rules out anybody over the age of 45: unless you’re wearing glasses and you up your font size in your browser, you just can’t read it, it’s too small.

SO: Yeah. And then, because that particular person apparently worked for a certain well-known design leader, everybody else adopted that same gray-on-almost-white look. Yeah, not good.

AV: Yeah, and the world sits there and goes “I can’t read it.”

SO: “I can’t read it.” [chuckle] “It’s not me.”

AV: “I can’t read it.” [laughter] “Please.”

SO: As we wrap up here with our lament of aging…

[laughter]

SO: What is the advice that you would give somebody who’s hearing this and saying, “Okay, I hear you and I understand, and now I think I need to go think about this and maybe get started”? Where should they start?

CJ: They can start with WCAG, W-C-A-G. You can just type it in. If you search for WCAG 2.0, there are guidelines, and… that site is awesome. It not only has the 15 major points, it actually describes each one. It includes things like color contrast and captions on videos. One of the things is sort of along the lines of “do no harm,” in that it says you need to make things so that if users end up in a quandary, they can get back out again. Which is what we do; this is what Tech Comm does. There’s a WCAG guideline just for us, really.

CJ: About eight of them, I think, apply directly to Tech Comm and can be implemented immediately. Easily the quickest way to get started: make sure you’re using a style sheet and stick to it. Make sure you always use headings. Headings come in handy not just because screen reader users can pull up a list of all the headings in a document, but because people with cognitive issues, traumatic brain injuries, anything that has to do with brain stuff, can look at text that is a different size and go, “Oh, the bigger text is more important than the smaller text, and that text is more important than the text that’s smaller than that.” There’s a visual acclimation that happens as people look through things.
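The heading discipline described here can even be spot-checked mechanically. As an illustrative sketch (not something from the podcast), this small Python script uses the standard library’s html.parser to flag skipped heading levels, which break the heading outline that screen readers present; the sample HTML is hypothetical:

```python
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Flags headings that skip levels (e.g. an <h3> right after an <h1>),
    which breaks the outline screen-reader users navigate by."""
    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # Match h1..h6 only (not <hr>, <head>, etc.)
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.problems.append(f"<{tag}> follows <h{self.last_level}>")
            self.last_level = level

checker = HeadingOrderChecker()
checker.feed("<h1>Guide</h1><h3>Details</h3>")
print(checker.problems)  # → ['<h3> follows <h1>']
```

A run over a page with a clean h1 → h2 → h3 outline would report no problems.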

CJ: If you just make text look like a heading, then you rule out the screen readers, because they can’t pull up the headings, although it still works for anybody with a cognitive issue. Anyway, make sure your color contrast is good in any graphics that you choose or use, and in screenshots. If you look at a screenshot and you have trouble, and your vision is considered relatively normal, then go to your dev team and say, “Hey, we need to make some changes here. We need to modify this.” Somebody emailed me a couple of weeks ago and said, “I’ve asked somebody to work on a website for me, and this is the button that they put on it. What do you think?” I’m like, “You’re kidding, right?” Because it was some pink text on a variegated blue background.

[chuckle]

CJ: I’m just like, “Oh, that is just awful.” [chuckle] The way the colors worked out, they just had to change the pink text to black, and it was good enough. The blues were light enough that it made enough color contrast. You worry about that.
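The fix described in this anecdote, swapping the pink text for black, is exactly what the WCAG 2.0 contrast-ratio formula quantifies: WCAG AA requires at least 4.5:1 for normal-size text. As a minimal sketch in Python (the hex values are stand-ins for the anecdote’s colors, which aren’t given):

```python
def _linear(channel: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.0 formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(color1: str, color2: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), always >= 1."""
    lighter, darker = sorted(
        (relative_luminance(color1), relative_luminance(color2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Stand-in colors: pink text on a light blue background.
print(contrast_ratio("#ffc0cb", "#add8e6"))  # roughly 1:1, fails AA badly
print(contrast_ratio("#000000", "#add8e6"))  # well above the 4.5:1 AA minimum
```

As the anecdote suggests, the pink-on-blue pair lands near 1:1, while black on the same light blue clears the AA threshold comfortably.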

SO: So it’s small things?

CJ: It’s small things.

SO: It’s basic, best practices.

CJ: If something looks weird to you, and, like I said, you have roughly normal vision, it’s going to look weirder to somebody whose vision isn’t as good as yours, or it’s not gonna appear at all. My husband plays ‘Clash of Kings’ a lot, and every now and then he has to run in with the screen and ask me, “What color… ” They do these little icons that are one color sitting on a background of a different color, and because all these icons are mushed onto one screen, he can’t always distinguish the colors. He could if they were separate, but not when they’re all so close together. I look at them and go, “Um, I think that one’s blue.”

[chuckle]

CJ: But there is… I forget which one it is; AbleGamers would know. There is actually a game development company that accounted for people who are color blind right from the start, because their CEO was color blind, and he didn’t tell anybody. A lot of times people don’t wanna say; it’s almost like it’s a bad thing. Sometimes people with disabilities are treated like it’s a bad thing. But this guy basically didn’t tell anybody for a while that he wanted good color contrast because of him. He just said it’s a good thing to do, and eventually it came out that this was why. Whatever the reason, it makes things better for everybody.

SO: Yeah. I used to have a co-worker who was quite color blind, so we would run everything by him, “Can you see this? How does it look?” Of course now there are websites that will do that for us. [chuckle]

CJ: Yes.

SO: In return of course, he would wander in, in the morning and say, “Does this outfit look okay?” [chuckle]

CJ: Yeah.

SO: Okay. I think that wraps it up here. Thank you so much.

CJ: You’re welcome.

SO: There’s just a ton of good information here.

CJ: There’s stuff all over. One thing you can do is go to any government website: the DOJ, the VA, Section 508, obviously. They should all have a page about accessibility. Even your state government website should have a page about accessibility and what they’ve done to make their sites accessible.

SO: We’ll include a few of those in the show notes to get people started.

CJ: Yes. I have to send you a bunch of links.

The post Accessibility podcast with Char James-Tanny appeared first on Scriptorium.

Scriptorium - The Content Strategy Experts full false 30:28
Localization strategy: not just cost minimization https://www.scriptorium.com/2017/05/localization-strategy-not-just-cost-minimization/ Thu, 25 May 2017 06:00:41 +0000 https://scriptorium.com/?p=16272 https://www.scriptorium.com/2017/05/localization-strategy-not-just-cost-minimization/#comments https://www.scriptorium.com/2017/05/localization-strategy-not-just-cost-minimization/feed/ 1 In this podcast, Bill, Alan, and Sarah discuss localization strategy. Writing good content in the source language is only the beginning.


The post Localization strategy: not just cost minimization appeared first on Scriptorium.

Convergence of marketing and technical content https://www.scriptorium.com/2017/05/convergence-marketing-technical-content/ Thu, 11 May 2017 06:00:38 +0000 https://scriptorium.com/?p=16143 https://www.scriptorium.com/2017/05/convergence-marketing-technical-content/#comments https://www.scriptorium.com/2017/05/convergence-marketing-technical-content/feed/ 1 In this podcast, Sarah and Gretyl discuss the traditional separation of marketing and technical content, and look at the reasons that these content types are now converging.


The post Convergence of marketing and technical content appeared first on Scriptorium.

Typical DITA projects – do they exist? https://www.scriptorium.com/2017/04/typical-dita-projects/ Thu, 27 Apr 2017 06:00:45 +0000 https://scriptorium.com/?p=16134 https://www.scriptorium.com/2017/04/typical-dita-projects/#comments https://www.scriptorium.com/2017/04/typical-dita-projects/feed/ 1 In this podcast, Alan, Bill, and Sarah discuss some of the characteristics of typical DITA projects.

What is DITA? What are some reasons why you might move to DITA? What are some of the common challenges when implementing DITA? How does publishing work?


The post Typical DITA projects – do they exist? appeared first on Scriptorium.

Structured authoring overview https://www.scriptorium.com/2017/04/structured-authoring-overview/ Thu, 13 Apr 2017 10:00:40 +0000 https://scriptorium.com/?p=16019 https://www.scriptorium.com/2017/04/structured-authoring-overview/#respond https://www.scriptorium.com/2017/04/structured-authoring-overview/feed/ 0 In this podcast, Alan, Bill, and Sarah provide an overview of structured authoring. What are the business requirements that might cause an organization to consider structured authoring?


The post Structured authoring overview appeared first on Scriptorium.

The Content Strategy Experts podcast launch https://www.scriptorium.com/2017/03/content-strategy-experts-podcast-episode-1/ Thu, 30 Mar 2017 13:08:33 +0000 https://scriptorium.com/?p=16008 https://www.scriptorium.com/2017/03/content-strategy-experts-podcast-episode-1/#respond https://www.scriptorium.com/2017/03/content-strategy-experts-podcast-episode-1/feed/ 0 Content strategy, localization strategy…and pasta?

The post The Content Strategy Experts podcast launch appeared first on Scriptorium.
