
Webinar Recap: Content Pros as GenAI Pioneers

Feb 23, 2024  |  Reading Time: 6 minutes

GenAI projects are often spearheaded and implemented by IT teams. However, there is a growing recognition that content professionals should play a significant role in these initiatives: unlike traditional IT-centric projects, GenAI projects rely heavily on the expertise of content professionals. When organizations involve the right people, they can propel these initiatives forward, transforming content into dynamic, user-centric solutions.

Fluid Topics recently hosted a webinar with content experts and panelists Lief Erickson, Amber Swope, and Fabrice Lacroix, which detailed why content professionals should be at the forefront of GenAI projects.

You can watch the webinar right here, but if you’re short on time, read on for some of the topics addressed during the webinar.

Key Takeaways

💡 Many AI projects led by IT teams prioritize the creation and performance of the technology, often forgetting to include other stakeholders. Involving content professionals in this process is a valuable, yet commonly overlooked opportunity. Without content experts, AI projects are bound to fall prey to hallucinations, inaccuracy, privacy issues, and ethical dilemmas.

💡 When IT teams need content for their projects, they tend to collect random sets of documents. While this random content often works just well enough for the AI to answer simple questions, the trouble starts when people ask complex questions and the AI mixes up instructions from different projects or points them in the wrong direction entirely.

💡 The people who best know the content are the people who created it. And this isn’t limited to technical writers: training, service organizations, tech support, anyone who knows the content can help the AI project.

💡 Metadata are essential to improving Generative AI’s relevance. They bring context and help reduce hallucinations.

💡 Prompt engineering extends beyond the IT team. A good prompt comes back to personas – as a [role], I’m trying to do [x] – and therefore to the skills of content professionals.

3 lessons from the webinar

Don’t ignore the complexity of content and value of content professionals

Most AI projects are led by IT teams, who put together all the programming pieces. When they get to the part where they need content, they often collect random sets to test their technology (e.g., an AI-powered chatbot).

The initial excitement over AI capabilities fades when that technology is faced with complex questions. While AI may perform adequately with simple queries, it struggles with more intricate tasks, leading to disappointment among stakeholders. As Fabrice Lacroix said, “the failure is not the technology, but the content.”

Content serves as the backbone of any AI-driven endeavor. Without the involvement of content professionals, these AI projects are likely to fall prey to hallucinations, inaccuracy, privacy issues, and ethical quandaries.

All AI projects should involve the people who know the content the best – the people who created it. This isn’t limited to technical writers—training, service organizations and tech support all play a part. They will recognize when something isn’t quite right in a generated output, and they know where the correct information should have been pulled from. Lief Erickson explained that “you can connect the pipes, you can generate content and it might look good for that question but have a high failure rate overall and that’s where the SMEs have a lot to offer in leading these projects.”

Finding the right resources, and defining how they work together as part of the AI team, becomes pivotal to the success of these projects.

Prompt engineering extends beyond the IT team

If you’re old enough, you’ll remember a time when the general public did not know how to use search engines. We had to learn how to search the way the search engines wanted to be used. Slowly, the technology got better at inferring our intent, but also, searching became something people just knew how to do.

We’re seeing the same thing happen with Generative AI prompting. Prompt engineering is a difficult task. But as time goes by, we will get better at prompting and GenAIs will get better at interpreting our input. It’s a learning process, both for humans and robots.

Creating effective prompts is an essential competency within the domains of artificial intelligence and natural language processing (NLP). It involves crafting queries or instructions that guide language models to generate specific, accurate, and contextually relevant responses.

A good prompt comes back to personas – as a [role], I’m trying to do [x] – and therefore to the skills of content professionals. The latter can ask and answer the right questions: “How do I translate my understanding of context into a series of prompts that gets me to the answer?” “How do I reverse-engineer those prompts to get to the right content?” “How do I train the AI to get there?”
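As an illustration (not something shown in the webinar), here is a minimal sketch in Python of how that persona framing could become a reusable prompt template. Every name, field, and wording choice below is an assumption made for the example, not a Fluid Topics feature or API.

```python
# Illustrative sketch only: a persona-framed prompt template that content
# professionals could help design. All names and wording are assumptions,
# not a Fluid Topics or webinar-provided API.

PROMPT_TEMPLATE = """You are assisting a {role}.
As a {role}, I'm trying to {task}.
Answer using only the documentation excerpts below.
If the excerpts do not contain the answer, say so instead of guessing.

Documentation excerpts:
{context}

Question: {question}
"""

def build_prompt(role: str, task: str, context: str, question: str) -> str:
    """Fill the persona-based template with the user's role and goal plus the
    retrieved content that grounds the answer."""
    return PROMPT_TEMPLATE.format(role=role, task=task, context=context, question=question)

if __name__ == "__main__":
    print(build_prompt(
        role="field service technician",
        task="replace a faulty sensor",
        context="(retrieved product documentation goes here)",
        question="Which safety steps do I follow before opening the housing?",
    ))
```

The point is less the code than who writes the template: deciding what the role is, what the task looks like, and what the model should do when the content falls short is content-professional work.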

IT teams need to collaborate with other departments to ensure these systems provide accurate, context-aware, and ethical responses.

Metadata are essential to improve Generative AI’s relevance

Content that is structured with DITA has a leg up because a level of intelligence is already baked in. When you write structured, modular, purpose-driven content with metadata, there is no need for inference. On the other hand, NLP can infer intent from unstructured content, but it takes more work and is less reliable.

Structure can add context, purpose, and relevance, among other things. Information about audience and relevance can be applied to the content in the architecture. In that regard, metadata bring context and help reduce hallucinations.

However, for a generative AI project to succeed, the content must be connected to silos of other information it has not previously been connected to. This requires more metadata and may include building a taxonomy to help feed other technologies that can help find the right answers. As Fabrice summed up, “We need to reconnect the different silos with a metadata layer on top of the existing metadata to create consistency across content silos.”
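To make the idea of a metadata layer on top of existing metadata more concrete, here is a minimal sketch, with invented field names, values, and data, of tagging chunks from different silos with a common scheme and filtering on it before anything reaches the generative model. It is an assumption-laden illustration, not a Fluid Topics schema.

```python
# Minimal illustration of a shared metadata layer across content silos.
# The fields, values, and filtering logic are invented for this sketch;
# they are not a Fluid Topics schema or API.

from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    text: str
    silo: str       # where the chunk came from, e.g. "techdocs", "training", "support"
    product: str    # shared taxonomy applied on top of each silo's own metadata
    audience: str   # e.g. "technician", "end-user"

CONTENT = [
    Chunk("Replace the sensor after powering down the unit.", "techdocs", "X200", "technician"),
    Chunk("Module 3 covers sensor calibration basics.", "training", "X200", "technician"),
    Chunk("The X100 sensor is not field-replaceable.", "support", "X100", "end-user"),
]

def retrieve(product: str, audience: str) -> List[Chunk]:
    """Keep only chunks whose shared metadata matches the user's context, so the
    generative model is grounded in a consistent, silo-spanning subset."""
    return [c for c in CONTENT if c.product == product and c.audience == audience]

if __name__ == "__main__":
    for chunk in retrieve(product="X200", audience="technician"):
        print(f"[{chunk.silo}] {chunk.text}")
```

Deciding which fields exist and which values are allowed is exactly where taxonomy and content expertise come in.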

The Bottom Line: What’s the Outlook for the Future?

While the IT team plays a crucial role in implementing and managing the technical aspects of artificial intelligence systems, content professionals bring a unique perspective and skill set that is essential for crafting meaningful and engaging experiences:

  • They know the value of the content;
  • They can help design additional metadata schemes that can be applied to the siloed content;
  • They can help evaluate the accuracy and relevance of the model downstream.

As we continue to break barriers and pioneer new frontiers in the realm of GenAI, let’s recognize that the collaboration between content professionals and the IT team is essential for unlocking the full potential of GenAI projects.

What the audience asked during the Q&A session

How much time should we block to train a model and check the accuracy of the results?

Who is doing this successfully currently — and how would success be defined?

How can you effectively check the accuracy of the model you trained? How can writers or content specialists participate?

Can there be hallucinations when the content you serve up is technically verified and approved content from different content sets – training, support, marketing?

Is it an advantage to have your content in DITA – including metadata?

In the topic-based authoring method of writing content in small, modular chunks, do we need to chunk the content further (macro topics) to serve GenAI touchpoints like a chatbot?

Top quotes by our panelists

Amber Swope:

“GenAI is not about replacing the author or the subject matter expert. It is about helping them get past the blank page and get to their work more quickly.”

“The more challenging aspect is when the answer is almost right – but almost right doesn’t meet the end user’s needs – it seems right because content came through, my part of the system’s working and based on some cursory testing, it got some of the basic context right, so therefore the answer must be right. If you have ever tried to follow instructions where one step is wrong, almost right is not good enough.”

“The reason people start with AI is because AI is marketed as a technology and technology is tangible. The idea of content is more nebulous. In many companies, there are many organizations that create content, and they may or may not be involved in or even know about these AI initiatives until there is a level of disappointment that causes the project manager to regroup. People who know the content are the folks who can help with the project.”

Lief Erickson:

“You can connect the pipes, you can generate content and it might look good for that question but have a high failure rate overall and that’s where the SMEs have a lot to offer in leading these projects.”

“What are the success metrics for the content in the context of a GenAI solution? Success for this project might be a certain number of support calls being deflected.”

Fabrice Lacroix:

“Spotting something that is wrong is easy, but spotting what is missing is even trickier.”

“The failure is not the technology, it’s the content.”

Get in touch

Want to explore how Fluid Topics can help you with your GenAI initiatives for your global content?  Contact us today.

About The Author

Anne-Sophie Lardet
