
LavaCon2018: Day 2

October 24, 2018

Day 2 started as per Day 1, with several short (20 minute) plenary sessions.

A journey to intelligent content delivery

The first breakout session I attended was another case study on how a software company transformed its help and documentation offering using Zoomin: in this case, Pam Goodwin described how her small team of six authors changed how documentation was viewed at Cherwell, an IT service management company with 450 employees.

Before:

  • focus was on ‘catch up’ documentation
  • dated look
  • poor search capability
  • different help systems and tools used for different products
  • no consolidated search
  • no access outside the software to other information

Interim (2016):

  • used SuiteHelp
  • based on DITA source
  • improved searching
  • standalone systems for each product release
  • needed separate PDF plug-in and PDF delivery model

After (2018):

  • easy to find
  • cross-product and cross-version support
  • multi language support
  • improved searching
  • on-demand PDF creation by users at point of need
  • responsive design
  • scalable
  • simple delivery model for authors
  • easy maintenance

Why choose Zoomin Docs?

  • met all their requirements
  • powerful search
  • minimal changes to DITA source
  • improved analytics (esp. for searches)
  • on-demand PDFs
  • built-in feedback mechanism
  • past relationship with vendor’s personnel
  • great customer references

How they sold it to decision makers:

  • Timing — new leadership were looking for opportunities to modernise and scale
  • Executive support — the Chief Product Officer (CPO) saw tech docs as a valuable marketing tool, esp. once he realised that when he was researching a product, he always checked the docs as part of his decision making
  • Strong program management — saw value and helped navigate the approval process
  • Affordable solution — able to take great strides with minimal investment and compressed timeline

Project challenges:

  • lack of UI/UX support at kick-off
  • struggled with taxonomy shift; once made, then kept it simple
  • quickly pushed multi language support and context-sensitive help to phase [not sure what I wrote there!]
  • table formatting issues
  • automation issues
  • user management
  • need a third testing environment

Results:

  • significant increase in users (27%)
  • number of sessions per user down (i.e. ‘hit and run’)
  • 320% increase in number of page views
  • 91% increase in unique page views
  • 37% decrease in bounce rates
  • 17.5% increase in Net Promoter Score [what is this?] in the past year; great customer comments

Creating interactive intelligent style guides

(George Bina, Oxygen)

It’s been 20 years since the XML standard was first published, and 20 years since Oxygen was created.

A style guide is a set of rules to follow when writing content. (e.g. how to style code blocks so the code remains correct, but visually readable). A style guide helps you avoid making mistakes.

A style guide should:

  • evolve and grow over time
  • deal with errors/new issues as you find them

But often there are too many rules.

Solution: automate. Auto-detect when content doesn’t follow a specific rule (e.g. Schematron will detect variations in patterns in structured docs; Acrolinx).

Schematron can also check text via regular expressions. Works inside the authoring tool (e.g. Oxygen XML Editor) and is integrated within it so you get messages about potential errors while writing.

With Schematron, you select the rule and provide your parameters, plus a message for the writer.
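Schematron rules themselves are written in XML, but the underlying idea (a pattern paired with a plain-language message for the writer) can be sketched in a few lines of Python. The rules below are invented illustrations, not real Schematron or Acrolinx rules:

```python
import re

# Minimal sketch of the automated style-guide idea: each rule pairs a
# pattern with a message shown to the writer. (These rules are invented
# examples, not anyone's actual style guide.)
RULES = [
    (re.compile(r"\butilize\b", re.IGNORECASE), "Prefer 'use' over 'utilize'."),
    (re.compile(r"  +"), "Avoid multiple consecutive spaces."),
    (re.compile(r"\be\.g\.(?!,)"), "Follow 'e.g.' with a comma."),
]

def check_text(text):
    """Return (message, matched_text) pairs for every rule violation found."""
    findings = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            findings.append((message, match.group()))
    return findings
```

An editor integration like the one demonstrated in the session would run checks of this kind as you type and surface the messages inline.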

(Note: I left this session halfway through because while automated style guides are something I wanted to know more about, what he was demonstrating was the integration between an XML/DITA editor [which I don’t use] and Schematron — I couldn’t see how I would ever use this.)

End of day plenaries

The last two plenaries of the day were from David Dylan Thomas (The content strategy of civil discourse: Turning conflict into collaboration) and Megan Gilhooly (The power of learning you were wrong). Both were really excellent!

 


LavaCon2018: Day 1

October 23, 2018

I helped out on the Registration desk this morning. It was a bit of a madhouse, with three of us checking in about 450 people in just on an hour!

By the time I got into the intro and plenary sessions there were no seats left, and it was hard to see the screen and hear the speakers, so I left after Karen McGrane’s keynote.

Some notes from McGrane’s talk:

  • How to create content for any format (including audio only), any device, any screen size, no matter what the ‘next big thing’ is, even for devices not yet created?
  • Still too much reliance on visual styling (as used in books/paper) to create meaning. The web isn’t print — the way people create content has to change.
  • Content ecosystem has to be navigable whatever size device (e.g. watch to stadium screen; digital signage), even if not the primary target device.
  • Need true separation of content from form; presentation-independent content.
  • Content must be readable, browsable, and accessible on any platform/device.

Word and DITA

The first general session I attended was on Microsoft Word and DITA, and featured Doug Corman speaking about the pros and cons of each, and how an add-in from SimplyXML could make it easier for general authors to write in a style compatible with DITA (yes, I checked with him afterwards about the add-in’s applicability for editing, and it’s not suitable).

My notes from this session:

DITA positives:

  • XML standard
  • separates content creation and form
  • supports topic-based authoring
  • information typing
  • re-use at many levels
  • open source
  • large global community of practitioners
  • proven for tech pubs
  • flexible (640+ elements)
  • integration with CMSs and modern processes
  • control through a standard architecture

DITA negatives:

  • 640+ elements!
  • transformation costs from Word, XML, PDF
  • XML editors are complicated
  • DITA tools are expensive
  • expertise required for implementation
  • rework/rekeying of DOCX, PDF may be required.

Microsoft Word positives:

  • ubiquitous (at least 1 billion users)
  • does everything
  • has footnote/bibliography capability
  • has review functions such as track changes and comments
  • publishing format flexibility — fonts, templates

Microsoft Word negatives:

  • does everything (flexibility)
  • full DITA functionality is impossible
  • standards, macros can be violated
  • authoring and publishing are tightly integrated

Best of both (ideal world):

  • XML
  • topic-based authoring
  • information typing
  • re-use at many levels
  • open source architecture
  • large global community of practitioners
  • flexible — 640 elements or fewer
  • integration with CMSs and modern processes

SimplyXML has a plug-in for Word (Content Mapper) that uses the Word API and constrains (severely) the use of Word functions to match the DITA schema. Authors still author in Word, but are limited in how they use it. Cost is $300 for a single user; price decreases dramatically when more than 100 users, and even more so when more than 1000 users.

If you focus on the technical side of DITA/XML, you will fail with everyday Word users in an enterprise. You need to:

  • take a well-defined systematic approach
  • understand and focus on information consumers
  • make the least number of changes
  • comply with content and markup standards
  • use a phased implementation approach — pilot, then go live in stages
  • need support from C-level leadership (i.e. CEO, CFO, COO, etc.)
  • apply KISS — keep it simple, smart person (i.e. ‘don’t try to boil the ocean’)

Information consumers want:

  • accurate, actionable content
  • consistent look and feel (branding)
  • just enough and just in time content

Product knowledge triangle

The next general session I attended was after lunch. It was delivered by Hannan Saltzman and Lawrence Orin (from Zoomin software). Lawrence had only just joined Zoomin, having previously worked for a company that implemented their solution, and he presented a compelling case study on that experience.

My notes from this session:

70-85% of site traffic to most tech companies is looking for product content (not just docs; it also includes support and forums), with only about 16% going to the main website, and about 1% to sales info.

The product knowledge triangle has three parts: product documentation (writer-generated), knowledge base (KB) articles (field-generated), and forums (user-generated).

Product documentation:

  • often PDFs (e.g. admin, user, installation guides)
  • web Help
  • customers must search for this info, often filtering down by product name, version etc. to find the docs and then having to download individual PDFs to find the one with the info they need

KB/support documentation:

  • includes support, tech notes, and KB articles (usually focusing on a specific task)
  • often kept in a KB database, and accessed by case #
  • typical content includes symptom, cause, solution
  • customer needs to search the KB database separately from the product documentation

Forums (can include chat, talk back):

  • has discussion threads
  • disorganised
  • can be hard to search (especially chats)
  • typically has repeated questions, star responders, and a lot of inconsistency
  • customers have to drill down through threads to find answers, even if they have a search engine

Implications for enterprises regarding this triangle:

  • each place a customer goes is a separate place/dataset with a separate search function and results
  • lots of findability issues

Case study — Aternity:

  • product monitors performance of devices in a company
  • information is displayed in browser
  • monthly software releases (Agile process)
  • audience is IT/product helpdesk
  • had a help site within the site (used DITA for authoring, 2 tech writers)

Before:

  • portal with a login required to access the documentation/help
  • classic doc set (e.g. PDFs) that had to be downloaded before you could see if they contained the relevant info; if not, download and try the next one
  • had to search by product, then version, then publication
  • basically books on the web (with TOCs, indexes), and NOT search oriented
  • needed to shift paradigm to a search with the main aim of ‘findability’ (used Zoomin software)

After:

  • a single search box for everything
  • no login required
  • can quickly jump to topics
  • task topics, not a book metaphor
  • device agnostic
  • filters on side of results page to narrow down search by product etc. (all products, all versions, live filters, intelligent context-sensitive help, synonyms)

When trying to convince management of the solution, had to deal with the ‘docs are an evil necessity’ mindset so came up with a ‘Why write docs’ proposal that had a business and customer focus. Used a restaurant menu metaphor [which I think was a brilliant strategy!]:

  • make the menu (docs) public, but never the recipes (code)
  • make the menu gorgeous and tempting
  • don’t worry about the restaurant down the road (your competitors) seeing your menu posted outside — let them see how good you are, but not how you work

The aims of making the docs public are to sell, upsell, maintain loyalty, promote the thrill of what’s new, and the thrill of being the best (i.e. technical marketing!)

A typical topic page:

  • had a customer focus/relevance
  • focused on customer tasks
  • used customer jargon
  • first paragraph had the who, when, what, and WHY, followed by an example
  • included as many rich visuals as possible
  • had glossary popups (allowed topic to be pared back to basics)
  • had a place for user feedback
  • placed related KB (‘Support’) and forum (‘Community’) topic links into a side bar on the page

The result was a modern help portal that:

  • was easy — inviting, easy to find, open, easy to view
  • had a clear aim — external/potential customers (no login required), to sell and upsell the product, loyalty
  • had a customer focus — tasks, examples, rich graphics
  • was exposed — many more hits, easy to publish, search tracking, tweaked synonyms as a result of search tracking, track feedback
  • had unexpected effects/consequences:
      • customers used the product properly, which reduced level 1 support calls
      • the nature of support shifted: basic questions almost disappeared, and questions became more sophisticated
      • tracking allowed benefits to be quantified and raised the profile of the doc team (people stopped thinking of docs as a necessary evil)
      • like support, the nature of training shifted (no more 101 courses needed)
      • prospects/pre-sales were going to the help to find out what the product did
      • interest in the product increased
      • other business units wanted to join in.

Before, the web help pages would get 100 to 500 hits per month; after, ~25,000 hits per month. User tracking showed that customers typically spent 4 minutes viewing an article. 45% were ‘hit and run’ users (quick in and out grab of information they needed), while 55% were ‘hit and browse’.

Tracking the top search terms helped them define more synonyms (e.g. ‘db’ for ‘database’), and was used for developing documentation relevant to those search terms. Tracking search errors was also important — as a result they either created more documentation that addressed the missing info, or added synonyms to route users to the correct place. Tracking feedback wasn’t really about where the docs needed work — customers would use it to talk about the product, suggest changes to it, talk about issues they were having with it — the doc team passed these on to the support people.
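The synonym mechanism described above can be sketched as a query-expansion step performed before the search runs. Only the ‘db’/‘database’ pair comes from the talk; the other pairs are hypothetical examples:

```python
# Sketch of search-synonym expansion: map abbreviated query terms to their
# canonical forms before searching. Only 'db' -> 'database' comes from the
# talk; the other pairs are hypothetical examples.
SYNONYMS = {
    "db": "database",
    "config": "configuration",
    "auth": "authentication",
}

def expand_query(query):
    """Replace each known abbreviation in a search query with its canonical term."""
    return " ".join(SYNONYMS.get(word.lower(), word) for word in query.split())
```

For example, `expand_query("db backup")` returns `"database backup"`, so documentation written using the canonical term still matches the abbreviated query.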

The impact of the ‘modern help’ was immense, in ways unforeseen when they started. The documentation is now the public side of R&D (having it behind a login defeated that purpose). The main takeaways of the project were:

  • know where your customers are looking for answers
  • ensure content is available from all touchpoints
  • prevent info overload by using filters, related articles, glossary popups
  • think content monetisation (how will making content available drive sales, reduce costs)

LavaCon2018: Workshop day

October 22, 2018

I’m in New Orleans and it’s Sunday, which means it’s workshop day at the LavaCon conference! I attended Melinda Belcher’s Preparing Technical Content for Translation half-day workshop. There were about 15 people in the workshop and several were from other countries, so there were lots of perspectives and good information sharing. Some people were in the early stages of translation projects, others were well into it, and yet others were just trying to get information to tailor their documentation so that it was ready for translation sometime in the future.

Here are my notes from Melinda’s session.

Her focus was on optimising content for translation in terms of:

  • Clarity
  • Structure
  • Format
  • Localisation strategy.

Much of what she had to say was very similar to the principles of the plain language movement.

Clarity

  • Keep sentences brief
  • Use as few words as possible
  • Short words are better than long words
  • Use plain English to make your point
  • Use a single term to identify a single concept
  • Write so your audience can understand you
  • Test your word choice and sentence design

Structure

  • Avoid unnecessary complexity
  • Lighten the cognitive load through strategic delivery of information
  • Use Standard English word order whenever possible
  • Use the active voice rather than the passive
  • Use relative pronouns like ‘that’ and ‘which’
  • Avoid phrasal verbs (a verb combined with one or more particles)
  • Avoid long noun strings

Format

  • Make sure it fits (e.g. German takes much more space than the English equivalent)
  • Be clear with international dates and measurements etc.
  • Allow extra space for translated words
  • Allow extra time for formatting text in languages that read from right to left
  • Make readers aware of other [language] versions

Localisation strategy

  • Avoid humour
  • Strengthen your organisation’s capacity for translation oversight (not just words – cultural nuances, context, fonts, non-Romanised languages, compound words, dialects, other influences)
  • Establish and implement written guidelines for translation methods and for assessing the qualifications of a translator/agency
  • Consider a transcreation process (not necessarily good for long text, but may work well for marketing material [e.g. taglines, slogans, apps])
  • Other approaches to translation: single one-way translation; multiple one-way translation; reverse translation (i.e. translation back into English)

Early on in the session, she made a comment about text embedded in images – get the words out of the images and put them into callouts, captions etc.

She also mentioned these tools:


Word: My process for copying content into a new template

October 9, 2018

Someone asked me the other day what my ‘best practice’ was for applying a new template to an existing Word document. Well, the answer is: ‘It depends’. And what it depends on is the complexity of the document.

If it’s a simple document in one section, with basic formatting, few—if any—cross-references, uses the same page layout throughout, has little (or no) document automation, etc., then just applying the new template may well be enough (assuming the style names in both are the same). You may have a few tweaks to do with the formatting (e.g. reapplying styles), but you should be done.

However, for a more complex document, like the ones I work on, it’s not so simple. My docs have cover and front matter pages, lots of document automation, outline numbered headings, potentially hundreds of cross-references, many section breaks for landscape and A3 pages, appendices, tables of contents/tables/figures, headers and footers populated with data from the cover page (we used to have odd/even headers/footers too, and various page numbering formats, but we got rid of those some time ago because they just added a lot of overhead for no real value), etc. It’s really the section breaks that will cause you grief, plus totally different cover pages and headers/footers. As for a simple document, the process will be much smoother if the style names in both docs are the same.

Oh, and before you ask, yes, I’ve tried every which way to simplify the process below, but each one just adds more time overhead to sorting out the document after I’ve pulled it over. The method that causes me the least grief is the one below.

NOTES:

  • Save often!
  • Make sure formatting marks are turned on so you can see the section breaks.
  • DO NOT copy section breaks. There lie dragons!!
  • You may still have some tweaking to do with applying the correct styles. You can either do this as you go (after each paste), or wait until the end and do it all in a separate pass. Alternatively, make a copy of the old doc, apply the new template to it and fix all the styles first, before copying across the content.

How I deal with putting a complex document onto a new corporate template:

  1. Start a new document based on the new template.
  2. If you want to preserve any existing comments or track changes from the old doc, make sure track changes is turned OFF in BOTH docs—the new AND the old.
  3. Manually complete all the cover page (and other front matter) information in the new doc.
  4. DO NOT copy across the old table of contents, list of tables, or list of figures. You’ll update these later (Step 13) with the new headings.
  5. Let’s assume the main body of the doc starts at section ‘1. Introduction’. Go to that heading in the new doc, then press Enter a couple of times to create some space.
  6. Go to the old doc and copy the content AFTER the ‘1. Introduction’ heading UP TO, BUT NOT INCLUDING, the first section break.
  7. Paste that content into the relevant place (the space you just created) in the new doc.
  8. Manually insert a section break start AND end for the next section in the new doc, and add some empty paragraphs between them. Change the page layout for the section as necessary (e.g. landscape orientation).
  9. Go back to the old doc and copy everything INSIDE the section break, but NOT the section break itself.
  10. Paste into the new doc in between the start and end section break marks you created in Step 8.
  11. Repeat steps 8 to 10 for ALL section breaks and their content.
  12. When you’ve finished, delete any headings and text from the original template that are not required.
  13. Go back to the table of contents in the new doc and update it. Repeat for the list of tables and figures too.
  14. If you have cross-references in your doc, switch to Print Preview mode, then back to Page Layout mode.
  15. Do a Find for ‘Error!’ to find any broken cross-references. Fix, based on the cross-reference information in the old doc.
  16. Zoom out to about 30% and do a visual check to make sure your headers/footers for each section are correct for the page layout.

That should be it!
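If you ever need to audit many documents, the Step 15 check can also be scripted against extracted text: a broken Word cross-reference renders literally as the string ‘Error! Reference source not found.’. A minimal Python sketch, assuming you already have the document text as a plain string:

```python
# Scan extracted document text for Word's broken cross-reference marker.
# (A broken REF field renders literally as the string below.)
BROKEN_REF = "Error! Reference source not found."

def find_broken_refs(text):
    """Return the character offset of each broken cross-reference marker."""
    offsets, start = [], 0
    while (pos := text.find(BROKEN_REF, start)) != -1:
        offsets.append(pos)
        start = pos + 1
    return offsets
```

This only locates the breaks; fixing them still means re-inserting each cross-reference in Word, using the old doc as the reference.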


Word: Insert prime and double prime characters

September 23, 2018

Just as there’s a special character for a degree symbol, there are also special characters for prime and double prime symbols (used when referring to latitude and longitude especially). These are NOT the same characters as a single or double quote mark, though many people assume they are.

Use one of these methods to insert a proper prime or double prime symbol in Microsoft Word:

  • If you have a separate number pad, then press Alt+8242 (press and hold the Alt key while you type 8242) for prime, or Alt+8243 for double prime.
  • Go to the Insert tab > Symbol — the prime symbol is character code = 2032, Unicode (hex), and double prime is 2033.
  • If you have Math AutoCorrect turned on, then type \prime<space> for prime, or \pprime<space> for double prime (to turn on Math AutoCorrect: File > Options > Proofing > AutoCorrect Options > Math AutoCorrect tab).
  • Supposedly you can also type 2032 then Alt+x (or 2033 then Alt+x), but neither of those worked for me.
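For the curious, the decimal Alt codes (8242/8243) and the hex character codes (2032/2033) in the list above name the same Unicode codepoints, just written in different bases. A quick Python check:

```python
# The decimal Alt codes and hex character codes name the same codepoints:
# U+2032 PRIME and U+2033 DOUBLE PRIME.
prime = "\u2032"         # hex 2032 = decimal 8242 (Alt+8242)
double_prime = "\u2033"  # hex 2033 = decimal 8243 (Alt+8243)

# Neither is the same character as an apostrophe or a straight quote.
assert prime != "'" and double_prime != '"'
```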

[Link last checked September 2018]


Word: Insert a degree symbol

September 22, 2018

There’s a special character for a degree, so don’t make the mistake of superscripting a lower case ‘o’. Instead, use one of these methods to insert a proper degree symbol in Microsoft Word:

  • If you have a separate number pad, then press Alt+0176 (press and hold the Alt key while you type 0176)
  • For any keyboard with or without a number pad, press Ctrl+Shift+@, then press the Spacebar.
  • Go to the Insert tab > Symbol — the degree symbol is character code = 00B0, Unicode (hex)
  • If you have Math AutoCorrect turned on, then type \degree (to turn on Math AutoCorrect: File > Options > Proofing > AutoCorrect Options > Math AutoCorrect tab).

If you have a lot of superscripted lower case ‘o’ characters used instead of a proper degree symbols, you can search for them and replace them with the correct symbol:

  1. Open the Find and Replace window (Ctrl+h).
  2. In the ‘Find what’ field, type a lower case o.
  3. With your cursor still in the ‘Find what’ field, click More.
  4. Click Format and select Font.
  5. Click the Superscript checkbox until it has a check mark in it.
  6. Click OK to close the Find Font window.
  7. Put your cursor in the ‘Replace with’ field.
  8. Type ^0176
  9. With your cursor still in the ‘Replace with’ field, click Format and select Font.
  10. Click the Superscript checkbox until it is clear. You may have to click it twice.
  11. Check your Find and Replace window looks like the screenshot below. If it does, click Find Next and then Replace for each one found.
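As an aside, the Alt code 0176 and the character code 00B0 above are the same codepoint written in decimal and hex. Word’s superscript formatting isn’t visible in plain extracted text, so the Find and Replace above has to run inside Word; however, a related mistake (the masculine ordinal ‘º’, U+00BA, typed after a number) can be caught with a plain-text script. A small Python sketch:

```python
import re

degree = "\u00B0"  # DEGREE SIGN: hex 00B0 = decimal 176 (Alt+0176)

def fix_ordinal_degrees(text):
    """Replace the masculine ordinal 'º' (U+00BA) with a true degree sign,
    but only when it directly follows a digit."""
    return re.sub("(?<=\\d)\u00BA", degree, text)
```

The lookbehind keeps legitimate ordinal uses (e.g. ‘nº 5’) untouched.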

Related: Prime and double prime symbols: https://cybertext.wordpress.com/2018/09/23/word-insert-a-prime-and-double-prime-characters/


Word: Macro to set the language for ALL styles

September 21, 2018

One of the issues with setting the language for a Word document is that doing so DOESN’T change the language set for the styles. If you’re lucky, your styles use the same language as your default language, but sometimes they don’t (especially if the document has come from authors in other countries). This can result in some strange behaviour under specific circumstances.

I have a macro for setting the language for all ‘ranges’ in a document, but I needed something to change the language settings for ALL styles in one command. After a bit of internet sleuthing, I came across an answer that looked promising and modified it to suit my purposes. It works! I tested it on a sample document, where I’d set the language for Normal to Alsatian, for Heading 1 to Afrikaans, and for Heading 2 to English (US). The only text I had in the document used Normal style, but that didn’t matter—the language settings for the styles still changed to the one I’d specified in the macro. In my case, that’s English (Australian) [in VBA code that’s wdEnglishAUS].

The only thing you need to change in this macro is the LanguageID. Here are some common ones for English:

  • wdEnglishAUS
  • wdEnglishCanadian
  • wdEnglishNewZealand
  • wdEnglishSouthAfrica
  • wdEnglishUK
  • wdEnglishUS.

Here’s the macro (copy it—some of it may go off the page, so if you type it you may miss some):

Sub ChangeLangStyles()

' Macro to change language in styles
' Adapted from Macropod (17 July 2012)
' http://www.vbaexpress.com/forum/showthread.php?42993-Solved-Macro-to-change-all-styles-to-a-specific-language

Dim oDoc As Document, oSty As Style
Set oDoc = ActiveDocument
    With oDoc
        For Each oSty In .Styles
            On Error Resume Next
            oSty.LanguageID = wdEnglishAUS
            On Error GoTo 0
        Next
    End With
End Sub

I adapted it from one shared by Macropod back in July 2012: http://www.vbaexpress.com/forum/showthread.php?42993-Solved-Macro-to-change-all-styles-to-a-specific-language, and full acknowledgement goes to him.

[Links last checked September 2018]