Archive for the ‘Technical writing’ Category


LavaCon2018: Day 3

October 25, 2018

The last day of LavaCon 2018 was full of breakout sessions, with no plenary sessions. Here are my notes from the sessions I attended.

Diversify your content ecosystem

Bernard Aschwanden took us through a heap of information, a tiny amount of which I captured in notes. I’ll have to watch the recording and view the slides later. The notes I did make were:

  • clarify, simplify, and reuse content
  • track/share costs — you can cut costs by sharing content, but even better is to generate revenue. Docs can reduce costs AND generate revenue
  • content is a core business asset
  • content is cross-department and cross-functional — it shouldn’t live in silos
  • search is a headache, find is a solution (think of searching for your car keys [anxiety] versus finding your keys [happiness/relief])
  • think about components, not complete docs
  • reuse components across business verticals
  • how and what to measure — process efficiencies to reduce costs; revenue growth

Cross-format and cross-silo: Lightweight DITA (LWDITA) for intelligent content

(Michael Priestley, IBM)

  • DITA is a standard, which means it’s portable and scalable
  • Why isn’t DITA everywhere? Perceived complexity (too many tags, too hard to customise, steep learning curve) and the fact that it’s XML (software developers mostly use XML for data, so when JSON came along, XML was dead for them; there’s a bias against XML in favour of Markdown, HTML5, and custom formats)
  • Simplify the model — no longer reliant on XML schema; cross-format content standard
  • LWDITA — has only 2 doc types (topic and map), 40 element types (33 from DITA 1.3, 7 for multimedia support), and 3 formats (XML, HTML5, Markdown); see the example after this list
  • LWDITA is less flexible but easier to learn
  • Full DITA — advanced features, more flexibility, mature tools
  • LWDITA — start simple, eventually more tool support, don’t need XML
  • Tools that support LWDITA include oXygen XML Editor, Author, Web Author; SimplyXML Content Mapper; and others
  • Publishing options — DITA-OT; XML Mind DITA Converter; Adobe AEM Publishing
  • people and processes are the hard part — tools are the easy part!
  • more info: http://docs.oasis-open.org/dita/LwDITA/v1.0/LwDITA-v1.0.html and https://wiki.oasis-open.org/dita/LightweightDITASubcommittee/lwditatools
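
To make the three authoring formats concrete, here is a minimal sketch (mine, not from the talk) of a LwDITA topic in the XDITA (XML) form; the topic ID and content are invented, and the DOCTYPE declaration is omitted because the exact DTD reference depends on your toolkit setup. The same topic could equally be authored as HDITA (HTML5) or MDITA (Markdown).

```xml
<!-- Hypothetical XDITA topic: only the small LwDITA element set is used -->
<topic id="install-overview">
  <title>Installation overview</title>
  <shortdesc>What to install, and in what order.</shortdesc>
  <body>
    <p>Install the server component before you install any clients.</p>
    <ul>
      <li><p>Server</p></li>
      <li><p>Client</p></li>
    </ul>
  </body>
</topic>
```

In MDITA, the equivalent would simply be a Markdown file whose first level-1 heading becomes the topic title.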

‘Tis an unweeded garden that grows to seed — cultivating a weed-free content ecosystem

This was an info-filled talk by Helen St Denis, Conversion Services Manager, from Stilo. She talked about pre- and post-conversion tasks for DITA, amongst other things. I tried to keep up with her but these notes may be missing a few points.

Content strategy includes:

  • content modelling
  • taxonomies
  • setting up storage, reuse and publication facilities
  • perhaps style guides

First step is the content audit:

  • what do we have
  • what do we need to keep
  • what’s OK and where is it now
  • what needs to be rewritten
  • what should be moved first

Writing for minimalism:

  • focus on action-oriented approach
  • understand the users’ world
  • recognise the importance of troubleshooting info
  • ensure users can find the info they need
  • remember that every page is page 1
  • rewrite for reuse
  • rewrite for localisation/translation even if not going there yet — consistent terminology, concise, clear (avoid idioms), grammatically complete (don’t forget ‘that’)
  • minimise inline x-refs — move them into a relationship table (see the example after this list)
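
As a sketch of that last point (file names and topics invented by me): a relationship table lives in the DITA map, and the processor generates related links between the topics in each row at publish time, so inline cross-references can be removed from the topics themselves.

```xml
<!-- Fragment of a hypothetical DITA map with a relationship table -->
<map>
  <title>User guide</title>
  <topicref href="t_add_user.dita"/>
  <topicref href="c_user_roles.dita"/>
  <reltable>
    <relheader>
      <relcolspec type="task"/>
      <relcolspec type="concept"/>
    </relheader>
    <relrow>
      <!-- Topics in the same row are linked to each other in the output -->
      <relcell><topicref href="t_add_user.dita"/></relcell>
      <relcell><topicref href="c_user_roles.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```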

Content model:

  • topic-based — smallest unit of content that makes sense by itself
  • 4 basic topics — task, concept, reference, and troubleshooting
  • do not include multiple types of content in the one topic — just one

Tasks:

  • only 1 set of steps per task
  • if there are two ways to do something, you need two tasks
  • doing and undoing something = two tasks
  • improve conversion by adding paragraph breaks between <step><cmd> and <step><result> or <info> (see the task sketch below)
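
For reference, a minimal DITA task with a single set of steps might look something like this sketch (topic and UI labels invented by me). It also includes a <shortdesc>, which is relevant to the next section on findability.

```xml
<!-- Hypothetical DITA task: one <steps> block, one command per step -->
<task id="t_add_user">
  <title>Adding a user</title>
  <shortdesc>Add a user account so a new person can log in.</shortdesc>
  <taskbody>
    <steps>
      <step>
        <cmd>Click <uicontrol>Users</uicontrol>.</cmd>
        <info>The list of existing users is displayed.</info>
      </step>
      <step>
        <cmd>Click <uicontrol>Add</uicontrol> and enter the user details.</cmd>
        <stepresult>The new user appears in the list.</stepresult>
      </step>
    </steps>
    <result>The user can now log in.</result>
  </taskbody>
</task>
```

Undoing the action (deleting the user) would be a separate task topic, as noted above.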

Short descriptions:

  • these help with findability
  • either use everywhere or nowhere — not a mix; everywhere is best, but time consuming

Pre vs post conversion cleanup:

  • if you don’t need the docs immediately, convert after
  • if active docs, do pre-conversion cleanup

Really need to do:

  • topics — break out based on heading levels
  • topic types — may not be needed immediately, but much easier to add at conversion time than after

Authoring conventions:

  • tasks — look for gerunds, ‘how to’
  • concepts — look for ‘about’
  • references — look for titles with command names in lower case
  • paragraph styles — look for styles like ‘procedure heading’
  • consider adding prefixes in text (e.g. T_ for task, C_ for concept, R_ for reference)

Topic types:

  • can allow conversion to determine these based on the content of the topic (e.g. steps = task; syntax diagram = reference)
  • can use in combination

Data model:

  • tasks are trickiest
  • only 1 set of steps per task
  • no sections — stuff before and after MUST be in the correct order

Inline elements:

  • especially important if you will localise/translate
  • some things may not be translated
  • some things will be presented differently in different languages
  • may already have styles/typographic conventions for these
  • use character-level style names (if still working on the docs); if not, consider colour-highlighting the elements (see the inline markup example after this list)
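
A small, invented example of why explicit inline markup matters for localisation: the elements tell the translation process which pieces are commands, file paths or UI labels (and so may stay untranslated), and let each language render them with its own conventions.

```xml
<!-- Invented paragraph using common DITA inline elements;
     the marked-up items would typically not be translated -->
<p>Run the <cmdname>backup</cmdname> command, then check
   <filepath>/var/log/backup.log</filepath> for errors and click
   <uicontrol>Finish</uicontrol>.</p>
```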

Don’t need to worry about:

  • tables
  • links
  • variables
  • conditions
  • having all the answers

Other considerations:

  • graphics — supported format; where stored; findable
  • do graphics have superimposed callouts? Are they easily editable?
  • paragraph breaks — watch out for hard and soft line breaks
  • look out for text in the wrong order (e.g. from text boxes, or from InDesign)
  • eliminate superfluous docs
  • some legacy content may have reuse already built in; some is from copy/paste
  • legacy reuse can be at doc level (e.g. Word master documents) or phrase level (e.g. Word AutoText)

DITA reuse:

  • reuse is at the element level
  • maps can bring in other maps
  • phrase level — conrefs, keys
  • conrefs allow reuse of single DITA element (para, table, single list item, whole list)
  • conref range — the first and last elements MUST be the same type (see the example after this list)
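
A rough sketch of how this looks in practice, with invented file and ID names: a conref pulls a single element out of a ‘warehouse’ topic, while a conref range (conref plus conrefend) pulls in a run of sibling elements, and the elements referenced by conref and conrefend must be of the same type.

```xml
<!-- warehouse.dita: invented topic holding reusable elements -->
<topic id="warehouse">
  <title>Reusable content</title>
  <body>
    <note id="backup_warning" type="caution">Back up your data first.</note>
  </body>
</topic>
```

```xml
<!-- In a referencing topic: reuse a single element by conref -->
<note conref="warehouse.dita#warehouse/backup_warning"/>

<!-- Reuse a range of steps: conref points at the first <step>,
     conrefend at the last; both must be <step> elements -->
<step conref="install.dita#install/step_download"
      conrefend="install.dita#install/step_verify"/>
```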

Identifying reused content:

  • text inserts/snippets — use the file name
  • create conrefs using the filename as the ID
  • each block level element becomes a conref

De-duplicating (dedup’g) content:

  • prune redundancies
  • spreadsheet to track duplications — very painful and slow! Avoid
  • tools can help identify identical and near-match content, but still need a human eye (e.g. Stilo’s OptimizeR, which compares DITA elements, shows the differences, lets you choose whether to de-duplicate, auto-creates conrefs, auto-adjusts maps to refer to the selected topic, and gives a report of what has been dedup’d)
  • allow time for implementing a reuse mechanism

Content 4.0

(Pam Noreault and Chip Gettinger, SDL)

  • Landscape: any device can be used to display content
  • Customer demands: By 2020, customer experience will overtake price and product as a main differentiator

Product content:

  • branded tech product info is 2nd most trusted source (they didn’t state what the 1st was)
  • customer support (phone/online) has dramatically decreased
  • product info must be available in various languages, for various devices and formats
  • customers want to go to one place to get info

Trends:

  • shared content strategy — content mashups, portals, microcontent, blogs, videos, chatbots, Twitter feeds, documentation and product help, KB articles, communities/forums
  • shared enterprise taxonomy — make content findable by applying consistent terminology and metadata
  • device independence — design content for any device; mobile first; responsive design; how will voice interaction affect content
  • design for global customers — plan initially for translation even if not doing so now
  • adaptive personalisation — machine learning, AI, natural language processing in real time

Actions:

  • review info architecture (IA)
  • be flexible and willing to support new use cases
  • consider content granularity
  • taxonomy — seek help from others, including industry experts
  • work within the politics of the organisation to gain allies — get a seat at the table
  • get your IA house in order
  • move ahead in increments
  • gain knowledge through research and people — seek out those who’ve already done it
  • start with a small pilot and expand
  • morph again and embrace change
  • create a global content strategy
  • understand where you are and where you need to be
  • know the gaps and narrow them
  • support your company’s global business
  • improve the quality of your source content
  • don’t create another content silo
  • get the global strategy completed first
  • Google Translate is NOT the solution!
  • research tools, infrastructure and whatever else you need
  • mix strategy with technology
  • empower SMEs, contributors and authors
  • make content findable and relatable
  • connect contextually (e.g. classified searches vs free text)

LavaCon2018: Day 2

October 24, 2018

Day 2 started as per Day 1, with several short (20 minute) plenary sessions.

A journey to intelligent content delivery

The first breakout session I attended was another case study on how a software company transformed their help and documentation offering using Zoomin. In this case, Pam Goodwin described how her small team of 6 authors changed how documentation was viewed at Cherwell, an IT service management company with 450 employees.

Before:

  • focus was on ‘catch up’ documentation
  • dated look
  • poor search capability
  • different help systems and tools used for different products
  • no consolidated search
  • no access outside the software to other information

Interim (2016):

  • used SuiteHelp
  • based on DITA source
  • improved searching
  • standalone systems for each product release
  • needed separate PDF plug-in and PDF delivery model

After (2018):

  • easy to find
  • cross-product and cross-version support
  • multi-language support
  • improved searching
  • on-demand PDF creation by users at point of need
  • responsive design
  • scalable
  • simple delivery model for authors
  • easy maintenance

Why choose Zoomin Docs?

  • met all their requirements
  • powerful search
  • minimal changes to DITA source
  • improved analytics (esp. for searches)
  • on-demand PDFs
  • built-in feedback mechanism
  • past relationship with vendor’s personnel
  • great customer references

How they sold it to decision makers:

  • Timing — new leadership were looking for opportunities to modernise and scale
  • Executive support — Chief Product Officer (CPO) saw tech docs as a valuable marketing tool, esp. once he realised that when he was researching a product, he always checked the docs as part of his decision making
  • Strong program management — saw value and helped navigate the approval process
  • Affordable solution — able to take great strides with minimal investment and compressed timeline

Project challenges:

  • lack of UI/UX support at kick-off
  • struggled with taxonomy shift; once made, then kept it simple
  • quickly pushed multi language support and context-sensitive help to phase [not sure what I wrote there!]
  • table formatting issues
  • automation issues
  • user management
  • need a third testing environment

Results:

  • significant increase in users (27%)
  • number of sessions per user down (i.e. ‘hit and run’)
  • 320% increase in number of page views
  • 91% increase in unique page views
  • 37% decrease in bounce rates
  • 17.5% increase in Net Promoter Score [what is this?] in past year; great customer comments

Creating interactive intelligent style guides

(George Bina, Oxygen)

It has been 20 years since the XML standard was first published, and since Oxygen was created.

A style guide is a set of rules to follow when writing content (e.g. how to style code blocks so the code remains correct but visually readable). A style guide helps you avoid making mistakes.

A style guide should:

  • evolve and grow over time
  • deal with errors/new issues as you find them

But often there are too many rules.

Solution: automate. Auto-detect when content doesn’t follow a specific rule (e.g. Schematron will detect variations in patterns in structured docs; Acrolinx is another option).

Schematron can also check text via regular expressions. Works inside the authoring tool (e.g. Oxygen XML Editor) and is integrated within it so you get messages about potential errors while writing.

With Schematron, you select the rule and provide your parameters, plus a message for the writer.
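
As a rough sketch (the rule, context and message here are mine, not from the talk): a Schematron rule pairs a context, a test and a message for the writer. Something like the following would warn whenever a paragraph contains ‘click on’, assuming the authoring tool runs Schematron validation over the source.

```xml
<!-- Invented Schematron rule: warn when a paragraph contains 'click on' -->
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron"
            queryBinding="xslt2">
  <sch:pattern id="style-guide-terms">
    <sch:rule context="p">
      <sch:report test="matches(., 'click on', 'i')" role="warning">
        Style guide: use 'click', not 'click on'.
      </sch:report>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```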

(Note: I left this session halfway through because while automated style guides are something I wanted to know more about, what he was demonstrating was the integration between an XML/DITA editor [which I don’t use] and Schematron — I couldn’t see how I would ever use this.)

End of day plenaries

The last two plenaries of the day were from David Dylan Thomas (The content strategy of civil discourse: Turning conflict into collaboration) and Megan Gilhooly (The power of learning you were wrong). Both were really excellent!

 


LavaCon2018: Day 1

October 23, 2018

I helped out on the Registration desk this morning. It was a bit of a madhouse, with three of us checking in about 450 people in just on an hour!

By the time I got into the intro and plenary sessions there were no seats left, and it was hard to see the screen and hear the speakers, so I left after Karen McGrane’s keynote.

Some notes from McGrane’s talk:

  • How to create content for any format (including audio only), any device, any screen size, no matter what the ‘next big thing’ is, even for devices not yet created?
  • Still too much reliance on visual styling (as used in books/paper) to create meaning. The web isn’t print — the way people create content has to change.
  • Content ecosystem has to be navigable whatever size device (e.g. watch to stadium screen; digital signage), even if not the primary target device.
  • Need true separation of content from form; presentation-independent content.
  • Content must be readable, browsable, and accessible on any platform/device.

Word and DITA

The first general session I attended was on Microsoft Word and DITA, and featured Doug Corman speaking about the pros and cons of each, and how an add-in from SimplyXML could make it easier for general authors to write in a style compatible with DITA (yes, I checked with him afterwards about the add-in’s applicability for editing, and it’s not suitable).

My notes from this session:

DITA positives:

  • XML standard
  • separates content creation and form
  • supports topic-based authoring
  • information typing
  • re-use at many levels
  • open source
  • large global community of practitioners
  • proven for tech pubs
  • flexible (640+ elements)
  • integration with CMSs and modern processes
  • control through a standard architecture

DITA negatives:

  • 640+ elements!
  • transformation costs from Word, XML, PDF
  • XML editors are complicated
  • DITA tools are expensive
  • expertise required for implementation
  • rework/rekeying of DOCX, PDF may be required.

Microsoft Word positives:

  • ubiquitous (at least 1 billion users)
  • does everything
  • has footnote/bibliography capability
  • has review functions such as track changes and comments
  • publishing format flexibility — fonts, templates

Microsoft Word negatives:

  • does everything (flexibility)
  • full DITA functionality is impossible
  • standards, macros can be violated
  • authoring and publishing are tightly integrated

Best of both (ideal world):

  • XML
  • topic-based authoring
  • information typing
  • re-use at many levels
  • open source architecture
  • large global community of practitioners
  • flexible — 640 elements or fewer
  • integration with CMSs and modern processes

SimplyXML has a plug-in for Word (Content Mapper) that uses the Word API and constrains (severely) the use of Word functions to match the DITA schema. Authors still author in Word, but are limited in how they use it. Cost is $300 for a single user; price decreases dramatically when more than 100 users, and even more so when more than 1000 users.

If you focus on the technical side of DITA/XML, you will fail with everyday Word users in an enterprise. You need to:

  • take a well-defined systematic approach
  • understand and focus on information consumers
  • make the least number of changes
  • comply with content and markup standards
  • use a phased implementation approach — pilot, then go live in stages
  • get support from C-level leadership (i.e. CEO, CFO, COO, etc.)
  • apply KISS — keep it simple, smart person (i.e. ‘don’t try to boil the ocean’)

Information consumers want:

  • accurate, actionable content
  • consistent look and feel (branding)
  • just enough and just in time content

Product knowledge triangle

The next general session I attended was after lunch. It was delivered by Hannan Saltzman and Lawrence Orin (from Zoomin software). Lawrence has only just joined Zoomin; he previously worked for a company that implemented their solution, and presented a compelling case study on that experience.

My notes from this session:

70-85% of site traffic to most tech companies is looking for product content (not just docs; it includes support and forums), with only about 16% going to the main website, and about 1% to sales info.

The product knowledge triangle has three parts: product documentation (writer-generated); knowledge base (KB) articles (field-generated), and forums (user-generated).

Product documentation:

  • often PDFs (e.g. admin, user, installation guides)
  • web Help
  • customers must search for this info, often filtering down by product name, version etc. to find the docs and then having to download individual PDFs to find the one with the info they need

KB/support documentation:

  • includes support, tech notes, and KB articles (usually focusing on a specific task)
  • often kept in a KB database, and accessed by case #
  • typical content includes symptom, cause, solution
  • customer needs to search the KB database separately from the product documentation

Forums (can include chat, talk back):

  • has discussion threads
  • disorganised
  • can be hard to search (especially chats)
  • typically has repeated questions, star responders, and a lot of inconsistency
  • customers have to drill down threads to find answers, even if there is a search engine

Implications for enterprises regarding this triangle:

  • each place a customer goes is a separate place/dataset with a separate search function and results
  • lots of findability issues

Case study — Aternity:

  • product monitors performance of devices in a company
  • information is displayed in browser
  • monthly software releases (Agile process)
  • audience is IT/product helpdesk
  • had a help site within the site (used DITA for authoring, 2 tech writers)

Before:

  • portal with a login required to access the documentation/help
  • classic doc set (e.g. PDFs) that had to be downloaded before you could see if they contained the relevant info; if not, download and try the next one
  • had to search by product, then version, then publication
  • basically books on the web (with TOCs, indexes), and NOT search oriented
  • needed to shift paradigm to a search with the main aim of ‘findability’ (used Zoomin software)

After:

  • a single search box for everything
  • no login required
  • can quickly jump to topics
  • task topics, not a book metaphor
  • device agnostic
  • filters on side of results page to narrow down search by product etc. (all products, all versions, live filters, intelligent context-sensitive help, synonyms)

When trying to convince management of the solution, had to deal with the ‘docs are an evil necessity’ mindset so came up with a ‘Why write docs’ proposal that had a business and customer focus. Used a restaurant menu metaphor [which I think was a brilliant strategy!]:

  • make the menu (docs) public, but never the recipes (code)
  • make the menu gorgeous and tempting
  • don’t worry about the restaurant down the road (your competitors) seeing your menu posted outside — let them see how good you are, but not how you work

The aims of making the docs public are to sell, upsell, maintain loyalty, promote the thrill of what’s new, and the thrill of being the best (i.e. technical marketing!)

A typical topic page:

  • had a customer focus/relevance
  • focused on customer tasks
  • used customer jargon
  • first paragraph had the who, when, what, and WHY, followed by an example
  • included as many rich visuals as possible
  • had glossary popups (allowed topic to be pared back to basics)
  • had a place for user feedback
  • placed related KB (‘Support’) and forum (‘Community’) topic links into a side bar on the page

The result was a modern help portal that:

  • was easy — inviting, easy to find, open, easy to view
  • had a clear aim — external/potential customers (no login required), to sell and upsell the product, loyalty
  • had a customer focus — tasks, examples, rich graphics
  • was exposed — many more hits, easy to publish, search tracking, tweaked synonyms as a result of search tracking, track feedback
  • had unexpected effects/consequences:
    • customers used the product properly, which reduced level 1 support calls
    • the nature of support shifted: basic questions almost disappeared, and questions became more sophisticated
    • tracking allowed benefits to be quantified and raised the profile of the doc team — people stopped thinking of docs as a necessary evil
    • like support, the nature of training shifted (no more 101 courses needed)
    • prospects/pre-sales were going to the help to find out what the product did
    • interest in the product increased
    • other business units wanted to join in.

Before, the web help pages would get 500 to 100 hits per month; after — ~25,000 hits per month. User tracking showed that customers typically spent 4 minutes viewing an article. 45% were ‘hit and run’ users (quick in and out grab of information they needed), while 55% were ‘hit and browse’.

Tracking the top search terms helped them define more synonyms (e.g. ‘db’ for ‘database’), and was used for developing documentation relevant to those search terms. Tracking search errors was also important — as a result they either created more documentation that addressed the missing info, or added synonyms to route users to the correct place. Tracking feedback wasn’t really about where the docs needed work — customers would use it to talk about the product, suggest changes to it, talk about issues they were having with it — the doc team passed these on to the support people.

The impact of the ‘modern help’ was immense, in ways unforeseen when they started. The documentation is now the public side of R&D (having it behind a login defeated that purpose). The main takeaways of the project were:

  • know where your customers are looking for answers
  • ensure content is available from all touchpoints
  • prevent info overload by using filters, related articles, glossary popups
  • think content monetisation (how will making content available drive sales, reduce costs)

LavaCon2018: Workshop day

October 22, 2018

I’m in New Orleans and it’s Sunday, which means it’s workshop day at the LavaCon conference! I attended Melinda Belcher’s Preparing Technical Content for Translation half-day workshop. There were about 15 people in the workshop and several were from other countries, so there were lots of perspectives and good information sharing. Some people were in the early stages of translation projects, others were well into it, and yet others were just trying to get information to tailor their documentation so that it was ready for translation sometime in the future.

Here are my notes from Melinda’s session.

Her focus was on optimising content for translation in terms of:

  • Clarity
  • Structure
  • Format
  • Localisation strategy.

Much of what she had to say was very similar to the principles of the plain language movement.

Clarity

  • Keep sentences brief
  • Use as few words as possible
  • Short words are better than long words
  • Use plain English to make your point
  • Use a single term to identify a single concept
  • Write so your audience can understand you
  • Test your word choice and sentence design

Structure

  • Avoid unnecessary complexity
  • Lighten the cognitive load through strategic delivery of information
  • Use Standard English word order whenever possible
  • Use the active voice rather than the passive
  • Use relative pronouns like ‘that’ and ‘which’
  • Avoid phrasal verbs (a verb combined with one or more particles)
  • Avoid long noun strings

Format

  • Make sure it fits (e.g. German takes much more space than the English equivalent)
  • Be clear with international dates and measurements etc.
  • Allow extra space for translated words
  • Allow extra time for formatting text in languages that read from right to left
  • Make readers aware of other [language] versions

Localisation strategy

  • Avoid humour
  • Strengthen your organisation’s capacity for translation oversight (not just words – cultural nuances, context, fonts, non-Romanised languages, compound words, dialects, other influences)
  • Establish and implement written guidelines for translation methods and for assessing the qualifications of a translator/agency
  • Consider a transcreation process (not necessarily good for long text, but may work well for marketing material [e.g. taglines, slogans, apps])
  • Other approaches to translation: single one-way translation; multiple one-way translation; reverse translation (i.e. translation back into English)

Early on in the session, she made a comment about text embedded in images – get the words out of the images and put them into callouts, captions etc.

She also mentioned a few tools.


EditorsWA Winter Seminar, August 2018

August 28, 2018

On 25 August 2018, I attended and spoke at the annual Winter Seminar, held by EditorsWA, the Western Australian branch of IPEd, the national professional association for editors.

Here are my notes from two of the three sessions; the third session (on efficiency) was mine, so there are no notes for it.

Conflict of interest (Vanessa Herbert)

This was an interesting and thought-provoking session. Vanessa started by explaining what conflict of interest means, and that it can be actual, perceived, or potential. She then spent a bit of time discussing IPEd’s Code of Ethics and Code of Conduct, which members must abide by, and the Conflict of Interest Declaration that IPEd councillors, committee members, contractors or volunteers must sign.

But the most revealing part of the session was when we worked in small groups, discussing the three potential conflict of interest scenarios she posed for us. The biggest takeaway is that what initially appeared to be black and white may not be, and that many shades of grey exist between those black and white stances. The group I was in found all sorts of fuzziness around the edges, making it difficult to come to a firm answer. Vanessa had made us aware of using false justifications, and that was the hardest part to reconcile.

As I said, thought-provoking. The bottom line is to be open and transparent in all dealings.

Scientific writing (David Lindsay)

Some notes I took during David’s session:

  • The theme of all good scientific stories:
    • how and why does it fit (or not) with other scientists’ work
    • how and where does it fit into the ‘real world’
    • what does it mean for science and the real world.
  • The primary aim of a scientific article is to be read by as many people as possible, and for those readers to be influenced by it.
  • These days, the influence of an individual article is measured by the number of citations it gets (i.e. citation indexes), and the influence of a scientific journal is measured by its ‘impact factor’ (i.e. number of articles from that journal cited in the past xx years). Many articles are never cited and many journals have an impact factor <1.
  • The secret of telling a scientific story is based on the principle of expectation:
    • Readers should have some idea of what to expect from the article (informative and interesting title, familiar structure, sections that deliver what’s expected [e.g. scientific method] and build expectation for what’s coming in the next section, writing style that is clear, concise, and brief [avoid being ‘impressive’, otherwise you’ll alienate readers]).
    • The hypothesis is just a prediction of what the scientist expected, and the rest of the article shows evidence to support or reject that hypothesis.
  • The scientific story has these parts:
    • title (must be interesting and informative to attract the reader)
    • introduction (two parts only—the hypothesis, and the reasoning that makes that hypothesis the most plausible explanation)
    • methodology and materials
    • results (prioritise—some are much more important than others, so spend more time and space on these; include those that relate to the hypothesis and those that don’t)
    • discussion (again, prioritise the arguments that support/refute the hypothesis; consequences for others and possibly the ‘real world’; discard anything that just adds fluff and doesn’t help tell the story)
    • references
  • Characteristics of good scientific writing—precise, clear, brief.
  • Every paragraph must have a conclusion and a way to lead into the next paragraph. Every sentence must follow on from the previous sentence.

How a copyeditor can help your business

July 24, 2018

I found this excellent image on the website of Northern Editorial (an editing company based in the UK)—it sums up all the sorts of things I do, with the aim of making you (and your communications) look better.

The text on this image is:

Copy Editors Help Your Business because…

  • They catch: bias, blindspots, politically incorrect language, potential libel, offensive language, copyright problems.
  • They see: what you wrote, not what you thought you wrote; what the readers see, not what you see; holes in your argument; padding in your prose.
  • They find: repetition, overused phrases, ambiguity.
  • They check: readability, facts, links.
  • They fix errors in: grammar, punctuation, format, style, voice.
  • They spot: missing information, mislabelled information, wrong information.
  • They uphold: quality, credibility, standards.
  • They are invisible; they are valuable; they get your message out there and make you look better.

Thanks for allowing us to share this, Northern Editorial!

Update September 2018: Intelligent Editing, the creators of PerfectIt, one of my go-to editing tools, blogged about why you should hire an editor: https://intelligentediting.com/blog/you-should-hire-an-editor/


About editing and editors

June 17, 2018

In my opinion, this Facebook post sums up editing:

In its early days [early 1980s?], the Freelance Editors’ Association of Canada sent its members a series of sentences to edit, to see which were the most common approaches to fixing some kinds of problems. We were in the very very early days of thinking about standards. One sentence, memorably, was edited by 101 editors. Only one pair of editors made the same corrections to it. So there were literally 100 different edits trying to fix a two-line sentence. And almost all of those edits worked perfectly well.

–Greg Ioannou, Editors Association of Earth (Facebook group), posted 16 June 2018

Every editor approaches a sentence in their own way, and applies the conventions and styles THEY are familiar with or have been asked to use. There are no rules — only traditions**, conventions, and guidelines. This is why I’m conflicted about editing exams and tests — whose ‘rules’ are you meant to apply? And whose ‘rules’ do the examiners follow in marking you? What is ‘correct’?

** Some of those ‘traditions’ and beliefs may have been embedded into your brain by your Grade 5 teacher several decades ago, and who’s to say they knew what they were talking about? Who’s to say they weren’t repeating what they’d learned at school several decades before too? How much was ‘assumed wisdom’, passed along from one generation to the next without question — or evidence?