Google Docs criticised for ‘woke’ inclusive language suggestions

Critics have described the company’s policy as “speech-policing” and “creepy”. “Housewife” and even a computer “motherboard” have been flagged as not being inclusive by the new system.

Image: A new feature included in Google Docs has been criticised for being 'woke'

Google has been criticised for an “inclusive language” feature that will recommend word substitutions for people writing in Google Docs.


The tool will offer guidance to people writing in a way that “may not be inclusive to all readers” in a similar manner to spelling and grammar check systems.

Although the suggestions are just suggestions – they aren’t forced on writers and the tool may be turned off – critics have described it as “speech-policing” and “profoundly clumsy, creepy and wrong”.

The new feature, officially called assistive writing, will be on by default for enterprise users – business customers who might want to nudge their staff towards particular writing styles.

The language the system favours reflects decades of campaigning for gender-neutral terms (“crewed” instead of “manned”) and against phrases that reflect racial prejudice (“deny list” instead of “blacklist”), as well as more modern concerns about the impact of our vocabulary on how we identify people.

But despite enormous developments in how computers understand natural language, the technology is still in its infancy.

Among the words that the system has flagged in tests are "mankind", "housewife", "landlord" and even a computer "motherboard" – words that may not cause offence.


Google states: “Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience.”

The tool is reminiscent of Microsoft’s infamously annoying assistant Clippy, which interrupted writers’ own prose stylings with often unwelcome suggestions.

Vice News tested the feature by submitting several famous speeches and literary passages, including the Sermon on the Mount in the Bible, and found most of them received unhelpful recommendations.

Notably it also found an interview with the former Ku Klux Klan leader David Duke – in which he spoke about hunting black people – prompted no inclusivity alerts or warnings.

Silkie Carlo, the director of Big Brother Watch, which campaigns for the protection of civil liberties, told The Telegraph: “Google’s new word warnings aren’t assistive, they’re deeply intrusive. With Google’s new assistive writing tool, the company is not only reading every word you type but telling you what to type.

“This speech-policing is profoundly clumsy, creepy and wrong, often reinforcing bias. Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought.”

Lazar Radic of the International Centre for Law and Economics told the newspaper: “Not only is this incredibly conceited and patronising – it can also serve to stifle individuality, self-expression, experimentation, and – from a purely utilitarian perspective – progress.”

Google said: “Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases.

“Our technology is always improving, and we don’t yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”
