Latest Algorithm Update Likely To Be A Minefield For Health Social Posts
Facebook has announced two algorithm ranking updates, which went live in June, in an attempt to reduce sensationalist content related to health and well-being.
The updates themselves sound reasonably innocuous, reducing visibility for:
- Posts with exaggerated or sensational health claims
- Posts attempting to sell products or services based on health-related claims
However, the problem is often how updates like this get implemented. Facebook’s machine learning is notoriously buggy.
At SMK, we train hundreds of marketers each month who complain bitterly about how their posts or ads get unreasonably caught up in Facebook’s approval filters.
The latest health changes will no doubt directly impact a wide array of sectors, such as FMCG, healthcare, public sector, beauty and not-for-profit, to name just a few.
Travis Yeh, Facebook Product Manager:
“Posts with sensational health claims or solicitation using health-related claims will have reduced distribution. Pages should avoid posts about health that exaggerate or mislead people and posts that try to sell products using health-related claims.
If a Page stops posting this content, their posts will no longer be affected by this change.”
News Feed updates have been on overdrive in 2019, with one to two changes monthly.
Given the heat that Facebook is under from pressure groups, users, marketers and regulators, it would be naïve to think updates will be slowing down any time soon.
How Will Facebook Filter Suspect Health-Related Posts?
Naturally, the algorithm is powered by Facebook’s machine learning; therefore it will be filtering health-related posts based upon keywords, phrasing and prior Page behaviour.
Travis Yeh, Facebook Product Manager:
“We handled this in a similar way to how we’ve previously reduced low-quality content like clickbait: by identifying phrases that were commonly used in these posts to predict which posts might include sensational health claims or promotion of products with health-related claims, and then showing these lower in News Feed.”
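Facebook has not published its model, but the approach described above can be sketched in a few lines. The phrases and threshold below are purely hypothetical examples, not Facebook’s actual signals:

```python
# Illustrative toy sketch of phrase-based demotion, as described by Facebook:
# identify phrases commonly used in sensational health posts, score each post
# against them, and demote posts that score too highly. All phrases and the
# threshold here are invented for illustration.

SENSATIONAL_PHRASES = [
    "miracle cure",
    "doctors hate",
    "lose weight fast",
    "100% natural remedy",
]

def sensationalism_score(post_text: str) -> int:
    """Count how many flagged phrases appear in the post."""
    text = post_text.lower()
    return sum(phrase in text for phrase in SENSATIONAL_PHRASES)

def should_demote(post_text: str, threshold: int = 1) -> bool:
    """Reduce distribution when the score meets the threshold."""
    return sensationalism_score(post_text) >= threshold
```

In this toy version, a post like “Doctors hate this miracle cure!” would be demoted while an ordinary announcement would not; the real system presumably learns its phrase weights from data rather than a hand-written list.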
As anyone who has spent time working in and around SEO will know, keyword intent is highly nuanced.
Therefore, the obvious problem with this is context, a point made emphatically on Sunday by external auditors reviewing Facebook's current moderation procedures related to hate speech.
Facebook’s Civil Rights Audit – Progress Report June 30th 2019:
“Facebook’s current white nationalism policy is too narrow, because it prohibits only explicit praise, support or representation of the terms ‘white nationalism’ or ‘white separatism’.
The narrow scope of the policy leaves up content that expressly espouses white nationalist ideology without using the term ‘white nationalist’.
As a result, content that would cause the same harm is permitted to remain on the platform.”
The main takeaway for anyone discussing health and wellbeing (even removed by a few degrees of separation) is to monitor your performance closely in the coming weeks.
If in doubt, familiarise yourself with Facebook’s Terms of Service and its constantly evolving community standards and ad guidelines.
Where Does Fake News Start & Why?
Fake news and disinformation are huge, democracy-undermining problems.
Sometimes the spread is malignant propaganda, driven by ‘bad actors’ to subvert political discourse and inflame societal division.
Sometimes, it is driven by supposed ‘good actors’ for political or commercial gain. For example, Donald Trump has tweeted more than 30 times on the ‘dangers’ of vaccines.
Curiously, now in office, and with the US experiencing its highest number of new measles cases in 25 years, President Trump seems to have changed his tune, urging Americans to ‘get their shots’…
Other times, and perhaps more often, fake news and disinformation, health or otherwise, are sown ‘unintentionally’ by the ‘unqualified’.
For example, Taylor Winterstein, wife of the Penrith Panthers’ Frank Winterstein, and Shanelle Cartwright, married to the Gold Coast Titans’ Bryce Cartwright, both had restrictions put on their Instagram accounts for spreading anti-vaxxer messages.
Scale up ‘bad actors’, ‘supposed good actors’ and ‘the unqualified’ by a billion or so, and you get a better sense of the challenge at hand.
It is in attempting to manage this that millions of legitimate communicators get caught by online algorithms as collateral damage.