LinkedIn Aims To Wipe Out Inappropriate InMail Messages
Dealing with idiots is, alas, part and parcel of communicating on the internet.
And the bigger the social footprint one has, the greater the idiot quotient. It is a point made frighteningly clear in a fascinating recent blog post from Tim Ferriss.
In most social channels, the brunt of the problem takes place publicly, since, after all, trolls crave attention. Or rather, they crave negative social reward, as the Australian Psychological Society puts it:
“Individuals seeking a negative social reward may still engage in trolling. But if they don’t receive that negative social reward, then their motivation to engage in this behaviour will likely diminish.”
Hence the best advice, for individuals and organisations alike, remains: DO NOT FEED THE TROLLS…
But over on LinkedIn, problem interactions take place more covertly.
As Grace Tang, Machine Learning Engineer (Anti-Abuse) at LinkedIn, explains:
“We find that reported cases of harassment predominantly stem from private messages rather than the public feed. This finding has led to a series of initiatives and projects to better protect our members against harassment in messaging.
Combating harassment in private messages is especially important as it feels targeted, leading to an acute loss of safety for the victim.”
According to LinkedIn, the sense of being targeted prompts some members to simply block violating members to make the problem “go away”, rather than reporting the message for LinkedIn teams to action.
Therefore, in a welcome series of moves, LinkedIn has introduced an updated strategy to proactively minimise harassment on the platform.
Social Channels Finally Acting On Anti-social Behaviour
LinkedIn's change is the latest in a flurry of similar updates across the major social players.
Since 2018, social channels have shifted towards algorithmically favouring more active forms of user engagement over passive ones.
Hence, on a granular level, photos trump video within social news feeds because, on average, they drive more comments (i.e. active engagement).
However, over the past five years, social engagement has shifted more into private messaging, due to the sewer that is often public social media discourse.
To make public social spaces a more enticing and safer place, we have seen a raft of recent updates aimed at encouraging public discussion and reducing inappropriate and offensive behaviour.
For example: Instagram’s ‘Restrict’ feature, which builds on its existing blocking tools; Twitter limiting who can reply to Tweets; and Instagram’s proactive measures to detect bullying and offensive behaviour in comments.
While these are all positive steps, a quick glance over any social feed shows they are still woefully inadequate. Hence, it is only a matter of time before policymakers intervene.
LinkedIn Harassment Falls Into Three Vile Buckets
While being a creep on LinkedIn can take many forms, most of it falls into three buckets, according to the LinkedIn Engineering Blog.
- Inappropriate Advances: These members send multiple messages soliciting relationships to members they often don’t know
- Targeted Harassment: This includes bringing an off-platform conversation or dispute onto LinkedIn, such as stalking or trolling. These violations are less common and may originate from fake accounts or real members
- Romance Scams: Members who carry out financial scams through fake or hacked accounts using romantic messaging to defraud a member
If and when members experience abuse of any sort on LinkedIn, its engineering teams collate and analyse the related user data.
Using that data, LinkedIn has recently built a machine learning harassment detection system. The system consists of a sequence of three models that together identify the violating members and their harassing messages with high precision.
- First, sender behaviour (e.g., site usage, invitations sent) is scored by a behaviour model. This model is trained using members that were confirmed to have conducted harassment (surfaced via member reports)
- Second, content from the message is scored by a message model. This model is trained using messages that have been reported as and confirmed to be harassment
- Finally, the interaction between the two members in the conversation (e.g., how often do they respond to one another, are most of the messages predicted to be harassment by the message model) is scored by an interaction model. This model is trained using signals from the conversations resulting in harassment
When a message trips this harassment detection system, it triggers a recently launched feature that hides messages detected as harassing and gives recipients the option to un-hide and easily report them.
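To make the cascade concrete, here is a minimal sketch of how three such scorers might be combined. This is purely illustrative: LinkedIn's blog describes the models and their training data but not their implementation, so every function name, feature, and threshold below is an assumption, with trivial placeholder scorers standing in for trained models.

```python
# Hypothetical sketch of a three-model harassment detection cascade.
# All names, scores, and the 0.5 threshold are illustrative assumptions,
# not LinkedIn's actual system.

from dataclasses import dataclass


@dataclass
class Message:
    sender_id: str
    recipient_id: str
    text: str


def behaviour_score(sender_id: str) -> float:
    """Score sender behaviour (e.g. site usage, invitations sent).
    A real model would be trained on confirmed harassers surfaced
    via member reports; here we return a fixed placeholder."""
    return 0.2


def message_score(text: str) -> float:
    """Score message content. A real model would be a text classifier
    trained on reported-and-confirmed harassing messages; here we use
    a toy keyword check as a stand-in."""
    return 0.9 if "harass" in text.lower() else 0.1


def interaction_score(conversation: list[Message]) -> float:
    """Score the interaction between the two members, e.g. what share
    of the conversation's messages the message model flags."""
    flagged = sum(message_score(m.text) > 0.5 for m in conversation)
    return flagged / max(len(conversation), 1)


def should_hide(msg: Message, conversation: list[Message],
                threshold: float = 0.5) -> bool:
    """Combine the three model scores and hide the message when the
    combined signal crosses a (hypothetical) high-precision threshold."""
    combined = (behaviour_score(msg.sender_id)
                + message_score(msg.text)
                + interaction_score(conversation)) / 3
    return combined > threshold
```

In a production system the three models would more likely be chained or stacked (each stage gating or feeding the next) rather than simply averaged, and hidden messages would still be surfaced to the recipient with an un-hide and report option, as the article describes.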
While this recent development is unlikely to be bulletproof, it is a step in the right direction, though how big a step no one yet knows.
For anyone who encounters harassment on LinkedIn: do report it. The road ahead is likely long, and the more data the detection systems have, the better they can be trained and the more reliable they will become.