Since being named word of the year in 2017, ‘fake news’ has never been out of the spotlight. More than simply false claims, fake news can have devastating real-world effects. In an era of infinite information, how can we tell what is true and what isn’t?
There is no doubt that the emergence of social media has paved the way to a new era of human interaction, making the constraints of time and space things of the past and allowing people to interact with one another more openly.
In today’s world, with the widespread use of social media and technology applications, the main challenge is data and digital identity security. A few fintech companies have been focusing on new solutions to empower individuals and enterprises to protect themselves. Right of Reply is one of them.
Soon to be listed on the NASDAQ stock exchange, RoR is developing a blockchain-powered platform that allows individuals to first verify their data and create a reputation identity, then use this identity to reply as soon as possible and with equal effect to any damaging or incorrect content online.
Stefania Barbaglio, from Right of Reply, believes new technologies such as blockchain can help counter the effects of fake claims and defamation online: “Thanks to blockchain, we can now directly address the problem of online reputation management and fake news. There is no need for third parties to defend or act as guarantor or solution provider anymore, as any individual and enterprise has the opportunity and the power of action. This is revolutionary.”
It is widely recognised that our new communication technology is not always used for good. While it has brought people closer, it has also allowed the rise of fake news: the product of malicious actors who find in the virtual world the distance and anonymity needed to act in their own interests and purposely harm an individual, an organisation, or even a decision maker.
Fake news is not a straightforward issue; a multitude of reasons could motivate someone to start a fake rumour. It can take many forms, from apparently innocent gossip to a political weapon. In many cases, fake stories are pushed by ‘bot’ accounts created to target and negatively influence the online presence of other users.
Research from the Indiana University Network Science Institute indicates that between 9% and 15% of active Twitter accounts are social bots, meaning it is likely that more than one in ten accounts on the platform are not human.
“The increase in claims arising from content on social media and websites reflects the growing impact and importance of new media compared with traditional news providers,” said Keith Mathieson, head of media at City law firm RPC. “The increase in actions over internet-based communications is a reflection of people’s concerns about their online reputations and the ease with which damaging information about individuals and businesses can be shared and spread,” he added to the Guardian.
In fact, with more than 3 billion users worldwide, social media platforms are increasingly concerned about their role in the fake news industry, and about how to fight it.
During the first half of this year, in an attempt to fight misinformation and fake news, Twitter carried out a massive ‘clean-out’ of fake profiles and bot accounts across its network and reduced malicious activity, a purge that saw tens of millions of accounts deleted.
“Our digital ecosystem is being polluted by a growing number of fake user accounts, so Twitter’s commitment to cleaning up the digital space should be welcomed wholeheartedly by everyone, from users of the platforms, to creators and advertisers. We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we need now,” said Twitter CEO Jack Dorsey in light of the clean-outs.
Twitter has not been the only platform to take action of this nature. WhatsApp, the instant messaging app now owned by Facebook, has limited the number of messages users can forward in order to curb spam and the spread of fake news.
The truth is that despite tech companies’ efforts to vet their users and manage their content, malicious actors evolve quickly and show a high level of proficiency in refining their tactics. Policy changes and calls for regulatory action are indeed vital in tackling such a complex, multi-layered issue.
If technology spurred the growth of fake news, we should also be able to use technology to fight it.