Image source: Wikimedia Commons
So, let us talk first about the mob lynchings in India fueled by WhatsApp rumours, mostly because they seem simpler to understand than the Cambridge Analytica controversy and the alleged misinformation campaigns being waged against democratic institutions. This year alone there have been around 20 deaths by mob lynching in various parts of India, triggered by WhatsApp messages identifying random people as child lifters, which have led recipients to behave as though they are a law unto themselves and carry out ghastly murders. Mob lynching has been happening in India quite frequently, and anyone not living under a rock should be aware of it, but what is different now is that these incidents are occurring in different parts of the country and the victims don't necessarily belong to minorities or marginalized classes. This is perhaps what led the Indian government to call on the Facebook-owned platform to take action to prevent the spread of these malicious and life-threatening messages.
WhatsApp has over 200 million users in India, so it has every reason to take note of the situation. So far, WhatsApp has come up with a few measures, such as limiting the number of chats a message can be forwarded to and labeling forwarded messages as such. But the question remains: can a technological solution, especially on a closed ecosystem such as WhatsApp, really help? Moreover, not only are the messages private, WhatsApp claims to offer an end-to-end encrypted messaging platform, and if it stays true to that promise it has no way of knowing whether a particular message is fake news, hate speech or just another dumb joke.
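To make that constraint concrete, here is a minimal, purely hypothetical Python sketch of what a client-side forwarding cap might look like. The names and thresholds (`Message`, `forward_count`, `MAX_FORWARD_CHATS`) are my own invention and do not reflect WhatsApp's actual implementation, which is not public; the point is only that such controls operate on metadata like a forward counter, never on the encrypted message content.

```python
# Illustrative sketch only: a hypothetical client-side forward limit.
# None of these names come from WhatsApp's real code; they exist to show
# that such a cap can rely on metadata (a forward counter) rather than
# on reading the encrypted message body.

from dataclasses import dataclass

MAX_FORWARD_CHATS = 5      # assumed cap on chats per forward action
FREQUENTLY_FORWARDED = 5   # assumed threshold for a "forwarded many times" label


@dataclass
class Message:
    ciphertext: bytes       # the server only ever sees this
    forward_count: int = 0  # metadata carried alongside the ciphertext


def forward(message: Message, target_chats: list[str]) -> list[str]:
    """Forward a message to at most MAX_FORWARD_CHATS chats, updating metadata."""
    allowed = target_chats[:MAX_FORWARD_CHATS]
    message.forward_count += 1
    label = ("Forwarded many times"
             if message.forward_count >= FREQUENTLY_FORWARDED else "Forwarded")
    print(f"[{label}] delivering ciphertext to {len(allowed)} chat(s)")
    return allowed


# Example: the cap silently drops the sixth chat; the content is never inspected.
msg = Message(ciphertext=b"\x9f\x03...")
forward(msg, ["chat1", "chat2", "chat3", "chat4", "chat5", "chat6"])
```

Even in this toy form the limitation is obvious: a cap can slow propagation, but it cannot tell a genuine warning from a deadly rumour.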
As some tech news sites have pointed out, the problem is not so much technological as it is social. One of the pertinent points is that messages on WhatsApp don't go viral because of any algorithm but because users proactively propagate them. With 200 million users and more being added every day, how do you regulate content at such a volume without the ability to screen it? And if users themselves are amplifying these messages, doesn't that suggest the problem lies with society more than with the platform? After all, one of the victims in Tripura was a person hired by the government to spread awareness about the dangers of rumours and fake news circulating on WhatsApp! The Supreme Court of India, taking cognizance of the menace, has asked Parliament to enact laws to curb this evil, but it really falls on the law enforcement agencies to ensure that this kind of violence is checked through swift and preventive action. Ultimately, though, it is society as a whole that can actually stop these crimes from happening.
Now, to the Cambridge Analytica scandal. It may seem like old news, and the firm itself has since wound up, but is the methodology and approach it used going to die with it? Despite so much conversation, debate, criticism and even the Congressional hearing of Zuckerberg, I find popular discourse focuses more on how to make user data more secure and relatively less on how to combat the ways that data is being weaponized. In fact, just this week Facebook identified, announced and reportedly took action against social media campaigns to interfere in the US midterm elections. What Cambridge Analytica did was siphon off a huge amount of personal data of Facebook users and create psychographic segments to micro-target voters with specially crafted messages. Data protection is undoubtedly a very important issue, and there are various aspects to it, including the fact that nearly all user data is now in the hands of a few Internet companies, which is alarming in itself.
However, efforts are underway to put some safeguards against the collection and misuse of data in place, GDPR (which I must admit I have not understood in its entirety) being the latest and most powerful measure. A lot is already being written about it and discussed in various forums, so I am not dwelling on it much here, in the hope that the global community will eventually make serious efforts to secure data protection. But data is the lifeblood of the Internet economy, so things are unlikely to change completely; hence the question of how data is being weaponized as fake news and used against the very people it belongs to.
The revelations from the Cambridge Analytica scandal make an interesting case study, and even if the firm no longer exists, the methodology will surely survive. Psychographic segmentation has been around for quite some time, but with the huge amount of user data taken from Facebook, Cambridge Analytica was able to create highly accurate personality profiles of users, which were then used in micro-targeting campaigns. In fact, the Cambridge Analytica CEO reportedly claimed that they had about 5,000 data points per user that could be used to craft messages appealing directly to a user's core personality traits and motivating triggers. In effect, this means exploiting even hidden psychological traits such as xenophobia, bigotry and misogyny, repeatedly targeting people with subliminal messages that reinforce those traits and perhaps even bring about behavioural change as the elections approach. It is interesting to note that while the Obama campaign's use of social media and analytics was widely hailed as a great move, Trump's campaign remains under a shadow. The reason is that the former used data obtained with consent and traditional messaging methods (much as companies such as Google and Amazon do), while the latter's campaign agencies are suspected of acquiring Facebook users' data by deceit and then using it in the most insidious manner to manipulate user behaviour, especially around voting.
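To illustrate the mechanics being described, here is a toy Python sketch of trait-based micro-targeting: score a user on a handful of personality traits from a behavioural feature vector, then pick the message variant aimed at the strongest targetable trait. Everything in it, the traits, the weights, the messages, is invented for illustration; it is not a reconstruction of Cambridge Analytica's actual pipeline, which has never been made public.

```python
# Toy illustration of psychographic micro-targeting, NOT a reconstruction of
# Cambridge Analytica's methodology. Traits, weights and messages are invented.

import numpy as np

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

# Assumed: a linear model mapping behavioural features (likes, shares, etc.)
# to trait scores. In reality such weights would be learned from survey data.
rng = np.random.default_rng(0)
trait_weights = rng.normal(size=(len(TRAITS), 10))   # 5 traits x 10 features

# Hypothetical message variants, each designed to appeal to one trait.
messages = {
    "openness": "See the ideas they don't want you to hear.",
    "neuroticism": "Your neighbourhood may be less safe than you think.",
    "agreeableness": "Stand with your community this election.",
}


def trait_profile(features: np.ndarray) -> dict:
    """Project a user's behavioural feature vector onto trait scores."""
    scores = trait_weights @ features
    return dict(zip(TRAITS, scores))


def pick_message(features: np.ndarray) -> str:
    """Choose the variant aimed at the user's strongest targetable trait."""
    profile = trait_profile(features)
    target = max(messages, key=lambda trait: profile[trait])
    return messages[target]


user_features = rng.normal(size=10)   # stand-in for the thousands of real data points
print(pick_message(user_features))
```

Swap the toy linear model for one trained on millions of profiles and thousands of data points per user, and the same selection logic becomes the engine of the insidious targeting described next.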
When I say the data was used in an insidious manner, what I mean is the process of creating echo chambers: using fake news to reinforce people's prejudices, insecurities and hatred, and coalescing them into groups that become even more vulnerable to believing every shred of fake news that confirms their bias. Of course, thousands of news websites and media portals already exist to inundate people with information that fuels their prejudices. And it is not just one segment: actors running misinformation campaigns also tend to impersonate groups they are not favourable towards. For instance, Russians have been accused of posing as members of Black Lives Matter and Muslim groups on Facebook, ostensibly to deepen the schism among voters and drive further consolidation among white supremacists, a key segment of Trump supporters. Whether or not Russia actually interfered in the 2016 US election, the damage is done, and I am not talking only about Trump leading the US backwards. The other big damage is that a large section of the American people appear to have lost trust in their electoral process, which is evident in their reluctance to accept Trump as their rightfully elected President; otherwise the spectre of Russia-Trump collusion wouldn't still be hanging over it. Then there is the curious global phenomenon of large numbers of people trusting only information that confirms their bias, perpetually living in their own echo chambers, which is a disturbing trend since a lot of the information they (or rather we) consume tends to be fake.
Where are these echo chambers located? On social media platforms, of course: primarily Facebook, WhatsApp, YouTube and Twitter. Since we have now become passive consumers of information, not actively seeking out news but simply following whatever is topmost on our social feeds, motivated actors find it easy to manipulate their way through the algorithms to make their messages reach us. Facebook in particular has drawn a lot of flak for not doing enough to counter fake news. It has lately been assuring everyone that it is putting mechanisms in place, including AI as well as human fact checkers, to demote fake news and hate speech, but can it really keep its business model intact while suppressing content it considers fake or hateful? Can it reliably distinguish between satire (think The Onion) and outright fake content? And what if it ends up flagging content from dissenters in authoritarian regimes whose only outlet to the world is social media?
As it is, actors backed by regimes currently have plenty of resources to spread misinformation and propaganda on social media with impunity. It would be a difficult task even if there weren't questions about the priorities of the social media giants, and there are quite a few. While Twitter has been accused of not doing enough to suspend handles spewing hate and fake news, Facebook seems content to accommodate pages that border on loony right-wing conspiracy-theory outlets like InfoWars. Even though it banned Alex Jones's personal account for 30 days this week, the Facebook page is still operational. Furthermore, Zuckerberg, while admitting that he personally finds Holocaust denial offensive, sees no problem with allowing Holocaust deniers on the platform, which says a lot about his efforts to fight fake news and hate speech.
Image credit: Scriberia
Perhaps it is time for the Internet giants, especially Facebook, Google and Twitter, to recognize that the principles in their Community Standards need not mean giving fascist-leaning actors and their activities the same treatment as the rest of their users; these companies would never have come into existence, let alone grown so large, in a fascist society. Social media emerged as a digital manifestation of a free society. Allowing it to be overrun by radicals of various hues with clearly fascistic tendencies may grow revenue for a while, but only for a short period; the effects in the real world would ultimately hurt these companies just as badly. Facebook has already lost 3 million active users in Europe, its growth is stalling, and its valuation has plummeted, shedding $119 billion in market value.
Finally, none of this absolves society of its reluctance to address the deep-rooted prejudices and hatred that allow its members to kill innocent people over WhatsApp rumours, or to elect despots on the strength of imaginary fears fueled by social media messaging, causing rifts in social cohesion within societies and between nation states, demonization of "others", or simply apathy towards the suffering of fellow human beings. I am hugely skeptical of new legislation to crack down on fake news on social media, lest oppressive regimes misuse it to stifle well-meaning dissenters and activists by simply labeling them as such. It is society that needs to heal itself, or we are heading back to the age of tribal bloodletting, this time riding on our latest cutting-edge technology.