A recent and rather lengthy ProPublica article claims that WhatsApp’s end-to-end encryption and privacy promises are “not true.” While everyday messages are, as far as we know, still private, it is possible for governments to gain access to your account data.
Facebook has denied the report, claiming that it’s based on a misunderstanding. However, straight from the horse’s mouth, Facebook told 9to5Mac that end-to-end encryption doesn’t mean that no one can ever see a message: when a message is reported, the WhatsApp content moderation team (not Facebook’s) is able to see it in order to resolve the problem.
Does WhatsApp Really Read Your Messages?
End-to-end encryption means messages are encrypted when they leave your device and are only decrypted when they reach the recipient. So nobody in between, not even WhatsApp or Facebook, knows the contents of a message. This encryption is one of the biggest reasons for the app’s popularity.
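The end-to-end principle can be illustrated with a deliberately simplified sketch: the shared key exists only on the two endpoints, so a relaying server sees nothing but ciphertext. This toy one-time-pad example is an illustration of the concept only, not WhatsApp’s actual implementation (which is based on the Signal protocol).

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time-pad XOR, illustrative only; real end-to-end messaging
    # uses authenticated encryption and key exchange, not this.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# The key lives only on the two endpoint devices, never on the server.
shared_key = secrets.token_bytes(32)
message = b"hello"

ciphertext = encrypt(shared_key, message)          # what the server relays
assert decrypt(shared_key, ciphertext) == message  # only the key holder recovers it
```

The point of the sketch is structural: whatever travels through the middle of the network is unreadable without a key that the middle never holds.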
Facebook’s statement is easy to misread, so it’s worth spelling out what WhatsApp’s controls actually do. Blocking is for privacy, not security: it hides an account from showing up in your contacts list. And there is no permanent deletion on WhatsApp – if an account is blocked or deleted, its owner can make a new account and start all over.
So when you report someone on WhatsApp, the app asks for your permission to take a copy of your most recent messages with that person, so it can share them with law enforcement. The permission applies only to the person you’ve flagged – it’s not a blanket agreement allowing WhatsApp access to all your messages in general. The ProPublica report itself mentioned that when this feature was put into action in India, more than 400,000 cases of child exploitation were flagged in just two months, fast-tracking the country toward its goal of keeping WhatsApp safer overall.
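The scoped nature of reporting described above can be sketched as follows. The function name, the message limit, and the data layout are illustrative assumptions for this sketch, not WhatsApp’s real internals.

```python
# Hypothetical sketch of the reporting flow: only the most recent
# messages from the flagged conversation leave the device, never the
# user's other chats or full history.

RECENT_LIMIT = 5  # assumed number of recent messages included in a report

def build_report(chats: dict[str, list[str]], flagged_contact: str) -> list[str]:
    """Return only the last few messages from the flagged conversation."""
    return chats[flagged_contact][-RECENT_LIMIT:]

chats = {
    "alice": ["hi", "lunch?", "ok"],
    "spammer": [f"spam {i}" for i in range(20)],
}

report = build_report(chats, "spammer")
assert len(report) == RECENT_LIMIT          # only recent messages are shared
assert "alice" not in " ".join(report)      # other conversations stay private
```

The design choice this models is the one the article describes: consent is per-contact and per-report, not a standing grant of access.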
Content From the ProPublica Report
While ProPublica’s analysis of WhatsApp’s features is somewhat flawed, there are other takeaways from it. It points out the dark side of content moderation work in this industry, especially at Facebook. Facebook has formed its own internal workforce to review flagged content, which increasingly includes children caught posting inappropriate messages to members of their friends lists or posting sexualized images of themselves.
If the report’s claims hold up, Facebook clearly needs to invest some time in helping its content moderators communicate better with one another. The report also states that WhatsApp moderators mostly deal with people threatening their friends, yet they rarely talk about cases with each other. Spending too little time with a ticket makes it easy for moderators to become detached and incapable of really empathizing with and learning from the people they’re trying to help – which is fine when most of what they deal with involves pranks or trolling, but a real problem when a case actually requires empathy and good judgment.
The app’s content-sharing features have been under fire from its very inception, with San Francisco’s Police Department expressing major concerns about user safety. The issues were highlighted in the ProPublica report, which claimed to have found “false positives, where an innocuous photo sent as a joke to a friend was flagged as having sexual content by one or both people in a chat room”.
One might argue that absolute privacy is a myth, and in some ways this is true, but in other ways it’s not. For example, there are countless apps out there that claim to be “secure” and “private” yet offer neither true encryption nor minimal metadata – even when the actual message data is encrypted, the metadata isn’t. So if your definition of privacy requires no metadata whatsoever, you might want to proceed with caution.