The Emory Wheel

Watch What You Read Online

I awoke last week to a notification from my Flipboard news app promoting an article headlined “Earth is About to Enter a 30-Year ‘Mini Ice Age’ as a ‘Solar Minimum’ Grips The Planet.” After reading a few paragraphs, I scrolled to the top of the page. The piece was published on Mashable, a site that produces a wide range of content, much of it celebrity gossip and clickbait advice columns.

Immediately suspicious, I searched the web for confirmation. The basis of the Mashable article is true: our sun’s magnetic field goes through roughly 11-year cycles, and the star has now entered a phase of lower surface activity. However, the statement that this change “would have a significant impact on global average temperatures” and the insinuation that such an effect could invalidate climate change predictions were extremely misleading. I shuddered to think how such an article would be interpreted by a climate change skeptic, and wondered how many other readers would go to the trouble of exposing the article for what it is: carefully constructed pseudoscience.

The web we spend so much of our time surfing is inundated not only with blatantly fake news, but also with deceptive headlines, advertisements, clickbait, doctored photos and videos, and unsupported, poorly argued opinions. As Facebook and other media giants come under fire for allowing misleading and dangerous content on their platforms, consumers must learn that the onus of truth-seeking is on them.

The internet is an open system. It allows billions of anonymous users to share information through billions of channels, many of them uncensored, and users simply cannot rely on platforms to sanitize sites for them. Requiring sites themselves to vet their information not only raises freedom-of-speech concerns, but is also naive; there is simply too much content, and there are too many avenues, to stop bad actors from reaching an audience.

Social media platforms have been publicly condemned for their failure to censor problematic content, and many are now attempting to do so. Instagram has allowed users to flag “misinformation” since August 2019, and Twitter announced on Feb. 4 that it would begin flagging “deceptively altered” images and videos. Those efforts will hopefully help curtail the spread of misleading information, but platforms can only do so much.

The true irony is that many sites are expressly designed to encourage the kind of impulsive decisions that promote misinformation. With (usually) just one click, we can share, comment on, retweet or “Like” any content we find. And much of the media we consume is designed to trigger immediate action, whether by inciting anger, pity or self-righteousness.

Consumers are left with two options: submit to the media’s manipulation or commit to routinely fact-checking their sources. Responsible consumers have only the latter.

Madeline Lutwyche (20C) is from Baltimore.