Goodbye, News at 10; Hello, TikTok
OFCOM’s annual report has found that TikTok is the fastest-growing news source for UK adults. BBC One is still the most popular, but the survey suggests it may not be long before the national broadcaster is knocked off its throne.
The report also revealed that Instagram has become the primary source of news for young people aged 12-15, followed by TikTok and YouTube.
On TikTok, however (or indeed on any social media platform), news sources aren’t qualified journalists or professional newsreaders; they’re members of the public. They’re regular content creators who share updates about current events and news stories in an accessible, understandable, and engaging way that young adults can better relate to.
Whether it’s a video about the situation in Ukraine made by someone who actually lives there, or a ditty about imperialism, TikTok certainly encourages a more creative approach to news than traditional media outlets do. The downside, however, is the concern it raises about the accuracy and impartiality of such content.
Owned by Chinese software company ByteDance, TikTok harvests user data. This became a point of contention in 2020, when then-US President Donald Trump tried to ban the app, condemning it as a ‘national security threat’. In that respect, TikTok is no different from other social media apps that harvest user data: Facebook came under fire in recent years over the Cambridge Analytica (CA) scandal. CA was a UK political consulting firm that collected Facebook data without users’ knowledge and incorporated it into political advertising strategies during the 2016 US Presidential Election.
With such affairs still fresh in recent memory, can we really trust social media platforms to feed us the news from an unbiased perspective? If a news story focused on China, for example, would TikTok, a Chinese-owned company, suppress coverage that presented the country in an unfavourable light? Can anyone trust Facebook to give an unbiased report on two election candidates, when it was revealed the platform had boosted a disproportionate number of pictures of one candidate to swing voters and promoted negative coverage of the opposing candidate? If the day’s news story was that Twitter’s share price had plummeted, would the platform attempt to conceal that information from its users?
There’s also the matter of individual creator bias and the possible spread of misinformation. If a user followed a creator who shared their views but had no regard for verifiable information, they’d be engulfed in an echo chamber that only reinforced the beliefs they already held. During the Covid-19 pandemic, misinformation about vaccines became a widespread problem: people who were already anti-vaxxers consumed media, content and news from other anti-vaxxers and Covid conspiracy theorists, much of which contained unsubstantiated claims. The further people fall down a rabbit hole, the harder it becomes for them to recognise, and push back against, misinformation.
Of course, conventional news channels have biases too. The BBC is regularly accused of political bias, and other outlets, such as GB News, have clear political leanings. However, these channels are still held to OFCOM’s regulations, which require due impartiality across their output.
The internet has moved far beyond being simply a place to look at cute pictures of dogs and chat to your friends – it’s a minefield, especially when it comes to rules and regulations.
News coverage of mass shootings in the US has linked some of this senseless violence to online ‘incel’ groups. These factions are made up of men who have been unsuccessful in their relationships with women and who refer to themselves as ‘involuntary celibates’. They spread hate and misogynistic values and blame women for their failed romances and social lives. A vulnerable young person, who may already feel like an outsider, might stumble across these cult-like groups and find that their feelings and insecurities are validated; they then begin to feed the echo chamber alongside other incels. Violent fantasies are shared; some members take this further and commit atrocities.
How can we police what people consume online? Whether it’s messages from cult-like incel groups, content from anti-vaxxers and Covid conspiracy theorists, or any other source of misinformation and bias, more and more people are not thinking critically about the information they see. Should the responsibility lie with parents, or schools, to monitor the content young people are taking in and to teach them how to spot misinformation? Or should we be pressuring these apps and websites to regulate the content posted there? In 2019, Instagram began hiding ‘like’ counts, in a bid to make the platform a ‘safer place on the internet’ and, hopefully, to reduce the negative impact the app can have on young people’s mental health.
There is no right answer, not immediately anyway. Some users can go online, gather the day’s news from Twitter’s trending hashtags, read the opinions of others and spot the tell-tale signs of an obvious troll posting misinformation before going about their day, remaining just as unaffected as if they’d simply watched BBC Breakfast that morning. Not everyone is like this, however. The internet clearly has the power to brainwash people, particularly the young and vulnerable, with potentially disastrous consequences.