It has also never been easier to manipulate an audience than in the age of social media. Today, "filter bubbles" create different realities for different social, cultural, and ideological groups. In these realities, fake news and the people who could debunk it may never meet.
The evolution of the Internet has gone through three stages. On the first websites, publishing anything was technically difficult. In the second stage, intuitive blogging platforms for text, video, audio, and other content appeared. In the third, these platforms evolved into social networks that brought users together to interact with content and with each other: Facebook, YouTube, Twitter. The list of these brands is constantly updated because progress is accelerating, and new formats for old and new audiences are being tested all the time. In 2021, for example, Clubhouse became world-famous, and TikTok surpassed Facebook and Google as the world's most popular web domain. Messaging apps perform some of the functions of social media, and some of them, such as Telegram, combine the advantages and threats of both. Platform algorithms affect results and interfere with the reproducibility of research. Messengers are a "grey area" for researchers and fact-checkers, not least because end-to-end encryption, designed to protect privacy, makes it almost impossible to track the emergence and spread of disinformation. A study found that 50% of Canadian respondents regularly received fake news in messengers. A balance is needed between privacy, personal security, and countering disinformation.
Dr. Žiga Turk of the University of Ljubljana described five stages of the news process: creation, editing, publication, amplification, and consumption. In the traditional model, content was created mostly by professionals, and responsibility for the next three stages lay with media outlets. Today there is no single point of control, but all of these stages are most often carried out on social media platforms, which gives the platforms a powerful position to influence the process.
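The five stages above, and the shift in who controls them, can be sketched as a simple pipeline. The stage names come from Dr. Turk's description; the "controller" mappings and function names are illustrative assumptions, not part of his model.

```python
# Sketch of the five-stage news process and who controls each stage.
# The controller labels are assumptions for illustration only.
from enum import Enum

class Stage(Enum):
    CREATION = 1
    EDITING = 2
    PUBLICATION = 3
    AMPLIFICATION = 4
    CONSUMPTION = 5

# Traditional model: media outlets controlled the three middle stages.
TRADITIONAL = {
    Stage.CREATION: "professionals",
    Stage.EDITING: "media outlet",
    Stage.PUBLICATION: "media outlet",
    Stage.AMPLIFICATION: "media outlet",
    Stage.CONSUMPTION: "audience",
}

# Today: a single platform often runs the middle stages end to end.
PLATFORM_ERA = {stage: "platform" for stage in Stage}
PLATFORM_ERA[Stage.CREATION] = "anyone"
PLATFORM_ERA[Stage.CONSUMPTION] = "audience"

def controllers(model):
    """Return the set of distinct actors controlling the pipeline."""
    return set(model.values())
```

The point of the sketch is the concentration of control: in the platform era, one actor sits at the editing, publication, and amplification stages at once.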
Services that let even a child create a fake photo or video no longer surprise anyone, and AI has learned to identify signs of editing and to search for original content. But new technologies described as the information-war weapons of the future, such as deepfakes, are coming into play today. A deepfake is a video in which one person's face is convincingly replaced with a face synthesized by a computer from fragments of their photos, videos, and voice recordings already available on the Internet. Deepfakes bring real benefits: they save time on video production, make online learning more individual, and so on. At the same time, their potential for creating fake news is frightening.
Text generators work in a similar way. Their potential is almost unlimited: they can serve as smart assistants, write texts and programs, process datasets, and more. At the same time, within a few years their ability to create disinformation is predicted to surpass that of any troll factory. In the summer of 2020, OpenAI introduced a language generator that completes and creates from scratch English texts indistinguishable from those written by humans. According to a May 2021 report by the US Center for Security and Emerging Technology, the program generates convincing fake tweets and news stories that shift the tone of a discussion and manipulate narratives, for example, for and against Donald Trump. Mass production of such content can affect public opinion: disinformation will require less time and effort, while the reach and effectiveness of campaigns will increase. That is why the technology's authors do not open mass access to it.
The platforms, together with leading universities, are working on deepfake detection software. Google has created a database of about 3,000 such fake videos, produced with various creation methods, to teach AI to identify them. Facebook and Adobe have created similar databases, and new ones continue to appear. The job ahead is to teach algorithms to detect any AI-generated material, be it a deepfake video, text, or images. So far, most of these tools are still in development and not ready for public use. Many platforms have banned the posting of deepfakes, but it is still quite easy for deepfakes to slip through moderation systems.
It is only a matter of time before apps for creating deepfakes are available to anyone with a mobile phone. Responsible creators of such technologies embed special markers into their products to warn viewers that a video has been manipulated. But what works for video may not work for text, at least as of today.
So, there is room for manipulation at each stage of the news process. While developers work on new detection tools, the platforms already have enough levers to stop manipulations at each of these stages.
At the creation stage, it is important to destroy the business model of clickbait, which so far looks quite effective. To do this, it makes sense for large advertising services such as Google AdSense to stop placing ads on fake news websites. Neural networks that help find the best audience for advertising can also steer clear of clickbait resources. In fact, they are already doing this, but so far with insufficient success.
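The idea of cutting ads off from fake-news sites can be sketched as a simple domain blocklist check. The blocklist entries and function name below are invented for illustration; real ad networks such as Google AdSense work very differently and at far larger scale.

```python
# Minimal sketch: refuse to serve ads on pages hosted on known
# fake-news domains. Blocklist contents are hypothetical examples.
from urllib.parse import urlparse

FAKE_NEWS_DOMAINS = {"totally-real-news.example", "clickbait-central.example"}

def should_serve_ads(page_url: str) -> bool:
    """Serve ads only if the page's domain is not on the blocklist."""
    domain = urlparse(page_url).netloc.lower()
    # Match the blocked domain itself and any of its subdomains.
    return not any(domain == d or domain.endswith("." + d)
                   for d in FAKE_NEWS_DOMAINS)
```

A check like this starves the clickbait business model at its source: if the page cannot earn ad revenue, fabricating traffic-bait headlines stops paying.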
At the publication stage, community standards come into play. Banning malicious YouTube videos or Facebook posts is an effective form of regulation. Social networks, each in its own way, play the role of guardians when it comes to adult content, copyright infringement, incitement to hatred, and so on. The problem is the enormous amount of content and the lack of qualified personnel to check it, so platforms rely on algorithms and user complaints. If there are too many complaints, the content is blocked or demonetized: YouTube, for example, deprives channels of the opportunity to earn money from advertising. A war of complaints replaces the war against malicious content. The other extreme is taking the watchdog function to the maximum.
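Threshold-based moderation of the kind described above can be sketched in a few lines. The thresholds and action names are assumptions for illustration; real platforms combine many signals in far more complex, and opaque, ways.

```python
# Hypothetical sketch of complaint-driven moderation: past one threshold
# the content is demonetized, past a higher one it is blocked.
DEMONETIZE_THRESHOLD = 100   # assumed value, for illustration
BLOCK_THRESHOLD = 1000       # assumed value, for illustration

def moderate(complaints: int) -> str:
    """Map a complaint count to a moderation action."""
    if complaints >= BLOCK_THRESHOLD:
        return "blocked"
    if complaints >= DEMONETIZE_THRESHOLD:
        return "demonetized"
    return "visible"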
And finally, the platforms act as amplifiers, showing users some posts and hiding others. Facebook shows us less than 10% of all the content we subscribe to. A post's visibility depends on interest in the page and the post, on the content format (images and videos are more popular than text), on its "freshness", and on interactions, likes, visits, and the platform's own evaluation. Twitter ranks tweets by posting time and "user relevance"; on TikTok, visibility depends on customer preferences. Many of these parameters can be "hacked" by bots. Moreover, the algorithms are non-transparent, which gives platforms the freedom to choose which news to promote and which to hide, and the algorithms themselves may be imperfect. The dependence of the news feed on preferences only thickens the walls of the "echo chambers" around each of us and makes dialogue almost impossible.
Algorithms are created by humans
All methods of combating fake news and manipulation rest on the question of who will identify them, and how.