In times of war and conflict, the digital landscape is flooded with images and stories. Many of these images, however, are fabricated or manipulated, making it difficult to separate fact from fiction. As misinformation and disinformation become more common, ordinary users must take ethical responsibility for the content they consume and distribute. This article looks at how to navigate visual deception and disinformation during times of conflict in order to better understand the realities of the world we live in.
What’s the difference between misinformation and disinformation?
To tackle the issue effectively, it is essential to distinguish misinformation from disinformation. Misinformation is false information that is created or shared without the intent to deceive. Disinformation, by contrast, is false information, including visual content, that is deliberately spread in order to deceive and cause harm.
Misinformation spreads quickly at the start of any conflict. For example, false stories circulated claiming that Ukrainian President Volodymyr Zelenskyy had fled Kyiv during the Russian invasion. The rumors were soon refuted, but amid the confusion and uncertainty of the moment they spread widely.
False information is increasingly spread during wars by many actors, including governments, militaries, separatist groups, and private individuals, with the explicit goal of deceiving. In Myanmar, for example, military propagandists circulated photographs taken during the 1994 Rwandan genocide, presenting them as images of Rohingya arriving in Myanmar during the British colonial period. This kind of deception is intended to bolster specific narratives and sway public opinion.
Everyday users’ ethical responsibility
One could argue that because people passively consume content online, they bear no ethical responsibility for the information they come across. This viewpoint, however, is overly simplistic. Users of digital media have the ability to shape the content they see, which makes them partly responsible for the visual deception and disinformation they consume.
The algorithms that drive social media feeds give users some influence over what they see. Individuals may not have complete control over the content, but their interactions with a platform, such as liking, tagging, or commenting on photographs, shape what is shown to them next. Because of this influence, individuals must recognize their ethical responsibility for the content they consume.
The function of algorithms
Social media companies use algorithms to deliver content to users, and these algorithms are shaped by users' previous interactions on the platform. People who engage with images of conflict and violence are likely to be shown more of the same. This dynamic can be problematic because it can lead users down a rabbit hole of progressively extreme content, as was demonstrated in the mid-2010s when YouTube's recommendation algorithm steered viewers toward extremist videos.
While social media networks have policies prohibiting incitement to violence and graphic content, enforcing those policies can be difficult. During recent conflicts, some standards have even been relaxed, allowing posts that advocate violence against specific groups. This permissiveness has helped misinformation and disinformation about violent conflicts spread.
Shouldering the ethical responsibility
Users can take ethical responsibility for their content consumption by changing how they engage with digital media. Viewers can reduce their exposure to misinformation and disinformation by hiding, reporting, or disengaging from violent content. Doing so can also help prevent misleading information from reaching others.
People can also block or unfollow accounts and content creators who have previously published misleading material. This proactive approach lets people reshape the material they encounter and fosters a more responsible digital environment.
To handle the difficulty of verifying images during times of conflict, individuals can use a simple technique known as SIFT: Stop, Investigate the source, Find better coverage, and Trace claims to their original context. The protocol encourages users to pause when they encounter an image, investigate its provenance, seek out broader coverage, and trace quotes and assertions back to where they first appeared.
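The four SIFT steps above amount to a reusable checklist. As a minimal sketch, they can be encoded as data so the same prompts are applied consistently to every piece of content; the function and step wording here are illustrative, not part of any published library.

```python
# A minimal sketch of the SIFT protocol as a checklist data structure.
# Step names follow the protocol; the prompts and function name are
# illustrative assumptions, not a standard implementation.

SIFT_STEPS = [
    ("Stop", "Pause before sharing; notice your emotional reaction."),
    ("Investigate", "Check who originally published the image and why."),
    ("Find", "Look for better, independent coverage of the same claim."),
    ("Trace", "Follow quotes and images back to their original context."),
]

def sift_checklist(description: str) -> list:
    """Return the SIFT prompts applied to one item, e.g. a viral photo."""
    return [f"{step}: {prompt} (item: {description})"
            for step, prompt in SIFT_STEPS]
```

Walking a suspicious post through `sift_checklist("viral conflict photo")` yields the four prompts in order, which could back a browser extension or a simple reminder tool.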
Google’s reverse image search is an excellent tool for tracking down the origins of photos. It lets users submit an image, or a portion of one, and see where else it appears online. While such techniques are valuable, in practice they are applied to only a small fraction of the photos people encounter each day.
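For images that are already hosted at a public URL, a reverse image search can be kicked off programmatically by constructing the search URL. The sketch below uses the long-standing `searchbyimage` URL pattern, which is an unofficial, undocumented endpoint and may change; it is an assumption, not a published Google API.

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for a publicly hosted image.

    NOTE: 'searchbyimage' is an unofficial URL pattern, not a documented
    API; Google may change or redirect it at any time.
    """
    base = "https://www.google.com/searchbyimage"
    return f"{base}?{urlencode({'image_url': image_url})}"
```

Opening the returned URL in a browser shows pages where the image (or near-duplicates) appears, which is often enough to surface a photo's original, pre-conflict context.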
No approach offers complete control over the images seen during conflict, but understanding one's capacity to influence content and adopting verification habits can limit the risks and promote a more honest digital landscape. By taking ethical responsibility for their content consumption, individuals can help curb visual misrepresentation and disinformation in times of conflict.