January 8, 2024
The internet is widely used as a tool for entertainment, to connect with friends and family, and to learn new skills. However, it can also be a battlespace, and just as on land, at sea, and in the air, it’s important that Canada is prepared to face threats online. Defence scientists with Defence Research and Development Canada (DRDC) are looking at ways to identify deception online and protect Canadians from adversarial misinformation, disinformation, and malinformation (MDM).
“The realm of internet deception is growing exponentially,” says Zachary Devereaux, a defence scientist with DRDC’s Valcartier Research Centre. “Research to identify and detect MDM is an important step to protecting Canadians from this growing threat.”
Deception can be any deliberate attempt to induce a false belief in others. By now, most people on the internet have faced some form of deception, or “fake news”, more officially called MDM. Misinformation is false information that users share without knowing it is false, and without deliberately intending to cause harm, while disinformation refers to false information that aims to manipulate, cause damage, or divide and radicalize people, organizations, and countries. Malinformation, on the other hand, refers to information that stems from the truth but is often exaggerated to be misleading.
MDM can be found in a news article published on a sketchy website, or from a bot account on a social media platform. These examples may seem harmless on the surface, but they have the capacity to cause a lot of damage: they erode public trust in institutions and, during elections, could even pose a threat to democracy. When Canadians’ everyday internet use proliferated in the 1990s and early 2000s, malicious actors needed specialized skills such as photo editing and coding to create and spread false narratives. Recently, the rise of easy-to-use and advanced artificial intelligence (AI) programs has made deception online easier than ever. With a single prompt, a malicious actor could develop MDM campaigns with realistic videos, images, and text.
Identifying deception online is no one-person job. In fact, a paper recently presented at the International Command and Control Research and Technology Symposium noted that, with the advancement of technology and simulated content like AI-generated text and images, humans are no better at figuring out what’s real than if they were randomly guessing. This means that people need the help of AI to detect deception; using algorithms and machine learning to conduct mass analyses of data sets with human supervision is the most reliable way to accurately identify bots and MDM online.
These algorithms look for unusual characteristics in the originator of a message, the form of the post, the content of the message, the medium, the way the message is sent and spread, and the target audience. By identifying odd or inhuman behaviour in these components, such as an impossible number of posts per day, bots and false messaging can be discovered more easily. Scientists at DRDC’s Valcartier Research Centre have investigated how an intelligence analysis support system with a text analysis capability could detect such patterns in text. The system also has a rule-based reasoning service that can analyse data and assign points for each true statement in a predetermined list of propositions (e.g., “the user declares they are a bot”). The higher the score, the higher the likelihood of deceptive activity.
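The proposition-scoring idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the specific rules, weights, and account fields below are invented for the example and are not drawn from DRDC’s actual system, which has not been published.

```python
# Hypothetical sketch of rule-based deception scoring: each rule checks one
# proposition about an account, and the points of every true proposition are
# summed into a score. All rules and weights here are illustrative only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Account:
    posts_per_day: float
    declares_bot: bool
    account_age_days: int
    duplicate_post_ratio: float  # fraction of posts that are exact duplicates


# Each rule is a (description, predicate, points) triple. When the predicate
# holds for an account, its points are added to the deception score.
RULES: list[tuple[str, Callable[[Account], bool], int]] = [
    ("user declares they are a bot", lambda a: a.declares_bot, 5),
    ("implausible posting volume (>100/day)", lambda a: a.posts_per_day > 100, 3),
    ("very new account (<7 days old)", lambda a: a.account_age_days < 7, 1),
    ("mostly duplicated content (>50%)", lambda a: a.duplicate_post_ratio > 0.5, 2),
]


def deception_score(account: Account) -> tuple[int, list[str]]:
    """Return the total score and the descriptions of the rules that fired."""
    fired = [(desc, pts) for desc, pred, pts in RULES if pred(account)]
    return sum(pts for _, pts in fired), [desc for desc, _ in fired]


# Usage: a brand-new, high-volume, self-declared bot trips every rule.
score, reasons = deception_score(
    Account(posts_per_day=250, declares_bot=True,
            account_age_days=3, duplicate_post_ratio=0.8)
)
```

In a real pipeline the predicates would be outputs of text analysis and machine learning models rather than hand-coded thresholds, but the additive scoring structure, where more true propositions mean a higher likelihood of deception, is the same.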
In an age where an increasing number of Canadians get their news online, being able to identify fake accounts and information will be a key aspect of defence in the information environment, providing the Department of National Defence and Canadian Armed Forces with more “boots on the ground” in this unfamiliar territory.