By Michele J. Gelfand and Virginia Choi
From climate-change alarms and “tripledemic” concerns to the Bulletin of the Atomic Scientists’ Doomsday Clock, we are awash in dire warnings. News and social media alert us daily to the dangers of everything from nefarious politicians to natural disasters. All of these warnings — some sincere, some manufactured — are lighting up not only our smartphones but also our brains, prompting us to ask how all the “threat talk” might be affecting us psychologically and socially.
To cut through the hysteria and improve our understanding of credible threats, we and our colleagues have created a “threat dictionary” that uses natural language processing to index threat levels from mass communication channels. In research published in the Proceedings of the National Academy of Sciences, we demonstrate the tool’s use both in identifying invisible historical threat patterns and in potentially predicting future behaviour. (The tool is publicly available to anyone who wants to measure the degree of threat language present in any English-language text.)
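The published tool’s internals are not reproduced here, but the basic idea of dictionary-based scoring can be sketched in a few lines. The word list below is a hypothetical six-word stand-in for the real 240-term dictionary, and the scoring rule (share of tokens that are threat words) is an illustrative assumption, not the paper’s exact method.

```python
import re

# Hypothetical subset standing in for the 240-term threat dictionary.
THREAT_WORDS = {"attack", "crisis", "fear", "looming", "tension", "danger"}

def threat_score(text: str) -> float:
    """Return the share of tokens in `text` that are threat-dictionary words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in THREAT_WORDS)
    return hits / len(tokens)

print(threat_score("A looming crisis stokes fear across markets."))  # 3 of 7 tokens
```

A real pipeline would also handle word inflections and normalise scores across documents of different lengths.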
The threat dictionary builds on past survey research into how chronic threats lead societies to “tighten up” culturally, by imposing strict rules and strong punishments for those who violate them. We have found evidence of a tight/loose divide in all forms of cultures, including between countries (think Singapore versus Brazil), subnational states (Alabama versus New York), organisations (accounting firms versus startups), and social classes (lower versus upper). Yet, we previously lacked a reliable linguistic measure for tracking threat-related talk over time, and for evaluating its relationship to cultural, political and economic trends.
That is why we created the threat dictionary. We wanted a tool to help measure different types of threats over time and the types of cultural responses they elicit. In the past, researchers developed dictionaries by asking people to brainstorm possible keywords. But we wanted both greater breadth and more precision in our mapping of the “threat universe”. So, we created algorithms that scanned Twitter, Wikipedia, and randomly selected websites to find terms that frequently appear alongside words like “threat” in written text. The algorithms retrieved 240 words that people use widely when writing about threats, including “attack”, “crisis”, “fear”, “looming”, and “tension”. Notably, such words can apply to a threat of any magnitude, from a deadly hurricane to a family conflict, making the dictionary broadly applicable.
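The co-occurrence approach described above can be sketched as follows. This is a simplified assumption about the method: it counts words that appear within a fixed window of a seed word such as “threat” across a corpus. The toy corpus, window size, and function name are all illustrative; the actual algorithms scanned Twitter, Wikipedia, and the web at far larger scale.

```python
from collections import Counter
import re

def cooccurring_terms(corpus, seed="threat", window=5, top_k=5):
    """Rank words by how often they appear within `window` tokens of `seed`."""
    counts = Counter()
    for doc in corpus:
        tokens = re.findall(r"[a-z']+", doc.lower())
        for i, token in enumerate(tokens):
            if token == seed:
                lo, hi = max(0, i - window), i + window + 1
                # Count every neighbour in the window, excluding the seed itself.
                for neighbour in tokens[lo:i] + tokens[i + 1:hi]:
                    counts[neighbour] += 1
    return [word for word, _ in counts.most_common(top_k)]

docs = [
    "the threat of attack raised fear and tension",
    "officials warned of a looming threat and rising tension",
]
print(cooccurring_terms(docs, top_k=3))
```

In practice one would filter stopwords (“of”, “and”) and keep only terms whose co-occurrence with the seed exceeds a frequency threshold, so that candidate words like “attack” and “tension” rise to the top.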
We conducted a variety of tests to confirm that our dictionary accurately measures threat associations in text. For example, given that threat levels, particularly the threat of organised violence, have been in historical decline worldwide, we analysed 600 million pages of news articles from 1900 to 2020 to see whether the overall use of threat words in our dictionary also declined over time. We found that it did. But the algorithm also suggests that perceived threat levels could rise again in the next 20 years.
As one would expect, words from the threat dictionary spiked in US newspapers during major military conflicts, such as World War I and the Gulf War. Similarly, when the US Federal Emergency Management Agency has issued declarations of natural disasters, threat language has surged. The threat dictionary also captured the mounting severity of the COVID-19 pandemic in 2020, as reflected in tweets.
In terms of forecasting future developments, the threat dictionary can identify important trends in stock-market activity, cultural norms, and political attitudes. For example, increased threat talk predicts lower daily stock returns in the S&P 500, DJIA, and NASDAQ, and lower innovation, as measured by patent filings. And when Americans face serious threats, words associated with tight (as opposed to loose) cultures — such as “constrain”, “comply”, and “dictate” — tend to be used more widely.
Increased threat talk has also historically been associated with broad-based political “tightening”, reflected, for example, in higher presidential approval ratings or anti-immigrant sentiment. Analysing presidential speeches over the past 70 years, we found that Republican (conservative) presidents alluded to threats more often than did Democratic (liberal) ones.
Equally important, we found threat talk to be contagious. For example, adding a single threat-related word to a tweet increases the expected retweet rate by 18 per cent. As we have seen in recent years, the contagious nature of threat-laden online messages can be especially problematic during major crises, owing to social media’s well-known role in amplifying misinformation and sowing mass panic.
Now that the threat dictionary has been validated, it can be used to examine an array of critical societal issues. We can examine how threat talk motivates conspiracy theories and extremism, drives the spread of online hate and misinformation, and shapes elections, foreign-policy decisions and economic investments. We can also assess the degree to which social-media users amplify their voices by exaggerating both real and fabricated threats. And by directing the threat dictionary at our personal social-media feeds, we can uncover our own possible role in spreading threatening messages. That would help us all make better-informed decisions about what to share and what to screen out.
In business, politics and our daily lives, we are often in the dark when it comes to understanding the nature of existing threats and their probable implications. Like a searchlight, the threat dictionary can scan for linguistic footprints to reveal important societal patterns across geography, populations and history. In doing so, it can help us more reliably distinguish genuine danger from manufactured risks, which is essential to building a safer world.
Michele J. Gelfand is professor of Cross-Cultural Management and professor of Organisational Behaviour and Psychology at Stanford University. Virginia Choi is a doctoral candidate at the University of Maryland. Copyright: Project Syndicate, 2023.
www.project-syndicate.org