Family Vlogs Can Entertain, Empower And Exploit

“Family vlogs can entertain, empower and exploit,” by Rebecca Hall, Queen’s University, Ontario and Christina Pilgrim, Queen’s University, Ontario

YouTube channels belonging to American content creator Ruby Franke were recently scrubbed from the site after the YouTuber was charged with child abuse. Franke was known for making parenting videos on her YouTube channel, 8 Passengers. Her videos frequently featured her family and her six children.

Police in Utah said the charges were laid after Franke’s 12-year-old son climbed out of the window of a home and went to a neighbour to ask for food and water. Police said the boy and his younger sister were found emaciated and required hospitalization.

As blogs and live journals gather internet dust, vlogging has emerged as a new source of intimate entertainment and, for creators, potential income. However, vlogs also raise serious questions about exploitation and the privacy rights of children.

What Is Vlogging?

Vlogs are videos, usually published through social media, that share the creator’s thoughts and experiences. Family vlogs like Franke’s are a popular form of this medium, where parents take viewers into their homes. The content might involve taking viewers along on the family’s daily routine. Family vlogging channels upload videos sharing significant milestones, morning routines and preparations for school.

Many might feel uneasy about content creation that showcases private family life. However, at the same time, vlogs might offer families agency and alternative means of making ends meet at a time of stagnant wages and soaring living costs.

Thinking about vlogging as a kind of social reproduction allows us to think through the double-edged sword of content creation. Social reproduction refers to the labour of life-making: the day-to-day work of care, education and sustenance. Feminist theorists use this term to think about the ways in which caring labour supports and shapes our social, political and economic world.

Social reproduction is “the fleshy, messy and indeterminate stuff of everyday life.” It involves the responsibilities and relationships involved in maintaining daily life.

A Response To The Pressures Of Parenting

Family vlogging did not develop in a vacuum. Instead, the trend towards “mumpreneurs” emerged from within a care crisis. The cost of living is rising, wages are stagnating, and government benefits do not provide the support families need. Parents — and mothers in particular — are facing significant pressures when it comes to caring for children and the household.

Gender equity in the workforce has risen; however, there is still huge inequity when it comes to work in the home. Women are working unprecedented (paid and unpaid) hours, and are often being told they are failing at both.

As a response to these pressures, mothers developed their own online communities to express the highs and lows of parenting. These communities began as “mommy blogs,” but have increasingly moved to vlog format over the years.

Family vlogs can offer intimate counter-narratives to the expectations of parenthood. Mothers can share the anxieties and pressures they face and offer support to one another.

Vlogs: Commodifying Families

However, there can be downsides to the trend. Many family vlogs are highly curated productions that can perpetuate ideas about what constitutes “good” motherhood, rather than challenge racialized, gendered and classist ideals of motherhood. In this way, vlogs are less about connection and more about commodification.

The implications of this monetization are complex. Performing socially desirable forms of motherhood can reproduce racial, sexual and class-based exclusion around who does and who does not count as a good mother. Dominant ideas of “motherhood” are shaped by heterosexual family structures, and there is a long history of surveilling and disciplining racialized parents.

YouTube creators depend on viewership and subscribers to monetize their content. They also use YouTube advertisements, sponsorships and brand deals to generate income. While some creators can make millions of dollars, most do not. Many are precarious workers with fluctuating incomes determined by YouTube’s algorithm.

On the other hand, content creation allows mothers to rebel against economic insecurity by making their motherhood a source of income. While this offers a means of paying the bills, who benefits and who doesn’t when a certain version of the family is commodified?

Kids And Clickbait: What Is The Law?

Exploitation is twofold for family vloggers. Firstly, in the United States, parents are responsible for protecting their underage children’s privacy and for providing consent on their behalf. Many influencers live in or move to the U.S. for creator funds and better networking opportunities. This becomes a problem when the parents exploiting their children are also the ones in charge of providing consent.

Secondly, social media algorithms, which prioritize content that gains the most views, determine whether a video becomes popular on a platform.

The algorithms can change without warning, so creators never know if their content will remain popular. If family vloggers choose to stop showcasing their children on their channels, they might lose viewership and priority within the algorithm.

Existing U.S. laws are unequipped to handle this new form of child labour. The Coogan Act attempts to protect the income of child performers, but it does not account for the unique conditions of child social media stars.

Most recently, Illinois became the first U.S. state to pass a law ensuring that child influencers featured in monetized videos receive financial compensation. The law will take effect in July 2024, and there is hope that other states will follow suit.

Protect Child Influencers

This is a good start, but it is not enough. Policymakers should also look at the steps France has taken to protect child influencers. In 2020, the country passed a law that gives children the right to be forgotten. This means that child influencers can request that platforms remove content featuring them, without needing their parents’ permission.

Laws need to include more than financial compensation for child influencers. There need to be regulations protecting children’s privacy, guaranteeing their right to have content removed and preventing them from being overworked. There also needs to be a call for greater regulation and transparency of the social media algorithms that control and manipulate what is profitable.

Whether it is entertainment, exploitation or employment, family vlogging is a reminder of the complex interconnections between care work and wage work. As the households of strangers stream across our screens, parents and lawmakers must think carefully about the impacts on families and children.


Rebecca Hall, Assistant Professor, Global Development Studies, Queen’s University, Ontario and Christina Pilgrim, Master’s student, Department of Sociology, Queen’s University, Ontario

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Vaccinating People Against Fake News

Researchers are trying to boost people’s immunity to fake news using online games and other strategies. Can these efforts protect the wider population against disinformation?

My first move in the online game Harmony Square is to transform myself into a fake news mastermind. “We hired you to sow discord and chaos,” my fictional boss informs me in a text box that pops up on a stark blue background. “We’ve been looking for a cartoonishly evil information operative. You seemed like a good fit.”

Through a series of text-box prompts, the game goads me to inflame my pretend social media audience as much as possible. I stoke an online firestorm with a ginned-up takedown article about a fictitious local politician: “PLOOG LIED ABOUT PAST—SUPPORTED ANIMAL ABUSE IN COLLEGE!” At management’s behest, I unleash an army of bots to comment approvingly on my story, driving more traffic to it. As I escalate my crusade against Ploog, the game cheers me on.

Harmony Square is one of several games that University of Cambridge researchers have developed to bolster people’s resistance to disinformation. “What we thought would be interesting was having people make their own fake news in a safe environment,” says Cambridge psychologist Jon Roozenbeek, a lead researcher on the games project with fellow psychologist Sander van der Linden. “The goal is to prevent unwanted persuasion.”

These games rest on a single, overarching premise: You can inoculate people against fake news by exposing them to small amounts of such content—much as low doses of live virus can vaccinate people against a disease—if you catch them before they are fully infected by conspiratorial belief. So far, games like Harmony Square are among the best-developed vehicles for disinformation inoculation. Researchers are also proposing and testing other, related strategies, including inoculating students in classroom settings, having people cook up their own conspiracy theories, and creating online classes that teach how to identify common fake-news tactics.

Immunity Against Fake News

Reaching enough people to achieve something akin to herd immunity against disinformation is a significant challenge, however. In addition to bolstering people’s BS detection skills, a broad immunity-building campaign would need to neutralize fake news’s strong emotional pull. “Even as this approach of science and inoculation takes off, the problem has to be solved at the cultural level,” says Subramaniam Vincent, director of journalism and media ethics at Santa Clara University’s Markkula Center. “So many efforts have to come together.”

Once someone has internalized a nugget of false information, it’s very hard to get that person to disavow it.

Mentally vaccinating people against fake news goes back to the 1960s, when psychologist William McGuire proposed making people resistant to propaganda using a strategy he called a “vaccine for brainwashing.” Much as weakened viruses can teach the immune system to recognize and fight off disease, alerting people to false arguments—and refuting those arguments—might keep them from succumbing to deception, McGuire reasoned.

Take, for example, the public health recommendation that everyone visit a doctor every year. In an experiment, McGuire gave people counterarguments against going to the doctor annually (say, that regular visits promote health anxiety and actually lead people to avoid the doctor). Then he poked holes in those counterarguments (in reality, regular doctor visits reduce undue health anxiety). In McGuire’s studies, people became better at resisting false arguments after their beliefs were challenged.

The inoculation messages warned people of impending attempts to persuade them, causing them to recognize that they might be vulnerable. The brain is wired to mount a defence against apparent threats, even cognitive ones; when challenged, people therefore seek fresh ways to protect their beliefs, much as they’d fight back if someone attacked them in a bar. The threat is a critical component of inoculation, says Josh Compton, a Dartmouth speech professor who specializes in inoculation theory. “Once we experience threat, we are motivated to think up counterarguments folks might raise and how we’ll respond,” he says.

Inoculation Theory

In the 1980s and ’90s, experts put inoculation theory into practice with fairly limited goals, like preventing teenage smoking, and limited but promising outcomes. It wasn’t until the mid-2010s, as fake news gained traction online, that Cambridge’s Van der Linden was inspired to take the inoculation concept to a higher level. Like McGuire, he was convinced that “prebunking,” or sensitizing people to falsehoods before they encountered them, was better than debunking fake stories after the fact. Multiple studies show that once someone has internalized a nugget of false information, it’s very hard to get that person to disavow it, even if the original creator posts a correction.

Van der Linden found that focusing on a single issue, as McGuire had done, has its limits. Warning people about lies on a particular subject like smoking may help them fend off falsehoods about that one topic, but it doesn’t help them resist fake news more broadly. So Van der Linden started focusing on building people’s general immunity by cluing them in to the persuasion techniques in every fake-news creator’s toolbox.

In a series of mostly online studies, Van der Linden gave people general warnings about bad actors’ methods. For instance, he told them that politically motivated groups were using misleading tactics, like circulating a petition signed by fake scientists, to convince the public that there was lots of scientific disagreement about climate change. When Van der Linden revealed such fake news tactics, people readily understood the threat and, as a result, got better at sniffing out and resisting disinformation.

The idea of turning fake news inoculation into a game was conceived in 2016 at a bar in the Netherlands. Over beers with friends, Roozenbeek batted around the possibility of using a game to combat false information online. He created a prototype, which he called Bad News. As he researched the idea further, Roozenbeek came across Van der Linden’s studies, and the two agreed to work together on more advanced online inoculation games. Their collaboration expanded on Bad News, then added Harmony Square, which is now freely available online.

Common Fake News Tactics

In tongue-in-cheek fashion, the games introduce players to a host of common fake-news tactics. As I type a fake headline about a local politician in Harmony Square, my boss stresses the importance of stoking people’s fear with inflammatory language. “You missed some. Do better,” she scolds when I don’t include enough incendiary words like corrupt or lie in my headline. “Remember: Use words that upset people.” The game also goads me to create a website that claims to be a legitimate news outlet, sucking people in by projecting the appearance of credibility.

The argument against these dishonest tactics is embedded in the gameplay. The more disinformation you spread, the more unrest you sow in the fictional town of Harmony Square. By the end of the game, normally placid townspeople are screaming at one another. As I play, I get caught up in the narrative of how fake news tactics undermine the community from within.

To evaluate whether the games are truly effective, Roozenbeek and Van der Linden surveyed about 14,000 people before and after they played Bad News. After playing the game to the end, people were better overall at spotting falsehoods, rating the reliability of fake tweets and news reports about 20 per cent lower than they had before. The effects lasted for more than two months. These results are in line with those of other anti-disinformation tactics such as correcting or fact-checking suspect content, according to a meta-analysis of such interventions by researchers from the University of Southern California.

Social scientists see promise in the Cambridge team’s efforts to inoculate people against fake news. “Walking in the perpetrators’ shoes, so to speak, can be very effective for understanding how disinformation can be produced and some reasons why,” says Robert Futrell, a sociologist and extremism researcher at the University of Nevada, Las Vegas, although he has not reviewed specific data from Bad News or Harmony Square.

A serious literacy campaign must do more than train people to ferret out falsehoods; it must also counter the emotional pull of those falsehoods.

Even if they work well, games alone will not be enough to inoculate whole populations against online disinformation. Several million people have played the Cambridge team’s offerings so far, according to Roozenbeek, a tiny fraction of the global population. Daniel Jolley, a psychologist at the University of Nottingham, notes that large-scale inoculation will have to be implemented in a wide range of settings, from classrooms to community centres. Ideally, such programs should reach students during their school years, before they have been extensively exposed to fake news, Stanford education professor Sam Wineburg has argued.

Finland Tried It First

Finland is the first country to try inoculating people against fake news on a national scale. As Russian fake news began making its way across the border into Finland in 2014, the Finnish government developed a digital literacy course for state-run elementary and high schools. The curriculum, still in use, asks students to become disinformation czars, writing their own off-the-wall fake news stories. As they learn how fake news is produced, students also learn to recognize and be sceptical of similar content in the real world.

Elsewhere, researchers and organizations are experimenting with inoculation efforts on a smaller scale. In Australia, communications professor John Cook designed an online course in 2015 to teach people how to detect common disinformation tactics used by climate deniers. So far, more than 40,000 people have enrolled in Cook’s course.

In the United States, nonprofits like the News Literacy Project teach middle and high school students how to distinguish between fact and fiction in the media. NLP has developed a series of 18 interactive lessons, some of which walk students through fake-news creation and give examples of bogus stories likely to spread like wildfire (“Fireman Suspended & Jailed by Atheist Mayor for Praying at Scene of Fire”). More than 55,000 educators have signed up to work with NLP so far. (The Science Literacy Foundation, which supports OpenMind, is also a financial supporter of the News Literacy Project.)

Serious Literacy Program Needed

Adding to the challenge of fake news inoculation, a serious literacy campaign must do more than train people to ferret out falsehoods. It must also counter the emotional pull of those falsehoods. People tend to wade into conspiracies and false narratives when they feel scared and vulnerable, according to Jolley. When their brains flood with stress hormones, their working memory capacity takes a hit, which can affect their critical thinking. “You’ve got the skills” to mentally counter conspiracy theories, Jolley says, “but you may not be able to use them.” Research shows that people who feel socially isolated are also more likely to believe in conspiracies.

By contrast, the more fulfilled and capable people feel, the less vulnerable they are to disinformation. Jolley suggests that community-building ventures in which people feel part of a larger whole, like mentoring programs or clubs, could help individuals grow psychologically secure enough to resist the pull of a conspiracy theory. Making it easier to access mental health services, he adds, might also support people’s well-being in ways that improve their immunity to common fake news tactics.

As the disinformation-vaccine movement grows, one crucial unknown is just how much inoculation is enough. “What’s the equivalent of herd immunity for human society?” Vincent asks. “Do we have to have inoculation for, let’s say, 80 per cent of a country in order for the spread of misinformation to be mitigated?” Calculating that percentage, he notes, is a complex undertaking that would have to account for different ways of reaching people online and the multiple strategies used to counter fake news.

Given how challenging it will be to defang disinformation, it seems fitting that the Cambridge team’s Harmony Square game builds to an open-ended finish. When I complete the game’s final chapter, everyone in town is still fighting over the content my fake news empire churns out, and it’s unclear whether the destruction I’ve caused can be reversed. Surveying the damage, my boss applauds me. “They’re all at each other’s throats now.”


August 30, 2022 — This story was updated to include a more recent estimate of the number of educators who have engaged with the News Literacy Project platform. The number of available NLP lessons was also updated.


Algorithms, Lies, and Social Media

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century.

There was a time when the internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide. But today, democracy is in retreat and the internet’s role as a driver is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox—that the internet is both the saviour and executioner of democracy—can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences. Markets on social media and the internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers. On social media and parts of the internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. In what Harvard social psychologist Shoshana Zuboff calls “surveillance capitalism,” the platforms are incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being.

Social Media Exploiting Cognitive Limitations of Users

This economic model has driven online and social media platforms (however unwittingly) to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise. Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on.

However, when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalize on this: To maximize advertising revenue, they present users with content that captures their attention and keeps them engaged. For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. A study by Mozilla researchers confirms that YouTube not only hosts but actively recommends videos that violate its policies concerning political and medical misinformation, hate speech, and inappropriate content.

Misinformation and Fake News Headlines

In the same vein, our attention online is more effectively captured by news that is either predominantly negative or awe-inspiring.

Misinformation is particularly likely to provoke outrage, and fake news headlines are designed to be substantially more negative than real news headlines. In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. (Presumably because of the revelations, the algorithm was changed.) We also know that political parties in Europe began running more negative ads because they were favoured by Facebook’s algorithm.
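The reporting above describes a curation algorithm that scores posts by reaction type, with anger counting five times as much as happiness. A minimal toy sketch can make the consequence concrete; the reaction names, weights and function names here are illustrative assumptions, not Facebook's actual code:

```python
# Hypothetical sketch of reaction-weighted feed ranking, as described in the
# whistle-blower reports: each reaction type carries a weight, and an "angry"
# reaction counts five times as much as a happy one. All names and numbers
# besides the 5x ratio are invented for illustration.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 1,
    "haha": 1,
    "angry": 5,  # anger-evoking content weighted 5x, per the reporting
}

def engagement_score(reactions):
    """Sum each reaction count multiplied by its type's weight."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

def rank_feed(posts):
    """Order posts by weighted engagement, highest first."""
    return sorted(posts,
                  key=lambda p: engagement_score(p["reactions"]),
                  reverse=True)

calm = {"id": "calm", "reactions": {"like": 100}}
outrage = {"id": "outrage", "reactions": {"angry": 30}}

# 30 angry reactions score 150, while 100 likes score only 100,
# so the anger-evoking post is ranked above the popular calm one.
feed = rank_feed([calm, outrage])
```

Under such a weighting, a post that provokes a modest amount of outrage can outrank one that a far larger audience merely liked, which is exactly the incentive problem the paragraph describes.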

Besides selecting information on the basis of its personalized relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. But until recently, these algorithms went only so far. As Evelyn Douek, a senior research fellow at the Knight First Amendment Institute at Columbia University, points out, before the pandemic, most platforms (including Facebook, Google, and Twitter) erred on the side of protecting free speech and rejected a role, as Mark Zuckerberg put it in a personal Facebook post, of being “arbiters of truth.” But during the pandemic, these same platforms took a more interventionist approach to false information and vowed to remove or limit COVID-19 misinformation and conspiracy theories. Here, too, the platforms relied on automated tools to remove content without human review.

Content Decisions By Algorithms

Even though the majority of content decisions are done by algorithms, humans still design the rules the tools rely upon, and humans have to manage their ambiguities: Should algorithms remove false information about climate change, for instance, or just about COVID-19? This kind of content moderation inevitably means that human decision-makers are weighing values. It requires balancing a defence of free speech and individual rights with safeguarding other interests of society, something social media companies have neither the mandate nor the competence to achieve.

What can be done to shift this balance of power and to make the online world a better place?

None of this is transparent to consumers, because internet and social media platforms lack the basic signals that characterize conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pockets. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data. And if users experience adverse consequences—such as increased stress or declining mental health—it is difficult to link those consequences to social media use. The link becomes even more difficult to establish when social media facilitates political extremism or polarization.

Social Media Content Curation

Users are also often unaware of how their newsfeed on social media is curated. Estimates of the share of users who do not know that algorithms shape their newsfeed range from 27 per cent to 62 per cent. Even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research Center report published in 2019 found that 74 per cent of Americans did not know that Facebook maintained data about their interests and traits. At the same time, people tend to object to the collection of sensitive information and data for the purposes of personalization and do not approve of personalized political campaigning.

Users are often unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation.

People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered partners in the construction of the rules of online governance.

Shifting the Balance of Power

What can be done to shift this balance of power and to make the online world a better place?

Google executives have referred to the internet and its applications as “the world’s largest ungoverned space,” unbound by terrestrial laws. This view is no longer tenable. Most democratic governments now recognize the need to protect their citizens and democratic institutions online.

Protecting democracy itself requires a redesign of the current online “attention economy” that has misaligned the interests of platforms and consumers.

Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online “attention economy” that has misaligned the interests of platforms and consumers. The redesign must restore the signals that are available to consumers and the public in conventional markets: users need to know what platforms do and what they know, and society must have the tools to judge whether platforms act fairly and in the public interest. Where necessary, regulation must ensure fairness.

Regulating Fairness

Four basic steps are required:

  • There must be greater transparency and more individual control of personal data. Transparency and control are not just lofty legal principles; they are also strongly held public values. European survey results suggest that nearly half of the public wants to take a more active role in controlling the use of personal information online. It follows that people need to be given more information about why they see specific ads or other content items. Full transparency about customization and targeting is particularly important because platforms can use personal data to infer attributes—for example, sexual orientation—that a person might never willingly reveal. Until recently, Facebook permitted advertisers to target consumers based on sensitive characteristics such as health, sexual orientation, or religious and political beliefs, a practice that may have jeopardized users’ lives in countries where homosexuality is illegal.
  • Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. A palette of such cues is available. “Endogenous” cues, based on the content itself, could alert us to emotionally charged words geared to provoke outrage. “Exogenous” cues, or commentary from objective sources, could shed light on contextual information: Does the material come from a trustworthy place? Who shared this content previously? Facebook’s own research, said Zuckerberg, showed that access to COVID-related misinformation could be cut by 95 per cent by greying out content (and requiring a click to access) and by providing a warning label.
  • The public should be alerted when political speech circulating on social media is part of an ad campaign. Democracy is based on a free marketplace of ideas in which political proposals can be scrutinized and rebutted by opponents; paid ads masquerading as independent opinions distort that marketplace. Facebook’s “ad library” is a first step toward a fix because, in principle, it permits the public to monitor political advertising. In practice, the library falls short in several important ways. It is incomplete, missing many clearly political ads. It also fails to provide enough information about how an ad targets recipients, thus preventing political opponents from issuing a rebuttal to the same audience. Finally, the ad library is well known among researchers and practitioners but not among the public at large.
  • The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. At present, the only public information about social media algorithms comes from whistle-blowers and from painstaking academic research. Independent agencies must be able to audit platform data and identify measures to stem the flow of misinformation. Outside audits would not only identify potential biases in algorithms but also help platforms maintain public trust by not seeking to control content themselves.

The Way Forward

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. There is considerable public and political scepticism about regulations in general and about governments stepping in to regulate social media content in particular. This scepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship. The Chinese government’s censorship of internet content is a case in point. During the pandemic, some authoritarian states, such as Egypt, introduced “fake news laws” to justify repressive policies, stifling opposition and further infringing on freedom of the press. In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” (as in contradicting official government position) information about the war against Ukraine, causing many foreign and local journalists and news organizations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo. It may feel paternalistic for a government to mandate that platform algorithms must not radicalize people into bubbles of extremism. But it’s also paternalistic for Facebook to weigh anger-evoking content five times more than content that makes people happy, and it is far more paternalistic to do so in secret.

Unaccountable Social Media Corporations

The best solution lies in shifting control of social media from unaccountable corporations to democratic agencies that operate openly, under public oversight. There’s no shortage of proposals for how this might work. For example, complaints from the public could be investigated. Settings could preserve user privacy instead of waiving it as the default.

In addition to guiding regulation, tools from the behavioural and cognitive sciences can help balance freedom and safety for the public good. One approach is to research the design of digital architectures that more effectively promote both the accuracy and civility of online conversation. Another is to develop a digital literacy tool kit aimed at boosting users’ awareness and competence in navigating the challenges of online environments.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

An earlier version of this essay was published on March 24, 2022.



How To Thrive At Work If Your Older Boss Expects Hustle Culture

“How to thrive at work if your older boss expects hustle culture,” by Sorin Rizeanu, University of Victoria

Portia from the comedy-drama series The White Lotus is the epitome of Generation Z. As the assistant for a self-absorbed heiress, she embodies both the strengths and flaws of the younger generation: perpetually connected to the digital world, brimming with intelligence and full of untapped potential.

While Portia’s sometimes obnoxious behaviours can be grating, they also render her relatable. In one scene, she ruminates on a time when the world had more to offer:

“You take a picture and then you realize that everybody’s taken that exact same picture from that exact same spot, and you’ve just made some redundant content for stupid Instagram. And you can’t even get lost anymore because you can just find yourself on Google maps.”

It’s a sentiment that captures Gen Z’s struggle with, and desire for, authenticity. Portia’s journey reflects the broader Gen Z experience. Initially adrift, she eventually receives a wake-up call that drives her to take control of her life.

If you, like Portia, are also struggling to meet your boss’ expectations about work, it may help to understand how older generations have shaped management, and how you can thrive in the modern workplace.

History’s Greatest Hustle Devotees

Millennials are arguably history’s greatest devotees of hustle culture. Those born between 1981 and 1996 are more likely than any other generation to work multiple jobs. One in three Millennials intends to work for an app-based company like Uber or Lyft, or rent out their home to earn extra income.

Millennials grew up alongside the internet on the heels of the 2008 recession. Faced with a precarious job market and financial instability, many turned their passions into side hustles.

These side hustles range from online courses and coaching to podcasts and social media influencing. For Millennials, these endeavours are seen as a source of happiness, employability and skill development.


Millennials are the generation who kickstarted content creation and social media influencing.

Side hustles require them to be efficient, more disciplined and concerned with prioritization and focus. Free time often means switching from the main job to the side hustle, and open weekends are a rarity.

However, for Gen Z and some younger Millennials, the prospect of joining the hustle culture may seem daunting and exhausting. Instead, they want to be financially stable and successful without sacrificing their mental health and well-being.

Generational Differences At Work

Every generation leaves its mark on the workplace. Millennials in particular have done so through their ability to process information in a much more streamlined and efficient fashion than any previous generation.

Millennials increased workplace productivity and remodelled leadership trends. They changed the traditional workplace by wanting to matter, by wanting constant dialogue, and by wanting more opportunity and freedom.

The younger generation is just as ambitious, and they, too, are leaving their mark on work culture. Among Gen Z small business owners, 48 per cent juggle multiple side jobs, 91 per cent work unconventional hours and 81 per cent work while on vacation. Almost half hold two or more jobs.

But not every young professional is a small business owner or interested in hustle culture — some just want more freedom and a greater sense of self-meaning.

Many people in their 20s and 30s are shaking off the typical 9-to-5 career path. For these individuals, success is not only measured in dollars but also in overall happiness and fulfilment.

Unlike Millennials, many Gen Zers don’t want to be caught in a ‘work to live’ or ‘live to work’ trap.

Work-life Balance

As a young person in the workplace, you might find yourself clashing with your older boss. Somewhat hypocritically, older bosses often view candidates with multiple jobs as lacking commitment, despite having done the same thing themselves.

Bridging the generational gap in the workplace requires open communication and mutual understanding. As a young professional, navigating hustle culture in the workplace can be challenging, but not impossible. Here are some tips for success:

  1. Be conscious of your mental well-being. An older boss might relentlessly pressure you for results and greater productivity, often at your cost and their gain. If you do not pay attention, you will soon find yourself stressed, burned out and likely feeling inadequate.
  2. Define your own priorities. Set clear boundaries between your work and personal life: prioritize tasks based on significance and impact, establish set work hours and dedicate time to yourself, family and leisure. Don’t alienate your boss by directly refusing work, but do thoughtfully explain and gradually implement your boundaries.
  3. Be objectively productive. Remember that time spent working doesn’t always equate to productivity. Overworking on the side will negatively impact your workplace performance. You need to measure and evaluate yourself constantly to prove your commitment, be able to innovate and be productive at your main job.
  4. Recognize that not every workplace will be ideal for you. Instead of worrying about climbing the career ladder, seek out a workplace that acknowledges and nurtures the well-being of its employees. This might involve a job with a certain amount of flexibility or remote work, and a culture of collaboration, teamwork and diversity. An enforced top-down hierarchy may not be the best work environment for everyone. If your current workplace doesn’t align with your values and needs, consider exploring other opportunities or even starting your own venture.


Sorin Rizeanu, Assistant Professor, School of Business, University of Victoria

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Thomas Plante: The Mental Healthcare Crisis

Plante, a psychologist and ethicist, weighs the pros and cons of chatbots. Can AI fill in for a shortage of human therapists?

Therapy has changed a lot since the days of Sigmund Freud, from the advent of cognitive behavioural therapy techniques to the use of psychotropic drugs. Now this field is facing a huge shift in who, or rather what, is interacting with the patients. Over the past decade, tech companies have begun rolling out chatbots—interactive AI programs—that use algorithms to dispense mental health advice. In a 2021 survey, 22 per cent of Americans said they had already experimented with one of these computerized therapists. 

Thomas Plante, a psychology professor at Santa Clara University and adjunct professor of psychiatry and behavioural sciences at Stanford University School of Medicine, has watched the rise of mental health machines with a mix of excitement and concern. Plante, who teaches at the Markkula Center for Applied Ethics, wonders about the ethical implications of therapy without the therapist. He also runs a private clinical practice, where he sees up close what works and what doesn’t for his patients. Here, he talks with Corey S. Powell, OpenMind’s co-founder, about the pros and cons of AI therapy. (This conversation has been edited for length and clarity.)


Thomas Plante on Therapy Bots


Statistics show that a record number of Americans rely on therapy and other forms of mental healthcare. I also hear reports of a chronic shortage of therapists, so many people who need help aren’t getting it. Is there a mental healthcare crisis in the United States?

Absolutely. The Surgeon General at the end of 2021 issued an unprecedented advisory about this mental health crisis. The evidence suggests that a lot of people have significant concerns about mental health across the age span, particularly among youth. We call it a mental health tsunami: anxiety, depression, substance abuse, suicidality—they are all off the charts. There are not enough mental health professionals out there. And even if there were, it’s a hassle. It’s expensive, and a lot of things aren’t covered by insurance. It makes sense that people would be turning to web-based approaches for their mental health issues and concerns.

Do you see an important role for AI therapists? Is this technology good enough to fill in some of the shortage of human therapists?

This is all brand new. In the health world, we always say that before you offer treatment to the public for anything, it has to go through randomized clinical trials to demonstrate that it works. You just don’t want a willy-nilly rush into something that may not only not work but could hurt people. I live in Silicon Valley, where one of the mottos is “move fast and break things.” Well, you don’t want to do that when it comes to mental health treatment. You want to do those studies so that, at the end of the day, you’re giving people evidence-based best practices. But some people don’t want to spend the time. They create these things and just launch them. There are issues with that.

You run a clinical practice. Do your patients use any of these online therapy tools and, if so, do they find the tools effective?

I’ve had patients who use some of the popular apps like Calm (a meditation app), which has been around for a while. I think that’s fine. A lot depends on the person’s diagnosis. When people are looking for some stress relief, maybe they have mild to moderate anxiety, these kinds of things make good sense. But if they are experiencing major psychopathology—let’s say, schizophrenia or major depression that includes active suicidality—then I would worry about overreliance.


I’ve noticed that people have a strong tendency to project consciousness and intent onto these AI systems. Could that impulse help make therapy bots seem more believable—or maybe too believable, depending on your perspective?

That’s a great and complicated question. People project their desires and fantasies onto other human beings, and they do it onto devices too. It can help with the placebo effect: If you’re told that this is a great app and it helps lots of people, you’re going to expect it will help you too. But then you might expect perfection from it because you can’t see the flaws. When you’re working with a human being, even if you think they’re very helpful, you can see the flaws. Maybe your therapist has a messy office, or they were late for your session, or they spilt their coffee on their notes. You can see that they’re far from perfect. You don’t see that with the computer-generated chatbot.

The other thing that’s important to mention is that the helpfulness of therapy for most people goes beyond technique. Part of the helpfulness of therapy is having that human being who is on your side, journeying with you as you face difficult things. Something as simple as listening without judgment and with empathy goes a long way. A chatbot is not going to do that.

I can see the limitations of therapy chatbots, but could they be actively harmful? You alluded to the risks of not going through clinical evaluation. What are those risks?

A lot of people—not just the average person on the street but also the average person working in the industry—seem to think that you can’t hurt people with these apps. You know, “They give you some less-than-perfect advice, but it is not exactly like they’re doing brain surgery.” And yet we know that people can be harmed by licensed mental health professionals, never mind these apps. For instance, anorexia nervosa has a fairly high mortality rate. The last thing you want to tell a teenage girl who’s suffering from anorexia is to congratulate her for eating less. (This happened last year with the Tessa chatbot created by the National Eating Disorders Association.)

These are the challenges of companies not going through the kind of approval process they would have to do if they were offering pharmaceuticals. We also don’t know a lot yet about how much they can help. You have to do longitudinal research, and it takes a while for that research to happen.

The reality is that millions of people are already using therapy bots, and that number is surely going to increase. Where do you see this field heading?

More and more in our culture, we are looking toward computer-related ways to solve our problems. Do you want to buy a book? You go to Amazon when you used to go to the bookstore. A lot of people want services through their computer because there are a lot of advantages: The convenience and cost are very compelling. For people who have mild to moderate issues, it’s possible that AI is going to become the go-to approach, although it’s just one tool of a variety of tools that could be used to help people.

The fear is that the technology is oversold, and people might not get effective treatment from a licensed therapist (when they need it) because they’re overly reliant on this convenient app. If you want to get the most out of your life, you need to use all the tools that are available to you. There’s no one-size-fits-all solution.

This Q&A is part of a series of OpenMind essays, podcasts and videos supported by a generous grant from the Pulitzer Center’s Truth Decay initiative.


Swipe Right Or Left? How Dating Apps Are Impacting Modern Masculinity

“Swipe right or left? How dating apps are impacting modern masculinity,” by Treena Orchard, Western University

What it means to be a man is changing. Critical men’s studies, or masculinity studies, is a robust emerging research field that explores how men and masculinity are being transformed by shifting socio-economic, sexual and political conditions in our post-industrial world.

Fascinating new male-identifying sub-cultures and communities have emerged, like mushroomcore and dandies. Yet heteronormative masculinity is typically framed as threatening, toxic or maladaptive, as in the case of fragile masculinity.

In my years of swiping on dating apps, I encountered different kinds of masculinities, as well as some very offensive and bizarre behaviours. Particularly perplexing was how quickly men vanished — or ghosted — when in-person dates were suggested, despite saying that they wanted physical intimacy. This was confusing and seemed to contradict the dominant narrative that men use dating apps primarily for hookups.

If not for sex, what are straight men doing on the apps? Are dating apps impacting masculinity? How do these changes in the gender and tech landscape impact women’s sexual possibilities?

Intimacy Sought on Dating Apps

A sexuality scholar writes about experiences with online dating.
(University of Toronto Press)

As a sexuality scholar and a woman who has sought intimacy with men on dating apps, I find these questions important. I explored many of them in Sticky, Sexy, Sad: Swipe Culture and the Darker Side of Dating Apps, where I applied my academic training as an anthropologist to my dating life.

My book is based on notes that were taken between 2017 and 2022 when I was actively swiping on dating apps. No real names or identifying information are included in these data. Using the researcher’s life as the subject is called auto-ethnography, and it’s an established approach that combines documentation with creative or literary techniques, memoir and cultural critique. Auto-ethnography is about articulating insider knowledge of certain cultural experiences in which the researcher is a participant.

Here are some of the most important things I learned about men and male sexual vulnerability while swiping my way into the dark heart of modern romance.

Lack of Quality Dating Resources

Dating apps provide very little instruction for how to date beyond a few dos and don’ts, but there are other resources men can access, including books and coaching services, to fill the void. The problem is that many are business-oriented or rooted in sexist bro culture ideology, where women are positioned as opponents or prizes to be tricked into submission.

Take the titles of some of the leading dating books for men: The Mystery Method: How to Get Beautiful Women Into Bed; No More Mr. Nice Guy! A Proven Plan for Getting What You Want in Love, Sex and Life; The Foundation: A Blueprint for Becoming an Authentically Attractive Man; and the classic The Game: Penetrating the Secret Society of Pickup Artists.

Also, most dating and relationship coaches, even those who are male, target their services to women, who are consistently framed as searching for love and wanting to understand men. On the other hand, men are typically seen as only wanting sex.

Lack of Follow-Through

One thing that was very clear during my swiping odyssey is that guys want physical intimacy and are eager to talk about it with people they trust. In my in-person dates and text exchanges with men, I would often ask about their experiences with swiping out of a desire to understand the men better and to provide a compassionate ear. Most of them were eager to share their encounters on dating apps, including the factors that were standing in their way of being able to follow through on the intimacy they were seeking.

The first factor is the gamification of dating: how dating apps are marketed as a game with endless options. This keeps men swiping and can make decisions about whom to talk to or get together with feel misaligned with the whole swiping endeavour.

Add to this the links between gaming culture and misogyny, and it’s no wonder men regularly sacrifice sex and intimacy in the name of the swipe.

The second factor is the lack of quality sexual education and the resulting dependence on porn. Many matches I spoke with said they learned about sex from pornography sites like PornHub. These sites often depict women in hyper-sexualized ways and don’t include dialogues about how the men featured feel about what they’re doing in an emotional sense. Excessive porn can lead to sexual dysfunction, which has been linked to an increase in sexual vulnerability and anxiety around sex.

The third factor is the perceived social pressure to be sexually sophisticated, to mirror the adventurous lives of celebrities and sports figures. Some men inflate their number of sexual partners or brag to their friends about doing certain acts to uphold a studly image. Yet in our interactions, they shared feelings of guilt and shame about lying to their friends and not being “good enough” at sex. Sometimes this means men avoid sex altogether.

The fourth factor is the impact of the #MeToo movement, which began in 2017, around the time I started swiping. Men talked about feeling nervous and worried that they may come across as a creep or overly aggressive for something as simple as showing an interest in women on dating apps. They explained that this is why many of them ignore women who communicate with them or why they flake out on scheduled dates.

Women in Driver’s Seat

In many cases just talking about sex, let alone doing it, can feel too risky.

The fifth factor is the way dating platforms are designed, specifically Bumble. Until April 30 of this year, the heterosexual version required women to make the opening move, while men had to wait to be asked out. Intended to put women in the driver’s seat, the role reversal seemed to make men uncomfortable, even though they were aware of the app’s design when they joined.

Indeed, the levels of misogyny on Bumble far exceeded what I experienced on any other swiping platform. This aligns with studies that show how a perceived lack of autonomy and independence — common attributes of masculinity — contribute to toxic masculinity.

Complex Vulnerability

Beneath the negative gloss of toxic masculinity, there is a steady stream of vulnerability regarding sex, intimacy and identity among men in our complex contemporary world. These insights enrich our knowledge about how straight men feel about these issues, including sex, which is something that many are willing to forego rather than get wrong.

Men need and deserve to learn about sex, relationships, gendered communication and themselves in ways that are inclusive, welcoming and supportive. As a recent article in The Washington Post stated, when we teach boys and young men to diminish or ignore their emotions and sexual desires, this leads to poor health outcomes, including rising rates of suicide and unsatisfying or violent relationships.

Let’s make the future of dating sexier and safer by making space for boys, men and male-identifying people to explore and learn about these vital aspects of life in a different way.


Treena Orchard, Associate Professor, School of Health Studies, Western University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Domestic Violence ‘Grown Old’: The Unseen Victims Of Prolonged Abuse

“Domestic violence ‘grown old’: the unseen victims of prolonged abuse,” by Hannah Bows, Durham University

Domestic violence tends to be considered a younger person’s problem. The majority of adverts and campaigns focus on issues affecting younger people – and they often use younger models to increase awareness of domestic violence. But evidence is emerging that violence against women doesn’t stop as they grow older: it’s just less visible.

Counting Dead Women, a study by Karen Ingala Smith, chief executive of London-based domestic violence charity nia, has revealed that the majority of women killed by men’s violence in 2013 were over the age of 40. The victims of two suspected domestic homicides last week were both women over the age of 45.

Yet it is difficult to get any real idea of the prevalence of domestic violence in middle age and later life, because of the lack of studies attempting to capture this data. The main source of information on domestic violence is the Crime Survey for England and Wales, which – among other flaws – has an upper age limit of 59 years for its questions on domestic violence. I have aimed to address this lack of information in my research.

The Longevity Of Violence

Domestic violence against women in middle age and later life occurs in two main contexts. The first is late-onset domestic violence, which begins for the first time in later life, either in a new or existing relationship. The second is domestic violence “grown old”, where women have experienced domestic violence throughout a relationship lasting into their later years. Existing studies indicate that the latter category – domestic violence “grown old” – is the most common. Both contexts can involve a range of behaviours, including financial, emotional, physical and sexual abuse.

In my research, the severity of physical abuse often declined as both the victim and perpetrator aged; however, emotional and sexual violence continued. Several women commented that they had experienced physical violence earlier in their relationship, but that the perpetrators no longer needed to use physical violence to control them. Instead, the threat of violence and emotional abuse was enough to intimidate and manipulate these women.

Compounded Issues For Older Women

When abusive relationships last for long periods of time, it compounds the issues that make it difficult for the women to leave. For one thing, older women may come from generations where women were less likely to work and have financial independence. For another, generational norms and values – particularly for women over 50 – often mean that women believe violence is a normal part of a relationship and that such matters should be kept private and within the family.

The shame of experiencing violence is often deeply embedded – and many women blame themselves for the abuse: for not being a good wife, or for expecting too much from their husbands. Having a nice home and a good job can also prevent women from leaving violent relationships. Compared to younger women, older women are more likely to have lived in their home for several years and have accumulated possessions which are difficult to leave behind.

Once a woman becomes a grandmother and her role includes caring for grandchildren, leaving an abusive relationship becomes even more difficult, because of issues such as feeling responsible for providing childcare or concern about granddad being seen as an abuser. Many of these issues are used as coercive methods by the perpetrator – both the threat of losing the home and years of possessions, and the fear of not seeing the grandchildren again, came up in the study.

Different Needs

The damaging effects of domestic violence have been well documented. But research suggests that women in middle age to later life may be more likely to use alcohol as a way of coping, compared to younger age groups. And older women can be more likely to suffer severe physical injuries than younger women; injuries that can be exacerbated by pre-existing health conditions linked to age, such as arthritis, diabetes or osteoporosis.

There is also evidence that older women are more likely to have a range of mental health problems. Although mental health problems are often seen in younger groups, they are likely to be exacerbated by the lasting abuse experienced by older women.

When it comes to providing support for the victims of domestic abuse, some needs are broadly similar across age groups – for instance, the need for secure housing and financial support. However, the issues caused by physical and mental health conditions and alcohol or drug misuse are often magnified for older women. They are less likely to report violence and more likely to need a range of support services including long-term counselling, help with alcohol or drugs and assistance with finances – many women may not have worked or had any access to money.

Domestic Violence Affects All Age Groups

There are other differences too – years of abuse can erode women’s confidence and they may find it difficult to join in group sessions, particularly when the other women are younger. Depending on their age, accommodation in refuges – which is often based around the needs of women with children – may not be appropriate, due to older women’s difficulties with stairs, for example, or because conditions are too loud. This is made worse by a general lack of awareness that domestic violence affects all age groups and a lack of specific support for older victims. Many of the women I asked said they were either unaware of support services for domestic abuse victims or thought they were only for younger women.

But recognition of the fact that middle-aged and older women are experiencing domestic violence, and have different needs to younger women, is growing. The first refuge specifically for women over the age of 45 opened last week in Teesside in north-east England, as part of Eva Women’s Aid. This is an important first step towards responding to the needs of older survivors – but we must continue to raise awareness about domestic violence directed against older women, and address the gaps in support.


Hannah Bows, Researcher (Sexual Violence and Violence against Women), Durham University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Your AI Therapist Is Not Your Therapist

“Your AI therapist is not your therapist: The dangers of relying on AI mental health chatbots,” by Zoha Khawaja, Simon Fraser University and Jean-Christophe Bélisle-Pipon, Simon Fraser University

With current physical and financial barriers to accessing care, people with mental health conditions may turn to artificial intelligence (AI)-powered chatbots for mental health relief or aid. Although they have not been approved as medical devices by the U.S. Food and Drug Administration or Health Canada, the appeal to use such chatbots may come from their 24/7 availability, personalized support and marketing of cognitive behavioural therapy.

However, users may overestimate the therapeutic benefits and underestimate the limitations of using such technologies, which can further worsen their mental health. Such a phenomenon can be classified as a therapeutic misconception, where users may infer that the chatbot’s purpose is to provide them with real therapeutic care.

With AI chatbots, therapeutic misconceptions can occur in four ways, through two main streams: the company’s practices and the design of the AI technology itself.

Company Practices: Meet Your AI Self-Help Expert

First, inaccurate marketing of mental health chatbots by companies that label them as “mental health support” tools that incorporate “cognitive behavioural therapy” can be very misleading as it implies that such chatbots can perform psychotherapy.

Not only do such chatbots lack the skill, training and experience of human therapists, but labelling them as being able to provide a “different way to treat” mental illness insinuates that such chatbots can be used as alternative ways to seek therapy.

This sort of marketing tactic can be very exploitative of users’ trust in the healthcare system, especially when they are marketed as being in “close collaboration with therapists.” Such marketing tactics can lead users to disclose very personal and private health information without fully comprehending who owns and has access to their data.

The second type of therapeutic misconception is when a user forms a digital therapeutic alliance with a chatbot. With a human therapist, it’s beneficial to form a strong therapeutic alliance where both the patient and therapist collaborate and agree on desired goals that can be achieved through tasks, and form a bond built on trust and empathy.


Since a chatbot cannot develop the same therapeutic relationship as users can with a human therapist, a digital therapeutic alliance can form, where a user perceives an alliance with the chatbot, even though the chatbot can’t actually form one.

[Figure: Examples of how mental health apps are presented: (A) Screenshot taken from the Woebot Health website. (B) Screenshot taken from the Wysa website. (C) Advertisement for Anna by Happify Health. (D) Screenshot taken from the Happify Health website. (Zoha Khawaja)]

A great deal of effort has gone into gaining user trust and fortifying the digital therapeutic alliance with chatbots, including giving chatbots humanistic qualities that resemble and mimic conversations with actual therapists, and advertising them as “anonymous” 24/7 companions that can replicate aspects of therapy.

Such an alliance may lead users to inadvertently expect the same patient-provider confidentiality and protection of privacy as they would with their healthcare providers. Unfortunately, the more deceptive the chatbot is, the more effective the digital therapeutic alliance will be.

Technological Design: Is Your Chatbot Trained To Help You?

The third therapeutic misconception occurs when users have limited knowledge about possible biases in the AI’s algorithm. Marginalized people are often left out of the design and development stages of such technologies, which can lead to them receiving biased and inappropriate responses.

When such chatbots are unable to recognize risky behaviour or provide culturally and linguistically relevant mental health resources, this could worsen the mental health conditions of vulnerable populations who not only face stigma and discrimination but also lack access to care. A therapeutic misconception occurs when users expect the chatbot to benefit them therapeutically but are instead provided with harmful advice.

Lastly, a therapeutic misconception can occur when mental health chatbots are unable to advocate for and foster relational autonomy, a concept that emphasizes that an individual’s autonomy is shaped by their relationships and social context. It is then the responsibility of the therapist to help recover a patient’s autonomy by supporting and motivating them to actively engage in therapy.

AI chatbots present a paradox: they are available 24/7 and promise to improve self-sufficiency in managing one’s mental health. This can not only make help-seeking behaviours extremely isolating and individualized, but also create a therapeutic misconception in which individuals believe they are autonomously taking a positive step towards improving their mental health.

A false sense of well-being is created where a person’s social and cultural context and the inaccessibility of care are not considered contributing factors to their mental health. This false expectation is further emphasized when chatbots are incorrectly advertised as “relational agents” that can “create a bond with people…comparable to that achieved by human therapists.”

Measures To Avoid The Risk Of Therapeutic Misconception

Not all hope is lost with such chatbots, as some proactive steps can be taken to reduce the likelihood of therapeutic misconceptions.

Through honest marketing and regular reminders, users can be kept aware of the chatbot’s limited therapeutic capabilities and be encouraged to seek more traditional forms of therapy. In fact, a therapist should be made available for those who’d like to opt out of using such chatbots. Users would also benefit from transparency on how their information is collected, stored and used.

Active involvement of patients during the design and development stages of such chatbots should also be considered, as well as engagement with multiple experts on ethical guidelines that can govern and regulate such technologies to ensure better safeguards for users.


Zoha Khawaja, Master of Science Student, Health Sciences, Simon Fraser University and Jean-Christophe Bélisle-Pipon, Assistant Professor in Health Ethics, Simon Fraser University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why Rich Parents Are More Likely To Be Unethical

“Why rich parents are more likely to be unethical,” by David M. Mayer, University of Michigan

Federal authorities in 2019 arrested 50 people in a college admissions scam that allowed wealthy parents to buy their kids’ admission to elite universities. Prosecutors found that parents together paid up to US$6.5 million to get their kids into college. The list included celebrity parents such as actresses Felicity Huffman and Lori Loughlin.

Some might ask why these parents failed to consider the moral implications of their actions.

My 20 years of research in moral psychology suggest many reasons why people behave in an unethical manner. When it comes to the wealthy, research shows that they will go to great lengths to maintain their higher status. A sense of entitlement plays a role.

How People Rationalize

Let’s first consider what allows people to act unethically and yet not feel guilt or remorse.

Research shows that people are good at rationalizing unethical actions that serve their self-interest. The success, or failure, of one’s children often has implications for how parents view themselves and are viewed by others. Parents tend to bask in the reflected glory of their children, gaining esteem from their connection to successful offspring. This means parents can be motivated by self-interest to ensure their children’s achievement.

In the case of cheating for their children, parents can justify the behaviour through comparisons that help them morally disengage from an action. For example, they could say that other parents do a lot worse things, or minimize the consequences of their actions through words such as, “My behaviour did not cause much harm.”

Viewing the unethical outcomes as serving others, including one’s children, could help parents create a psychological distance to rationalize misconduct. Several studies demonstrate that people are more likely to be unethical when their actions also help someone else. For example, it is easier for employees to accept a bribe when they plan to share the proceeds with coworkers.

Sense of Entitlement of Rich Parents

When it comes to the wealthy and privileged, a sense of entitlement, or a belief that one is deserving of privileges over others, can play an important role in unethical conduct.


Privileged individuals are also less likely to follow rules and instructions when they believe the rules are unjust. Because they feel deserving of more than their fair share, they are willing to violate norms of appropriate, socially agreed-upon conduct.

Feeling a sense of entitlement also leads people to be more competitive, selfish and aggressive when they sense a threat. For example, white males are less likely to support affirmative action to even the playing field because it threatens their privileged status.

Research suggests that entitlement may come in part from being rich. Wealthy individuals who are considered “upper class” based on their income have been found to lie, steal and cheat more to get what they desire. They have also been found to be less generous. They are more likely to break the law when driving, give less help to strangers in need, and generally give others less attention.

Additionally, growing up with wealth is associated with more narcissistic behaviour, which results in selfishness, expressing a need for admiration, and a lack of empathy.

Consequences of Status Loss

Individuals who think they deserve unfair advantages are more likely to take action to increase their level of status, such as ensuring their children attend high-status universities. Losing status appears to be particularly threatening for high-status individuals.

A recent review of the research on status demonstrates that status loss, or even a fear of status loss, has been associated with an increase in suicide attempts. Individuals have been reported to show physiological changes such as higher blood pressure and pulse.

Such individuals also made greater efforts to avoid status loss, such as paying money and allocating resources to themselves.

In their book “The Coddling of the American Mind,” First Amendment expert Greg Lukianoff and social psychologist Jonathan Haidt make the case that parents, especially in the upper class, are increasingly anxious about their children attending top universities.

These authors argue that given economic prospects are less certain because of stagnating wages, automation and globalization, wealthier parents tend to be particularly concerned about the future economic opportunities for their children.

Feeling Invulnerable

People who feel a sense of power, which often comes along with wealth and fame, tend to be less likely to believe they are vulnerable to the detrimental consequences of unethical behaviour.

Experiencing a psychological sense of power leads to a false feeling of control. It could also lead to increased risk-taking and a decrease in concern for others.

It is possible that some of these moral psychology reasons were behind these wealthy parents cheating on behalf of their children. A desire to go to great lengths to help one’s child is admirable. However, when those lengths cross ethical boundaries, it is a step too far.


David M. Mayer, Professor of Management & Organizations, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why We Sometimes Hate The Good Guy

“Why we sometimes hate the good guy,” by Pat Barclay, University of Guelph

Everyone is supposed to cheer for good guys. We’re supposed to honour heroes, saints and anyone who helps others, and we should only punish the bad guys. And that’s what we do, right?

Well, sometimes.

Most of the time, we do indeed reward co-operators. We also often punish uncooperative people who harm others, who aren’t good team players or who freeload on the hard work of others. But sometimes the good guys also get punished or criticized, specifically because they are so good.

Why would anyone punish or criticize someone for being good? This seems puzzling because it undermines group cooperation. However, it is no anomaly.

This punishment of good co-operators has been discovered in multiple fields, including experimental economics, social psychology and anthropology, where it is variously called “antisocial punishment” or “do-gooder derogation.”

Cooperation and punishment are often studied using economic games with real money, where people can either cooperate or be selfish and can pay to “punish” others for their actions.

While most punishment in these studies is directed at uncooperative group members, approximately 20 per cent of all punishment is directed at the most cooperative group members. Furthermore, while the rates of antisocial punishment vary, it has been found in every society where it has been investigated. Researchers are at a loss to explain why antisocial punishment exists.

“You’re Making Me Look Bad!”

Our research suggests a simple reason why we sometimes hate the good guy: they make us look bad by comparison. Many of us have heard people say: “Stop working so hard, you’re making the rest of us look bad.”


This is the same phenomenon: When one person looks really good, others look bad by comparison. They then have an incentive to stop that person from looking good, especially if they can’t (or won’t) compete.

Just like every other trait, generosity is relative. Someone is only deemed good or generous based on how they compare to others. In the land of Scrooges, a normal person seems like Mother Teresa. In the land of Mother Teresas, a normal person seems like Scrooge.

When faced with a Mother Teresa, how can a normal person compete? One option is to step up one’s game and actively compete to be more generous (“competitive altruism”). A second option is to bring the best co-operators down, Scrooge-like, via do-gooder derogation and antisocial punishment.


This manifests as suppressing someone’s cooperation or work ethic, inferring ulterior motives for altruistic actions, implying real or imagined hypocrisy (“He’s a vegetarian, but wears leather shoes!”), attacking them on unrelated dimensions or outright punishing them.

We recently ran an experiment to test whether competition to look good is what drives antisocial punishment. Our participants were assigned to either a control condition or an experimental condition where they had an incentive to appear more generous than others.

Suppressing The Good

In our control condition, participants played an economic game known as a “public goods game,” where they could donate money to a “public good” which benefited everyone, or keep the money for themselves. We then let participants pay to punish others, and we calculated how much punishment was targeted at the best co-operators.
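The article describes these economic games only in outline. As a rough illustration, here is a minimal sketch of a public goods game with costly punishment; the 1.6 multiplier and the 1:3 cost-to-impact punishment ratio are common experimental defaults, not figures taken from this study.

```python
def public_goods_payoffs(endowment, contributions, multiplier=1.6):
    """Each player keeps what they don't contribute; pooled
    contributions are multiplied and split equally among players."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, punisher, target, spend, impact_ratio=3):
    """Punishment costs the punisher `spend` and reduces the
    target's payoff by `spend * impact_ratio`."""
    payoffs = list(payoffs)  # copy so the input list is unchanged
    payoffs[punisher] -= spend
    payoffs[target] -= spend * impact_ratio
    return payoffs

# Three players with an endowment of 20: a free-rider (0), a
# moderate contributor (10) and a full contributor (20).
payoffs = public_goods_payoffs(20, [0, 10, 20])
# "Antisocial punishment": the free-rider punishes the top co-operator.
payoffs = apply_punishment(payoffs, punisher=0, target=2, spend=2)
```

Note the incentive structure: contributing helps the group but lowers one's own payoff relative to free-riding, which is why punishing the best co-operator can serve a low contributor's competitive interests.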

Our experimental condition was the same as the control condition, except that an additional participant acted as an observer who could see how much everyone donated to the public good. The observer could choose one person as a partner for a subsequent cooperative task, which gave everyone in the group an incentive to appear more cooperative than the others.

We hypothesized that when there was this competition to be chosen as a partner, there would be more punishment of the top co-operators because that’s when social comparisons are more important.

Our results unambiguously supported our hypothesis: There was five times as much punishment of the good co-operators when people competed to be chosen compared to the absence of such a competition.

Furthermore, this antisocial punishment was effective at suppressing the good co-operators, thus preventing the good co-operators from making the bad co-operators look bad. In other words, antisocial punishment worked.

Hating the Good Guy – Why Does It Matter?

Critics often attack the motives of people who protect the environment, seek social justice, donate money or work too hard in organizations. Such good deeds are dismissed as naïve, hypocritical (“champagne liberals”) or as mere “virtue signalling” by those who do not perform those deeds. If left unchecked, this criticism may ultimately reduce how often people do good deeds.

Our research helps us recognize these attacks for what they are: A competitive social strategy, used by low co-operators, to bring others down and stop them from looking better than they do.

By identifying this strategy and calling it out, we can make it less effective, and thus allow good deeds to truly go unpunished.


Pat Barclay, Associate Professor of Psychology, University of Guelph

This article is republished from The Conversation under a Creative Commons license. Read the original article.
