• 0 Posts
  • 169 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • Our interpretation is that people who responded positively to these statements would feel they “win” by endorsing misinformation—doing so can show “the enemy” that it will not gain any ground over people’s views.

    The article glosses over the distinction between endorsing misinformation and believing misinformation. I think people often interpret poll questions as expressions of political affiliation. For example, a person who thinks the covid lockdowns were a mistake might say that covid is caused by 5G because that is the answer that upsets or offends lockdown supporters, not because they think it is the literal truth. In other words, what the authors are seeing is not necessarily sincere belief but a deliberate, politically motivated endorsement of statements known to be false.

    Edit: a blog post I like that addresses a similar phenomenon:

    You can see that after the Ferguson shooting, the average American became a little less likely to believe that blacks were treated equally in the criminal justice system. This makes sense, since the Ferguson shooting was a much-publicized example of the criminal justice system treating a black person unfairly.

    But when you break the results down by race, a different picture emerges. White people were actually a little more likely to believe the justice system was fair after the shooting. Why? I mean, if there was no change, you could chalk it up to white people believing the police’s story that the officer involved felt threatened and made a split-second bad decision that had nothing to do with race. That could explain no change just fine. But being more convinced that justice is color-blind? What could explain that?

    My guess – before Ferguson, at least a few people interpreted this as an honest question about race and justice. After Ferguson, everyone mutually agreed it was about politics.








  • The study revealed that, while transparency about good news increases trust, transparency about bad news, such as conflicts of interest or failed experiments, decreases it.

    Yes, that’s generally how a Bayesian agent would assess how trustworthy an institution is. A failed attempt to hide “bad news” is stronger evidence that an institution is untrustworthy than a frank admission is, but the frank admission is still a reason to revise one’s estimate of trustworthiness downward.
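
The update described above can be sketched numerically. All probabilities here are invented purely for illustration; the only point is the shape of the result, not the specific numbers:

```python
# Illustrative Bayesian update: how an observer's belief that an institution
# is trustworthy changes after seeing how it handles bad news.
# All likelihoods below are assumed for the sake of the example.

def posterior(prior, p_evidence_if_trustworthy, p_evidence_if_untrustworthy):
    """Bayes' rule: P(T | E) = P(E | T) * P(T) / P(E)."""
    p_evidence = (p_evidence_if_trustworthy * prior
                  + p_evidence_if_untrustworthy * (1 - prior))
    return p_evidence_if_trustworthy * prior / p_evidence

prior = 0.7  # assumed prior belief that the institution is trustworthy

# Assumed likelihoods: a trustworthy institution is more likely to admit
# bad news frankly; an untrustworthy one is more likely to try to hide it.
after_admission = posterior(prior, 0.30, 0.60)  # frankly admits bad news
after_caught = posterior(prior, 0.05, 0.40)     # caught trying to hide it

print(f"prior:                 {prior:.2f}")
print(f"after frank admission: {after_admission:.2f}")
print(f"after failed cover-up: {after_caught:.2f}")
```

With these made-up numbers both observations push the estimate below the prior, but the failed cover-up pushes it down much further, which is the asymmetry the comment describes.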