I believe that because Reddit is generally left-leaning and the majority of its users are opposed to AI, we may see a disproportionate rise in AI-generated right-wing content, which could influence public opinion. The Pentagon has also shown interest in using LLMs to gaslight people.
Reddit doesn’t matter nearly as much as you think. It’s not going to move the needle appreciably.
Almost all English-language text is liberal (meaning capitalist); very little is socialist, virtually none is communist, and quite a lot is anti-communist. So there's your baked-in political bias for English-language models.
I wonder if using a Chinese-language model and then translating the output would produce better or worse political content.

Not 100% on the same topic, but we see this quite often in the software development world. Because the AI's original training data is biased toward React and Python, there is a deluge of new projects using those two stacks: people ask AI for software, and those two pop up.
In the same way, the AI's talking points will seem weirdly stuck in a certain year or decade, because that's when most of the talking points were pulled from.
We already see this: actual right-wing political advertisements are using AI, and Twitter is full of the shit. It's legitimately easier to trick conservatives with the slop.