Wikipedia will be 25 years old in January. During that time, the encyclopedia has gone from a punchline about the unreliability of online information to the factual foundation of the web. The project's status as a trusted source of facts has made it a target of authoritarian governments and powerful individuals, who are attempting to undermine the site and threaten the volunteer editors who maintain it. (For more on this conflict and how Wikipedia is responding, you can read my feature from September.)
Now Wikipedia's cofounder Jimmy Wales has written a new book, The Seven Rules of Trust: A Blueprint for Building Things That Last. In it, Wales describes a global decline in people's trust in government, media, and each other, and looks to Wikipedia and other organizations for lessons about how trust can be maintained or recovered. Trust, he writes, is at its core an interpersonal assessment of someone's reliability and is best thought of in personal terms, even at the scale of organizations. Transparency, reciprocity (you have to give trust to get trust), and a common purpose are among the other ingredients he credits for Wikipedia's success.
We spoke over video call about his book, how Wikipedia handles contentious topics, and the threats facing the project and other fact-based institutions.
The interview has been condensed and edited for clarity.
The Verge: You wrote a book about trust, and a global crisis in trust. Can you tell me what that crisis is and how we got there?
Jimmy Wales: If you look at the Edelman Trust Barometer survey, which has been going since 2000, you've seen this steady erosion of trust in journalism and media and business and to some degree in each other. I think it gives rise in a business context to a lot of increased cost and complexity, and politically, I think it's tied up with the rise of populism. So I think it's important that we focus on this issue and think about: What's gone wrong? How do we get back to a culture of trust?
What do you think has gone wrong?
I think there's a number of things that have gone wrong. The trend actually goes back to before the Edelman data. Some of the things I would point to are the decline of the business model for local journalism. To the extent that the business model for journalism has been very difficult, full stop, you see the rise of low-quality outlets, clickbait headlines, all of that. But also that local piece means people aren't necessarily getting information that they can verify with their own eyes, and I think that tends to undermine trust. In more recent times, obviously the toxicity of social media hasn't been helpful.
Why has Wikipedia so far bucked that trend and continued to be fairly widely trusted?
Part of the rationale for writing the book is to say, "Look, Wikipedia has gone from being kind of a joke to one of the few things people trust, even though we're far from perfect." I think transparency is hugely important. The idea that Wikipedia is an open, collaborative system and you can come and see how decisions are made, you can join and participate in those decisions, that's been very helpful. I think neutrality is really important. The idea that we shouldn't take sides on controversial topics is one that resonates with a lot of people. I don't want to come to an encyclopedia or frankly a newspaper and be told only one side of the story. I want to get the full picture so I can understand the situation for myself.
You brought up the Edelman survey and decline in trust in media, government, and to a lesser extent individuals. Are we seeing a decline in trust or a transfer of trust from institutions to individuals? In the book, you say we are hardwired to trust at an interpersonal level by gauging other people's authenticity, which is a trait that plays very well on social platforms, where some very trusted figures also gain extra trust by telling their followers not to trust in the media, the FDA, the universities. Do you see this dynamic playing a role, and if you do, how has Wikipedia, which is an institution, continued to be trusted?
I think there's some truth to that. But I also think it's incomplete because I think a lot of people who support Donald Trump will also say they don't really trust him. They just think it's not relevant. They've sort of lost faith in the idea of people being honest. So they're more likely to say, "All politicians lie, so why is that a big deal?" I obviously think it is a big deal. I think that's very problematic.
Similarly, I think a lot of the people who are jumping on a bandwagon undermining trust in science, for example, basically see a way to become successful doing it. I mean, that's a pretty cynical view of those particular people, and I'm not a very cynical person, but it's hard to come to any other conclusion sometimes, that there's a lot of grifting going on.
I interviewed Frances Frei for the book, and she's a Harvard academic who also has business experience. One of the things she said to me was, people often say that once you've lost trust, that's it, you'll never get it back. And she says that's not true. You can rebuild trust. There are certain definable things that organizations and people can do to rebuild trust. So when we think about institutions being attacked, they probably should reflect on what made them vulnerable.
You have some examples in the book, like the back-and-forth about masking and covid, and obviously journalists do make errors. But I tend to think that most publications are fairly transparent about issuing corrections, though maybe not to the level of Wikipedia. How much of the decline in trust has to do with actual mistakes made by those institutions, versus people or groups that want to be able to define their own reality undermining what they see as rival centers of facts, whether that's academia or science or journalism?
I absolutely think it's both. In many cases, we have seen media with a real blind spot, and I typically would view it more often as a blind spot problem rather than deliberate bias. I live in London. All three of the major political parties were opposed to Brexit, and in London you could not really find anybody who was openly supporting Brexit, not among my social group. Everybody thought it was a completely ridiculous idea. And yet the public voted for it.
I think a big part of that was that London wasn't listening and the media tended too often to portray Brexit support as having to do with racism and so on. Which, of course, if that's how you come at people, they tend to not go, "Oh, you're right, I'm sorry. I'm going to stop being racist now and change my political views." They're more likely to say, "Hold on a minute, you're not listening to me. I'm not being racist. There are these problems, functional problems, and I don't think I'm being listened to." To the extent the media isn't representative of broader segments of society and isn't listening to problems that people are having, that's a problem. And then we also have people who are taking advantage of it and who see that opportunity to campaign and build trust by pointing the finger at the other guy.
Debates on Wikipedia talk pages can get heated. People rebut other people's proposals without a lot of pleasantry. There is real conflict, but they are generally productive conflicts. People keep engaging with each other and usually reach a compromise, which I feel is unique in online discourse. What do you think the mechanism or mechanisms are that make this possible?
We have a purpose, to build an encyclopedia that is high-quality and neutral, and we have a commitment to civility as a virtue in the community. We're human beings, so of course sometimes those conversations are, I might say, a bit brusque, but hopefully not stretching quite into personal attacks. There's also this view that you really shouldn't attack people personally. And if it gets overheated, you should probably apologize, and things like that, which is not that unusual except in online contexts. I mean, normally I think most people in real life, if you get into a proper nasty quarrel with someone, there is a sort of feeling like, Yeah, that wasn't productive and maybe we need to apologize to each other and find a better way to deal with each other. As for how we foster more of that, I think in online spaces it has to do with changing culture. And in many cases, I think it's the design of algorithms.
I don't go on Facebook very much anymore, but if one day I logged in and Facebook had an option that said, "We'd like to show you things we think you will disagree with, but that we have some signals in our algorithm that are of quality. Would you like to see that?" I'd be like, yes, sign me up for that. As opposed to: "Our research has shown that you tend to get agitated about trolls, so we're going to send more trolls your way because you stay on the site longer." Or "we're only going to send you stuff we think you're going to agree with," which is also not really healthy intellectually.
One of your other examples of a functional online space was the subreddit r/changemyview, which feels similar to Wikipedia in some ways. It's text-based. There are rules. You're there for a specific purpose. Is it possible for a big platform like Facebook or X or whatever to become a healthy space, or do you need to be kind of constrained and purpose-built?
I think it's hard for sure. And I think that's a great question because I don't think anybody knows right now. On Facebook, you'll find pockets of groups that have good, well-run community members who are keeping the peace and insisting on certain standards. And you find horrible places as well. I think with Reddit it's the same. And another thing that I do think is interesting is looking back, because I'm now old, and I remember before the World Wide Web and I remember Usenet, which was a giant, enormous, largely unmoderated message board. That was super toxic. It had endless flame wars and horribleness and spam and all kinds of nonsense. So I always try to mention that when people have this view of the lovely, sweet days of the early internet, that it was such a utopia. I'm like, it was kind of horrible then too. It turns out we don't need algorithms to be horrible to each other. That's actually something humans can do, and humans can be great to each other at the same time. But I do think, as consumers of internet spaces, we should say, "Actually, I really would much rather be in places that are good for me."
You recently weighed in on one of the most contentious topics on Wikipedia or anywhere, the Israel-Gaza conflict. You wrote that you thought that it shouldn't be called a genocide in wiki voice. You normally stay out of content debates on Wikipedia. Why did you decide to weigh in on that one?
I think it's really important that Wikipedia remain neutral and that we refrain from saying things that are controversial in wiki voice. I think that's not healthy for us and not healthy for the world. So it felt important to weigh in and say, "Let's take a deeper look at this." And the other thing is normally, we have this idea of consensus in the community, and I would say it has a certain, usually constructive, ambiguity: What is consensus? How do you define that? We've avoided, for good reason, I think, saying "it's 80 percent" or any kind of simple rule like that. And the reason is because there are so many different areas in editing where there are different levels of certainty and different levels of consensus. My simplest example is, which picture of the Eiffel Tower should we have as the main picture on the Eiffel Tower wiki page? Well, maybe somebody does a straw poll and it's 60-40. Personally, if I'm in the 40 percent, I'm going to go, Most people don't agree with me, oh well, because it isn't that important.
Whereas in other cases, if you've got a significant number of good Wikipedians who are saying, "I don't agree with this, I don't think this should be in wiki voice," you shouldn't go for 60 percent. That's nowhere near good enough, particularly not if it has enormous implications for the reputation of Wikipedia and neutrality. We should hold ourselves to a very high standard. This is the kind of thing that over the years, we have to reexamine over and over and over. Where are we drawing these lines? And are we doing a good job of it? And should we ratchet it up and be more serious about it? And over the years, we have gotten more serious about it. And I think we should be even more serious about it.
Some of the editors said they felt that there was a consensus, that they'd debated this question for months, and that to frame the article as you wanted would be to give both sides of the debate equal weight, rather than to represent the proportional view of experts and institutions. What are your thoughts on that critique?
Yeah, I think they're wrong. I think we have to always dig deep and examine it, and I think it's absolutely fine to say, "The consensus of academic genocide researchers is that this was genocide." That, as far as I can tell, is a fact, so that's fine. Report on that fact. That doesn't mean that Wikipedia should say it in our own voice.
And that's actually important more broadly: if there's significant disagreement within the community of Wikipedians and we don't have consensus, and if people are putting forward policy-based reasons to disagree, which they are, then hold on. We should always be looking for as much agreement as possible. So what can we all agree on? Oftentimes that may be stepping back, going meta and saying, "Okay, well, we can all agree to report on the facts. We're not all going to agree on using wiki voice here. So we're not going to do that. But we are going to report the facts that we can all agree on."
And it's important for two reasons. One, it's what you want from an encyclopedia. You don't want to be jumping to a conclusion while there's still live debate. And two, socially within the community, it means we can all have a win-win situation where we can all point at this and say, "Yeah, we disagree, but we can point to this with pride and say, 'Actually, this is a good presentation. If you read this, you'll understand the debate.'" Brilliant. That's where we want to be.
When I see people attack Wikipedia for bias, it often comes down to which sources editors deem reliable. They'll say, "Well, you don't let us cite Breitbart, so now it's going to be biased." How are you thinking about how to draw the line of what is an acceptable source, and how to maintain neutrality as these decisions no longer seem neutral to people who have a completely different media diet made up of sources deemed unreliable?
It's something we will always be grappling with. Wikipedia does not have firm rules. That's one of the core pillars. We don't completely ban sources. We may deprecate them and say, "Well, it's not preferred as a source. We'd rather have something better." And then I make no apologies at all for saying not all sources are equal. I always say, if I have a choice between The New England Journal of Medicine and Breitbart, I'm going with The New England Journal of Medicine. That's just the way it is, and I think that's fine. When I say we have to grapple with it and take seriously the question of bias, I think we do. But sometimes we're going to conclude, Actually, I think we're fine here.
Elon Musk has been a loud voice complaining about bias on Wikipedia. Now he has Grokipedia, an AI-rewritten version of Wikipedia that draws on a bunch of sources that Wikipedia won't allow. Have you looked at Grokipedia?
A little. Not enough. I need to do a deep dive.
What are your thoughts on it?
I think a lot of the criticism that it's getting is not surprising to me. I use large language models a lot and I know about the hallucination problem, and I see it all the time. Large language models really aren't good enough to write an encyclopedia. And in particular, the more obscure the topic, the more likely they are to hallucinate. I also think, in terms of the question of trust, I'm not sure anybody's going to trust an encyclopedia that has a thumb on the scales. Which is to say, when I'm not happy about something in Wikipedia, I open a conversation and enter the discourse. I'm sure if Elon doesn't like something, it's just going to change. I don't see how you can trust a process like that. You know, it is reported that Grokipedia seems to agree with Elon Musk's political views quite well. Fine. It's Elon, but that might not be what we all want from an encyclopedia.
Are you concerned that it could be what some people want, or that people will start to use or prefer an AI-revised version of Wikipedia that conforms to their worldview?
Obviously you can't dismiss that out of hand, but I actually reflect on various research that we cite in the book about trust, that if people feel like there's a thumb on the scale, then even if they agree with that thumb on the scale, they are likely to trust it less.
I have great confidence in ordinary people. I think that if you ask people, "Would you prefer to have a news source that reflects all your own prejudices and biases and that you agree with every day?" or "Would you rather get something that is neutral and gives you insight into things you might not agree with?" I don't think it'd be a contest. Most people would prefer the latter. That doesn't mean they automatically click on it, and they may still reach for their favorite outlet. That's fine. That's humanity. But I don't think we're about to all go off into our little mind bubbles permanently.
How are you thinking about Wikipedia and AI more generally? The internet is increasingly full of AI-generated slop, and the Wikimedia Foundation noted earlier this year that bots scraping the site were straining Wikipedia's servers. Do you see AI presenting a threat, a possible benefit, or both?
Both. AI slop on the internet I don't think is a huge issue for Wikipedia because we've spent, you know, now nearly 25 years studying sources and debating the quality of sources. And so I think Wikipedians aren't likely to be fooled by, you know, sort of fluff content that is generated by AI.
Obviously, crawling Wikipedia and hammering our servers, that's not cool. So we hope we find a reasoned solution to that. The money that supports Wikipedia is the small donors giving an average of just over $10. They're not donating to subsidize billion-dollar companies crawling Wikipedia. So you know, "pay for what you're using" seems like a fair request.
Then the other thing that I think is super interesting is the question of how we, the community, might use the technology in new ways. I'm not a very good programmer, but I'm a programmer, and I just wrote a little tool: I can feed it a short Wikipedia entry that maybe has five sources, feed it the five sources, and ask, "Is there anything in the sources that should be in Wikipedia but isn't? Or is there anything in Wikipedia that isn't supported by the sources?" I haven't even had time to play with it, but even at a first pass, I thought, this is actually not terrible.
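For readers who want to try something similar, here is a minimal sketch of that kind of source-consistency check, assuming an OpenAI-style chat API. The model name, prompt wording, and helper function are illustrative assumptions, not Wales's actual script.

```python
# A rough sketch of the check Wales describes: hand an LLM a short entry
# plus its cited sources and ask for discrepancies in both directions.
# The model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """Below is a short encyclopedia entry followed by its cited sources.

1. Is there anything in the sources that should be in the entry but isn't?
2. Is there anything in the entry that isn't supported by the sources?

ENTRY:
{entry}

SOURCES:
{sources}"""


def check_entry_against_sources(entry: str, sources: list[str]) -> str:
    """Ask the model to compare an entry with its sources in both directions."""
    numbered = "\n\n".join(f"[Source {i + 1}]\n{s}" for i, s in enumerate(sources))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model would do
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(entry=entry, sources=numbered),
        }],
    )
    return response.choices[0].message.content
```

Anything the model flags would, of course, still need a human editor to verify against the sources, which fits the assistant-not-author framing here.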
Going back to why Wikipedia works, editors do seem to largely trust each other to be working in good faith, but it also seems like they have a lot of trust or respect for Wikipedia's rules and processes in a way that feels rare in online communities. Where does that come from?
I think it probably has to do with everything being genuinely community-driven and genuinely consensus-driven. The rules aren't imposed; the rules are people writing down accepted best practices. Certainly in the early days, that was absolutely how it worked. We would be doing something for a while and then we would notice, like, Oh, actually, you know, best practice is this, so we should maybe write that down as a guide for people, and it becomes policy at some point. That helps to build trust in the rules: that they're genuinely not imposed top-down, that they are the product of our values and a process and the purpose of Wikipedia.