òtító
Tools to crowdsource facts and fight misinformation
Timi Olotu
Featured
òtító is a new media format that empowers us all to crowdsource and moderate factual information. It includes checks and balances, plus software tools that incentivise healthy behaviours and discourage unhealthy or adversarial approaches to discourse.
Replies
BabaSheez
Looks nice... one suggestion though: we are the “caption on” generation, so add more images to your UX if you can. Great work. Congrats.
Timi Olotu
Foreign meddling in national politics and targeted propaganda have revealed just how easily (and dangerously) society can be manipulated when it's fed false but emotive information. Misinformation is harming social cohesion and making it harder than ever to know whom to trust. Lines of division in society—political, racial, gender, sexuality, ideological, religious and more—are starting to deepen along a fatal trajectory. According to research by the Pew Research Center, the divide between Democrats and Republicans in the US, for example, is wider than it has ever been.

We don't believe control of this key aspect of society should be entirely outsourced to AI and algorithms—the process of deciding which information can be trusted should be democratic... but governed within a system where the incentives are healthy, so that productive behaviours are fostered.

That's why I built òtító—a new media format that combats misinformation by empowering users to crowdsource facts (with citations) and giving us all tools to moderate the information submitted by each other. òtító's articles are organised based on factual evidence and the trustworthiness of sources (as decided by you, the users). The app supports these goals thanks to some key features and attributes:

- Unlike fact-checking websites, òtító's body of facts is perpetually growing and being refined, thanks to its community model. It doesn't require misinformation to emerge before it is tackled. It is proactive, not reactive.
- Articles are organised into claims, or statements of fact. A claim is a fact-based assertion supported by at least one source. All claims are submitted by users like you, and all carry designations that signal trustworthiness at a glance.
- Users can add topics or claims only if they can also supply a source of evidence. This keeps the quality of content above a certain baseline and makes moderation and scrutiny possible.
- The karma system means users who consistently supply low-quality information (as indicated by community signals) automatically have their permissions restricted (a rough sketch of this mechanic follows at the end of this post).
- òtító's system is fully democratic, with all users having access to the same features and publishing powers—unless a user's negative karma passes a set threshold, at which point their permissions are restricted to prevent spam.
- The interface is simple and isn't over-engineered, which makes for a more pleasant user experience (based on feedback from testing).
- Users can interact directly only with ideas, not with each other—and no one knows who published any piece of information. This means òtító's design inherently prevents common unproductive behaviours, such as ad hominems, group signalling, trolling and targeted attacks.
- Information isn't framed as "pros vs cons" or "us vs them", but as one multifaceted, interrelated body of complex knowledge. This encourages users to create shared narratives of truth that are ideologically pluralistic, rather than deepening ideological entrenchment.

Please take a look and share your thoughts. It's an extremely difficult problem, but I'm determined to meet the challenge head on, and I need your help to do so. Thanks!

Timi
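For anyone curious how the source requirement and the karma threshold fit together mechanically, here's a simplified, purely illustrative sketch. The names, fields and the threshold value are placeholders I've made up for this post, not òtító's actual code:

```python
from dataclasses import dataclass

# Purely illustrative: names, fields and thresholds are placeholders,
# not òtító's actual implementation.

NEGATIVE_KARMA_LIMIT = -50  # hypothetical threshold for restricting permissions


@dataclass
class Claim:
    text: str             # the fact-based assertion
    sources: list[str]    # at least one supporting citation
    upvotes: int = 0
    downvotes: int = 0


@dataclass
class User:
    user_id: str              # internal only: publishing is anonymous to other users
    karma: int = 0
    restricted: bool = False  # set once negative karma passes the limit


def submit_claim(user: User, text: str, sources: list[str]) -> Claim:
    """A claim can only be added if at least one source of evidence is supplied."""
    if user.restricted:
        raise PermissionError("This account's contribution permissions are restricted.")
    if not sources:
        raise ValueError("A claim must cite at least one source of evidence.")
    return Claim(text=text, sources=list(sources))


def apply_community_signal(user: User, karma_delta: int) -> None:
    """Update karma from community signals; restrict consistently low-quality contributors."""
    user.karma += karma_delta
    if user.karma <= NEGATIVE_KARMA_LIMIT:
        user.restricted = True
```

The real system is more nuanced than a single integer threshold, but the principle is the same: contribution rights follow from community-validated behaviour, not from who you are.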
Primer
Love the idea. 2 questions. What was the thinking behind the difficult to pronounce and difficult to spell brand name? What’s to stop the people from either “side” (for want of a better word) from continually moderating each other out?
Timi Olotu
@mickc79 thanks for your comment. Below are my responses to your very important questions.

1.) What was the thinking behind the difficult to pronounce and difficult to spell brand name?

In a nutshell, brand differentiation. There are personal/aesthetic reasons—for example, "òtító" looks like a palindrome (although it isn't one, strictly speaking) and it means "truth" in Yoruba (one of my native tongues). But, ultimately, I decided to bet on it because it stands out and I thought it would make people curious (precisely because it's unusual). The history of branding is full of unusual names that were never supposed to work... but did. So far, so good—I get many good comments about the name (mainly, people are curious about what it means and which language it comes from). Plus, it's actually very easy to pronounce—just follow your gut and don't be intimidated by the accents :)

2.) What's to stop the people from either "side" (for want of a better word) from continually moderating each other out?

In short, nothing will stop them. They will try very hard to do that. But I'm a believer in the concept of "antifragile" systems, as popularised by Nassim Nicholas Taleb. Instead of trying to stop people's instinctive drive to game the system (which always fails), I tried to construct a system that benefits from their attempts to do so.

The first layer of defence is that it is impossible to know who posted a claim or topic—so you can't target certain ideas because you don't like the people who posted them. But, hypothetically, you can target an idea you don't like, regardless of who posted it. So, if we assume that some left-wing people might unfairly vote against some facts simply because those facts support right-wing ideas, then we can also assume that some right-wing people will unfairly vote for those facts (simply because those assertions support their ideas). Roughly speaking, those people will cancel each other out. The real difference will be made by the outliers on both sides who actually scrutinise the information and vote or contribute based on the evidence.

You might be tempted to dismiss the effectiveness of this dynamic, but it's well demonstrated in politics, academia and science, for example. It occurs naturally (rather than being artificially imposed) and is the basis of the concept of "institutionalised disconfirmation". Many scientists and academics do not keep their biases in check, so they miss insights they're intellectually capable of seeing. However, the few who are able to analyse the information more objectively discover new insights and bring those to the table—and that drives the entire field forward. It's why germ theory succeeded even though the majority was against it, and why geocentrism died and Galileo's heliocentric ideas prevailed (even though he represented a minority view). It's also why the concept of "swing states" is so key in US politics, for example (it's not the partisans who decide, but those who can be swayed by information).

The second layer of defence is that the platform records all activity by each user (e.g. which claims they add, how they vote on sources and how other people vote on their sources). If a user consistently votes in a way that runs counter to the votes of the highest-rated contributors (and mirrors the voting behaviour of the lowest-rated contributors), that user's votes start to carry less weight.

If they continue to make low-quality contributions, their permissions might be restricted. This is very difficult to game because no one knows whose claims they're voting on or how individual users have voted. No one knows who the highest-ranked users are, so they can't simply devise workarounds (at least for now).

This feature is still in progress and we're also looking at things like a "polarity score". To illustrate, imagine there are two groups, "A" and "B", and they disagree on almost everything. But there's a person "X" whose contributions both groups tend to upvote at rates of 90% and above (which puts "X" in the 99.99th percentile in terms of quality of contributions). This suggests that "X" contributes quality content, regardless of ideological leaning. If a person "Z", whose voting patterns usually match those of group "A", suddenly deviates from that pattern by consistently downvoting contributions by "X" (even though group "A" does the opposite and "X" is otherwise highly validated by all user cohorts), that could be a signal that "Z" is simply ideologically opposed to the information "X" is presenting. There's still lots of testing to be done, however. These are problems of massive scale, and we don't have that scale yet (although I hope we will soon).

What we want to avoid in these situations is using data signals to make permanent, unilateral decisions. So, rather than simply banning a user, we drastically reduce the impact of their votes (and let them know about it). And rather than removing content, we hide it, show a warning that the community thinks it's misleading and let users choose whether to view it.

KEY POINT: none of these measures is foolproof. If the problem were simple, it would already be solved. But I believe we can make significant progress based on the two overarching approaches identified above:

a.) "Blind transparency" (i.e. no one knows who's doing what, even though everyone can see what's being done)

b.) "Perpetual recalibration" (i.e. the impact of an actor's action is based on a constant evaluation of the impact of that actor's previous actions, relative to the impact of all actions by all actors)

I hope my somewhat philosophical/abstract explanation of the formal logic underlying the design of the platform doesn't bore you (but I think sharing it is critical to properly answering your question). Please let me know if you have any more questions, thoughts or feedback!
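P.S. In case it helps to see "perpetual recalibration" and the polarity idea in code form, here's a rough, purely illustrative sketch. The scoring formula, function names and the 0.5 gap are placeholders I've invented for this comment, not the production logic:

```python
# Purely illustrative: the formula and thresholds are placeholders,
# not òtító's production logic.

def agreement_rate(user_votes: dict[str, int], reference_votes: dict[str, int]) -> float:
    """Fraction of shared claims on which two voting records agree.

    Votes are +1 (upvote) or -1 (downvote), keyed by claim id.
    """
    shared = [c for c in user_votes if c in reference_votes]
    if not shared:
        return 0.5  # no overlap: treat as neutral
    agreed = sum(1 for c in shared if user_votes[c] == reference_votes[c])
    return agreed / len(shared)


def vote_weight(user_votes: dict[str, int],
                top_contributor_votes: dict[str, int],
                low_contributor_votes: dict[str, int]) -> float:
    """Weight a user's future votes by comparing their record with the
    highest-rated and lowest-rated contributors (perpetual recalibration)."""
    with_top = agreement_rate(user_votes, top_contributor_votes)
    with_low = agreement_rate(user_votes, low_contributor_votes)
    # Consistently mirroring low-quality voters while countering high-quality
    # ones pushes the weight towards 0; the reverse pushes it towards 1.
    raw = 0.5 + 0.5 * (with_top - with_low)
    return max(0.0, min(1.0, raw))


def polarity_flag(user_rate_for_x: float, community_rate_for_x: float,
                  gap: float = 0.5) -> bool:
    """Flag a possible ideological pattern: the user consistently downvotes a
    contributor ("X") whom all cohorts otherwise upvote at very high rates."""
    return (community_rate_for_x - user_rate_for_x) >= gap
```

The point isn't these exact numbers; it's that a vote's influence is continually re-derived from how that voter's past judgements compare with everyone else's, rather than being fixed at sign-up.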
Primer
@timi_olotu Thanks for taking the time to write such a detailed answer. I get it now. I really wish you well with this model.
Harsh Gelda
Great work!
Timi Olotu
@harsh_gelda Thank you!