Cray-cray people? Yes, it is possible to understand opponents. Our first program targets that challenge.
e.pluribus.us launched its first program over the summer, and we’re excited by the results so far. We believe that, once fully developed, it has the potential to help people actually absorb, and thereby better understand, opposing viewpoints.
Here’s a quick intro to what we call “Project LISTEN.”
Originally we just wanted a method to help ourselves better understand viewpoints we did not share. Not what someone thinks, but why they think it. “Why do those people, over there, feel that way about mask use?”
It rapidly evolved beyond that into (a) a tool that could allow anyone to more easily get their head around why others think what they do, fed by (b) a system that meaningfully automates gathering and prioritizing such data.
The system is still very early stage, but broader applications are now apparent:
- Help e.pluribus.us better understand the reasons behind viewpoints we do not share (as originally intended).
- Enable others to do the same: better understand opposing viewpoints on nearly any topic.
- Publish what we’re learning, and/or even fund an effort to actively build out this tool across a range of relevant issues.
- Help policy-makers better identify primary concerns on issues, the degree (or lack) of support for compromise, and the most promising areas where such compromise might be struck.
- Quickly and efficiently gather data to compare/contrast rationales for various viewpoints across different geo/demo/psycho-graphic groups.
- Example: maybe we’re curious whether opponents of mask use in Manhattan are driven by the same rationales as those in Tallahassee. Gathering that data will eventually be fairly easy with this system, and we could quickly build up a store of unique, actionable data across a broad range of topics.
We are also interested in exploring whether it has application toward influencing perspectives. The system is deliberately designed to remove judgment and argumentation from the process of communicating and thinking about issues online. The premise is that a participant may engage more thoughtfully with information if they are not simultaneously distracted by reacting to others’ judgment and/or argumentation. It would be interesting to explore whether participation with the tool alters participants’ willingness to consider viewpoints inconsistent with their own.
We launched a first version of the system (Phase 0.1) in May. It failed; we learned (as intended), iterated, and launched a second version in August. This “Phase 0.2” was a big success in terms of validating the concept, engaging the audience, maturing our systems, and producing copious learnings. We’re iterating again and will launch Phase 0.3, again against a local but controversial issue, in the coming month. Following that, we intend to advance to an issue of national relevance.
There has been a wealth of learnings and innovations thus far, mainly in the following areas:
- Where the target audiences are, and how to reach them efficiently.
- Target-audience behaviors and motivations, and how to successfully elicit engagement from them.
- Application of bot technology to both (a) improve scalability and (b) eliminate judgment- and conflict-induced reactions that degrade opinion sharing.
- The importance of survey design in accurately capturing compromise-amenable sentiment.
- And, of course, unique insights into community opinion on the specific first targeted issue, including where opportunities for compromise may lie.
The system is _very_ early stage (hello? version 0.3) and needs a _lot_ of information-architecture/UX design work. Even once developed, the general concept will retain understandable limitations; there are questions it should not be applied to answer.
But it is proving quite promising thus far at answering the specific question it was designed to answer: what are the main reasons people believe what they do?
If you’re interested, you can follow in more detail what we’re learning from this project, and how it is iterating over time, by checking our updates in the blog “Updates and learnings from Project LISTEN.”