EXECUTIVE SUMMARY:

A new e.pluribus.US ad campaign meaningfully increases public action to reduce partisan dysfunction.

In multiple, large-scale, randomized, controlled field trials, Americans exposed to the ad campaign exhibited significantly greater behavior supporting collaborative politicians than did those in a control group.

This finding offers a promising tool for reducing dysfunctional partisanship and galvanizing support for pro-collaboration politicians.

OVERVIEW: TRIAL FINDS MESSAGING CAMPAIGN SIGNIFICANTLY BOOSTS COLLABORATIVE BEHAVIOR

106,323 citizens participated across two separate trials, drawn from seven congressional districts in Texas, Oklahoma, Arkansas and Florida (this first phase tested right-leaning audiences). For several days, a Test Group was shown the series of social media videos that make up the campaign, while a separate Control Group saw videos unrelated to the campaign. Both groups were then invited to see actions they could take to reduce partisan division (the “Invitation”). Across the two trials, more than 1,626 subjects in each of Test and Control participated by clicking to see the list of actions. The results were:

  1. Members of the campaign-treated Test audience chose to click and see the list of actions at a 2.5% – 5.0% greater rate than did those in Control. This is noteworthy because this audience had already been saturated with repeated ad exposure on this issue over the prior week and therefore — if consistent with social media advertising convention — should have been expected to be fatigued with the topic and exhibit a lower click-through rate. See Table 1.

  2. Reinforcing the above, once it had viewed the list, the Test group was additionally 23% more likely than Control to actually take one of the actions to reduce partisanship. This is a particularly promising outcome, though on this second measure we would like to test a greater sample size before finalizing conclusions. See Table 2.

These results validate that the campaign meaningfully improves citizens’ propensity to act to reduce partisan dysfunction.

Figure 1: Citizens who saw the campaign were more likely to consider a list of actions to reduce partisanship.
Figure 2: Citizens who saw the campaign were also more likely to subsequently take one of the actions on the list.

Tables 1 & 2 set forth metrics on trial participation and engagement rates. The terms used are defined in the Definitions section below. Trial 1 was conducted in October of 2023 in five congressional districts across Texas, Oklahoma and Arkansas, using quasi-random test/control group assignments based upon alternating birth months. Trial 2 was conducted in January of 2024 in two congressional districts in Florida using fully random test/control group assignment.

Table 1 shows that Test Group subjects, after seeing the campaign, were more likely to accept an invite to take action to reduce partisan dysfunction than were Control Group subjects who did not see the campaign but were similarly invited. Specifically, Test was 2.5% more likely in Trial 1 and 5% more likely in Trial 2.

Table 1: Subjects shown the campaign were more likely to accept an invite to take action to reduce partisanship (click-thru rates were 2.48% and 5.03% higher).

With over 51,000 subjects tested in each group across the two trials, over 1,600 in each group then electing to participate, and a 2.5% – 5.0% difference in outcome, these are robust results.

A secondary question of the trial was whether treated subjects, once they had clicked to view the list of ways to reduce partisanship, were also more likely to actually take one of the actions on that list. In both trials, subjects who had seen the campaign were significantly more likely to take an action, though we should caveat that at this secondary depth of the test the participant numbers have narrowed. We can evaluate a larger sample size by combining the results across all the trials, as shown in Table 2. Note that this table also includes data from a third trial conducted in January, as described in the Methodology section below. A follow-up test would aim to confirm this outcome with greater levels of participation.

Table 2: After accepting the invite and seeing options to overcome partisanship, subjects who had seen the campaign (“Test”) were additionally more likely to actually take an action (Click rate 23% higher).

It should be noted that these measurements actually understate the difference in engagement. For privacy reasons, Meta/Facebook does not allow individual targeting; therefore, within the broad audience allocated to Test, we cannot restrict measurement to only the subjects we know we treated. Because the total potential Test audience is much larger than we can practically treat, a large percentage of the Test audience remains untreated after the treatment phase, and many of those untreated subjects are then included in the group measured during the test phase, because we cannot individually target only those treated. (This problem does not exist in the audience allocated to Control, because no one in that audience is treated.) Fortunately, we do know the aggregate number of _new_ subjects added during the test phase, and because treatment stops before the test phase begins, any new subjects added during the test phase are necessarily untreated. We can then assume that, as untreated subjects, they behave identically to Control subjects (i.e., lower engagement rates) and with that knowledge back their behavior out of the aggregate Test group numbers (a worked sketch of this adjustment follows the list below). If we make that adjustment, the remaining treated subjects had:

  • Trial 1: 6.1% higher click-thru rate than Control
  • Trial 2: 10.2% higher click-thru rate than Control
  • Combined Trials 1&2: 45.8% higher Action Rate than Control
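
To make this adjustment concrete, below is a minimal sketch in Python. The counts are hypothetical placeholders chosen only to illustrate the arithmetic; they are not the actual trial figures.

```python
# Minimal sketch of the back-out adjustment described above, using hypothetical
# counts (NOT the actual trial data). We assume subjects who entered the Test
# audience only during the test phase (and were therefore never treated) click
# at the same rate as Control.
def treated_only_click_rate(test_invitees, test_clickers,
                            untreated_added_in_test_phase, control_rate):
    assumed_untreated_clicks = untreated_added_in_test_phase * control_rate
    treated_invitees = test_invitees - untreated_added_in_test_phase
    treated_clicks = test_clickers - assumed_untreated_clicks
    return treated_clicks / treated_invitees

# Hypothetical example: 25,000 Test invitees, of whom 6,000 were added
# (untreated) during the test phase; 800 Test clickers; Control rate 3.0%.
control_rate = 0.030
adjusted = treated_only_click_rate(25_000, 800, 6_000, control_rate)
print(f"Adjusted treated-only rate: {adjusted:.4f}")
print(f"Lift vs Control: {(adjusted - control_rate) / control_rate:.1%}")
```

In this illustrative example, removing the assumed untreated subjects raises the Test group's estimated click-through rate, which is why the adjusted differentials above exceed the unadjusted ones.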

We can additionally safely assume that the test engagement rate would have been yet higher in the Test group without the “ad fatigue” effect addressed in bullet (1) of the Overview above. Finally, we note that some participants in Test clicked on the messaging ads during treatment, even though there was no action to take at that point. Those clicks, during treatment, were not counted toward the click-through rates of the Test group. It is fair to assume that some portion of those “clickers” viewed the later invitation to take action as redundant to an ad they had already clicked on, and therefore did not click a second time. This is yet a third reason the trial data likely understate the degree of greater engagement by treated subjects.

IMPLICATIONS: A SCALABLE PROGRAM TO REDUCE DIVISIVENESS

There are several implications of this outcome.

  1. First, such a campaign should be considered one tool for broadly evolving national public behavior away from worsening division and toward more collaborative engagement.
  2. Second, we are currently in an election cycle in which voters will have the opportunity to express views on the importance of politicians’ skill at working through differences to achieve policy goals. This messaging campaign can influence voter opinion on that issue.
  3. Third, while there exists today a rich array of programs to help citizens engage on the challenges of partisanship once they are motivated to do so (here is a sample list), arguably lacking are sufficient initiatives aimed at igniting that motivation among a critical mass of the public. This campaign fills that gap.

WHY DID WE RUN THIS CAMPAIGN?

e.pluribus.US conceives, builds and tests interventions to scalably improve public attitudes toward working with political opponents. In this current initiative we test methods of influencing public opinion via advertising, initially on social media.

RATIONALE BEHIND THIS FIRST CAMPAIGN TESTED

Our development of this initial candidate messaging intervention was guided by several perspectives:

  1. We observe there exists a wealth of initiatives to help citizens address the challenges of partisanship once they choose to engage in doing so. In our view, what is lacking are sufficient initiatives motivating them to so engage.

  2. We believe citizens don’t necessarily need to agree with, like, or even understand opponents, but they have no choice but to find ways of working with them if they hope to achieve goals in a democracy.

  3. We further hold that most citizens are actually familiar with disagreement and how to work through it to accomplish things with others. The problem is, they do not, in sufficient numbers, demand that same behavior from their politicians.

  4. Finally, our experience is that if one wants to persuade a group to do something (in this case, the citizenry), one needs to consider what is already important to that group and illustrate how one’s proposal supports those goals.

We therefore conclude that the first step in addressing partisanship needs to be motivating citizens to prioritize it among the myriad demands in their daily lives. And we maintain that to accomplish that, an appeal must leverage issues about which the citizens already feel emotion.

Toward this end, we tested messaging that grounds itself in policy issues citizens already prioritize, leverages emotional frustration with politicians’ lack of progress on those issues, and draws a contrast between (a) how citizens approach resolving familiar, day-to-day disagreements and (b) how politicians approach resolving political disagreements. The implied question is, why should we expect to succeed at resolving political disagreements if we — and our politicians — do not apply the same methods that we know are necessary in our day-to-day lives?

Video 1 shows the result: one of several videos developed around these themes and optimized for impatient, short-attention span social media audiences. You may review all five videos here.

Video 1: Example of treatment videos used in messaging campaign.

To test the efficacy of the ads, after treating an audience with these videos, we deploy a second video that (1) cites specific issues around which that audience already has passion, (2) rhetorically asks if the viewer believes politicians are making progress on those issues, then (3) invites the viewer to take action. Video 2 is an example of this “Invitation Ad.” This ad is also shown to the Control group.

Video 2: the Invitation Ad used to invite participants to see the list of actions.

Note that in these videos we deliberately do not discuss citizen behavior, instead addressing that of politicians. This abstracts blame away from the viewer, avoiding feelings of culpability that would trigger cognitive defenses. An interesting aspect of our campaign is that we received significant audience commentary on the ads, but none of it was targeted at us, the messenger. It was all targeted at the behavior of politicians (though of course it often then devolved into name-calling amongst commenters). Notably, only one of the 500+ comments seemed to draw assumptions about our own ideological lean.

As part of building out this campaign, e.pluribus.US developed a test infrastructure that enables efficient evaluation of how well messages drive pro-collaborative behavior. That infrastructure is now available for testing essentially any messaging treatment that can be presented in a brief, online video format.

NEXT STEP: INFLUENCE VOTERS TO DEMAND POLITICIANS GET THINGS DONE

This program is ongoing. Next priorities include upgrading messaging production values, testing on left-leaning audiences and on larger sample sizes, broadening demographic participation via other social media platforms (e.g., Instagram, Snap, TikTok, X, LinkedIn), and continually expanding the list of options for taking action. Our ultimate goal is to mass-apply the campaign in areas where it can influence voters to support candidates committed to working with opponents to get things done.


FOOTNOTES

Methodology: Details of how the trial was conducted:

Campaign Development: To enable this trial we developed and refined the messaging creative and test infrastructure over roughly a year, iterating through over two dozen developmental tests involving over 300,000 unique participants and spanning dozens of congressional districts and states.

Audience selection: In order to achieve sufficient sample size, we selected five congressional districts for Trial 1 and two for Trial 2. In both trials, the specific audience was chosen with the intent of measuring efficacy on a group that could be considered “core” to the ideology of its party. Currently the cores of each party are positioned away from the ideological center, but short of the extremes. To identify such groups we used a combination of scores from the Cook Partisan Voting Index for the citizens and VoteView for their representatives. Simply put, based upon a district’s voting in the most recent elections, Cook assigns a rating that represents where that district lies on the spectrum from strongly Democratic (negative scores) to strongly Republican (positive scores). VoteView uses an analogous method to rate representatives on their voting. A zero Cook score would be centrist, so we looked for districts with scores 0.5 to 1.0 standard deviation away from zero. We wanted to test efficacy on each party separately, so for this test we chose Republican-leaning districts and therefore Cook scores greater than zero. We also wanted the populations to be roughly geographically proximate. With those criteria, for Trial 1 we settled on Texas’ 24th and 26th, Oklahoma’s 1st and 5th, and Arkansas’ 3rd districts. For Trial 2 we chose Florida’s 6th and 8th districts. Table 3 presents the Cook and VoteView scores and Facebook account populations in each of the districts. For reference, Republican-leaning Cook Scores currently reach 33 and VoteView scores reach 0.961. Note these scores change over time, so they may not be the same as you read this.
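
Purely as an illustration, the selection criterion can be expressed as a simple filter. The scores and the standard-deviation value below are hypothetical placeholders, not the figures in Table 3.

```python
# Minimal sketch of the district-selection filter described above. The numbers
# are hypothetical placeholders, NOT the actual values in Table 3.

# Assumed spread of Cook PVI scores across all U.S. House districts
# (placeholder value for illustration only).
NATIONAL_SD = 15.0

# Hypothetical (district, Cook PVI score) pairs; positive = Republican-leaning.
candidate_districts = {
    "TX-24": 9, "TX-26": 13, "OK-01": 16, "OK-05": 12,
    "AR-03": 15, "FL-06": 14, "FL-08": 11, "NH-02": -2,
}

# Keep Republican-leaning districts (score > 0) whose score sits between
# 0.5 and 1.0 standard deviations away from the centrist score of zero.
core_right = {
    name: score
    for name, score in candidate_districts.items()
    if score > 0 and 0.5 * NATIONAL_SD <= score <= 1.0 * NATIONAL_SD
}
print(core_right)  # Republican-leaning districts in the 0.5-1.0 SD band
```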

Table 3: Ideological/partisan voting scores of target districts. A zero score is centrist; higher Cook Scores indicate a more Republican-leaning district, while higher VoteView scores indicate a more conservative representative.

Social media platform selection: We chose Meta’s Facebook platform for this test based upon its robust targeting and analytical tools and its ability to handle the form of our creative. Note our system easily adapts to other platforms; we have already tested on Instagram and LinkedIn.

Creation of Test and Control. For Trial 1 we chose not to use Facebook’s internal A/B Testing tool due to drawbacks discovered in prior tests with that tool. Instead we divided the Test and Control groups based upon birth month. Facebook accounts with birth months in January, March, May, July, September and November were assigned to Test. The alternate six months were assigned to Control. Accounts without birth months were not used. In earlier tests with this methodology we did not observe skew associated with birth month, but we discuss the implications of this approach on randomization in the Known Limitations section below.
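
Purely to illustrate the assignment rule (not how Facebook ad-set targeting is actually configured), here is a minimal sketch assuming a hypothetical record with a birth-month field:

```python
# Minimal sketch of the quasi-random assignment used in Trial 1.
# Assumes a hypothetical `birth_month` value of 1-12, or None if unknown.
TEST_MONTHS = {1, 3, 5, 7, 9, 11}   # Jan, Mar, May, Jul, Sep, Nov -> Test

def assign_group(birth_month):
    """Return 'Test', 'Control', or None (excluded) for an account."""
    if birth_month is None:
        return None                  # accounts without a birth month were not used
    return "Test" if birth_month in TEST_MONTHS else "Control"

# Example: a March birthday lands in Test, an April birthday in Control.
print(assign_group(3))     # -> "Test"
print(assign_group(4))     # -> "Control"
print(assign_group(None))  # -> None
```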

For Trial 2 we did use completely random assignment of participants to Test and Control by working within the limitations of Facebook’s internal A/B Testing tool.

Treatment. For Trial 1, the Test group was treated with the messaging campaign for five consecutive days beginning October 20, 2023. There were five creatives, which we changed daily, so each ran for one full day. Daily Reach varied between 7,000 and 20,000 uniques and at the end of the treatment 35,085 subjects had been treated an average of 2.3 times each.

For Trial 2, the Test Group was treated in the same manner, but for seven consecutive days beginning January 24. Daily Reach varied between 800 and 3,653 uniques and concluded with 9,317 subjects having been treated an average of 2.22 times each.

Control. In Trial 1 the Control group, in place of the treatment videos, saw the default collection of Facebook ad videos during the period the Test group was being treated.

In Trial 2, the Control group saw the Invitation Ad during the seven days that Test was being treated and then continued to see that ad through the subsequent seven days Test was being tested with that same ad. This was necessary due to the mechanism the Facebook A/B tool uses to allocate subjects among groups. If the two groups do not receive participant allocations simultaneously in consistent ratios throughout the full trial, then the types of subjects allocated to the groups may differ, thereby skewing the results. We avoided that by allocating subjects in equal weights to Test and Control throughout all phases of the trial.

Invitation to take action:

● Preliminary Measurement of Control: In Trial 1, for two days prior to launch of the messaging campaign in the Test group, we ran the “Invitation Ad” (shown as Video 2 above) in the Control group, inviting it to view several ways to take action to reduce dysfunctional partisanship. We ran this preliminary ad in the Control group to confirm that the targeted districts would be sufficiently responsive to the ad, then temporarily paused it until it could resume in parallel with the invitation to the Test group (after treatment of Test). Control exposures in this preliminary period comprised 5.3% of total Control exposures, and responses received during this preliminary measurement were included in the total responses reported for Control.

The reason we ran this preliminary measurement is that prior tests on smaller populations raised some concern that certain districts simply may not respond to ads at sufficient rates. If our targeted districts were of that type, we wanted to know before spending too much money on treatment. In Trial 1 we learned that this variable effect washes out with volume; that is to say, districts are responsive at roughly consistent rates, but due to random variation one may not see this consistency in small sample sizes. With that learning, in Trial 2 we did not need to run a Preliminary Measurement.

● Post-Treatment Measurement: In Trial 1, on October 24 we suspended the messaging treatment of the Test group and on the 25th launched the Invitation Ad to both Test and Control, targeting equal daily impressions in both groups. We ran the ads to both groups for the seven days through October 31, maintaining comparable reach and frequency throughout. Results from Control were considered to include both the preliminary and post-treatment measurement periods.

In Trial 2, on January 30 we suspended the treatment of the Test group and on the 31st launched the Invitation Ad to Test, while continuing to run the same ad to Control, again targeting equal daily impressions to both groups. We ran the ads to both groups through February 9, maintaining comparable reach and frequency throughout.

Table 1 above shows the final reach data for both Trials.

Figure 3: Action list subjects could choose from (click to see full list).

Presentation of action options. In both Trials, if a subject chose to click on the invitation to take action, they were presented with a list of several alternative actions to take. Figure 3 illustrates this list of actions (click on it to see the full list). A click on any of them, including the learning and sharing options, was recorded as a click on an action. Multiple clicks in the same user session were considered only one action. This list was developed and optimized through extensive testing during the Campaign Development phase described above. All of the options received clicks, with the most popular being “I’m Worried,” followed closely by the Lugar Center and “Make us Donate.”

Results: We then tabulated the rate at which the two groups clicked on the Invitation Ad (“Clickers” divided by “Invitees”) and then on the actions (“Clicked To Take an Action” divided by “Viewed List of Options to Take Action”). We also calculated absolute engagement by dividing “Clicked To Take an Action” by “Invitees.”

Behavioral Measurement period used in results: By way of explanation, engagement rates on social media vary significantly by day of week and the number of times a subject sees the ad (“Frequency”). Unique engagement rates are naturally higher at higher frequencies and frequencies between groups can differ at any given point in the trial, even at identical numbers of impressions. So when comparing groups, one needs to normalize for these factors.

Therefore, for Trial 1, we recorded test behavioral data from both groups for the seven-day period of October 24-31. This allowed both groups to be exposed to all seven days of the week. We then reported results for similar Frequency levels closest to the termination of the seven days. Fortunately, the frequencies were close. At the conclusion of the test period at midnight October 31, the Frequency was 2.83 for Test and 2.88 for Control. With the higher Frequency, Control should naturally have had a higher engagement rate, yet Test’s was 2.5% higher even with a lower Frequency. If we instead normalize on a Frequency of 2.83, Control reached that level a day earlier, on Day 6, and at that point had an even lower engagement rate. We therefore chose to compare the two groups as of midnight October 31, confident that this biases the results in favor of Control, given that its Frequency was higher than that of Test at that point.

For Trial 2 we were able to compare at identical Frequencies. Both Test and Control ran at roughly equal exposures for seven days beginning at 5 AM (subject’s time) on a Wednesday, so they were equally exposed to each day of the week (though in succeeding weeks). After the seven days their Frequencies were similar. Test reached Frequency 2.75 at noon on Day 8 and Control did so at 11 PM of Day 8. Given that Control was exposed to eleven more hours of a Wednesday than Test, and Wednesday is the highest-engagement day of the week, Control would be expected to naturally have a higher engagement rate than Test, so we are confident that the final outcome biases in favor of Control.

Results from additional trials not discussed herein: In 2023 we conducted additional trials that measured sentiment instead of behavior. The INFLUENCE 1.0 trial showed that the same messaging later used in Trials 1 and 2 also improves sentiment toward collaborative politicians, but that trial did not measure the behavioral changes targeted in Trials 1 and 2.

For behavior, in addition to Trials 1 and 2, in January 2024 we conducted an experimental Trial 3 in New Hampshire during the primary campaign, using the same techniques as Trial 2. 21,517 subjects were targeted, with 645 engaging. The districts in New Hampshire are extraordinarily centrist, with Cook Scores of literally zero and -2. One would not expect to be able to measurably move those citizens toward being more compromising than they already are. Accordingly, our campaign did not, which we view as validation that we are measuring things correctly. However, in Trial 3 we were able to collect additional data for the combined Test and Control pools of subjects who had clicked to see the actions to reduce partisanship and decided whether or not to take one of them. Those results were consistent with Trials 1 and 2, and we included them in the “Combined — All Trials, Summed” data shown in Table 2 above.

Comparative results are shown in Tables 1 and 2 above and below, and in Figures 1 and 2 above.

Definitions

Following are definitions used in the outcomes listed in Tables 1 & 2 (reprinted here from above).

Table 1: Subjects shown the campaign were more likely to accept an invite to take action to reduce partisanship (click-thru rates were 2.48% and 5.03% higher).
Table 2: After accepting the invite and seeing options to overcome partisanship, subjects who had seen the campaign (“Test”) were additionally more likely to actually take an action (Click rate 23% higher).
  • “Group” identifies Test or Control.
  • “Treated w/videos” means the number of accounts that received impressions of the treatment videos.
  • “Shown Invite Ad / Invitees” means the total number of Facebook accounts that received at least one impression of the Invite Ad shown in Video 2 above. We refer to these as the “Invitees.” In social media advertising this is sometimes referred to as “unique impressions.”
  • “Clicked on Ad / Clickers” means the number of unique accounts that clicked on the Invitation Ad, no matter how many times each account clicked. If an account clicks the Invitation Ad more than once, that is considered only 1 “Clicker.”
  • “Click-thru rate” in Table 1 means, in this context, unique click-through rate, or the rate at which a unique account clicked at least once on the Invite Ad. Mathematically it is “Clickers” divided by “Invitees”.
  • “Viewed List of Options to Take Action” is synonymous with “Clicked on Ad” above. Once a subject clicked on the Invitation Ad, it was presented with the list of options to take action shown in Figure 3 above.
  • “Clicked to Take an Action” means the number of unique accounts that clicked on any of the links in the list of actions shown in Figure 3. If an account clicked on more than one action link, that is considered only one Action.
  • “Click rate” in Table 2 means, of the unique accounts that reached the action list, the rate at which they clicked on at least one action. Mathematically it is “Clicked to Take an Action” divided by “Viewed List of Options to Take Action.”
  • “Overall Action rate” means the rate at which a member of the group in that row both clicked on the Invitation Ad and then also clicked on an Action. Mathematically it is “Clicked to Take an Action” divided by “Invitees.”
  • “% change to baseline (Control) behavior” means, assuming the Control group represents the pre-existing (“baseline”) behavior of the overall audience, by how much did that behavior change after treatment, as measured in the Test group. Mathematically it is (the Test rate minus the Control rate) all divided by the Control rate. For example, for a given behavior, if the Control group rate were 1% and the Test group rate were 1.5%, the “% change to baseline” would be (1.5% – 1.0%)/1.0% = 50% increase in that behavior.
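
For clarity, the relationships among these metrics can be summarized in a short sketch. The example numbers simply mirror the illustrative 1.0% / 1.5% figures in the definition above; they are not trial results.

```python
# Minimal sketch of the rate definitions above, with illustrative numbers only.
def rates(invitees, clickers, took_action):
    click_thru_rate = clickers / invitees          # Table 1 "Click-thru rate"
    click_rate = took_action / clickers            # Table 2 "Click rate"
    overall_action_rate = took_action / invitees   # "Overall Action rate"
    return click_thru_rate, click_rate, overall_action_rate

def pct_change_to_baseline(test_rate, control_rate):
    # "% change to baseline (Control) behavior"
    return (test_rate - control_rate) / control_rate

# Illustrative example from the definition: Control 1.0%, Test 1.5%.
print(f"{pct_change_to_baseline(0.015, 0.010):.0%}")   # -> 50%
```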

Known limitations

Treatment assignment not fully random in Trial 1: Because of drawbacks inherent in Facebook’s internal A/B testing methodology, for Trial 1 we chose to assign subjects to Test and Control based upon birth month. We view this as very close to random, but it is not completely random, because the possibility exists that birth month may skew participant behavior. We conducted Trial 2 specifically to eliminate this limitation by using completely random assignment of subjects.

But as regards Trial 1:

  • We believe we mitigated that risk by using alternating birth months. That is, every birth month in each of Test and Control is bracketed on each side by a birth month in the opposite group (e.g., if February were in Test, both January and March were in Control, and that is true for every single month). Said differently, no member of Test or Control was born more than 16 days away from a member of the opposite group.
  • Additionally, we have previously tested this same five-video messaging treatment using a different assortment of birth months for treatment assignment and observed an even greater differential in click-through rate by Test over Control. (That test, INFLUENCE 1.0, measured sentiment instead of behavior, so the “ask” in the invitation was simply to take a poll and therefore easier to perform.)

In sum, the data we have thus far consistently suggest a positive influence of the messaging treatment.

Right-leaning audience: As referenced above, the targeted congressional districts were deliberately right-leaning. It is conceivable that left-leaning or more centrist-voting districts may respond differently to the messaging treatment. We will test left-leaning districts next.

Age skew: This trial used Meta’s Facebook platform, so the demographics skewed older, with the majority of subjects over the age of 55. It is conceivable that younger participants may respond differently to the treatment. We intend to expand into Instagram and other platforms in future trials to diversify age demographics. Note, however, that it may be that these older demographics are the most intransigent in their views and therefore it is promising that the campaign was successful with them.

A related age skew occurred in Trial 2: the Control group had a higher proportion of subjects over the age of 65 than did Test. However, the over 65 age group had the highest engagement rate of all age groups, so this bias favors Control. Additionally, within that age group, Test had 5.2% higher click-through than Control and indeed Test had higher or equal click-through than Control in every single age group of Trial 2. Age data is not available for Trial 1.

Skews associated with online-only audience: There are demographic and psychographic skews associated with the target pool being online-only. We do not believe these significantly invalidate the directional conclusions of the results.


ACKNOWLEDGEMENTS

e.pluribus.US thanks the following for advice, teachings, guidance, perspective and inspiration in the development and execution of this project.

Trial design: feedback on proposed concept, what to test, best practices on how to test, importance of RCT, testing behavior instead of sentiment, etc.

Critique/suggestions on final report

Social media marketing counsel

Use of their organizations for test actions

Editing & “test-dummy” input

Field implementation on developmental predecessor project (Project LISTEN)


(Just for fun) Things we learned by horribly screwing up

Sometimes it’s fun to just admit you screwed up. We thought you might enjoy stories of our occasional mis-steps in building this campaign. All of these were important learning experiences, of course. But at the time they didn’t feel like it.

  • It turns out … Anything times zero is zero. (who knew?) We ran a full campaign in a New Hampshire district that had a Cook Score of zero. Meaning, they literally are already as centrist as centrist can be; no amount of brilliant campaigning can make a Cook Score of zero more zero than … zero. Learning: Our system did, indeed, measure that we had not “moved their needle,” so the silver lining was that our measurement methods apparently work.

  • Nearly everyone will agree partisanship is bad, just … not bad enough to do something about. Early in our efforts we were encouraged by how many people enthusiastically agreed with us that partisanship is an issue that someone should work on. We were then shortly deflated to learn that “someone” was not any of them. We learned this one quickly because our backgrounds are in start-ups, wherein you learn that prospects will rave about your brilliant new product until they actually have to pay for it. It’s then that you have to learn how to successfully sell it to them. A critical learning was that if you want people to get involved in reducing partisanship, you have to link it to an issue they already passionately prioritize working on.

  • Everyone believes in compromising, just not if they have to, well, compromise. If you ask people, “should politicians work together to get things done,” nearly 80% will say “Yes.” If you ask them “should politicians compromise on our important issues to get things done,” nearly 80% will say no. Not kidding, it’s exactly that, believe us, we’ve asked. A lot. (See our post about this exact issue.) This guided our thinking toward figuring out how to convey to people that they already know, from personal experience, that nothing is accomplished with opponents without some form of give-and-take (or … death-match fights).

  • Facebook A/B testing is a PAIN IN THE ASS. SOOO much time, effort and money wasted learning how to wrestle Facebook into a clean, fair, balanced A/B test. That’s all we will say without crying. Other than that we did eventually figure it out.

  • Ethnicity matters. It turns out African Americans don’t respond enthusiastically to Caucasian characters in social media ads. Who woulda guessed? We’d have fired the person who failed to foresee that, but then we wouldn’t have had a leader.

  • There is a “sweet spot” length in political videos on social media and it’s not 26 seconds. We tend to be long-winded. (Hadn’t noticed?) But truly, how are you supposed to get your point across in 3-5 seconds? It turns out you can. Or at least we did. (You probably wish we’d gotten this report across in 3-5 seconds.)

  • Explicitly telling people “not to click” within an ad makes them more likely to click. Even at the 26th second. Not kidding. We “has the datas.” Alas, even this, ultimately, has a solution.

