Maater

Crowdsourcing the facts.

THIRD PLACE: Computer-Human Interaction 2013 Student Design Competition! 

Maater high-fidelity design (expanded view)

Role

Contextual Researcher
Usability Tester
Interaction Designer
Heuristic Evaluator
CHI Presenter
Writer

Team

Mark Baldwin
Raymond Liaw
Ari Zilnik

Process

Problem Validation
Contextual Inquiry
Personas
Parallel Prototyping 
A/B Testing
Usability Testing
Solution Validation

URL

http://maater.com

 

Prompt

From CHI: "Empowering the Crowd: Changing Perspectives Through Collaboration. This year's theme, 'Changing Perspectives,' focuses the design challenge on the importance of perception and knowledge as a goal and value of design."

Description

Maater aims to correct inaccuracies in online news reporting by leveraging the knowledge of the crowd. The internet has enabled misinformation to spread quickly, but a tool that lets a committed community of experts and informed novices annotate articles and cite sources can push back against inaccurate reporting. Maater incorporates user-generated in-line commentary and corrections, vetted by other readers through a ranking system, which gives corrections greater prominence than news outlets typically afford them.

Contextual Research

To better understand how people currently read online news, we conducted contextual interviews with three online-news readers, whom we recruited through personal connections in Pittsburgh and Chicago. We wanted to learn how they decide what to read, whether they seek out additional information, how they read (skimming, clicking related links, reading deeply), and how they engage with existing comment systems.

Contextual Interview

Affinity diagram of contextual inquiry data

All three readers expressed distrust of the comments sections on the news sites they read. They felt that while some commenters provide insightful commentary, useful additional information, and corrections, those comments were hard to find amid the noise. One reader said he reads comments only on science articles, because experts sometimes chime in with further explanations of the concepts. All participants attempted to read comments but gave up when the first few contained no helpful exposition of the article.

Our observations made it clear that a space dedicated to community-contributed factual corrections, balanced counterpoints, and additional expertise would benefit our readers. Time-crunched but interested in the world, they wanted easy-to-find, current, accurate information, and they expressed frustration at being unable to get it from news sources.

Iterative Design and Testing

Participant completing usability think aloud

Three rounds of usability testing informed our design process. The first two tests focused on locating and reading the annotations attached to sentences in an article; in each, we ran within-subject A/B tests of two interface variants. We identified several significant usability problems. First, users found moving between sentences in the article and the modal overlay a "disconnected" experience; they expected to get information about a sentence within the article itself. Second, testing demonstrated that calling notes "annotations" or "comments" confused users: they expected information submitted to Maater to appear at the end of the article, and they conflated our comments with the general comments generated on the news site.

In the final round, two users tested finding, reading, and creating notes. Results were mixed. Both felt the nomenclature could be clearer, in particular the toggle button labeled "show" and "hide," though we believe that confusion stemmed from the phrasing of the task rather than the interface. Both also struggled with the up- and down-vote arrows, which they found confusing. One user felt the overall interface was easy to use, while the other wanted fewer clicks. We believe further testing and iteration can resolve these remaining issues.

Final design in expanded mode

Design Validation and Future Work  

To test whether Maater could influence readers' perceptions of online news, we conducted an A/B test on Mechanical Turk. Participants were randomly assigned either the original version of a recent article from CNN.com or a version with the Maater overlay, in which we had created artificial notes highlighting inaccuracies. Participants judged the article by the same criteria used in the problem-validation survey. Maater users questioned the quality of the writing and research more often. Though subtle, the differences suggest that Maater has an impact, and we plan further testing of the concept to better understand how Maater changes perceptions of online news.
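For readers curious how rating differences like these can be compared between conditions, here is an illustrative sketch. The data and the choice of method (a Mann-Whitney U test, a common nonparametric comparison for ordinal Likert-style ratings) are assumptions for the example, not the study's actual analysis or numbers.

```python
# Illustrative only: hypothetical trustworthiness ratings (1-5 Likert scale)
# for the two Mechanical Turk conditions, compared with a hand-rolled
# Mann-Whitney U statistic (ties receive average ranks).
from statistics import mean

def mann_whitney_u(a, b):
    """Return the U statistic for sample a versus sample b."""
    combined = sorted(a + b)
    # Assign each distinct value the average of the ranks it occupies.
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum_a = sum(rank_of[v] for v in a)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical ratings (not the study's data).
control = [4, 5, 4, 3, 5, 4, 4, 3, 5, 4]  # original article
maater  = [3, 3, 2, 4, 3, 2, 3, 3, 4, 2]  # article with Maater overlay

print(f"mean control = {mean(control):.1f}, mean Maater = {mean(maater):.1f}")
print(f"U = {mann_whitney_u(control, maater)}")
```

A large U relative to its maximum (the product of the two sample sizes) indicates that the control condition was ranked as more trustworthy overall; a significance level would then be read from U tables or a normal approximation.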

 

Ratings of trustworthiness of original article

Ratings of trustworthiness of annotated article