Google Facts

Year

2022

Role

User Experience
User Research
Interface
Branding
Illustration

Deliverables

Product Design

Overview

Google Facts is an extension of the Google Suite that helps people find and read credible information. I designed this project during my senior year at Cornish College of the Arts.

Launch video

Mobile Prototype

View Prototype

Desktop and Tablet Prototype

View Prototype

Context

This is my thesis project for my BFA in design at Cornish College of the Arts. I was asked to research a problem and design a solution to exhibit at Cornish's 2022 BFA show. We had just under 15 weeks to deliver our designs.

Concept

I chose to research and develop a product around misinformation because it threatens public health. My interest grew out of my own experience with misinformation about COVID-19 vaccines: false and misleading information shaped the public's adoption of the vaccine. It was clear that people needed help finding credible information.

So, who are we looking for?

My users are politically moderate, generally informed, and seek out answers independently. They engage with conventional news as well as other channels, like social media. They value brand recognition, data, and unbiased reporting.

What do they know?

My interviews dug into the participants’ experiences with misinformation and their opinions about it. I wanted to answer a couple of fundamental questions:

"

How do users engage with media literacy principles?

"

How do users interact with media?

Here's what I learned.

Key insights

  1. Users do not actively analyze search results for validity.

  2. Users rely on bias to determine a source's credibility and validity.

  3. Users say they don’t like being told what to think, but read opinionated and biased news nonetheless.

User behavior

My interviewees all followed the same pattern while researching:

1. They filter the search results...
2. Then read the content...
3. And sometimes analyze for quality.

This information was the key to determining the architecture of my user flows.

Pain Points

Analyzing content for credibility is difficult.

Users value investigating their media, but it takes too much time and effort for them to actually do so.

Fact-heavy information is unapproachable.

Users tend to skim it instead of reading deeply because it can be time-consuming and outside their expertise.

Information is difficult to retain.

Users struggle to remember more than a couple pieces of information at a time because articles are difficult to read, too long, or uninteresting.

Synthesis

What does success look like?

Overall, the design's success should be measured by the time and effort users need to find and read a diverse set of content. Success should also mean easier access to content metadata, like authorship and funding.

Characterizing Success

Transparent

A successful design will present its users with the information necessary to understand why and how a piece of content exists.

Efficient

A successful design will speed up users’ processing and expand how much information they can hold in mind while researching.

Standardized

A successful design will help users sort information consistently.

Design

Sketching

My wireframes explored different ways to organize the three stages of research (Filtering, Reading, and Analyzing).

After testing at low fidelity, I realized that the hierarchy of my designs didn't match users' research behavior, and there wasn't a logical flow of information.

My final design was the simplest: the visual flow carries straight across a series of columns, following the research flow from high-level to low-level information.

1. Ranked results
2. Content Summaries
3. Metadata and Info

Why Google?

My solution needed to live as close to users’ research habits as possible. Google is the standard when it comes to delivering information, and its users trust it to provide relevant content to their queries. Why couldn’t Google also deliver a solution that filtered credible information and encouraged media literacy?

Let's talk features

Ranking

Media literacy is nuanced, and users need help filtering for credibility. So, the solution ranks content. It values quantity of sources, a diverse range of sources, multiple types of sources (primary and secondary), neutral language, and expertise in authorship.
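
To make those criteria concrete, here's a rough sketch (in Python) of how they could be combined into a single score. The signal names and weights are assumptions I'm using purely for illustration, not an actual scoring model.

    # Hypothetical sketch only: the signal names and weights below are my own
    # illustration of the criteria above, not an actual ranking model.
    from dataclasses import dataclass

    @dataclass
    class ContentSignals:
        source_count: int        # how many sources the piece cites
        source_diversity: float  # 0-1, range of outlets and domains cited
        has_primary: bool        # cites primary sources
        has_secondary: bool      # cites secondary sources
        neutral_language: float  # 0-1, higher means more neutral wording
        author_expertise: float  # 0-1, author credentials for the topic

    def credibility_score(s: ContentSignals) -> float:
        """Combine the signals into one score; higher ranks higher."""
        return (min(s.source_count, 10) / 10 * 0.25            # quantity of sources
                + s.source_diversity * 0.20                     # diversity of sources
                + (s.has_primary + s.has_secondary) / 2 * 0.15  # source types
                + s.neutral_language * 0.20                     # neutral language
                + s.author_expertise * 0.20)                    # authorship expertise

    # Results would then be sorted by this score before being displayed.
    results = {"Article A": ContentSignals(8, 0.9, True, True, 0.8, 0.9),
               "Article B": ContentSignals(2, 0.3, False, True, 0.4, 0.5)}
    ranked = sorted(results, key=lambda title: credibility_score(results[title]),
                    reverse=True)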

Summarization

Users need help with comprehension so they can hold onto more information at once. The product generates a series of bite-sized summaries to speed up high-level comprehension. Then, users can dive into low-level understanding by toggling each summary to view the original portion of text.
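
As a rough illustration of the idea behind the toggle, each summary can simply be paired with the passage it was generated from. The names below are assumptions for the sketch, not the product's real data model.

    # Hypothetical sketch only: each generated summary is paired with the
    # passage it was drawn from, so a toggle can swap between the two views.
    from dataclasses import dataclass

    @dataclass
    class SummaryBlock:
        summary: str        # bite-sized, high-level summary shown by default
        original_text: str  # the portion of the article the summary came from
        expanded: bool = False

        def toggle(self) -> str:
            """Flip the block and return the text the reader now sees."""
            self.expanded = not self.expanded
            return self.original_text if self.expanded else self.summary

    block = SummaryBlock(
        summary="Vaccines sharply reduced hospitalizations.",
        original_text="The full passage of the article being summarized...")
    print(block.toggle())  # tapping the summary reveals the source passage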

Metadata

Users need a better way to learn about the quality and goals of their content. The data used to rank each result is displayed, so users have transparent access to information about what they're reading.

Closing Thoughts

Throughout this process, I constantly asked myself, "What is the effect of being 'wrong' or 'right'?" But perhaps a more important line of inquiry questions our intent. Instead of asking ourselves to sort our knowledge into categories loaded with morality and ethics, we might ask different questions, like "Does the information I spread contribute toward a positive impact for others?" or "Does this information have the ability to affect others in a way that I might not experience?"

When we're held accountable to be aware of ourselves, our information, and others, we have the potential to do good.
