Implementing a Model Comparison Feature at Consumer Reports

Dates: Fall 2009 - Spring 2010

My Role: UX Lead

Disciplines involved: Information Architecture, Interaction Design, User Research & Testing, UX Strategy, UX Management & Leadership

Deliverables: Presentations, Requirements, Sketches, Wireframes

Summary

This case study tells the story of how I served as UX Lead on a project to implement the very first version of a model compare feature on Consumer Reports Online (CRO). I was most active from the very beginning of the project through the end of the wireframe stage.

This project is one of my favorite case studies because:

  • The project had a complicated set of problems that I had to understand and navigate throughout the project.

  • I led a small multi-disciplinary team, and was given the freedom to experiment with new project approaches, many of which were successful and implemented at Consumer Reports as standard practices. These included low-fidelity sketching, creating project war rooms, and daily scrums.

  • I was able to incorporate user testing into the design process.

THE PROBLEM

CRO is a product-oriented site. Each year they test and provide ratings for thousands of products across a wide spectrum: cars, home and garden, and electronics, to name a few. The results of these tests are shown both in ratings charts that compare groups of products and on individual model pages.

One thing CRO was lacking was a model compare feature that would let users select models and view them side by side in a comparison chart. This was a standard feature on product-focused websites, but CRO didn’t have one. CRO subscribers were demanding it, and CRO leadership knew we needed to have one. However, several issues were preventing CRO from simply plugging in a model compare feature:

Ratings issues: Due to the way products were tested and rated, it was difficult to compare different types of products against one another. The technical group was concerned that users would make the wrong inferences, getting Consumer Reports into hot water. On the other hand, others in leadership roles felt that users wouldn’t care, and had likely been making those inferences for years.

Taxonomy: There were a host of taxonomy-related issues to sort through, especially when it came to features and specifications.

Site-integration design issues: Model compare could be integrated throughout the site in several areas: ratings charts, model pages, search, and news articles, to name a few. These were on different platforms, and each had its own unique set of needs.

Complicated interaction design issues to solve: There was a wide range of issues to work through, such as how users would select items to compare as they traveled through the site, and how to handle taxonomy differences when comparing across product types.

Eventually, leadership decided that we had to create a project to get model compare on the site once and for all. I was asked to act as UX Lead on the project and to lead it from the early discovery phases through launch.

THE SOLUTION

DISCOVERY WORK

My first task was to go off and do some initial discovery work. This included thinking about:

  • How to integrate model compare in a way IT could support

  • How to test different options with users to see which worked best

  • Different integration points for model compare, and how to best approach those

  • Ratings charts, model pages, search and other areas where models are presented

  • What the first iteration should include

  • Were we overthinking this? What was the minimum we needed for an MVP, and which features and functionality could be added at a later date?

Competitive analysis

I began by surveying how other sites at the time were handling model compare. I focused on where competitors integrated model compare and how they handled the issues we were struggling with.

Discovery and presentations to leadership

Based on this analysis, I made recommendations on an approach and presented them to leadership at Consumer Reports. I wanted to make sure that everyone understood the issues we were going to have to deal with in the coming months.

After presenting to leadership, I was given the green light to start the project.

 

Creating a project approach

Creating a multi-disciplinary team

I designated a core team consisting of myself as UX Lead, a junior UX Architect, a Visual Designer, and the lead from IT. Including somebody from IT as a full-time team member was new to Consumer Reports. My previous experience had taught me that it was preferable to include at least one person from the development team as a core team member on projects like this that involved so many tech and data-related issues.

Decision to use sketching in the design process

I had recently attended a workshop conducted by Adaptive Path advocating sketching as part of the design process. I devoured techniques on how to use sketching sessions to help a team brainstorm ideas, bringing us to a point where we would be ready to settle on a design and recommend imagery and visual iconography. I chose sketching as a direct path toward cohesion, shared vision, and, eventually, full team buy-in.

My goals for using the sketching techniques I learned were:

  • To get the team to the point where each member had enough information to go off and create their own project estimate for the work ahead.

  • Rapid idea generation and feedback from stakeholders.

Decision to have daily scrum meetings

I wanted our core team to have scrum meetings every morning. I was familiar with scrum meetings, but this was new to Consumer Reports.

Decision to hijack a “war room”

I knew we were going to need a dedicated space to have daily meetings and to put our work up on the walls. This room would also be used to conduct reviews with stakeholders by walking them through the work we had on the walls. I worked with our building’s operations manager to secure a vacant office.

Design phase 1: sketching

Starting with words

I directed the team to conduct a brainstorming session in which we would break the huge problem of model compare at CRO into smaller, manageable bits. We started with words—ideas, themes, concepts—that came into our heads when we thought about the project, and then iterated.

The results of an initial team brainstorm

After the initial brainstorm, we gave each word its own sheet of paper and posted the sheets on the walls of an empty room. We then brainstormed around each word, posting post-it notes with thoughts, initial sketches, and images around each one.

Moving on to sketching

Once we achieved a brain-dump critical mass, I directed the team to create 6-up sketches (six quick thumbnail sketches on a single sheet), following a technique I had learned at the workshop.

We eventually moved on to creating 1-up sketches, which were full-page sketches of individual pages.

While we were busy hashing out ideas on the wall, we held various formal and informal reviews with key stakeholders and leadership. The war room setup lent itself well to impromptu meetings, since we didn’t have to worry about reserving a conference room. Over the course of several weeks, features and site concepts started to move to a “nice-to-have” section on the wall as a result of these meetings. For example, a proposed site feature we were exploring could end up in the “nice-to-have” section because it was too complicated and risked stalling the project.

User testing during the sketching phase

While we were iterating on design ideas and functionality, we kept a dedicated space on the wall for any research questions we had. These were incorporated into a plan for a usability test we wanted to conduct on various designs.

A major question was how users would interpret comparisons of ratings between different types of products. Did users interpret the data correctly? Did they even care? Would it just confuse them if we explained everything?

The test was run by the market research team at Consumer Reports. They conducted one-on-one interviews with participants who were current CR subscribers (either the magazine or the website), and showed them several variations of the layout.

Images from the user test. These were designed by my team’s IA under my supervision.

Findings from the test

The major finding was that users expected to have the ratings for all product types on one chart, and got confused when we forced a separation. When it was explained to them, only a few people understood why that was the case. Based on the results, we decided to allow ratings for the various product types to be included on the same chart. However, we would add a "more info" icon on the chart that would explain, for users who wanted that info, that our ratings couldn't be compared directly across types.
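To make that decision concrete, here is a minimal sketch of the logic, written for illustration only; it is not code from the actual project, and all names and values are hypothetical. It shows a comparison chart keeping models from different product types together while attaching the explanatory note that would sit behind the "more info" icon:

  // Hypothetical illustration (TypeScript): when selected models span more than
  // one product type, keep them on one chart but attach the explanatory note.

  interface Model {
    id: string;
    name: string;
    productType: string;   // e.g. "Washing Machines", "Dryers"
    overallScore: number;  // overall rating shown in the chart
  }

  interface ComparisonChart {
    models: Model[];
    // Shown behind a "more info" icon when models span product types.
    crossTypeNote?: string;
  }

  function buildComparisonChart(selected: Model[]): ComparisonChart {
    const types = new Set(selected.map((m) => m.productType));
    const chart: ComparisonChart = { models: selected };

    if (types.size > 1) {
      // Per the test findings: don't split the chart apart, just explain.
      chart.crossTypeNote =
        "Ratings are based on type-specific tests and cannot be compared " +
        "directly across product types.";
    }
    return chart;
  }

  // Example: two washers and a dryer on one chart triggers the note.
  const chart = buildComparisonChart([
    { id: "w1", name: "Washer A", productType: "Washing Machines", overallScore: 82 },
    { id: "w2", name: "Washer B", productType: "Washing Machines", overallScore: 76 },
    { id: "d1", name: "Dryer C", productType: "Dryers", overallScore: 71 },
  ]);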

Update to leadership

After a certain point, we had enough information to provide an update to CR leadership on how we wanted to proceed. I gave a presentation that covered the following:

  • Key decisions: These included all-encompassing debates with passionate arguments on both sides, as well as technical issues we had to resolve in order to proceed.

  • Decisions on where we should integrate the model compare feature on the site during this project, and which areas could wait until later

  • Decisions on which features were must-have vs. nice-to-have

  • Sketches of key screens

Here are sketches of key screens from the presentation. The sketches were the work of our Creative Director with my input:


Design phase 2: fleshing out the details

IA requirements

At this point, our small team split up into their respective disciplines. My task was to flesh out the details of the interaction design, with the IA reporting to me. I created an estimate of the IA work, which was presented to the project team.


Wireframes

After the test was complete, we moved into a more formal wireframe phase in order to flesh out the details. The wireframes were created by my team’s IA under my supervision.

Below is a sample of the wireframes that were delivered. You can view the entire set by downloading the PDF, which is also easier for zooming in on the large images.

Visual Design, User Testing and Development

Once the wireframes were finalized, the visual designer and our Creative Director refined the designs further. Once the designs were completed, IT built out the feature. A round of user testing was done to ensure there weren’t any critical usability errors. I was involved at this point to assist the development team with any UX issues that arose, as well as with QA.

OUTCOME

Model compare was launched with tremendous success. It’s been an integral part of the site ever since.

My change to the traditional waterfall process, which had IT involved early and throughout the IA phase, proved to be the kind of reversal needed to inject a new perspective into a complex environment. One IT manager noted that it was the smoothest build they had done for a project of this scope. As a result, the organization started to move further toward this “reversed waterfall” approach instead of the pure waterfall that had been used time and time before.

The war room idea was adopted on several projects after this one. Unfortunately, this caused headaches for the building operations manager, as everyone was soon requesting war rooms for their own projects. Eventually, this evolved into dedicated project team rooms when the organization moved to Agile.