Faceted Search Testing (Johns Hopkins Medicine)

The faculty database on hopkinsmedicine.org is one of the most valuable features of the site. The database is structured around many facets (e.g., department), each of which can hold multiple values per faculty member (department 1, department 2, and so on), but the site is not currently configured to offer this functionality on the frontend.
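As a concrete illustration, the sketch below models that structure: each record carries multiple values per facet, and filtering keeps only records matching every selected value. The names (FacultyMember, filterByFacets) and sample departments are hypothetical, since the real schema is not shown here.

```typescript
// A minimal sketch of the multi-value facet structure described above.
// FacultyMember, filterByFacets, and the sample facet values are
// hypothetical; the real schema is not shown in the source.

interface FacultyMember {
  name: string;
  facets: Record<string, string[]>; // facet name -> multiple values
}

// Keep only faculty whose facets contain every selected value.
function filterByFacets(
  faculty: FacultyMember[],
  selected: Record<string, string[]>
): FacultyMember[] {
  return faculty.filter((member) =>
    Object.entries(selected).every(([facet, values]) =>
      values.every((v) => (member.facets[facet] ?? []).includes(v))
    )
  );
}

// Example: one faculty member can belong to more than one department.
const sample: FacultyMember[] = [
  { name: "Dr. A", facets: { department: ["Neurology", "Pediatrics"] } },
  { name: "Dr. B", facets: { department: ["Oncology"] } },
];
console.log(filterByFacets(sample, { department: ["Neurology"] })); // [Dr. A]
```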

Planning

What

Find the most successful presentation of search facets for the faculty directory on the Johns Hopkins Medicine website.

Key Performance Indicators

Successful solutions will result in:

  1. High scores on a perceived ease-of-use rating scale

  2. High success rates in pass/struggle/fail counterbalanced testing

    • Increased success rates in round two testing, after prototype design iterations

  3. Observed positive reactions during testing, in both actions and words

Overall Testing Method

Conduct two “Learn and Iterate” cycles, where we test wireframe prototypes with users as follows:

  • Type of user testing:

    • Informal, in-person on Johns Hopkins Medicine campus

    • Counterbalanced tests on the “Find a doctor” scenario (pass / struggle / fail) performed on both designs

      • Randomize task order to control for order effects (see the assignment sketch after this list)

  • Two testers: a proctor (administers the test) and an observer (takes notes)

  • Two designs (open facets and closed facets), each presented in two forms: mobile and desktop

  • Two rounds of testing:

    • Round One Test: Which is better? Open vs. Closed facets

    • Round Two Test: Make iterative improvements to the “better option” (from round one) based on feedback

  • Ask general questions to uncover whether the design worked as expected and, if issues arose, why; the observer records responses

  • Capture an “ease of use” measurement using a rating scale

  • Capture user preference for the closed vs. open faceted design; only one design will be carried into round two testing
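The counterbalancing step above can be made concrete with a small assignment sketch. It assumes the two designs are labeled “open” and “closed”; the Session and assignSessions names are hypothetical.

```typescript
// A minimal sketch of counterbalanced session assignment, assuming the
// two designs are labeled "open" and "closed"; Session and
// assignSessions are hypothetical names.

type Design = "open" | "closed";

interface Session {
  participant: number;
  order: [Design, Design]; // the sequence in which the designs are tested
}

// Alternate which design comes first so each ordering appears equally
// often, controlling for order effects in the pass/struggle/fail counts.
function assignSessions(participantCount: number): Session[] {
  const orders: [Design, Design][] = [
    ["open", "closed"],
    ["closed", "open"],
  ];
  return Array.from({ length: participantCount }, (_, i) => ({
    participant: i + 1,
    order: orders[i % 2],
  }));
}

// Example: six participants yield three sessions per ordering.
console.log(assignSessions(6));
```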

Round One Findings

Preference vs. Performance

  • Open and closed facets performed similarly

  • There was a strong preference for closed facets

Usability of Prototypes

Observed

  • Desktop versions appeared easier to use than mobile versions due to better affordance

  • Users appeared unsure about how to access and navigate the facets on mobile

Measured

  • Desktop “ease of use” rating was higher than mobile, on average

Users Trust Themselves

Observed

  • Users’ instinct is to provide more information in the search box before using facets

    • Their first action is always to scan the results to evaluate them

  • Search results affect the perceived ‘value’ of the search tool

Round Two Testing Method

  • Present users with closed facets only and allow them to expand facet groups as needed (a minimal sketch of this behavior follows the list)

  • Counterbalanced tests on the “Find a doctor” scenario (pass / struggle / fail) performed on both presentations (mobile and desktop)

    • Randomize task order to control for order effects

  • Ask general questions to uncover whether the design worked as expected and, if issues arose, why; the observer records responses

  • Capture an “ease of use” measurement using a rating scale

  • Prototype updates from round one:

    • Change language from “narrow your results” to “refine your results”

    • Improve affordance of “refine your results” call to action, particularly on mobile

    • Visually differentiate between the “Refine your results” title and the interactive facets
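Below is a minimal sketch of the closed-by-default behavior described above, using hypothetical names (FacetGroup, toggle, render) rather than the actual prototype markup.

```typescript
// A minimal sketch of the closed-by-default facet behavior tested in
// round two. FacetGroup, toggle, and render are hypothetical names; the
// actual prototype markup is not shown in the source.

interface FacetGroup {
  title: string; // e.g., "Department"
  values: string[];
  expanded: boolean; // closed by default; the user opens it as needed
}

function toggle(group: FacetGroup): FacetGroup {
  return { ...group, expanded: !group.expanded };
}

// Show only the title while collapsed, so facet values appear only
// after the user chooses to expand the group.
function render(group: FacetGroup): string {
  return group.expanded
    ? `${group.title}\n  ${group.values.join("\n  ")}`
    : `${group.title} (tap to expand)`;
}

const departments: FacetGroup = {
  title: "Department",
  values: ["Neurology", "Oncology", "Pediatrics"],
  expanded: false,
};
console.log(render(departments)); // "Department (tap to expand)"
console.log(render(toggle(departments))); // title plus the value list
```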

Round Two Findings

  • Both mobile and desktop presentations saw significantly improved results

  • Average “ease of use” score for the mobile presentation improved