
Rachel in Industry

Portfolio Overview


Mixed-methods research project developing a user-centered, data-driven app strategy for a globally recognized fitness brand.


Mixed-methods benchmarking assessment of a SaaS platform for highly technical users.

Ashley Furniture Industries

Multivariate A/B test on the user experience of an e-commerce website to improve engagement on product detail pages.


I am currently a Sr. Product Researcher at WillowTree. You can learn more about WillowTree here. I help companies understand their customers, make strategic decisions, and implement best-in-class digital experiences.

Example #1: Mixed-Methods Research Project

Fitness App Strategy & Market Viability

Blonde woman dancing

Stakeholders: Large, global, household-name fitness brand (C-Suite, VPs, Directors)

Objective: To explore the viability of and strategy for the development of a direct-to-consumer app. 

Research Activities (over 14 weeks): 8 stakeholder interviews; in-depth, 45-minute semi-structured interviews with 10 consumers (6 who had prior experience with the brand, 4 who regularly engage with the brand); a 750-person survey; a 400-person experimental survey; and in-depth, 30-minute semi-structured interviews with 10 instructors.

Team: Strategy, Design, Growth Marketing, Engineering, Product Management

10 Consumer Interviews

  1. Current at-home and in-person fitness behaviors

  2. Current brand relationship

  3. Spending habits on fitness


Timeline: Three days of planning including a purposive recruitment process. One week for interviews.

Output: Slide deck presented to twenty stakeholders (IC to C-Suite).


Consumer Survey (N = 750)

  1. Current app-based fitness behaviors

  2. Expected vs. nice-to-have features for a desired app

  3. Pricing model for expected feature set

  4. Demographic differences in expectations

Timeline: One week for survey planning and purposive recruitment. Four days for data collection and preparation of the data analysis file in R.

Output: Slide deck presented to twenty stakeholders (IC to C-Suite)

Experimental Survey (N = 400)

  1. Validate a price point based on conceptual app designs

  2. Determine whether perceptions of the app concepts differed based on the price of the app

  3. Identify most compelling screens


Timeline: One week for survey planning and purposive recruitment. Four days for data collection and preparation of the data analysis file in R. Participants were included in the analysis only if they passed the experimental manipulation check.
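As a sketch of the exclusion step described above, assuming a hypothetical survey export where each response records the price a participant was shown and the price they later recalled (the field names and values are illustrative, not the actual study schema):

```python
# Minimal sketch of excluding participants who fail a manipulation check.
# Field names and values are illustrative, not the actual survey export.
responses = [
    {"id": 1, "shown_price": 9.99, "recalled_price": 9.99, "rating": 4},
    {"id": 2, "shown_price": 19.99, "recalled_price": 9.99, "rating": 5},
    {"id": 3, "shown_price": 9.99, "recalled_price": 9.99, "rating": 2},
]

# Keep only participants who correctly recalled the price they were shown.
analyzed = [r for r in responses if r["recalled_price"] == r["shown_price"]]
print(len(analyzed))  # -> 2
```

Filtering on the check before analysis ensures that ratings come only from participants who actually registered the experimental manipulation.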

Output: Slide deck presented to twenty stakeholders (IC to C-Suite)

10 Instructor Interviews

  1. Identify enthusiasm and concern from fitness instructors about the app

  2. Explore incentive structures that would appeal to those instructors

  3. Gain insights about their communities and students

  4. Understand their relationship with the brand

Timeline: One week of planning including a purposive recruitment process. One week for interviews.

Output: Slide deck presented to twenty stakeholders (IC to C-Suite).

Project Outcomes: Through this research, we established the market viability of the app and a price range that would appeal to consumers while remaining competitive. Based on my findings, we delivered recommendations for the minimum viable product at launch, the target audience, a messaging strategy that would resonate with those consumers, and a plan for engaging the instructor base. This work moved the brand from ambivalence about developing the app to fully funding and investing in it.


I was a Sr. UX Researcher at LiveRamp from June 2022 - November 2022. You can learn more about LiveRamp here. I was an independent contributor, conducting research using a variety of quantitative and qualitative research methods: quantitative benchmarking, moderated and unmoderated usability tests, and interviews. I also served as a consultant for UX Designers and Product Managers who wanted to conduct their own research with users of the Safe Haven platform.

Example #2: Mixed-Methods Research Project

Screenshot of LiveRamp's Safe Haven Platform

Safe Haven Benchmarking Assessment

Stakeholders: Product Team

Objective: To explore how ease-of-use metrics, time to task completion, and user-noted pain points changed year over year in the Safe Haven product.

Research Activities: 10 in-depth, 60-minute usability interviews; a post-task survey; product analytics


  • I modified a pre-existing benchmarking plan to account for product changes that had occurred within the prior year. I recruited participants who fell into particular data-informed user personas based on their engagement with the tool and their job roles and responsibilities.

  • Participants walked through a list of tasks that individuals with similar daily responsibilities in the platform should be able to accomplish, sharing their thoughts aloud as they navigated each one. Each task was timed to assess whether modifications to the product had improved ease of use. After the task-completion activity, participants completed the System Usability Scale (SUS) survey to see whether overall perceptions of product usability had changed year over year.


  • I compiled a deliverable summarizing year-over-year task-score changes and highlighting whether each experience was unchanged, improved, or degraded. I wove in thematic assessments of the insights participants shared during the study to ensure the user voice was present throughout the presentation.
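The SUS measure mentioned above has a standard scoring procedure: each of ten items is answered on a 1-5 scale; odd (positively worded) items contribute (response - 1), even (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that scoring:

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score (0-100)
    from ten 1-5 Likert responses, ordered item 1 through 10."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded; even items are negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral responses (3s) yield the scale midpoint.
print(sus_score([3] * 10))  # -> 50.0
```

Because the same scoring is applied each year, the resulting 0-100 scores can be compared directly in a year-over-year benchmark.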

Ashley Furniture Industries

In July of 2021, I was promoted to Product Manager of Customer and Product Analytics. I was responsible for understanding fluctuations in customer behaviors to provide data-driven insights for product prioritization, feature effectiveness, and KPI movement. I democratized data for partners within the Product team and across the broader E-Commerce organization to make data and insights accessible. I was the product owner for our GA/GTM development pod. I interfaced with business stakeholders to understand tagging and third-party vendor integration requests, then prioritized tagging initiatives based on their relative business value.

I was promoted to UX Manager of eCommerce Conversion Rate Optimization Data at Ashley Furniture in January of 2021. I was responsible for analyzing UX data streams and generating actionable insights for the business. I presented findings from UX data initiatives, including the results of A/B testing, usability testing, competitive analyses, and session recording analyses, among others, to business stakeholders in Marketing, Merchandising, Business Intelligence, IT, Operations, and the C-Suite.

I started as a data analyst on the testing, optimization, and conversion team, from December of 2019 through January of 2021. In this role, I generated novel test ideas to improve usability while simultaneously increasing revenue. I also helped guide front-end and full-stack developers in conceptualizing and building test metrics and variations. In addition, I created a system for expeditiously delivering comprehensive test reports to stakeholders and business executives. The written report repository now holds over 110 test reports in which I analyzed data and derived results, and I have orally presented over 45 test results to stakeholders across the business since September of 2020.

Example #3: A/B Testing

Mattress Configurator Multivariate Test

Stakeholders: Merchandising and UX teams

Objective: To identify which button display for product options was most effective for customers on mattress product pages.

Research Activities: Two Multivariate A/B tests

Team: UX Design, Research, Development (Front End & Full Stack)

Screenshot of Ashley Furniture Mattress Page

Process: This test employed a 2 x 2 x 2 multivariate design through which we explored the interplay between displaying mattress-size selection options as text versus an icon, with versus without a radio selector, and with versus without a default selection. Every user was exposed to one condition from each variable (e.g., text, radio selector, default selection). The test was run separately on mobile and desktop devices to account for differences in the volume of traffic coming from each device type, and it ran only on mattress product pages.

Data Analysis: I looked at both the main effects for each variable as well as the interaction effects between variables to see which variable and which combination of variables were driving effects. I then assessed differences between mobile and desktop users to appropriately make recommendations for each user group.
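To illustrate the kind of factorial breakdown described above, main effects and two-way interactions in a 2 x 2 x 2 design can be computed from cell means. The rates below are made up for illustration, not the actual test data:

```python
# Hypothetical add-to-cart rates for a 2 x 2 x 2 design, keyed by
# (icon_vs_text, radio_selector, default_selected); values are
# illustrative only, not results from the actual test.
rates = {
    (0, 0, 0): 0.040, (0, 0, 1): 0.036, (0, 1, 0): 0.052, (0, 1, 1): 0.047,
    (1, 0, 0): 0.043, (1, 0, 1): 0.038, (1, 1, 0): 0.055, (1, 1, 1): 0.049,
}

def main_effect(factor):
    """Mean rate at factor level 1 minus mean rate at level 0."""
    hi = [r for cell, r in rates.items() if cell[factor] == 1]
    lo = [r for cell, r in rates.items() if cell[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction(f1, f2):
    """Two-way interaction: how the effect of f1 changes across levels of f2."""
    def effect_of_f1_at(level):
        hi = [r for c, r in rates.items() if c[f1] == 1 and c[f2] == level]
        lo = [r for c, r in rates.items() if c[f1] == 0 and c[f2] == level]
        return sum(hi) / len(hi) - sum(lo) / len(lo)
    return effect_of_f1_at(1) - effect_of_f1_at(0)

print(round(main_effect(1), 4))     # radio-selector main effect
print(round(interaction(1, 2), 4))  # radio x default interaction
```

In practice the analysis would run on per-user binary outcomes with significance testing rather than raw cell means, but the decomposition into main and interaction effects is the same.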

Outcomes & Recommendations:

  • Add-to-cart rates were most strongly affected by having no default selection on product option buttons, for both mobile and non-mobile users.

  • Radio buttons significantly impacted add-to-cart rates for both mobile and non-mobile users.

  • Finally, the image and text buttons were differentially effective for mobile and non-mobile users.

  • From these results, I made separate recommendations for the mobile and non-mobile experiences, along with an iterative recommendation to assess the impact of default selection on product options across the website (beyond mattress detail pages) in future tests.

Image of mobile display for mattress size options using radio buttons and text

Mobile Option Selections (above)
Desktop Option Selection (below)

Screenshot of desktop mattress selection options with icons and text

Professional Development

Wanek School of Business

Ashley Leadership Foundations 1 (Dec 2021)

SiteSpect Case Study

My manager, Matt Sparks, and I were invited to discuss the testing program at Ashley after a very successful year in 2020. Here I provide tips for designing more rigorous experiments and conducting appropriate analysis in A/B testing.
