01 · Context and Stakeholders
Eight business lines, one product brief, no existing digital infrastructure
I joined InfoBeans as Principal PM, embedded with Canyon Ranch's leadership. The project touched marketing, IT, sales, spa, health, and digital experience. Getting eight senior stakeholders to agree on what to build before discovery could start was the first real product challenge. I ran alignment sessions in the first two weeks and left with shared KPIs and a clear MVP definition before anything else began.
The engagement opened with a current-state assessment of Canyon Ranch's scheduling experience across people, process, and technology. That analysis became the basis for the strategic recommendation I took to senior leadership, and it set the scope before any design work was commissioned.
Kevin Coleman · VP, Consumer Insights
Molly Anderson · VP, Sales
Dustin Nabhan · VP, Health & Performance
Deirdre Strunk · VP, Spa, Fitness & Beauty
Liz Shupe · Director, Digital Experience
Jim Eastburn · Director, Transformational Experience
02 · Problem and Competitive Landscape
The market had moved to self-service booking. Canyon Ranch was still on paper.
I started with a competitive analysis of the digital wellness and hospitality booking landscape, benchmarking Canyon Ranch against Miraval, Pritikin, Red Mountain Resort, Marriott Bonvoy, ClassPass, and Mindbody. Every comparable brand had self-service mobile booking. Canyon Ranch was the only luxury wellness property at its scale still running on front-desk calls and paper booklets.
The gap wasn't just operational. Competitors with self-service booking captured impulse revenue that front-desk models structurally miss. A guest who decides at 9pm that they want a massage has no good option when the desk is closed. I brought this analysis to the CMO, CIO, and VP of Sales and recommended native mobile booking as the clearest near-term ROI on the table.
"I was going to book a personal trainer, but the line at the desk was too long. I walked away, and they lost $300. If I could've just tapped it on an app, it would've been done."
Guest interview, Tucson
Guests needed 6 or 7 rounds of back-and-forth with the front desk to finalize a schedule
No single view of the day, no self-service option. Every change meant another call or another trip to the front desk. Competitors had eliminated this friction years earlier.
Wellness Guides were spending 20 hours a week on admin
People hired to coach guests on their wellness goals were spending the majority of their day managing bookings. The current-state assessment put a number on it: more than half their time was scheduling work.
Revenue was leaking through discoverability gaps
Guests couldn't find services they didn't already know about. Competitive benchmarking showed digital discoverability was the primary driver of ancillary revenue at comparable properties.
Strategic Recommendation, Presented to Senior Leadership
How might we build a mobile app that makes booking easy enough that guests actually use it, and gives the business real visibility into what they're missing?
KPIs: Increase NPS · Reduce Guide admin load · Increase on-ranch ancillary revenue · Improve service discoverability
Goal 01
Self-service booking for guests
Give guests the ability to browse and book services on their own device, on their own time, without the front desk involved.
Goal 02
Free up the Guides
Get routine scheduling off Wellness Guides' plates so they can spend time on the coaching work they were actually hired for.
Goal 03
Close the revenue gap
Better discoverability means more impulse bookings. The competitive analysis had already shown where that money was going.
03 · Discovery and Agile Process
Three workshops in two weeks, then continuous research through every sprint
Discovery ran across three cross-functional workshops. Week one produced a shared journey map. Week two turned that into a hypothesis backlog. Week three delivered a risk-ranked MVP roadmap. Every hypothesis got a validation status and became a Jira user story before it was sprint-ready.
From there the cadence stayed consistent through launch: test with 5 to 8 guests per week, synthesize findings, update the backlog, present a recommendation at the next sprint review. I ran planning, grooming, and retros across an on-shore and off-shore team throughout.
Discovery process: journey mapping, hypothesis backlog, risk assessment, sprint-ready roadmap
Workshops
3 cross-functional sessions in 2 weeks
Facilitated with marketing, IT, sales, and spa stakeholders. Output: shared journey map, hypothesis backlog, and a roadmap with acceptance criteria aligned across all business lines.
Agile Cadence
Sprint planning, grooming, and research synthesis weekly
Managed Jira backlog across distributed teams. Research findings translated into user stories each sprint. No team was waiting on product decisions.
H1 Validated
Self-service booking increases ancillary spend
Guests who could see availability and book themselves spent more per stay. Confirmed across all pilot cohorts and consistent with competitive benchmarking data.
H2 Validated
One unified schedule view reduces planning friction
Guests needed a single view of free and paid activities to plan their day. Confirmed in 8 of 8 early usability sessions and shaped the MVP information architecture.
Delivery timeline, Nov to Jul
Nov to Jan
Concept Validation and MVP Definition
Journey mapping, hypothesis backlog, MVP scoping. About 5 usability sessions per week in Tucson
Alpha, March
Alpha Launch
Tucson. Qualitative usability testing, 5 sessions per week. Findings fed into sprint backlog
Beta, June
Beta Launch
TestFlight rollout. 5 to 10 users per week, quantitative and qualitative. Recommendations at each sprint review
MVP, July
MVP Launch
Live across Tucson and Lenox. Impact measured against baseline KPIs
MVP scope definition using MoSCoW and RICE prioritization, presented to senior leadership for sign-off
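The RICE half of that prioritization is simple arithmetic: Reach × Impact × Confidence ÷ Effort. The sketch below illustrates the mechanic only; the feature names come from the shipped/deferred scope later in this case study, but every score is invented for illustration, not the actual backlog values.

```python
# Illustrative RICE scoring. The formula is standard; all scores below
# are hypothetical, not the real Canyon Ranch backlog inputs.

def rice(reach, impact, confidence, effort):
    """Reach: guests affected per cycle; Impact: 0.25-3 scale;
    Confidence: 0-1; Effort: person-sprints."""
    return (reach * impact * confidence) / effort

backlog = {
    "conflict alerts":    rice(reach=800, impact=2,   confidence=0.8, effort=1),
    "video descriptions": rice(reach=300, impact=1,   confidence=0.5, effort=2),
    "wish lists":         rice(reach=200, impact=0.5, confidence=0.5, effort=1),
}

# Highest score first: the ranking, not the raw number, drives the roadmap.
for feature, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score:.0f}")
```

A ranking like this pairs naturally with MoSCoW: RICE orders the candidates, MoSCoW draws the must/should/could line for the MVP cut.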
04 · Usability Research and Customer Insights
32 interviews surfaced two user segments with fundamentally different mental models
I planned and ran usability research across all phases of the project, 175 total guest interviews, with a methodology that shifted from qualitative in early phases to quantitative at beta. In Concept Validation alone, 32 moderated interviews showed that new and returning guests approached the product completely differently. Designing for one would create friction for the other.
The segmentation framework I built from that research went to the Product Owner and UX team with specific recommendations on IA, navigation, and onboarding. Findings weren't handed over as raw data. They came with a recommendation and a rationale.
32
Moderated interviews, Concept Validation phase
17 new guests, 15 returning. Insights delivered to Product Owner and UX lead with roadmap recommendations attached
New guests
- Generally on a defined wellness pathway
- Came to improve health or lifestyle
- Open to any provider
- Searched by intention ("I want to relax")
- Anxious about missing available services
Returning guests
- Visiting to decompress, no defined pathway
- Came to get away
- Often has a specific provider preference
- Searched by category, already knew the taxonomy
- Comfortable navigating, less need for discovery prompts
Finding and Recommendation
Taxonomy was the adoption risk. 7 of 8 guests in early sessions couldn't find a massage under the existing category structure. I recommended a card sorting study before finalizing IA. It added one sprint and prevented a post-launch navigation failure.
Finding and Recommendation
Conflict detection needed to ship with MVP. Guests were anxious about double-booking free and paid activities. I scoped a lightweight conflict alert feature and recommended adding it before beta. Post-launch it was rated the most valued feature in the app.
Finding and Recommendation
Brand trust was a prerequisite for download. Guests wouldn't install an app from a brand they didn't trust. Research confirmed Canyon Ranch's brand equity was a distribution asset. I recommended leading with brand-consistent design over feature density in the MVP.
Finding and Recommendation
The app replaced the booklet, not the Guide. Guests wanted to handle routine booking on their phone but still valued human input on complex wellness decisions. I recommended keeping Guides in the flow for consultative needs rather than removing them.
"I hate carrying the book. My phone is always with me."
P1, guest interview, Tucson
"There are some apps I wouldn't trust, but Canyon Ranch is a place I trust, like my church, and I would download it immediately."
P4, guest interview, Tucson
Moderated usability testing session with a guest in Tucson, AZ
05 · Key Decisions and Agile Iterations
Three calls that kept the product moving, and what drove each one
Over a 5-month delivery with a distributed team, three situations required a recommendation to senior stakeholders rather than just a product decision. Each involved a real tradeoff and a case I had to make with data.
The situation
The US engineering lead was still configuring APIs. The India dev team needed sprint work. Design was still iterating. Three interdependent streams with no clean order of operations.
My recommendation
Build against mock data APIs and greyscale wireframes now, swap in real data and hi-fi design once both were stable. I presented this sequencing plan to the Product Owner and CIO before the team moved.
What happened
All three teams stayed productive. No milestone slipped. The approach became the standard onboarding pattern for new distributed team members through the rest of the engagement.
Skill demonstrated
Cross-team dependency management across an on-shore/off-shore delivery model. Backlog sequencing to protect velocity without compromising quality gates.
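The sequencing pattern above — develop against mock data now, swap in the real integration once the APIs stabilize — reduces to coding against a thin interface with interchangeable backends. This is a minimal sketch of that pattern; every class, method, and slot value is hypothetical, not the actual codebase.

```python
# Build-against-mocks pattern: the app depends on a small interface,
# and the mock backend is replaced by the live one once APIs are stable.
# All names and fixture data here are hypothetical.

from abc import ABC, abstractmethod

class AvailabilityService(ABC):
    @abstractmethod
    def open_slots(self, service_id: str) -> list[str]: ...

class MockAvailability(AvailabilityService):
    """Deterministic fixture data so UI and off-shore dev stay unblocked."""
    def open_slots(self, service_id: str) -> list[str]:
        return ["09:00", "10:30", "14:00"]

class LiveAvailability(AvailabilityService):
    """Real integration, dropped in once the APIs are configured."""
    def __init__(self, client):
        self.client = client
    def open_slots(self, service_id: str) -> list[str]:
        return self.client.fetch_slots(service_id)  # hypothetical client call

def render_booking_screen(svc: AvailabilityService, service_id: str) -> str:
    """App code sees only the interface, never which backend is behind it."""
    slots = svc.open_slots(service_id)
    return f"{len(slots)} open slots: " + ", ".join(slots)

print(render_booking_screen(MockAvailability(), "massage-50min"))
```

Swapping `MockAvailability()` for `LiveAvailability(client)` is the only change needed at cutover, which is what let design, US engineering, and the India team run in parallel.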
The situation
Guests were anxious about accidentally double-booking free and paid activities. The MVP design didn't surface conflicts. Usability sessions showed this was creating hesitation that stopped people from completing bookings.
My recommendation
Add lightweight conflict detection before beta, scoped to one sprint. I brought the usability data to the Product Owner alongside an impact analysis on booking completion rates to make the case for absorbing the scope.
What happened
Post-launch feedback rated conflict alerts as the most valued feature in the app. One sprint of additional scope delivered more user value than anything else that cycle.
Skill demonstrated
Mid-sprint scope adjustment grounded in user research, framed around business impact to get stakeholder buy-in quickly.
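A "lightweight" conflict check like this reduces to a standard interval-overlap test against the guest's unified schedule. The sketch below shows that core logic under stated assumptions — the data shapes and bookings are invented, and this is not the shipped implementation.

```python
# Minimal sketch of booking-conflict detection: two bookings conflict
# when their time intervals overlap. Data and shapes are illustrative.

from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    """Half-open intervals: back-to-back bookings do not conflict."""
    return start_a < end_b and start_b < end_a

def find_conflicts(schedule, new_start, new_end):
    """Return every existing booking that overlaps the proposed slot."""
    return [b for b in schedule
            if overlaps(b["start"], b["end"], new_start, new_end)]

day = [
    {"name": "Morning hike (free)",
     "start": datetime(2019, 6, 3, 8, 0), "end": datetime(2019, 6, 3, 9, 30)},
    {"name": "Deep-tissue massage (paid)",
     "start": datetime(2019, 6, 3, 10, 0), "end": datetime(2019, 6, 3, 11, 0)},
]

# A proposed 10:30 personal-training slot collides with the massage,
# so the app can alert before the guest double-books.
print(find_conflicts(day, datetime(2019, 6, 3, 10, 30), datetime(2019, 6, 3, 11, 30)))
```

Because free and paid activities sit in one schedule list, the same check covers both — which is exactly the double-booking anxiety the research surfaced.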
"Conflicts is such a good feature." P28, beta usability session, Tucson
On-ranch prototype testing at the Tucson property
The situation
7 of 8 guests in early usability sessions couldn't find a massage under Canyon Ranch's existing service categories. The internal taxonomy made sense to the business but not to guests.
My recommendation
Run a card sorting study with 16 participants before finalizing IA. I presented the timeline impact and adoption risk to the Product Owner and VP of Digital Experience as a tradeoff, not just a research request.
What happened
The revised taxonomy achieved 8 of 8 task completion in follow-up testing, with an average ease-of-use rating of 2.25 on a 5-point scale where 1 is very easy. The IA from that study became the navigation architecture at launch.
Skill demonstrated
Evidence-based scope defense. Making the case for absorbing timeline impact by quantifying the adoption risk of skipping the work.
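Open card-sort results like these are typically analyzed through an item co-occurrence count: how often participants placed two services in the same self-created group, with frequent pairs suggesting categories that match guests' mental models. The sketch below shows that analysis on invented data — the service names and groupings are illustrative, not the study's actual results.

```python
# Illustrative open card-sort analysis: count how often each pair of
# services lands in the same participant-created group. Data invented.

from itertools import combinations
from collections import Counter

sorts = [
    {"Bodywork": ["Swedish massage", "Deep-tissue massage"],
     "Fitness":  ["Personal training", "Aquatic class"]},
    {"Relaxation": ["Swedish massage", "Deep-tissue massage", "Aquatic class"],
     "Training":   ["Personal training"]},
]

pair_counts = Counter()
for participant in sorts:
    for group in participant.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together by most participants belong in one category.
for pair, n in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: {n}/{len(sorts)}")
```

With 16 participants the same counts feed a similarity matrix, which is what turns a one-sprint study into a defensible IA rather than an opinion.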
"It would be challenging for a new person to Canyon Ranch, because of the language and how it's broken up." P28, before the taxonomy revision
"It's even better than the book."
P49, Lenox on-ranch testing session
Beta launch via TestFlight at the Tucson property
06 · Outcomes and Business Impact
Five months, one 0-to-1 delivery, every KPI moved
The product launched on time across two Canyon Ranch properties. Each outcome was measured against the baseline KPIs set at the start, so the business could connect the digital investment to financial performance.
78%
Guest adoption in first 3 weeks
Guests described the app as useful, time-saving, and effective. Measured via App Store activations and post-launch feedback surveys.
+15%
On-ranch ancillary revenue lift
Easier service discovery and frictionless booking increased per-stay spend. Measured by comparing average guest transaction value pre and post launch.
~9 hrs
Saved per Wellness Guide per week
Time that went back into coaching and wellness consultations. Measured via call log analysis and employee interviews.
~175
Guests validated across all phases
Six to eight usability sessions per week throughout the project. Every major product decision had research behind it.
What shipped vs. what moved to backlog
Native iOS app, shipped at MVP
- My Account, guest profile and preferences
- My Schedule, unified free and paid activity view
- Service booking with conflict detection and alerts
- Add to Calendar for complimentary activities
- Dining reservations
Deferred, not validated as guest priority
- Video descriptions of services
- Wish lists and bookmark feature
"If I had it all right here, it would make life so much easier."
P4, CEO, first-time guest, and the highest-ranking female executive in MLB
07 · Reflection
What I'd carry into the next one
Canyon Ranch was a fast, high-visibility build with C-suite oversight and a distributed team. The things that went well were mostly front-loaded decisions. The things I'd change were mostly about getting ahead of problems before they had a chance to cost time.
Post-launch, Canyon Ranch Tucson
What went well
Competitive analysis set up the investment case
Starting with a structured benchmark gave me business language to bring to the C-suite before any design existed. Executives respond to competitive gaps more readily than user pain points in isolation.
Research stayed continuous, not just front-loaded
Testing with real guests every week meant backlog changes came with data. No recommendation to a stakeholder was opinion-based.
Early alignment held through launch
Getting eight stakeholders onto a shared hypothesis backlog in week two meant sprint decisions stuck. Nobody relitigated scope at the end.
What I'd do differently
Run the taxonomy research in week two, not mid-project
The card sorting study was the most valuable thing we did. Running it early would have cut two iteration cycles and reduced backlog churn.
Instrument quantitative baselines earlier
We had strong qualitative findings from the start but had to reconstruct some pre-launch baselines retroactively. Earlier instrumentation would have made the impact story cleaner.
Scope post-visit engagement from day one
The competitive analysis flagged ongoing digital guest relationships as a differentiator for top-performing wellness brands. We didn't scope it and it stayed untapped at launch.