Experience Strategy · HealthTech · People Leadership · 0 to Scale · Jintronix · 7 years

Building a rehab platform patients actually finished

Patients weren't completing their rehab programs. Not because they didn't want to recover, but because nothing on the market gave them a reason to keep going. I led product and experience strategy from day one at Jintronix: competitive analysis, ethnographic research across clinical populations, team leadership, and agile delivery. Seven years later the platform had 500+ clinical clients across North America.

My Role
Product Manager / UX Lead
Research · Roadmap · Team leadership · Scale
Team
1 designer · 4 Unity devs
1 portal dev · 1 QA · 1 PM (later)
Duration
7 years
Concept definition through market scale
Jintronix platform
Clinical clients at scale
500+
Hospitals, outpatient centers, and senior care facilities across US and Canada
Session time increase
15x
From 1 to 2 min up to 15 to 30 min per session at 2x per day average
Clinical outcome improvement
+25%
vs. traditional therapy methods, measured across pilot cohorts
00 · Background and Product Context

Seven years. One product. Built from a blank board to standard of care.

Jintronix was my deepest build. I joined as the founding PM when there was a prototype and a hypothesis. When I left, there were 500+ clinical clients, peer-reviewed outcome data, and a product that therapists recommended by name. The competitive analysis, the ethnographic research, and the team building all ran at the same time. Clinical outcomes were the constraint that didn't flex on any of it.

My scope

First and only PM for the first three years. No inherited roadmap, no existing research practice, no PM playbook. I built all of it from scratch while also doing the work.

As the company grew I put together the team: one UX designer, four Unity developers, one portal developer, one QA, and eventually a second PM. I ran sprints, kept research moving into the backlog, and tried to make sure nobody was ever blocked waiting on a product decision.

The north star

Get a patient to finish their prescribed program. Every product decision was evaluated against that single question.

Jintronix was built on a hypothesis that competitive analysis confirmed: no digital rehab platform had solved patient engagement. The market was scheduling tools and exercise libraries. The engagement gap was the opening, and clinical credibility was the gate to every enterprise sale along the way.

01 · Competitive Analysis and Problem Definition

Before writing a user story, I did the competitive analysis. It shaped everything that came after.

I benchmarked every digital rehab platform I could find across patient engagement mechanics, clinical workflow integration, outcome measurement, and provider adoption. The picture was the same everywhere: scheduling tools, exercise libraries, telehealth bolt-ons. Nobody had touched engagement.

No competitor had solved engagement. That was the category's defining problem
50 to 65% of patients don't complete home exercise programs. Every existing digital platform treated this as a given. None had built the feedback loop, progress visibility, or behavioral mechanics needed to change it.
Therapists were making clinical decisions without adherence data
Clinicians had no visibility into what patients did between appointments. They were managing recovery based on self-report. No competitive platform had addressed it, and that gap represented both a clinical risk and a clear provider pain point.
We weren't entering a crowded market. We were creating a category.
No platform competed on patient engagement. The question was whether gamification mechanics could be deployed with enough clinical rigor to earn therapist trust at the enterprise sales stage. The competitive analysis said yes. The pilots had to prove it.
02 · Usability Research and Customer Insights

Ethnographic research inside rehab centers found things surveys would have missed

Before any design work started, I spent weeks inside rehabilitation centers just watching. Not running structured sessions. Just observing. Where patients stopped. Where therapists intervened. Where the paper program fell apart at the kitchen table or the clinic hallway. The behavioral gap between what was prescribed and what actually got done wasn't a data problem. It was a design problem nobody had tried to solve.

I synthesized those observations into testable hypotheses, not findings. Each one had a recommended mechanic attached and went to leadership with a business rationale before any design or engineering investment was made. The research established three north-star KPIs: adherence rate, average session length, and patient-reported satisfaction. Those governed every subsequent product decision.
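To make those three KPIs concrete, here is a minimal illustrative sketch of how they could be computed from per-patient session logs. This is not the Jintronix analytics code; the record structure and field names (prescribed_sessions, session_minutes, satisfaction_scores) are hypothetical and exist only to show what each metric measures.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PatientLog:
    """Hypothetical per-patient record, used only to illustrate the three KPIs."""
    prescribed_sessions: int                                       # sessions the therapist prescribed
    session_minutes: list[float] = field(default_factory=list)    # one entry per completed session
    satisfaction_scores: list[int] = field(default_factory=list)  # patient-reported ratings, 1 to 5

def adherence_rate(log: PatientLog) -> float:
    """Completed sessions as a share of what was prescribed."""
    if log.prescribed_sessions == 0:
        return 0.0
    return len(log.session_minutes) / log.prescribed_sessions

def average_session_length(log: PatientLog) -> float:
    """Mean minutes per completed session (0 if none completed)."""
    return mean(log.session_minutes) if log.session_minutes else 0.0

def average_satisfaction(log: PatientLog) -> float:
    """Mean patient-reported satisfaction (0 if no ratings yet)."""
    return mean(log.satisfaction_scores) if log.satisfaction_scores else 0.0

# Example: 8 of 10 prescribed sessions completed, roughly 18 minutes each
log = PatientLog(prescribed_sessions=10,
                 session_minutes=[17, 19, 18, 20, 16, 18, 17, 19],
                 satisfaction_scores=[4, 5, 4])
print(adherence_rate(log), average_session_length(log), average_satisfaction(log))
```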

Jintronix research and discovery methodology
Research methodology mapping the behavioral gap between prescription and completion. The framework used to structure clinical observations into actionable product hypotheses.
Finding: Real-time feedback was the missing variable
Patients who got immediate feedback, even just an audio cue for correct form, stayed engaged much longer. Recommendation: build real-time motion sensing feedback as the core MVP mechanic, not a phase 2 enhancement.
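A minimal sketch of the kind of loop that finding pointed toward: compare each measured joint angle against the prescribed range and cue the patient immediately rather than at the end of the session. The shipped engine was a Unity motion-sensing system; the readings, thresholds, and cue names below are hypothetical and only illustrate the shape of the feedback loop.

```python
# Illustrative only: the sensor readings, target range, and cue names are hypothetical,
# not the Jintronix engine. The point is the loop: measure, compare to the prescribed
# range, and give feedback on every rep, not after the session.

TARGET_RANGE = (70.0, 110.0)   # hypothetical elbow-flexion range for one exercise, in degrees

def in_form(angle: float, target: tuple[float, float] = TARGET_RANGE) -> bool:
    low, high = target
    return low <= angle <= high

def session_score(angles: list[float]) -> tuple[int, list[str]]:
    """Score a stream of per-frame angle readings and collect the cues a patient would hear."""
    score, cues = 0, []
    for angle in angles:
        if in_form(angle):
            score += 1
            cues.append("ding")      # immediate positive cue for correct form
        else:
            cues.append("adjust")    # immediate corrective cue
    return score, cues

# Example: five readings, two of them outside the prescribed range
print(session_score([65.0, 75.0, 90.0, 112.0, 100.0]))
```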
Finding: Progress visibility drove return behavior
Patients who could see their own improvement over time (session counts, range-of-motion gains, comparison to their own baseline) were far more likely to come back. Recommendation: personal progress curves as the primary gamification layer.
Patients used what their therapist recommended. Full stop.
Any go-to-market approach that didn't win the prescriber first wasn't going to scale. That insight led directly to co-developing the therapist dashboard as an equal MVP priority, which was the most contested call of the whole project.
The word "gamification" made therapists uncomfortable
So we stopped using it. Game mechanics embedded inside a clinical-grade interface worked. Consumer-facing game features presented as games did not. The mechanics were the same. The framing changed everything.
"The first time my patient stood in front of Jintronix, she went for 20 minutes, wanting to win the ski activity. She changed my mind about the role of games in rehabilitation."
Dr. Aaron Bunnell, Stroke Program, UW Medicine
03 · Key Recommendations and Product Decisions

Three calls I made to leadership that I'd make the same way again

None of these were popular when I made them. All three turned out to be right. Here's what the data showed and what I recommended.

Recommendation 1
Build the therapist dashboard as an equal MVP priority, not a phase 2 feature
The analysis

Research showed patients used what their therapist recommended. Competitive analysis showed no existing platform had invested in therapist-facing tooling at all. The adoption gate was the prescriber. Not the patient.

My recommendation

Build both at the same time, even though it meant more scope. I framed it as a go-to-market risk question, not a development cost question. If therapists couldn't see patient data, they wouldn't prescribe the product. Simple as that.

The outcome

The therapist dashboard became our most powerful enterprise sales tool. Clinicians with real-time adherence data became product advocates, driving the referral network that took us to 500+ clients without a paid acquisition model.

In clinical markets, the prescriber is the first customer. Winning the patient without winning the therapist is a channel that doesn't scale.
Recommendation 2
Cut social comparison features despite strong engagement data
The analysis

Pilot data showed leaderboards and peer benchmarking drove higher engagement. But clinical advisor review flagged that comparing recovery rates across patients with different diagnoses was clinically inappropriate and a liability in enterprise sales.

My recommendation

Cut social comparison. Redirect gamification to self-referential progress: patients competed only against their own prior performance. I presented the enterprise sales risk alongside the engagement data so leadership could weigh both.

The outcome

Short-term engagement dipped slightly. Long-term, the decision unlocked hospital and senior care facility partnerships that would have been impossible with social comparison in the product.

In regulated markets, features that patients love but clinicians can't defend become enterprise sales liabilities regardless of engagement metrics.
Recommendation 3
Scale ahead of full readiness. The market window was closing.
The analysis

After year two, the product had strong pilot data but known rough edges. The clinical team wanted more iteration. Sales had three hospital networks ready to sign. New entrants were beginning to surface in competitive analysis.

My recommendation

Scale now. I presented a risk analysis that distinguished cosmetic limitations from functional ones, and proposed expanding with existing partners while addressing gaps in concurrent sprints.

The outcome

All three hospital networks signed within the quarter. The cosmetic limitations were addressed in two sprints. The first-mover position we protected became a meaningful barrier to entry for later entrants.

"Good enough to ship" and "good enough to scale" are different thresholds. In a nascent market, speed to scale is a strategic asset.
04 · Agile Delivery and Product Evolution

From prototype to standard of care: seven years of the product in practice

The sprint pattern was simple: make a hypothesis, build the smallest version that could test it, measure it against the KPIs, bring a recommendation to the next sprint review. I kept ceremonies tight and made sure research was feeding the backlog continuously. The thing I was most paranoid about was people waiting on product decisions.

Jintronix wireframes and prototype iterations
Early session flow wireframes from concept validation. The core question at this stage wasn't visual design. It was whether the feedback loop was fast enough to keep a patient engaged through a full exercise. We tested timing, audio cues, and scoring mechanics as separate hypotheses before any of them went to development.
Phase 2
Concept Validation
Figure out if gamification actually works in a real clinic, with real patients, before spending serious money on it
What happened

The pilots were where the product either proved itself or didn't. I ran sessions at partner rehab centers in person, watching how therapists introduced the system and where patients dropped off. We tested gamification mechanics as separate hypotheses and killed the ones that didn't move adherence numbers. The ones that did got doubled down on immediately.

The biggest shift in that phase wasn't a feature. It was realizing therapists needed to understand the activity library through their own clinical mental model: organized by function and joint, not by game name. Once we restructured the prescription interface around how therapists already thought about treatment planning, adoption changed. They stopped being cautious about the tool and started recommending it by name.

What the pilots confirmed
  • Gamified sessions increased average session time from 2 minutes to 18 minutes in the first week of pilot, validating the core engagement hypothesis
  • Real-time form feedback was the highest-impact mechanic: patients self-corrected and stayed significantly longer
  • Progress streaks drove return behavior more reliably than any other mechanic tested
  • Therapists reviewed dashboards daily once patient-level session data was available, validating the co-development recommendation

Before / After: Therapist Prescription Interface

Before
Original Jintronix therapist dashboard before redesign
Activities organized by game name rather than clinical function. No filtering by therapeutic goal or body region. Therapists had to already know what they were looking for to use it effectively. That's not a tool that earns clinical trust; it's one that requires it upfront.
After
Redesigned Jintronix activity prescription interface
Organized around therapeutic objective: Arms ROM, Balance, Gait, Cognition. Filterable by position and client level. The 4-step guided workflow replaced a modal-buried process. Therapists stopped needing to learn the product and started using it the way they already thought about treatment planning.
Jintronix annotated MVP screens
Annotated MVP: therapist prescription dashboard on the left, patient session view on the right. The two were built as equal priorities from day one because research confirmed that patients used what their therapist recommended, and therapists only recommended what gave them clinical visibility.
Phase 3
MVP Delivery
Ship a scalable, clinically credible platform aligned with therapist workflows and validated patient needs
What shipped
  • Motion-sensing exercise engine with real-time form detection and scoring
  • Gamified session interface: points, streaks, personal bests, progress curves
  • Therapist dashboard with patient adherence, session length, and range-of-motion tracking
  • Personalized exercise program builder organized by clinical objective
  • Session summary and weekly progress reports, patient-facing
Yr 1–2
Proving the hypothesis
One PM, four Unity developers, one portal dev, one QA, one designer. No product infrastructure to inherit. The first year was figuring out whether gamification mechanics could earn clinical credibility. If therapists wouldn't prescribe it, nothing else mattered. The competitive analysis said the gap existed. The pilots had to prove we could close it.
Yr 3–4
Winning the therapist
The therapist dashboard became the product's real differentiator. Clinicians with real-time adherence data (session length, range-of-motion progress, completion rates against prescription) became advocates. That referral network, built on clinical credibility rather than a sales motion, drove the first hospital network partnerships. We didn't have a paid acquisition model. We had therapists who recommended us by name.
Yr 5–6
Scale and what it cost
Expanding across US and Canada meant building for clinical contexts we hadn't designed for originally: skilled nursing facilities, senior care, outpatient orthopedics. The product stretched. So did the team. Decisions that had been easy when it was six people in Montreal got harder to make consistently across a growing org. I built more formal research documentation and decision-logging practices. Not because the work had changed, but because the team had.
Yr 7
Standard of care
500+ clinical clients. Peer-reviewed outcome data. The product had gone from a hypothesis to a line item in hospital procurement. The thing I remember most from year seven isn't the scale number. It's the first time a therapist told us they couldn't imagine running their program without it.
05 · Business Impact and Outcomes

Seven years of outcomes. Here's what the data actually showed.

Jintronix outcomes dashboard showing Berg Balance Scale and TUG test results
The outcomes dashboard was the feature that took Jintronix beyond engagement. Berg Balance Scale, TUG time, activity tolerance: all tracked against clinical norms and charted over time. A therapist could use it to justify reimbursement with an insurance company, defend the tool's efficacy to a hospital administrator, or sit next to a patient and show them, in plain terms, how far they'd actually come. One feature doing three different jobs, which is why it became the most important thing we built.
Jintronix in clinical use. The platform as therapists and patients experienced it across 500+ clinical sites.
500+
Clinical clients at scale
Hospitals, outpatient centers, and senior care facilities across the US and Canada, driven by therapist advocacy rather than paid acquisition.
15x
Increase in patient session time
From 1 to 2 minute sessions up to 15 to 30 minutes. Average 2x per day usage vs. near-zero on paper programs. The core engagement hypothesis validated at scale.
+25%
Clinical outcome improvement
Patients on Jintronix showed measurably better recovery benchmarks vs. traditional therapy control groups. Those outcomes supported peer-reviewed publication and enterprise contract renewal.
Up
Provider retention and contract expansion
Improved patient outcomes drove clinical provider satisfaction and high contract renewal rates, validating that clinical credibility was the primary enterprise retention driver.
06 · Retrospective

What I actually learned. Seven years is a long time to get things wrong and right.

What went well
Competitive analysis before any product investment
Starting with a structured benchmark gave us a defensible product thesis before spending a dollar on design or engineering. It gave leadership a business case, not just a user need.
Ethnographic research before structured methods
Spending weeks inside real rehab centers gave us things surveys wouldn't have. The core mechanics came from watching where patients actually stopped. The most valuable thing we did in year one.
Treating the therapist as the first customer
The most important call I made on the whole project, and the least popular one in the room.
Hypothesis-driven delivery throughout
Every sprint was organized around a testable assumption. Killing features the team loved, because the data said they weren't moving adherence, was something I had to practice.
What I'd do differently
Build a structured clinician onboarding program earlier
Therapists weren't confident prescribing the product early on. I knew this by year two and kept deprioritizing it. That was wrong.
Prioritize EHR integration from the start
EHR integration became an enterprise sales blocker, delaying expansion by 12 to 18 months. The competitive analysis flagged it in year one. I accepted "later" twice. I shouldn't have.
Build a direct patient usability feedback loop earlier
Patient feedback came through therapists for the first few years. That filtered signal meant we were slow to catch real usability problems, and the honest reason we didn't fix it sooner was that ethics approval for direct patient research is slow and expensive.
Document product decisions more rigorously as the team scaled
Decisions made in year two were relitigated in year five with no documented rationale. People who weren't in the room will always question the hard calls. If you can't show your work, the decision gets retried.
What energized me

The clinical pilots. Being in the room when a therapist saw the dashboard for the first time and realized they could actually see what their patient did at home. That never got old. It was the clearest signal that the product was doing something real.

Building the team. Going from doing everything myself to having people who cared about the problem as much as I did, and watching them solve things I wouldn't have thought of.

What I'd tell myself at year one

The therapist is the customer. Not eventually. From day one. Everything else flows from that one insight, and you're going to spend two years half-believing it before the data makes it undeniable.

Also: document everything. Not for process reasons. Because the decisions that feel obvious now will feel mysterious in three years, and you'll spend more time relitigating them than it would have taken to write them down.
