Most patients weren't finishing rehab at home. The tools were hard to stick with, and therapists had very little visibility into what happened between visits. I led product and experience strategy at Jintronix from the early prototype through scale.
Jintronix was the longest product build I've worked on. I joined when it was still mostly a prototype and a theory. By the time I left, it had 500+ clinical clients, published outcomes, and a reputation strong enough that therapists recommended it by name. Clinical credibility decided everything.
First and only PM for the first three years. No inherited roadmap, no existing research practice, no PM playbook. I built all of it from scratch while also doing the work.
As the company grew, I helped build the team around the work: one UX designer, four Unity developers, one portal developer, one QA, and eventually a second PM. I ran sprints, built beta clinic relationships, and kept research as a standing part of product strategy.
Will this get a patient to finish their prescribed program? Every product decision was evaluated against that single question.
Competitive analysis confirmed the core bet: no digital rehab platform had really solved engagement. Most of the market was some version of scheduling software plus an exercise library.
I benchmarked every digital rehab platform I could find across engagement, clinical workflow, outcomes, and provider adoption. The pattern was consistent: scheduling tools, exercise libraries, and the occasional telehealth add-on. No one was seriously tackling adherence.
Research was a core part of this product from the start. I set up beta clinics, ran ethnographic studies in person, and kept that work going remotely through the pandemic. We shipped laptops to seniors so we could continue usability and playtesting sessions in their homes, including with residents of a retirement home for nuns. Seeing how people used the product in their own space changed our decisions.
Before that, I had already spent weeks inside rehab centers watching where patients stalled, where therapists stepped in, and where paper programs quietly fell apart once someone left the clinic. Over time, every product decision was driven by, or at least checked against, real end users: seniors, therapists, and the people making purchase decisions.
I turned those observations into testable product hypotheses and tied each one to a concrete recommendation. That work also gave us three KPIs to keep coming back to: adherence, session length, and patient satisfaction.
None of these calls were easy in the room. I'd still make all three.
Research showed patients used what their therapist recommended. Competitive analysis showed almost nobody had invested in therapist-facing tooling. Adoption depended on the prescriber first.
Build both at the same time, even if it meant more scope. I framed it as a go-to-market risk, not a development cost. If therapists couldn't see patient data, they weren't going to prescribe the product.
The therapist dashboard became one of our strongest sales tools. Once clinicians had real adherence data, they started recommending the product, and that referral network helped us reach 500+ clients without paid acquisition.
Pilot data showed leaderboards and peer benchmarking drove higher engagement. But clinical advisor review flagged that comparing recovery rates across patients with different diagnoses was clinically inappropriate and a liability in enterprise sales.
Cut social comparison and shift the motivation model to personal progress. Patients would compete against their own prior performance. I put the sales risk next to the engagement upside and let leadership see the tradeoff clearly.
Engagement dipped a little at first, but the decision made hospital and senior care partnerships much easier to win.
After year two, the product had strong pilot data but known rough edges. The clinical team wanted more iteration. Sales had three hospital networks ready to sign. New entrants were beginning to surface in competitive analysis.
Scale now. I separated cosmetic issues from real functional risk and proposed expanding with existing partners while fixing the rough edges in parallel sprints.
All three hospital networks signed that quarter. We cleaned up the cosmetic issues in two sprints and kept the lead long enough for it to matter.
Our sprint pattern stayed simple: form a hypothesis, build the smallest thing that could test it, measure the result, and decide what to do next. I kept ceremonies lean, kept research flowing into the backlog, and tried hard not to become the bottleneck.
The pilots were where the product had to prove itself. I ran sessions at partner rehab centers, watched how therapists introduced the system, and tracked where patients dropped off. We tested mechanics one by one and cut the ones that did not move adherence.
The biggest shift in that phase came from reorganizing the activity library around the way therapists already thought about treatment planning: function and joint, not game name. Once we made that change, adoption picked up fast.
Before / After: Therapist Prescription Interface
The clinical pilots. Seeing a therapist realize they could finally tell what a patient was doing at home never got old. That was when the product felt real.
Building the team, too. I went from doing nearly everything myself to working with people who cared just as much about the problem and kept finding better ways to solve it.
The therapist is the customer from day one, not somewhere down the line. It took me too long to fully trust that, even though the data kept pointing back to it.
And document more than you think you need to. Not because process is beautiful, but because today's obvious decision turns into next year's argument surprisingly fast.