Behind the Training, Episode 3: Trey Morton

Measuring What Matters: How Trey Morton Redefines Sales Training Success

In this episode of Behind the Training, Russ Somers sits down with Trey Morton, Senior Director of Professional Education and Sales Training at a leading life sciences company. With a military background and years in commercial roles, Trey brings a blunt, insightful, and pragmatic approach to training. From taking a red pen to the Kirkpatrick model to leveraging AI for better rep performance, Trey walks us through what it takes to make training that actually drives revenue—and what most trainers still get wrong.

Q: Trey, your LinkedIn headline is “all-around awesome guy,” and you’ve held senior training roles with major life sciences companies. But you also have a significant military background. How do those three things—awesome guy, life sciences training, and your military service—go together?

That headline is mostly sarcasm, but I appreciate the callout. My military background really shaped how I think about training. As an infantry officer, the cycle was always: learn something, go out and do it, then come back and teach the next group. It was practical, focused, and iterative—and that mindset carried over when I moved into the life sciences industry.

I started in sales and marketing roles before moving into training, and I’ve always believed training shouldn’t be academic. It’s about application: What are you going to do differently on Monday? That’s the lens I bring—teaching people to actually perform in the field.


Q: Let’s get into it: how do you build training that actually drives revenue? Where do most teams fall short?

The biggest gap is that training often gets siloed. Trainers get deep into methodology and forget the business objective. Sales training is a cost—so if we’re not driving incremental revenue, we’re just an easy place to cut.

We need to tie what we’re doing to real business results. That means starting with the end in mind: what outcome do we expect? What behaviors will drive that outcome? Then build training to reinforce those behaviors.


Q: At the TT LifeSciences conference, you had some fun with the Kirkpatrick model. Where is the model right—and where does it miss the mark?

The model’s useful, but we tend to stop at Levels 1 and 2—reaction and learning. That keeps us employed and out of regulatory trouble, but it doesn’t make us business leaders.

Level 3 is about behavior change, and that’s where trainers can really make an impact. Are reps actually doing what we trained them to do? If not, we need to retrain or adjust our approach. Level 4—business results—is the promotion zone. We can’t control it directly, but we can influence it by aligning behaviors to outcomes.

And if we don’t see the results we expected, we can trace it back: did the behaviors happen? If yes, maybe our assumptions were wrong. If no, maybe our training or follow-through needs work. Either way, we’re in a position to have a strategic conversation.


Q: What do most trainers get wrong about measuring impact?

They measure what’s easy—surveys and tests—but miss the bigger picture. The real mistake is not aligning with business partners upfront. If someone asks for training on product features, I need to ask: What’s the goal? What behavior should this drive? What metrics matter 90 days from now?

Design the training around those goals, not just around test scores. Otherwise, you’re measuring what happened in the classroom, not what happened in the field.


Q: A concept I’ve found interesting, one I first heard from a former CEO, is “commander’s intent.” Can you explain it and how it applies to sales training?

Sure. In the military, the commander gives you the mission and the intent. If the plan falls apart, you still know the objective—like helping another unit pass through safely. It’s the difference between the how and the why.

In sales training, the business outcome is the intent. I might design a full training program, but if it doesn’t move the needle, I missed the mark. The intent—revenue, adoption, behavior change—should guide everything.


Q: How do you think about AI and technology in the work you’re doing? Where can it improve the work—and where is it just hype?

Right now, AI-generated role-plays are a game-changer. Reps can practice more often, get variable scenarios, and improve faster—without needing a manager to grade them. We’ve seen 12x more practice volume just by using AI tools.

Looking ahead, I’m excited about AI’s potential to diagnose training needs. Feed it sales, CRM, and usage data, and let it identify patterns—like whether reps are consistently struggling with a certain hospital size or procedure. That helps us intervene earlier and smarter.

And with simulations, especially in clinical settings, we’re seeing things like predictive modeling for complex procedures. AI can help reps prepare for worst-case scenarios before they ever walk into the OR.


Q: What are your leading indicators for knowing a training initiative is working?

Behavior change in the field. Are reps doing what we trained them to do? That’s key. Then we link those behaviors to business metrics. If results don’t follow, we adjust—either the behaviors, the assumptions, or both.


Q: If you had to give a room of sales trainers one hard truth about their impact, what would it be?

Nobody cares. That sounds harsh, but here’s what I mean: if your training works, people don’t care how it got done. You’ll have credibility and freedom. If it doesn’t work, they don’t care about your learning models or theories.

You have to focus on delivering business outcomes. That’s what earns trust, budget, and a seat at the table.