DASH: Debriefing Assessment for Simulation in Healthcare

Get a clear overview of DASH scoring and the six debriefing elements. Learn how programs use DASH for observation, coaching, and program consistency.

February 26, 2026

9 min. read

Strong simulation programs depend on strong debriefing. A scenario may be well designed and expertly facilitated, but the debrief is what most reliably transforms simulation into learning. When debriefing varies from instructor to instructor, programs often see inconsistent learner feedback, uneven skill uptake, and difficulty scaling simulation across units or sites.

That’s where DASH (Debriefing Assessment for Simulation in Healthcare) fits. DASH is a structured tool for observing, rating, and strengthening debriefing practice through a shared set of observable behaviors and a consistent scoring approach.1,2

In this article, you’ll learn what DASH is, how it’s organized, and how simulation teams use it to build faculty skill, align expectations, and track improvement over time. You’ll also see a practical example of how a program can apply DASH after a simulation session.

What DASH is and what it measures

DASH is a behavior-based assessment tool created by the Center for Medical Simulation in Boston.1 It helps programs evaluate the quality of debriefing and support instructor development by focusing on observable actions that shape learner reflection and discussion.2

DASH is supported by published psychometric research. A foundational study examined the reliability of DASH scores and presented validity evidence for using the instrument to evaluate debriefing quality.3 Additional studies have examined how DASH performs in specific contexts and populations, highlighting that reliability may vary by setting and rater structure.4

Two practical takeaways follow from that design:

  • DASH provides a shared language for describing debriefing quality. Instead of “that debrief felt good,” teams can point to concrete behaviors and score patterns tied to specific parts of the debrief.

  • DASH supports skill-building. Because it breaks debriefing into repeatable components, it works well for coaching, peer review, and targeted faculty development plans that address the lowest-scoring behaviors first.

The structure of DASH: versions, elements, and scoring

To understand how DASH works in practice, it helps to look at how the tool is organized.

Three primary versions

DASH includes three primary versions, each aligned to who is providing the rating:

  • Rater Version: for trained raters observing instructors

  • Student Version: for learners rating their instructors

  • Instructor Version: for instructors conducting self-assessment

Programs often begin with the version that best fits their current workflow:

  • If you have simulation faculty ready for peer observation and coaching, the Rater Version provides the most detailed input.

  • If you want to establish routine learner feedback, the Student Version can support that feedback loop.

  • If you’re working with instructors who are newer to facilitation, the Instructor Version supports reflection and goal-setting.

Six elements (the “what” of debriefing)

DASH organizes debriefing into six elements that represent a complete debriefing cycle, from establishing the learning environment to closing and summarizing.

Even when programs use different debriefing styles or frameworks, DASH provides consistent categories for rating instructor behaviors within that cycle.

A standardized rating approach (the “how well”)

Raters score each element on a seven-point anchored effectiveness scale, ranging from extremely ineffective to extremely effective.2 The handbook and score sheets are designed so that the same observed debrief can be discussed using shared scoring expectations, which helps when you’re comparing performance across instructors, courses, or sites.
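
To make the scoring structure concrete, here is a minimal sketch of how a program might represent a completed DASH score sheet for analysis, written in Python. The six element names follow the DASH Rater’s Handbook,2 but the data structure and validation logic are illustrative assumptions, not part of the official tool.

```python
from dataclasses import dataclass

# The six DASH elements, as named in the DASH Rater's Handbook (reference 2).
DASH_ELEMENTS = (
    "Establishes an engaging learning environment",
    "Maintains an engaging learning environment",
    "Structures the debriefing in an organized way",
    "Provokes engaging discussions",
    "Identifies and explores performance gaps",
    "Helps trainees achieve or sustain good future performance",
)

@dataclass
class DashScoreSheet:
    """One rater's scores for one observed debrief (illustrative structure)."""
    instructor: str
    rater: str
    session_date: str
    ratings: dict[str, int]  # element name -> score on the anchored 1-7 scale

    def validate(self) -> None:
        # Each element is rated from 1 (extremely ineffective) to 7 (extremely effective).
        for element in DASH_ELEMENTS:
            score = self.ratings.get(element)
            if score is None or not 1 <= score <= 7:
                raise ValueError(f"Missing or out-of-range score for: {element}")
```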

How to implement DASH in a simulation program

DASH works best when used as an improvement system, not a one-time evaluation. Below is a practical approach teams use to introduce the tool while keeping the process manageable.

Step 1: Define the purpose and guardrails

Before scoring a debrief, decide how the data will be used:

  • Development only (coaching and training)

  • Quality monitoring (program benchmarking)

  • Both, with clear separation between coaching data and any high-stakes decisions

Published implementation reports commonly emphasize faculty development and standardization rather than punitive evaluation.5

Step 2: Choose the version that matches your workflow

A common rollout sequence includes:

  1. Instructor self-assessment to build familiarity with the elements

  2. Student ratings for routine input from learners

  3. Trained rater observation for deeper coaching and calibration

If using the Rater Version, plan for rater preparation. DASH materials describe expectations for trained raters when programs want dependable scoring.

Step 3: Train raters and calibrate scoring

Inter-rater alignment often determines whether scores feel dependable or debatable. Calibration typically includes:

  • Reviewing the DASH elements and behavioral anchors

  • Scoring the same recorded debrief independently

  • Comparing results and discussing differences

  • Repeating until score differences narrow to an agreed threshold

Programs often start with an archived video review as a quality improvement step, then refine training based on what scoring patterns reveal.
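
As a small illustration of the “compare results” step, the sketch below flags elements where two raters’ scores for the same recorded debrief differ by more than an agreed threshold. The one-point threshold, the abbreviated element names, and the data shapes are assumptions for illustration; programs set their own agreement criteria.

```python
def calibration_gaps(
    rater_a: dict[str, int],
    rater_b: dict[str, int],
    threshold: int = 1,  # assumed agreement threshold; programs choose their own
) -> dict[str, int]:
    """Return elements where two raters' scores differ by more than the
    threshold, so the group can discuss those behavioral anchors together."""
    return {
        element: abs(rater_a[element] - rater_b[element])
        for element in rater_a
        if abs(rater_a[element] - rater_b[element]) > threshold
    }

# Example: both raters independently scored the same archived debrief video.
gaps = calibration_gaps(
    {"Structures the debriefing": 5, "Provokes engaging discussions": 6},
    {"Structures the debriefing": 3, "Provokes engaging discussions": 6},
)
print(gaps)  # {'Structures the debriefing': 2} -> revisit this anchor as a group
```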

Step 4: Build a feedback loop that leads to action

Scores alone do not change practice. A workable feedback loop includes:

  • A short coaching conversation within a week of observation

  • One or two targeted goals tied to the lowest-scoring element(s)

  • A re-observation cadence (for example, quarterly or per course cycle)

A faculty training implementation using DASH describes how the tool can structure coaching and skill-building, moving faculty from general impressions to specific behaviors to practice.5
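
One way to keep that loop concrete is to derive coaching goals directly from the lowest-scoring elements, as in the minimal sketch below. The example scores and the two-goal limit are illustrative assumptions, not DASH requirements.

```python
def coaching_targets(ratings: dict[str, int], max_goals: int = 2) -> list[str]:
    """Pick the lowest-scoring element(s) as the focus of the next coaching cycle."""
    return sorted(ratings, key=ratings.get)[:max_goals]

# Hypothetical scores from one observed debrief (seven-point scale).
session_scores = {
    "Maintains an engaging learning environment": 5,
    "Structures the debriefing in an organized way": 4,
    "Provokes engaging discussions": 3,
    "Identifies and explores performance gaps": 3,
}
print(coaching_targets(session_scores))
# ['Provokes engaging discussions', 'Identifies and explores performance gaps']
```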

Step 5: Align DASH with broader simulation standards

Many programs map DASH use to broader simulation standards, particularly around planned debriefing, psychological safety, and structured reflection. The INACSL Healthcare Simulation Standards of Best Practice include a debriefing standard emphasizing planned debriefing as part of simulation-based education.6

Using DASH results to strengthen quality, consistency, and scale

Once DASH is in place, teams can use it in ways that support instructor growth and program consistency.

Identify program-wide skill themes

After you have a baseline across multiple sessions, look for patterns:

  • Do scores cluster lower in early elements, such as setting expectations or framing the conversation?

  • Do they dip during analysis, when linking actions to reasoning and drawing in multiple voices?

  • Do they fall during closure, when summarizing and connecting back to objectives?

These themes help you prioritize faculty development topics and coaching time.

Track change over time with less noise

When you use the same tool, elements, and consistent rater practices, you can track improvement while keeping the measurement approach stable. This also helps simulation leaders describe progress using observable behaviors rather than anecdotal impressions.
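
As a sketch of what that tracking can look like, the snippet below averages repeated observations per element per review period, so leaders can report change in observable behaviors. The flat (period, element, score) layout and the quarterly cadence are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

# (period, element, score) tuples from repeated observations; illustrative data.
observations = [
    ("2026-Q1", "Provokes engaging discussions", 3),
    ("2026-Q1", "Provokes engaging discussions", 4),
    ("2026-Q2", "Provokes engaging discussions", 5),
    ("2026-Q2", "Provokes engaging discussions", 6),
]

by_key: dict[tuple[str, str], list[int]] = defaultdict(list)
for period, element, score in observations:
    by_key[(period, element)].append(score)

for (period, element), scores in sorted(by_key.items()):
    print(f"{period} | {element}: mean {mean(scores):.1f} (n={len(scores)})")
# 2026-Q1 | Provokes engaging discussions: mean 3.5 (n=2)
# 2026-Q2 | Provokes engaging discussions: mean 5.5 (n=2)
```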

Support onboarding and mentorship

For new simulation faculty, DASH can function as:

  • A shared expectation framework for high-quality facilitation

  • A structured coaching guide

  • A mentorship model for observation and feedback

This is particularly valuable when programs expand across departments or sites and need consistent instructional norms.

Example: Applying DASH after a sepsis deterioration simulation

An interprofessional team runs a sepsis deterioration simulation involving nurses, a resident, and a respiratory therapist. The debrief lasts 25 minutes.

What happens during the debrief

  • The instructor opens quickly with “What went well?” without establishing confidentiality or clarifying the learning intent.

  • Two participants dominate the discussion; quieter learners speak late or not at all.

  • Clinical decisions are discussed, but the conversation remains at the “what happened” level rather than exploring reasoning.

  • Time runs out, and the debrief ends with only a minimal summary loosely tied to the objectives.

How DASH helps

Using DASH, the rater scores the debrief by element and provides feedback tied to observable behaviors rather than personal style. A coaching plan might focus on:

  • Opening moves: Briefly setting expectations and reinforcing a supportive learning environment.

  • Participation shaping: Inviting input from multiple roles and managing dominance.

  • Reasoning-focused analysis: Using prompts that connect actions to clinical frames such as recognition, prioritization, and escalation.

  • Closing summary: Ending with a concise synthesis linked to objectives and next-step behaviors on the unit.

What changes next session

In the following simulation, the instructor uses a structured opening, invites multiple perspectives early, and reserves the final two minutes for a concise summary tied to objectives. The rater repeats DASH scoring to assess improvement in targeted elements.

Practical tips for getting value from DASH quickly

Programs often see better adoption when they start small. Piloting DASH within one course or unit allows faculty and raters to become familiar with the elements and scoring approach before expanding across departments. A focused rollout also makes it easier to refine observation logistics and feedback conversations early on.

To keep the process practical, treat scores as coaching inputs rather than performance labels. Instead of trying to improve everything at once, identify one or two specific behaviors to strengthen before the next session. Targeted practice tends to produce more meaningful change than broad, general feedback.

Finally, revisit rater calibration periodically to maintain scoring consistency. Reviewing and discussing sample debriefs together helps preserve a shared interpretation of the behavioral anchors and keeps results comparable over time.

Building a consistent debriefing practice

High-quality debriefing does not happen by chance. It requires shared expectations, observable behaviors, and structured feedback that supports growth over time. DASH offers a practical framework to strengthen facilitation, align faculty across sites, and track measurable improvement. When used as part of an intentional coaching process, debriefing becomes a consistent, scalable standard of practice.

References

  1. Center for Medical Simulation. (n.d.). Debriefing assessment for simulation in healthcare (DASH). https://harvardmedsim.org/debriefing-assessment-for-simulation-in-healthcare-dash/

  2. Simon, R., Raemer, D. B., & Rudolph, J. W. (2010). Debriefing assessment for simulation in healthcare (DASH)© rater’s handbook. Center for Medical Simulation. https://harvardmedsim.org/wp-content/uploads/2017/01/DASH.handbook.2010.Final.Rev.2.pdf

  3. Brett-Fleegler, M., Rudolph, J., Eppich, W., et al. (2012). Debriefing assessment for simulation in healthcare: Development and psychometric properties. Simulation in Healthcare, 7(5), 288–294. https://pubmed.ncbi.nlm.nih.gov/22902606/

  4. Zargham, S., Hanson, A., Laniewicz, M., et al. (2020). Psychometric testing of the Debriefing Assessment for Simulation in Healthcare (DASH) for trainee-led, in situ simulations in the pediatric emergency department context. AEM Education and Training. https://onlinelibrary.wiley.com/doi/full/10.1002/aet2.10482

  5. Al-Khayat, T., Carter, S., Mauger, M., et al. (2024). Implementing the Debriefing Assessment for Simulation in Healthcare (DASH) tool for training medical faculty. Cureus, 16(9), e69290. https://pmc.ncbi.nlm.nih.gov/articles/PMC11471299/

  6. International Nursing Association for Clinical Simulation and Learning (INACSL). (n.d.). INACSL healthcare simulation standards of best practice®. https://www.inacsl.org/healthcare-simulation-standards-of-best-practice-
