Introduction

Training and leadership programs span an array of entities that offer specialized instruction and coaching in sector-specific content. Such programs take the form of institutes, fellowships, certificate programs, academies, and conventional training programs. Many program leaders recognize their program’s areas for refinement, particularly around fine-tuning the alignment between program activities and intended outcomes. (For a thorough conversation on this point, see the W.K. Kellogg Foundation’s Evaluating Outcomes and Impacts: A Scan of 55 Leadership Development Programs, pages 6 and 15.) The video and article below are intended for program leadership and their teams to use for meaningful and practical program assessment.

Types of Impact to Assess

There are three layers of impact to assess in training and leadership programs:

  1. Impact on consumers’ lives and careers (e.g., gains in knowledge, skills, and readiness to move in a new direction or pursue a new adventure)
  2. Impact on the immediate environment (e.g., the workplace, community, and local region)
  3. Impact on the sector (e.g., contributions to workforce power, professional networks, and innovation)

Assessing Impact

In this section, we emphasize the capacity of training and leadership programs to manage their own program assessment. We outline the types of data to collect and how to go about collecting it for tech-savvy, not-so-techie, smaller, and larger programs. Importantly, we focus on collecting data that serves as evidence of impact rather than of output or operational effectiveness, such as program reach and participant satisfaction. (See this tool by the Indiana Youth Institute for the difference between assessing program outputs vs. outcomes. Also check out Beyond Counting Heads by Anchoring Success.)

 

Types of data to collect

1.) Explore pre-/post-surveys and tests. The results of tests and surveys become quantifiable data. For each instruction and coaching topic (e.g., sector-wide economic and political issues), administer pre-/post-tests to establish a baseline of what consumers know, their skills, their organizational experience with the topic, and their role in the sector. This baseline can later be compared to what consumers know and can do by the end of their participation in the program. Use specific prompts to identify what consumers know, such as multiple-choice prompts like “Which two factors contribute to…”, and critical thinking opportunities for consumers to apply what they know, such as “In the following scenario, show how you would…” (Check out the Kellogg report, pages 33-34, for examples of survey/test questions.) Pre-/post-surveys can also be used to gather consumers’ motivations, experiences with leadership competencies, and so forth. As with pre-/post-tests, these surveys give program leadership information about the gains made by consumers. (For a minimal illustration of turning pre-/post- results into gain scores, see the sketch after this list.)

2.) Examine portfolios and consumers’ annotations about portfolio pieces. The portfolios and annotations can be quantified in some ways as well as serve as qualitative examples. (WASC has a higher education tool, an easy-to-use, one-page table, on how to carefully set up or refine the portfolio approach.)

3.) Study short- and long-term impact with 1-year and 3-year follow-up surveys. The results of these surveys also become quantifiable data. The data collected through this strategy should tell program leadership how consumers are applying their coaching and training experience (e.g., knowledge, skills, networks). (Check out the Kellogg report, pages 35-38, for examples of survey/test questions.)
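
To make the comparison step in pre-/post-testing concrete, here is a minimal sketch of how paired scores might be turned into simple gain summaries. It is an illustration only: the consumer IDs and the 0-100 scale are hypothetical, and a real program would pull the scores from its own records.

    # Minimal sketch: summarizing pre-/post-test results as gain scores.
    # The consumer IDs, topic, and 0-100 scale are hypothetical placeholders.
    from statistics import mean

    # Paired scores for one instruction and coaching topic.
    pre_scores = {"C01": 55, "C02": 62, "C03": 48, "C04": 70}
    post_scores = {"C01": 78, "C02": 81, "C03": 66, "C04": 88}

    def gain_summary(pre, post):
        """Return per-consumer gains and the average gain for consumers with both scores."""
        gains = {cid: post[cid] - pre[cid] for cid in pre if cid in post}
        return gains, mean(gains.values())

    gains, average_gain = gain_summary(pre_scores, post_scores)
    print("Per-consumer gains:", gains)
    print(f"Average gain: {average_gain:.1f} points")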

 

How to collect data

Tech-inclined programs

A great option is to let program activities automate data collection. This means that surveys, pre-/post-tests, and portfolios double as data collection tools. Make these electronic using your current database, website, or an open-source database. When program activities are electronic, the data they generate is automatically transferred, organized, and stored for later analysis.
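
As one hedged illustration of what that automatic transfer can look like, the sketch below stores a single electronic survey response in a small SQLite database using Python's standard library. The table and column names are hypothetical placeholders rather than a required schema; the same idea applies to whatever database or website backend your program already uses.

    # Minimal sketch: capturing an electronic response the moment it is submitted.
    # The table and column names ("responses", "consumer_id", etc.) are hypothetical.
    import sqlite3
    from datetime import date

    conn = sqlite3.connect("program_assessment.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS responses (
               consumer_id TEXT,
               tool TEXT,          -- e.g., 'pre-test', 'post-test', '1-year survey'
               question TEXT,
               answer TEXT,
               collected_on TEXT
           )"""
    )

    def store_response(consumer_id, tool, question, answer):
        """Append one answer so it is organized and stored for later analysis."""
        conn.execute(
            "INSERT INTO responses VALUES (?, ?, ?, ?, ?)",
            (consumer_id, tool, question, answer, date.today().isoformat()),
        )
        conn.commit()

    # Example: a post-test answer submitted through an electronic form.
    store_response("C01", "post-test", "Which two factors contribute to...", "B and D")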

 

Not-so-techie programs

As an alternative to electronic versions of surveys, pre-/post-tests, and portfolios, your program can invest in short paper-based materials. Choosing shorter rather than longer paper materials requires careful consideration of which types of information are most important to collect (only gather data that has a specific job to do). Shorter materials also decrease the likelihood of user error and data entry error.

The trick here is to identify a reliable, detail-oriented staff member to enter the results of the surveys, pre-/post-tests, and portfolios into spreadsheets (i.e., this is not a task for a junior volunteer). We also recommend that not-so-techie programs choose just one or two types of data to collect (e.g., pre-/post-tests and a 1-year follow-up survey); this approach fits programs that do not yet have the technological infrastructure for comprehensive program assessment. It is better to collect and use two kinds of data well than to struggle with more.
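
Because hand entry invites mistakes, a simple spot-check of the spreadsheet can catch many of them. The sketch below reads a CSV export with hypothetical columns (consumer_id, pre_score, and post_score on a 0-100 scale) and flags rows that look like entry errors; the file name, column names, and score range are assumptions to adapt to your own materials.

    # Minimal sketch: flagging likely data-entry errors in a CSV exported from a spreadsheet.
    # The file name, column names, and 0-100 score range are hypothetical assumptions.
    import csv

    def flag_entry_errors(path):
        """Return (row number, problem) pairs for rows that need a second look."""
        problems = []
        with open(path, newline="") as f:
            for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
                if not (row.get("consumer_id") or "").strip():
                    problems.append((i, "missing consumer_id"))
                for col in ("pre_score", "post_score"):
                    value = (row.get(col) or "").strip()
                    if not value.isdigit() or not 0 <= int(value) <= 100:
                        problems.append((i, f"{col} is not a whole number between 0 and 100"))
        return problems

    for row_number, problem in flag_entry_errors("pre_post_entries.csv"):
        print(f"Row {row_number}: {problem}")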

 

Smaller programs

Is your program small in terms of consumers? Let’s consider programs in which 10 to 15 consumers per year receive in-depth, quality instruction and coaching. The trick here is that smaller programs need to be particularly tech-savvy. Smaller programs benefit from the most comprehensive approach to program assessment. We encourage your program team to collect at least three types of data (e.g., surveys, pre-/post-tests, and portfolios) so that, once the data has been analyzed, you can say as much as possible about the impact of the program even with small cohorts.

Is your program small in terms of the staff team? Whether the number of consumers is fewer or greater than 10 to 15, a small staff team needs to include a coordinator of program and instruction tasks, a coordinator of communications and social media, and a coordinator of assessment, documentation, and reporting. Does your current staff team have these talents? Are the talents dispersed across multiple people? If yes to both questions, we encourage your program to reorganize roles so that each person is focused on just one coordination area.

We encourage this because the staff member who coordinates assessment, documentation, and reporting needs to devote most of their time to being clear about what data is collected, addressing any tool or data errors, administering the tools, and so on. If the coordination of assessment is sprinkled across multiple people in a leadership and training program, assessment tasks will feel daunting: each person has to reinitiate tasks after their attention has been elsewhere, away from the rhythm of assessment work. If the coordinator of assessment wants to experiment further with creative data collection strategies, check out the ideas in Dynamic Definitions of Data by Anchoring Success.

 

Larger programs

Is your program large in terms of consumers? Let’s consider programs in which 50 to 100 consumers per year receive in-depth, quality instruction and coaching. Your program must be tech-savvy in order to deliver the instruction and coaching (or your team is composed of miracle workers). We encourage your team, if it has not already done so, to align all data collection tools: make sure redundant data is not collected, that there is an automated life cycle or timeline for administering each tool, and so forth.
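
One lightweight way to set up that automated timeline is to derive each tool's due date from a consumer's cohort start date. The sketch below uses a hypothetical schedule of tools and day offsets; a real program would adjust the offsets and wire the output into its existing database or email system.

    # Minimal sketch: an automated life cycle for administering data collection tools.
    # The tool names and day offsets are hypothetical; adjust them to your program calendar.
    from datetime import date, timedelta

    SCHEDULE = [
        ("pre-test", 0),             # at program start
        ("post-test", 120),          # at the end of instruction and coaching
        ("1-year follow-up", 365),
        ("3-year follow-up", 365 * 3),
    ]

    def administration_timeline(start):
        """Return (tool, due date) pairs for one cohort start date."""
        return [(tool, start + timedelta(days=offset)) for tool, offset in SCHEDULE]

    for tool, due in administration_timeline(date(2024, 9, 1)):
        print(f"{tool}: send on {due.isoformat()}")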

Additionally, we encourage programs with a large number of consumers to refresh their data collection tools about every three years to ensure that the data collected remains the most meaningful for decision-making and for the ongoing quality of the program. The “how” of data collection here should focus on A.) making sure data collection tools are aligned with intended outcomes; B.) confirming that the data gives the program meaningful information for decision-making; and C.) informing consumers of the data collection timeline so that they know when and how to respond to your tools (e.g., an emailed survey) and can see the results of past program assessments.

Is your program large in terms of the staff team? A large staff team for a training or leadership program is a dream for many organizations. In terms of “how” to collect data, your team likely already includes the necessary talents. The gift of a large team is that program assessment fits somewhere nicely rather than being crammed into someone’s already full workload. The curse of a large staff team is a lack of clarity around how non-assessment staff contribute indirectly to program assessment (e.g., which staff commit to reviewing early interpretations of data analysis). A comprehensive program assessment plan indicates who has direct and indirect roles in the collection and analysis of data. (For examples of comprehensive assessment plans across specializations, check out the Anchoring Success Pinterest page for curated materials.)

Closing: Rolling Out a Practical Approach to Assessment

If your program is not ready to partner with a specialist on program assessment, we recommend mastering one assessment step at a time. Well-collected data is gold; it is better to have a single type of data that is precise and rigorous than many types that are not. This way, program leadership can be confident about the decisions made from that data. Programs with piles of data that cannot be put to work are in an unfortunate situation.

 
