Weekly Assessment Tips: A Vlog That Supports Assessment in the Human Services and Education Sectors
Introduction
We are inspired by organizations and schools that respond to unexpected assessment results in creative and interesting ways. When the data show that client satisfaction is low, that the ratio of input to output is too large, and/or that scores on post-tests are lower than pre-test scores, these are opportunities for advancement. Our tips this week illustrate two ways to drive your decision-making in light of unexpected assessment results. We recommend engaging in both tips, taking the first step in the short term and the second in the long term.
Tips
1. Collaboratively develop direct actions (operational or programmatic) to respond to the results.
2. Explore and coordinate adjustments to data collection strategies.
Examples
The three scenarios below illustrate the most frequent and most interesting types of unexpected results that we see organizations grapple with.
Scenario 1: Client satisfaction survey results indicate low scores for your staff team on “trust,” “competence,” etc.
Tip 1
For organizations that engage diverse consumer groups, explore with staff how "trust," "competence," etc. are communicated across cultures and generations. For example, body language can communicate trustworthiness differently across groups; eye contact may be problematic in some interactions and vital in others.
Tip 2
Survey clients to gather scenarios and examples of what conveys "trust," "competence," etc. to them.
Scenario 2: The ratio of input (e.g., staff reaching out to clients) to output (e.g., successful facilitation of face-to-face meetings with clients) is too large. For example, 120 outreach attempts that yield only 15 meetings is an 8:1 ratio.
Tip 1
Have staff teams experiment with building scripts for communicating with clients. Specific phrasing can convey the purpose of the outreach and the benefits of responding. Staff teams may uncover important components of an effective script, and that script can support the onboarding of future staff members.
Adjust staff shifts to better match client schedules, for example 10 a.m. to 7 p.m.
Tip 2
Track how much is accomplished during the face-to-face visits; a high level of accomplishment in those meetings may offset the lower output rate, as the sketch below illustrates.
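Where outreach and meeting data are already being tracked, a quick calculation can make this trade-off concrete. Here is a minimal sketch in Python; the counts, the goals-per-meeting figures, and the variable names are all hypothetical and purely illustrative, not drawn from any particular program.

```python
# Minimal sketch: compare a raw input-to-output ratio with an
# accomplishment-weighted view. All figures are hypothetical.

outreach_attempts = 120   # input: staff contacts with clients
meetings_held = 15        # output: face-to-face meetings facilitated
goals_per_meeting = [3, 2, 4, 3, 5, 2, 3, 4, 3, 2, 4, 3, 3, 5, 2]

# Raw ratio: outreach attempts needed per meeting (8.0 here).
raw_ratio = outreach_attempts / meetings_held
print(f"Outreach-to-meeting ratio: {raw_ratio:.1f} : 1")

# Weighted view: outreach attempts needed per goal accomplished.
# A high accomplishment rate in meetings can offset a low conversion rate.
total_goals = sum(goals_per_meeting)
attempts_per_goal = outreach_attempts / total_goals
print(f"Goals accomplished in meetings: {total_goals}")
print(f"Outreach attempts per goal accomplished: {attempts_per_goal:.1f}")
```

If the attempts-per-goal figure looks acceptable even when the raw ratio looks alarming, that supports the interpretation that the meetings themselves are highly productive.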
Scenario 3: Clients’ post-test scores are lower than their pre-test scores, or show too little gain over them.
Tip 1
Coordinate a focus group with clients whose post-test scores were low. With the group, explore their insights on the score discrepancies.
Contact the author of the curriculum and/or test. Find out how often they have seen this occur and what they recommend for the next application of the curriculum.
Tip 2
Adjust the pre-/post-test design. After multiple-choice or yes/no questions, add a qualitative follow-up question such as "Why did you choose this answer?" This way, additional context is collected with each response at the point of the pre-test. For example, if a cohort of consumers rates itself very high on safety in the pre-test, the follow-up questions may collect information such as "I already took a training on this topic," "My brother explained this to me," and/or "I can just tell." Once the pre-test scores are collected, there is greater opportunity to analyze the factors behind surprisingly high pre-test scores (a small analysis sketch follows these tips).
Adjust the pre-test so that it does not simply capture self-ratings of knowledge but actually tests knowledge and its application through critical thinking.
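For teams that keep pre-/post-test responses in a spreadsheet or simple database, a small tally can surface patterns in those follow-up answers. Below is a minimal sketch, assuming a hypothetical 1-5 score scale and made-up response records and reason categories; nothing here reflects real client data.

```python
from collections import defaultdict

# Minimal sketch: group hypothetical pre-/post-test scores (1-5 scale)
# by the qualitative follow-up reason each client gave at pre-test.
responses = [
    {"pre": 5, "post": 3, "reason": "already took a training"},
    {"pre": 5, "post": 4, "reason": "family member explained it"},
    {"pre": 4, "post": 4, "reason": "can just tell"},
    {"pre": 2, "post": 4, "reason": "new to the topic"},
    {"pre": 5, "post": 3, "reason": "already took a training"},
]

by_reason = defaultdict(list)
for r in responses:
    by_reason[r["reason"]].append(r)

# For each reason, compare average pre- and post-test scores.
for reason, group in by_reason.items():
    avg_pre = sum(r["pre"] for r in group) / len(group)
    avg_post = sum(r["post"] for r in group) / len(group)
    change = avg_post - avg_pre
    print(f"{reason}: n={len(group)}, avg pre={avg_pre:.1f}, "
          f"avg post={avg_post:.1f}, change={change:+.1f}")
```

A pattern such as clients who "can just tell" starting high and then dropping may suggest inflated self-ratings at pre-test rather than a failure of the curriculum.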
Gains
1. Move from disappointment to excitement, seizing these opportunities to encourage a learning culture in your organization.
2. Encourage creative and interesting actions that respond precisely to each result of concern.
3. Record and track actions in order to build a concrete record of data-driven decision-making.
We would love to hear about examples and experiments you’re undertaking to engage unexpected assessment results. Please share them with us at contact@anchoringsuccess.com.
Brought to you by Anchoring Success
Feature image borrowed from a World Bank report