Inertia is a fact of life. No matter how detailed or well-designed your data plan may be, it won’t flourish without consistent attention. But it’s not always feasible to add dedicated data meetings to your team’s already-cramped schedule. Instead, consider what opportunities for small but frequent reviews already exist in your system.
Today: creating consistent data checks that hone your skills and build your team’s momentum.
TACKLING COMMON DATA ISSUES | PROBLEM 6
The data isn’t shared and used frequently within the planning team
Never underestimate the power of habit, particularly when it comes to good data practices. The number one danger in a fledgling program is an initial burst of enthusiasm for collecting data that then goes unused, wasting your time and killing the project's momentum.
This doesn’t mean you need to review every piece of information you’re tracking every day — quite the opposite. Aim for a single key metric or an important (but limited) area where predictive data can provide immediate and valuable benefits. Then use the results to build credibility, excitement, and drive.
Examine the existing opportunities
Given the busy schedule of an instructional coaching team, adding new data review meetings is sometimes just not feasible. Instead, ask where you might find opportunities for review in existing calendar slots. Weekly team check-ins, one-on-one sessions, or quarterly all-district meetings all present the opportunity to go over the numbers as a group — even if it’s just five minutes at the start.
Start small and expand
Building a data habit from scratch is no small task. Rather than crafting a full dashboard, select a point or two to focus on at the start. But how do you decide what those points should be?
As a rule of thumb, the more frequent the review, the more granular that single data point can be. A small group of instructional coaches that meets once a week has the necessary familiarity with the data to draw conclusions from one metric. A twice-yearly all-district staff meeting, on the other hand, is better served by a walkthrough of high-level trends.
Let’s say you’re tracking how new teachers rate their proficiency with the district’s PBIS framework. By the end of the first semester, your goal (based on previous years’ experience) is that 85% of the teachers will rate themselves as “proficient” or “highly proficient” in the framework.
Someone whose work doesn’t involve PBIS may be concerned if they check in halfway through the semester and only 28% of the teachers have met the goal. But someone who works in PBIS full-time knows, based on past years’ experience, that there’s usually a sharp upward tick in November as the new teachers master the learning curve and gain confidence in their skills.
The first person in this example doesn’t work with the project, so they only need a high-level view, such as the overall numbers for the year. The second person, though, has a driving need to keep up with the data on a week-by-week basis. 28% might not be a cause for concern at the halfway point, but 8% might be — and catching the problem early means fixing it in time.