Good measures are not easy to find. We follow SIS design principles that say, “Evaluation tools, measures and indicators should be technically sound, based on evidence, open to criticism and continuously improved”. Experienced evaluators know how hard it is to select indicators and measures that have construct validity (which includes reliability and predictive validity), are credible to stakeholders (face validity), are feasible to collect, analyze and communicate, and will have a positive effect on service outcomes. Much of our time goes into researching measures that are useful, timely, valid and brief. 

We choose indicators from a wide range of policy and research literature. Our first 'go-to' sources, though, are indicator registries and lists that have already been screened. Following are just a few of the major registries that we review when we're looking for measures. You may notice that we prefer measures that are brief, to reduce the data collection burden. If we have a choice between a 20-question measure and a one-question measure of the same construct (e.g., financial insecurity), we'll take the shorter one unless there's a compelling reason not to.

We like to borrow questions from national household surveys like the Canadian Community Health Survey (for example, "How would you describe your sense of belonging to your local community?", one of the key measures of mental health outcomes adopted by the Government of Canada for the population as a whole and for immigrants in particular). By adopting questions from national surveys, we get the additional benefit of baseline data for your population. And by using international taxonomies such as the WHO's and the UN Sustainable Development Goals (SDGs), we can aggregate results across sectors, organizations and measures.