The content of the quality standards was informed by existing national and international standards and guidelines [27, 28, 29, 30], state and federal government policy for primary care, chronic conditions, and diabetes [31, 32], and the National Safety and Quality Health Service (NSQHS) Standards [33].
The standards also recommend that DSMES be accessible and culturally appropriate, and that they provide suitable information and education for all people with diabetes, their families, and carers. They also recognize the importance of strategies promoting active learning, goal setting, and supported decision making.
Programs should have a written curriculum, standardized facilitator training, and a quality development pathway to ensure fidelity and facilitate quality assurance.
Existing topic specific and comprehensive NDSS DSMES were assessed against the newly developed standards to ensure consistent quality across all states and territories. In September, representatives of each Agent assessed their existing DSMES using a user-friendly self-assessment tool developed to guide application of the quality standards in practice.
The tool is presented in Additional file 1. Agent representatives were employed as Health Services Managers or Program Managers in their respective organizations and attended a one-and-a-half-day training workshop facilitated by the NET. The reviewers then met to discuss the outcomes of the independent assessments and whether individual programs met the quality standards. Discrepancies between reviewers were identified and resolved through mutual agreement.
Behavior change outcomes were not expected from tier 1 basic education programs; thus, no formal review of these programs was undertaken. However, the standards provide a general guide for the provision of basic education programs through the NDSS. The Framework consists of nationally standardized outcomes and indicators, program categories, objectives, measurement tools, evaluation processes, and quality standards.
These are summarized in Fig. Four indicators were adopted: knowledge and understanding, self-management, self-determination, and psychological adjustment. Programs were classified into three distinct categories. Basic education programs were characterized as high-reach and low-cost and did not incorporate behavioral strategies to foster improvements in self-management, self-determination, or psychological adjustment. Thus, while basic education programs addressed the knowledge and understanding indicator, they were assessed as unlikely to address the indicators of self-management, self-determination, or psychological adjustment.
Group DSMES that were of longer duration, provided structured self-management education covering a range of diabetes-related topics, and targeted all four Framework indicators were classified as comprehensive. Comprehensive DSMES are highly structured and resource-intensive, with a higher estimated cost per participant and lower reach than programs classified at other levels.
However, such programs were anticipated to be the most effective in eliciting behavioral change. The burden of measuring constructs related to all four indicators was deemed too great, and the assessment of constructs related to self-determination and psychological adjustment was prioritized. Moreover, direct evaluation of clinical outcomes and cost was not feasible, given the available resources, accessibility of information, and scope of the NDSS. Likewise, the intensive longitudinal surveillance necessary to determine cost savings would require a substantial financial investment beyond current levels of resourcing and access to information not currently available to Agents or evaluators.
Four objectives related to self-determination and psychological adjustment were identified for DSMES: (a) increased diabetes-related empowerment, (b) participant perceptions that facilitators were autonomy supportive, (c) reduced diabetes-related distress, and (d) positive consumer satisfaction.
Four instruments were selected to measure outcomes against these objectives. Items of the empowerment measure are statements reflecting beliefs of empowerment and confidence to manage diabetes. Higher scores indicate greater diabetes-related empowerment. The Health Care Climate Questionnaire (HCCQ) [37] was selected to evaluate a construct related to the indicator of self-determination. Perceptions of autonomy support from health care providers have been associated with significant improvements in autonomous motivation for glucose control and reductions in HbA1c among people receiving treatment for diabetes [38].
Inclusion of the HCCQ provided a mechanism for monitoring the fidelity of program delivery by educators specifically trained in a person-centered approach.
Items are statements reflecting perceived autonomy support. Higher scores represent greater perceived autonomy support from health care providers. Diabetes distress refers to psychosocial distress specifically related to the burden of living with, and managing, diabetes and its complications [39].
Diabetes distress is highly prevalent and associated with sub-optimal self-care and poorer emotional well-being [40]. Higher scores indicate greater diabetes-related distress. A global measure of consumer satisfaction was also adopted. The Net Promoter Score (NPS) [42, 43] reflects the proportion of participants likely to recommend a program to others (promoters) relative to the proportion unlikely to do so (detractors). Higher NPS scores indicate greater participant satisfaction. Basic education programs were nominated at the first tier of evaluation.
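To make the satisfaction metric concrete, the sketch below shows the conventional NPS calculation (an illustrative Python example assuming the standard 0-10 likelihood-to-recommend item and the usual promoter/detractor cut-points; the exact NDSS scoring procedure is not specified here):

```python
def net_promoter_score(ratings):
    """Compute a Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Conventional scoring (assumed here, not taken from the NDSS protocol):
    promoters score 9-10, detractors score 0-6, passives score 7-8.
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings provided")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: ten hypothetical post-program responses -> NPS of 40.0
print(net_promoter_score([10, 9, 9, 8, 7, 10, 6, 9, 10, 5]))
```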
Evaluation focused on knowledge and understanding, and consumer satisfaction. As basic programs aim only to increase knowledge and understanding, with no focus on behavioral change, tier one evaluation excluded measures of diabetes empowerment, diabetes distress, and autonomy support, and the remaining measures were administered immediately post-program.
DSMES classified as topic specific were ascribed to the second tier of evaluation. Evaluation included assessment of knowledge and understanding, confidence for self-management, self-determination, and psychological adjustment. As behavioral change was anticipated from DSMES within this tier, data collection was planned to take place pre- and post-program participation to assess changes in outcome measures. DSMES classified as comprehensive were nominated to the third, and highest, evaluation tier.
More complex evaluation was planned, with assessment of all behavior change objectives relating to diabetes empowerment and diabetes distress. Data collection was planned for three time points: pre-program, post-program, and follow-up. The final NDSS programs included within each category, the indicators addressed within each of those categories, and the ascribed evaluation tiers and evaluation processes relevant to each tier are presented in Table 1.
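For illustration only, the category-to-tier mapping described above (and summarized in Table 1) can be expressed as a simple configuration. The structure below is a sketch inferred from the narrative; the category labels and time points are placeholders rather than the official NDSS specification.

```python
# Illustrative mapping of NDSS program categories to evaluation tiers,
# targeted Framework indicators, and planned measurement time points.
# Inferred from the narrative above; not the official Table 1 specification.
EVALUATION_PLAN = {
    "basic education": {
        "tier": 1,
        "indicators": ["knowledge and understanding"],
        "time_points": ["post-program"],
    },
    "topic specific": {
        "tier": 2,
        "indicators": [
            "knowledge and understanding",
            "self-management",
            "self-determination",
            "psychological adjustment",
        ],
        "time_points": ["pre-program", "post-program"],
    },
    "comprehensive": {
        "tier": 3,
        "indicators": [
            "knowledge and understanding",
            "self-management",
            "self-determination",
            "psychological adjustment",
        ],
        "time_points": ["pre-program", "post-program", "follow-up"],
    },
}

def measurement_schedule(category: str) -> list[str]:
    """Return the planned data-collection time points for a program category."""
    return EVALUATION_PLAN[category]["time_points"]

print(measurement_schedule("comprehensive"))  # ['pre-program', 'post-program', 'follow-up']
```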
It was expected that programs meeting these standards would be more likely to achieve the outcomes and indicators adopted in the National Evaluation Framework than programs that did not meet the standards.
The assessment and review of existing programs resulted in the identification of a suite of programs meeting the quality standards. These programs were recommended and subsequently nominated for national delivery by the NDSS administrator, Diabetes Australia.
Five topic specific programs met the quality standards: self-management of carbohydrate intake (CarbSmart), food shopping and interpreting nutrition labels (ShopSmart), glucose monitoring (MonitorSmart), foot care (FootSmart), and managing medication (MedSmart).
Gaps in service provision were then noted, including a lack of programs to support physical activity, self-management of insulin, and the use of insulin pumps. To fill these gaps, two programs initially assessed as not meeting the standards underwent a quality improvement process and were re-assessed, resulting in their inclusion in the national program suite.
All other programs that did not meet the requisite quality standards were no longer supported for delivery through the NDSS. This National Evaluation Framework has led to nationally consistent delivery of evidence-based, person-centered DSMES, including a standardized curriculum and facilitator training. Data collection and analysis are ongoing, facilitating continued evaluation and quality improvement.
As health care costs continue to increase, government-funded organizations are under increasing pressure to demonstrate that programs and services are cost-effective, have impact, and achieve targeted outcomes.
Effective evaluation of programs and services, and transparency in the expenditure of public funds, are critical. The National Evaluation Framework helps to maintain accountability for government spending and, importantly, accountability to people in Australia living with diabetes. The Framework represents an innovative and comprehensive approach to achieving national consistency in the quality, delivery, and evaluation of DSMES.
The methods described in this article may guide program administrators and service providers in other regions of the world, those providing services for other clinical populations, and those addressing other chronic conditions that require substantial self-management support. Moreover, the Framework could be implemented or adapted for use by other service providers who administer multiple diabetes education programs across a range of settings.
A participatory approach involving service providers, scheme administrators, and evaluation professionals resulted in consensus on the National Evaluation Framework within the NDSS. The Framework provides for the categorization of the diabetes education and support programs available nationally through the NDSS and guides evaluation processes based on targeted indicators.
Moreover, for the first time in Australia, quality standards for diabetes education and support facilitated through the Australian Government-funded NDSS were developed and implemented. These standards ensure that all NDSS programs are of the highest quality, are person-centered, and contain key aspects known to be associated with optimal consumer outcomes, in addition to ensuring that programs comply with the Australian NSQHS Standards.
What are we learning from evaluation? How will we use the learning to make our efforts more effective? Each of the resources below uses the steps in the CDC Evaluation Framework to accommodate the program context and to meet or exceed all relevant standards.
Division of Cancer Control and Prevention, Comprehensive Cancer Control Branch: the Program Evaluation Toolkit is designed to help grantees plan and implement evaluations of their NCCCP-funded programs by providing general guidance on evaluation principles and techniques, as well as practical templates and tools.
Division of Sexually Transmitted Diseases Prevention: Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs provides step-by-step guidance on how to design and implement a program evaluation, building the evaluation capacity of STD programs so that they can internally monitor their program activities, understand what is and is not working, and improve their efforts.
Linking program performance to program budget is the final step in accountability. The early steps in the program evaluation approach, such as logic modeling, clarify these relationships, making the link between budget and performance easier to establish and more apparent.
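As a minimal sketch of how a logic model makes these links explicit (the entries below are hypothetical and chosen purely for illustration, not drawn from any specific CDC program), budgeted inputs can be traced through activities and outputs to the outcomes for which a program is held accountable:

```python
# Hypothetical logic model: each budgeted input is linked forward to the
# activities, outputs, and outcomes it is expected to support, which makes
# the budget-performance relationship explicit and easier to audit.
LOGIC_MODEL = {
    "inputs": ["program funding", "trained facilitators", "curriculum materials"],
    "activities": ["deliver group education sessions", "train facilitators"],
    "outputs": ["sessions delivered", "participants reached"],
    "short_term_outcomes": ["increased knowledge and understanding"],
    "intermediate_outcomes": ["improved self-management behaviors"],
    "long_term_outcomes": ["reduced diabetes-related complications"],
}

def trace(model: dict) -> None:
    """Print the logic model as a simple input-to-outcome chain."""
    order = ["inputs", "activities", "outputs",
             "short_term_outcomes", "intermediate_outcomes", "long_term_outcomes"]
    for stage in order:
        print(f"{stage}: {', '.join(model[stage])}")

trace(LOGIC_MODEL)
```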
While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes.
Surveillance is the continuous monitoring of, or routine data collection on, various factors. Surveillance systems have existing resources and infrastructure. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer-term and population-based outcomes.
There are limits, however, to how useful surveillance data can be for evaluators. For example, surveillance systems may have limited flexibility to add questions for a particular program evaluation. In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously.
Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program.
Data from specific questions for an evaluation are more flexible than surveillance data and may allow program areas to be assessed in greater depth. Evaluators can also use qualitative methods, such as focus groups and interviews.
Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model. Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at prevention and control of a public health problem, this is seldom realistic. Of the ten concepts contrasted in the table, the last three are especially worth noting.
Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.
Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.
Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.
Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries.
Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.
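A minimal sketch of one such design is a paired pre-post comparison on an outcome measure. The data below are hypothetical, and a real evaluation would also need to address attrition, confounding, and an appropriate comparison condition:

```python
import math

def paired_change_summary(pre, post):
    """Summarize pre-to-post change for matched participants.

    Returns the mean change, its standard error, and a paired t statistic.
    Illustrative only: hypothetical data, no adjustment for attrition or confounding.
    """
    if len(pre) != len(post) or len(pre) < 2:
        raise ValueError("need matched pre/post scores for at least 2 participants")
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    se = sd_diff / math.sqrt(n)
    t_stat = mean_diff / se if se > 0 else float("inf")
    return {"n": n, "mean_change": mean_diff, "standard_error": se, "t": t_stat}

# Hypothetical outcome scores (higher = better) before and after a program.
pre_scores = [2.8, 3.1, 3.4, 2.9, 3.6, 3.0]
post_scores = [3.5, 3.4, 3.9, 3.2, 3.8, 3.6]
print(paired_change_summary(pre_scores, post_scores))
```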
The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference.
You determine the market by focusing evaluations on questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where the questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.
The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups. Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them? Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand? Propriety: Does the evaluation protect the rights of individuals and protect the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community? Accuracy: Will the evaluation produce findings that are valid and reliable for those who will use the results?