This article is about characterizing and appraising something of interest. Evaluation is typically long term and conducted at the end of a period of time. It is the structured interpretation and giving of meaning to predicted or actual impacts of proposals or results. It looks at what is either predicted or what was accomplished, and how it was accomplished.
Evaluation can be defined as a systematic, rigorous, and meticulous application of scientific methods to assess the design, implementation, improvement, or outcomes of a program. The focus of this definition is on attaining objective knowledge and on scientifically or quantitatively measuring predetermined, external concepts. In this definition the focus is on facts as well as value-laden judgments of the program's outcomes and worth. The core of the problem is thus defining what is of value. From this perspective, evaluation "is a contested term", as "evaluators" use the term to describe an assessment or investigation of a program, whilst others simply understand evaluation as being synonymous with applied research.
Two functions are commonly distinguished according to the evaluation's purpose: formative evaluations provide information for improving a product or process, while summative evaluations provide information on short-term effectiveness or long-term impact to inform decisions about adopting a product or process. Not all evaluations serve the same purpose: some serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of evaluation types would be difficult to compile. Strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but such adherence works against evaluators developing new strategies for dealing with the myriad problems that programs face. Some reasons for this situation may be the evaluator's failure to establish a set of shared aims with the evaluand, the creation of overly ambitious aims, or a failure to compromise and incorporate the cultural differences of individuals and programs within the evaluation aims and process.
None of these problems is due to a lack of a definition of evaluation; rather, they arise from evaluators attempting to impose predisposed notions and definitions of evaluation on clients. Evaluators may encounter complex, culturally specific systems resistant to external evaluation. Furthermore, the project organization or other stakeholders may be invested in a particular evaluation outcome. For these reasons, specific guidelines particular to the evaluator's role are required for managing unique ethical challenges. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. These provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare.
This requires quality data collection, including a defensible choice of indicators, which lends credibility to findings. Findings are credible when they are demonstrably evidence-based, reliable, and valid. This requires that evaluation teams comprise an appropriate combination of competencies, so that varied and appropriate expertise is available for the evaluation process, and that evaluators work within their scope of capability. A key element of this principle is freedom from bias, which is underscored by three principles: impartiality, independence, and transparency. Independence is attained by upholding independence of judgment, such that evaluation conclusions are not influenced or pressured by another party, and by avoiding conflicts of interest, such that the evaluator does not have a stake in a particular conclusion. Conflict of interest is particularly at issue where funding of evaluations is provided by bodies with a stake in the evaluation's conclusions, as this is seen as potentially compromising the independence of the evaluator.
Whilst it is acknowledged that evaluators may be familiar with the agencies or projects they are required to evaluate, independence requires that they not have been involved in the planning or implementation of the project. A declaration of interest should be made, stating any benefits from or associations with the project. Impartiality pertains to findings being a fair and thorough assessment of the strengths and weaknesses of a project or program. This requires taking due input from all stakeholders involved and presenting findings without bias, with a transparent, proportionate, and persuasive link between findings and recommendations. Evaluators are thus required to delimit their findings to the evidence. One mechanism for ensuring impartiality is external and internal review. Transparency requires that stakeholders are aware of the reason for the evaluation, the criteria by which it is conducted, and the purposes to which the findings will be applied.
This is particularly pertinent with regard to those who will be affected by the evaluation findings. Protection of people includes ensuring informed consent from those involved in the evaluation, upholding confidentiality, and protecting the identity of those who may provide sensitive information toward the program evaluation. Evaluators are ethically required to respect the customs and beliefs of those affected by the evaluation or program activities, for example by respecting local customs. Where stakeholders wish to raise objections to evaluation findings, such a process should be facilitated through the local office of the evaluation organization, and procedures for lodging complaints or queries should be accessible and clear. Access to evaluation documents by the wider public should be facilitated so that discussion and feedback are enabled. Furthermore, international organizations such as the World Bank have independent evaluation functions, and the UN has established norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards. Related efforts aim to improve MDB effectiveness and accountability, share lessons from MDB evaluations, and promote evaluation harmonization and collaboration.
Many of the evaluation approaches in use today make genuinely unique contributions to solving important problems, while others refine existing approaches in some way. House divides each epistemological approach into two main political perspectives. The values orientation includes approaches primarily intended to determine the value of an object, an orientation sometimes called true evaluation. Two pseudo-evaluation approaches, politically controlled studies and public relations studies, are based on an objectivist epistemology from an elite perspective. Six quasi-evaluation approaches use an objectivist epistemology.