Purposes of Monitoring & Evaluation
Van Mierlo et al. (2010) distinguish three paradigms for classifying existing M&E methodologies: (1) the positivist; (2) the constructivist; and (3) the transformative paradigm. The positivist paradigm is grounded in the idea of a single, perfectly knowable reality, so that change can be planned and controlled. It is exemplified by the LogFrame approach (LFA, see below). The constructivist paradigm attempts to take into account the multiple perspectives of stakeholders. It places a strong emphasis on learning, but it may not be sufficiently innovative in that it builds on existing perspectives rather than challenging them; it may also be criticized for its absolute relativism. Examples include fourth generation evaluation, responsive evaluation, learning histories, most significant change, and horizontal evaluation (for references see Van Mierlo et al., 2010). According to Mertens (1999), the transformative paradigm is particularly useful when dealing with asymmetric power relationships, e.g. for empowering the poor, but it is equally useful for collaborative efforts to produce system change or innovation. Transformative approaches aim for objectivity by seeking a balanced and in-depth understanding of the different perspectives. Of the M&E approaches for learning in innovation systems presented here, reflexive monitoring, soft systems methodology (RAAKS), and developmental evaluation are among the more dedicated examples of transformative evaluation, whereas action research and outcome mapping could also qualify but generally have a wider range of application.

At a more practical level, Woodhill (2006) defines the purposes of M&E as: (1) accountability; (2) operational management; (3) strategic development or specific innovation; (4) knowledge creation or generic innovation; and (5) empowerment or capacity building. Accountability and operational management clearly belong to the positivist school of evaluation, and so does knowledge creation. Strategic development, innovation, and empowerment are important aims of both the constructivist and transformative approaches. The Logical Framework Approach (LFA or LogFrame) is exemplary of the positivist school and widely used in development projects.
The critique of LogFrame and similar approaches
LogFrame (LFA) is the tool of preference for project design, monitoring and evaluation (M&E) in international development. It is very useful for setting up a well-structured M&E framework that will satisfy the requirements of donor organizations, especially for accountability and operational information. LogFrame uses a logic model of how a development programme should work to solve identified problems (see Fig. 2 from: Wholey, 2004). The model is based on the hypothesis that all inputs can and must be foreseen, and that every input should and will lead to a measurable outcome (Earle, 2002). This assumption is already problematic in the Western context where LFA was developed, and has long since been abandoned there; it is even less tenable in development projects and programmes in the South, where goals tend to be less simple, clear, accepted and comparable with each other, and knowledge of causal links is weaker (Gasper 1997: 2). From a "systems" point of view, it could be argued that LFA presupposes "systemic invariance". Uphoff (1992) points out that analysis of the type used in LFA, with its emphasis on attribution and accountability, can be dangerous as it leads to simplified, reductionist thinking. The resulting disconnect between outputs, short-term outcomes and longer-term outcomes has been known to the evaluation community for almost half a century (Suchman, 1967, in IDRC, 2001).
With a little exaggeration, one could say that under conditions where creative, innovative solutions are called for, LogFrame behaves like a steamroller, rigidly ignoring or crushing the unexpected where it should have nourished necessary, new and promising initiatives. It is not surprising that in such situations evaluation becomes an exercise in futility and gives rise to frustration, both at the donor end and among project staff working in the field. Efforts have been made to graft more participatory and learning elements onto the Logical Framework and similar approaches, but many believe there must be real rather than token room for change, evolution and iterative learning.
Monitoring & Evaluation approaches for learning
In their introduction to "systems concepts in evaluation", Williams and Imam (2006, pp. 6-7) list a number of reasons for the paradigm shift in M&E that has taken place over the past decades, including the need to: (1) get to the core of the issue; (2) avoid getting lost in the complexity of the situation; (3) understand the dynamics of the issue and identify leverage points; (4) better address the needs of the community being served; (5) ensure that all stakeholders are properly represented; (6) resolve multiple perspectives; (7) expose the underlying assumptions; and (8) have at their disposal a new array of tools to accommodate the differences among stakeholders. These reasons also represent the kind of learning that M&E for learning (M&E4L) aims to promote. They mostly correspond to the categories of strategic development, empowerment, and capacity building in Woodhill's shortlist of purposes of M&E (see above).
In practice, many M&E4L practitioners are not very concerned with the hair-splitting details of one methodology or another, but simply use a number of tools to achieve their goal, which is often best described as bringing a diverse group of people together and encouraging stakeholders to reach a point where they collaborate in a process of learning, planning, and working for their mutual benefit. These practical facilitators may well have borrowed a number of tools from one approach or another, without being aware whether a tool "belongs" to some methodology or not. This points to the notion that the various methodologies are in fact no more than attempted formalizations by theorizing practitioners.
Theoretical coherence may be a desirable ideal, but the unpredictable demands of the learning process defy the logic of the kind of structuring some of us would have preferred. Interestingly, this is a problem some of the main theorizers behind systems learning seem to have struggled with, too. They appear to have tried to structure their approaches into flexible "frameworks", while at the same time providing sufficient handles (or tools) for these frameworks to remain practicable. Judge for yourself! Here are the five approaches we have selected:
Soft Systems Methodology/RAAKS
Soft Systems Methodology (SSM) was developed by Peter Checkland in the 1960s while attempting to apply systems engineering principles to complex, real-life management problems. He had already become acquainted with systems thinking and systems analysis through his project management work in a large British company (ICI, now part of AkzoNobel). His rediscovery of the human element in business and industry explains the "Soft" in SSM.
SSM is a general problem-structuring tool for "messy" problems, which helps formulate and structure thinking about complex, human situations. Its core is the construction of conceptual models and the comparison of those models with the real world. SSM is NOT about analysing systems, but about synthesizing solutions by applying systems principles to structure THINKING about things that happen in a particular situation or organization. A clear distinction is made between the real world and systems thinking about the real world (see Fig. 3).
The "root definition" typically takes the form of: a system to do X, by means of Y, in order to Z. This is accompanied by a worldview, which makes the transformation meaningful or reasonable or valuable. At the heart of each conceptual systems is a transformation process in which an input is changed into a new form of itself, an output. The conceptual model provides a defensible account of this transformation. The transformation is then evaluated (or monitored, which here means an iterative type of evaluation) in terms of efficacy (E1, does it work?), efficiency (E2, is it not a waste of resources?), and effectiveness (E3, does it achieve the long-term goals?). For each system the method of measurement for the 3 E’s must be indicated.
SSM is a well-established methodology and numerous introductions can be found on the Internet. A fairly brief and insightful introduction by Jeremy Rose can be found at http://www.docstoc.com/docs/25166487/SSM-handout. Rose's handout includes a worked example to illustrate the various steps of SSM. To situate SSM broadly within the field of systems monitoring & evaluation, we recommend "Systems concepts in action" (Williams, 2010) or its website equivalent at http://users.actrix.co.nz/bobwill/Resources.html (for SSM try http://users.actrix.co.nz/bobwill/ssm.pdf). The Royal Tropical Institute (KIT) has applied SSM since the 1990s, mainly in the form of RAAKS. A detailed description of the RAAKS approach, its application, its relationship to SSM, and its evolution from AKIS (Agricultural Knowledge and Information Systems) can be found in the KIT dossier on RAAKS. See also the Resources section of this dossier.
Action Research
In 1946, Kurt Lewin, who had coined the term "action research" (AR) two years earlier, described AR as "a comparative research on the conditions and effects of various forms of social action and research leading to social action" that uses an ongoing cycle of planning --> acting --> observing --> reflecting (and then --> planning again, etc.). AR is very useful for working on complex social problems or issues that need systematic planning and analysis.
After six decades of AR development, many methodologies have evolved that adjust the balance to focus more on the actions taken, or more on the research that results from the reflective understanding of those actions (Wikipedia, 2011). AR is known by many other names, including participatory (action) research, collaborative inquiry, emancipatory research, action learning, and contextual action research, but all are variations on a theme. Put simply, action research is "learning by doing": a group of people identify a problem, do something to resolve it, see how successful their efforts were, and, if not satisfied, try again. Lewin believed that the motivation to change was strongly related to action: if people are active in decisions affecting them, they are more likely to adopt new ways. "Rational social management", he said, "proceeds in a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about the result of action". According to Lewin (1951), the objective of social change might concern the nutritional standard of consumption, the economic standard of living, the type of group relation, the output of a factory, or the productivity of an educational team. He recommends that, for social change to be permanent, it should use a three-step procedure of: (1) unfreezing; (2) moving; and (3) freezing of a level. Effective action research depends on the agreement and commitment of participants, and critical reflection is an important step in each cycle. In a way, Action Research is the prototype of the four other methodologies described here. However, AR descriptions do not often mention M&E explicitly; yet M&E could easily be added to the basic structure of AR, as sketched below. Read more in the Resources section of this dossier, e.g. the reference to Participatory Impact Pathways Analysis (Douthwaite et al. 2008).
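The plan-act-observe-reflect spiral, with a simple monitoring record added to each cycle, can be sketched as follows. This is purely illustrative: the placeholder functions stand in for the participants' own activities and are not part of any AR toolkit.

```python
# Illustrative sketch of Lewin's plan -> act -> observe -> reflect spiral,
# with a simple M&E record kept for each cycle. All functions are placeholders
# for activities carried out by the participants themselves.

def plan(situation):
    return f"plan addressing: {situation}"

def act(the_plan):
    return f"actions taken under '{the_plan}'"

def observe(actions):
    return f"observations about '{actions}'"

def reflect(situation, observations):
    # Critical reflection revises the group's understanding of the situation.
    return f"{situation}, revised in light of {observations}"

def action_research(situation, cycles=3):
    records = []  # one M&E entry per cycle: what was planned and what was observed
    for cycle in range(1, cycles + 1):
        p = plan(situation)
        a = act(p)
        o = observe(a)
        situation = reflect(situation, o)
        records.append({"cycle": cycle, "plan": p, "observations": o})
    return situation, records

final_view, monitoring_log = action_research("low adoption of improved seed")
```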
Developmental Evaluation
In developmental evaluation, evaluators become involved in a regular or continuous process to improve the intervention, using evaluative approaches to facilitate ongoing programme, project, product, staff, and organizational development (Mathison, 2005). Patton admits there "are sound arguments for defining evaluation narrowly to distinguish genuinely evaluative efforts from other kinds of organizational engagement", but argues that "organizational development is a legitimate use of evaluation processes". To leave the accumulated knowledge and wisdom about what works and what does not work untapped would delay or devalue intervention design. According to Patton (2011), "an evaluation focused on development assistance in developing countries could use a developmental approach, especially if such developmental assistance is viewed as occurring under conditions of complexity with a focus on adaptation to local context."
There is considerable debate in the evaluation community about whether developmental evaluation (or the other types of M&E for learning in innovation discussed here) really is evaluation or not (Donaldson et al., 2010; Moleko, 2011).
Patton neatly captures this shift in paradigmatic perspective on pp. 2-3 of his most recent book. The methodology is designed to improve programmes and to improve decisions about programmes. Preferably, the primary decision-makers who are going to use the evaluation are involved in generating their own recommendations through a process of facilitation and collaboration. The "developmental" in developmental evaluation (DE) refers to the innovation driving change: DE applies to an ongoing process of innovation in which both the path and the destination are evolving. There are various reasons why an organization may be in an innovative state. It may be a newly formed or forming organization seeking to respond to a particular issue, or exploring a new idea that has not yet fully taken shape; or a changing context may have rendered traditional approaches ineffective, creating a need to explore alternatives.
Given its orientation towards innovation and complexity, developmental evaluation is best suited to situations in which innovation is key and there is a high degree of uncertainty about the path forward. The following skills are very useful: community connectedness or domain expertise, curiosity, appreciativeness (as in "appreciative inquiry"), facilitation skills, and active listening skills.
There is no step-by-step methodology for DE; the right method is determined by need and context. Approaches may be drawn from organizational development, traditional evaluation, or community development. Important entry points are: orientation; building relationships; and developing a learning framework. Tools include: stakeholder analysis; social analysis systems; mapping and visualization tools; systems change framework; outcome mapping; systems analysis framework; system mapping; system modelling; and strategy development, testing, and refinement.
During the entire process it is important to keep an eye on group dynamics, key developmental moments (serendipity "under the shower"), and action (small or not). In the end, sense has to be made of the data; this requires both analysis and synthesis, and it is when the emerging insights are identified, assessed, and developed. For more information on the background of developmental evaluation, refer to the Resources section of this dossier.
Outcome Mapping
Outcome Mapping (OM) is an approach to planning, monitoring, and evaluating social change initiatives, developed by the International Development Research Centre (IDRC) in Canada with the involvement of Michael Patton (Earl et al., 2001). It is positioned as an alternative to logic model approaches. At a practical level, OM is a set of tools and guidelines that steer project or programme teams through an iterative process to identify their desired change and to work collaboratively to bring it about. Attribution and the measurement of long-term, downstream results are dealt with through a more direct focus on transformations in the actions of the main actors. The key difference between outcome mapping and most other project evaluation systems lies in its approach to the problem that a project's direct influence over a community only lasts for as long as the project is running, so that development agencies have difficulty attributing subsequent change in those communities directly to the actions of the project itself. Outcome mapping therefore focuses less on the direct deliverables of the project and more on behavioural change in the peripheral parties affected by the project, in particular the secondary beneficiaries. The outcome mapping process consists of a lengthy design phase followed by a cyclic record-keeping phase. Outcome mapping is intended primarily for development projects in the South. In SSM terms, OM is more concerned with effectiveness than with efficacy or efficiency.
Among the principles which guide evaluation at IDRC, the following are embedded in the process of Outcome Mapping and argue for the relevance of evaluation as an integral component of a programme's learning system: (i) evaluation is intended to improve programme planning and delivery; (ii) evaluations are designed to lead to action; (iii) evaluations should enlist the participation of relevant stakeholders; (iv) monitoring and evaluation planning add value at the design stage of a programme by making it more efficient; (v) evaluation should be an asset for those being evaluated; (vi) evaluations are a means of negotiating different realities; and (vii) evaluations should leave behind an increased capacity to use evaluation findings.
There are three stages and twelve steps to Outcome Mapping. They take the programme from reaching consensus about the macro-level changes it would like to support to developing a monitoring framework and an evaluation plan. In practice, outcome mapping consists of two phases: a design phase and a record-keeping phase. During the design phase, project leaders identify the metrics in terms of which records will be kept. In outcome mapping, three types of records can be kept, and it is largely up to the project leaders or the donor organization to decide which of the three (or all three) types of records should be reported on. The records are: (1) a performance journal, which is essentially a collection of minutes of progress meetings; (2) a strategy journal, which records strategic actions and their results; and (3) an outcome journal, which is an anecdotal record of any events that relate directly or indirectly to the progress markers (the expect-to-see, like-to-see and love-to-see items). The outcome journal is most useful towards the end of the project in providing the donor with visible impact stemming from the expenditure of funds, but may also be submitted to the donor at intervals. See also the Resources section of this dossier.
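As an illustration of what the record-keeping phase might look like in practice, the sketch below models progress markers and two of the journals as simple data records; the field names and example entries are hypothetical and are not prescribed by the Outcome Mapping manual.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical data records for Outcome Mapping's record-keeping phase.

@dataclass
class ProgressMarker:
    boundary_actor: str   # the actor whose behaviour the programme hopes to influence
    level: str            # "expect to see", "like to see" or "love to see"
    description: str

@dataclass
class StrategyJournalEntry:
    when: date
    strategy: str         # strategic action taken by the programme
    result: str

@dataclass
class OutcomeJournalEntry:
    when: date
    marker: ProgressMarker
    evidence: str         # anecdotal record of the observed behavioural change

markers: List[ProgressMarker] = [
    ProgressMarker("district extension office", "expect to see",
                   "attends the quarterly multi-stakeholder platform meetings"),
    ProgressMarker("district extension office", "love to see",
                   "allocates its own budget to farmer-led trials"),
]

outcome_journal: List[OutcomeJournalEntry] = [
    OutcomeJournalEntry(date(2011, 6, 1), markers[0],
                        "the office sent two staff members to the June platform meeting"),
]
```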
Reflexive Monitoring in Action (RMA)
RMA is a monitoring approach that has been developed by researchers from Wageningen University and the VU University Amsterdam for supporting and facilitating innovation projects in general, and complex system innovation projects in the agricultural sector in the Netherlands in particular.
Such projects are carried out by stakeholder networks, e.g. networks to develop CO2-neutral cultivation or networks to create ultra-short food chains. These are learning and reflection networks in the sense that the new knowledge can only emerge as a result of one or several social learning events. The facilitator or "monitor" will take action if there is insufficient trust within the network, or if participants are getting so entangled in details that they are distracted from the long-term ambitions. To do so, the monitor can make use of seven tools, some of which were specifically developed for RMA: (1) system analysis; (2) actor analysis plus causal analysis; (3) dynamic learning agenda; (4) indicator sets; (5) reflexive process description; (6) audiovisual learning history; and (7) timeline and eye-opener workshop. Guidelines for the application of RMA are provided by Van Mierlo et al. (2010). For more information on tools and cases of RMA, refer to the Resources section of this dossier.
The methodologies described above all share the principle of emergence, which is based on the notion that the whole is greater than the sum of its parts. This is a systemic concept: it means that analysis of the parts will never create the insight necessary to improve the whole. The implication is that only social learning or social knowledge creation can create the conditions for this emergence of innovative solutions to occur. The principle of emergence is related to the concept of complexity: Patton (2011) emphasizes that the first thing to do before embarking on developmental or utilization-focused evaluation is to decide whether the situation is appropriate for such a type of evaluation, that is, whether it is a complex situation.
To better understand how complexity affects decision making, Snowden designed the Cynefin framework (Snowden and Kurtz, 2003), which has five domains: simple, complicated, complex, chaotic, and disorder (the four-pointed shape in the middle of the framework diagram). Each domain has its own decision-making approach suited to it. Things are "simple" when the relationship between cause and effect is obvious to all; the problem-solving or decision-making approach is to apply "best practice", following the sequence Sense - Categorise - Respond.
In the "complicated" domain, the relationship between cause and effect requires analysis or the application of expert knowledge, the approach is to Sense - Analyze - Respond and one can apply good practice. In the "complex" domain, the relationship between cause and effect can only be perceived in retrospect, the approach is to Probe - Sense - Respond and we can sense emergent practice. When things are "chaotic", there is no relationship between cause and effect at systems level, the approach is to Act - Sense - Respond and we can discover novel practice. When in the fifth domain – "disorder" - people will tend to randomly follow the approach they feel most comfortable with, whether it is appropriate or not. The Stacey matrix is a somewhat similar model to describe complex problems.
For better and more satisfactory solutions to complex, "messy" problems to emerge, the process has to be both inclusive, with input from all stakeholders, and iterative, with learning upon learning. Emergence and learning take time, which is one of the main drawbacks of these methodologies. However, a lot of time and money is saved if this helps avoid unsustainable, ineffective courses of action. Collaboration of this type normally requires the support of expert facilitators who are well versed in the use of a number of tools, mostly tools for mapping and modelling the situation at hand.
According to Midgley (2006), the first wave of systems thinking had its roots in three parallel schools that emerged after WWII: general system theory, cybernetics and complexity science. From the late 1960s to the early 1980s, these first schools of thought were criticized by several authors, among whom Churchman, Checkland, and Ackoff, who pointed out that they failed to see the value of bringing the insights of stakeholders into activities of planning and decision making. These critics emphasized the impossibility of absolute proof of what is real, and therefore suggested shifting the emphasis from supposedly objective modelling (or "truth-seeking") to approaches that encourage mutual appreciation of different perspectives. The implication for evaluation was that expert-led modelling gave way to the development of participative methodologies. A further assumption of this second wave was that people are more likely to take "ownership" of evaluations, and thereby implement the recommendations arising from them, if they can play a part in defining the goals. The final implication for evaluation comes from Churchman's (1979) advocacy of dialectical processes. Third-wave ideas came under the banner of "critical systems thinking" (1980s to the present). They provide a rationale for combining the best methods from both the first and second waves. The third wave advocates not only methodological pluralism, but also the need for boundary critique (reflection on, and dialogue about, values, boundaries and marginalization).
Systems thinking is an active field with many managerial and innovation offshoots, mostly in the world of management science, but also in its application to international development. There is a growing conviction that (innovation) systems thinking has a crucial contribution to make to the effectiveness of rural development efforts in general and the sustainability of agricultural development in particular. We hope this dossier will help remove the main barrier to the wider application of systems thinking in the South: the broad lack of understanding of its usefulness and workings (Pyburn & Gildemacher, 2008).
Agricultural innovation coaching (AI-coaching) is a form of facilitation used in the advisory practice of KIT. When this dossier on M&E for innovation was conceived, another KIT dossier, on the topic of AI-coaching, was already well underway (Sluijs, 2010). An agricultural innovation coach, or AI-coach, is a highly experienced development professional who facilitates multi-stakeholder interaction for innovation in the agricultural sector with the specific objective of poverty alleviation.
This dossier on M&E for innovation highlights a number of methodologies and tools that could be used by AI-coaches in rural innovation projects in the South.
KIT's research and activities in this field are ongoing. We very much welcome your feedback and input. If you would like to comment on this dossier or add new resources, please contact sjon.v.t.hof@kit.nl or one of the Sustainable Economic Development team members, especially s.nederlof@kit.nl, r.mur@kit.nl, r.pyburn@kit.nl, p.gildemacher@kit.nl, or w.heemskerk@kit.nl.