Organizational Capacity Development Measurement

Executive Summary

This document provides a set of recommendations for measuring the results of USAID-supported organizational capacity development efforts. It provides the grounding principles and background that inform its recommendations. The purpose behind these recommendations is to improve the consistency with which USAID program managers and partners appropriately measure organizational capacity development, enabling more effective learning from, and accountability of, capacity development programming across the Agency. The recommendations cover aspects of both what to measure and, to an extent, how to measure it, but leave ample space for staff to interpret and apply them as appropriate for the particulars of their programming. Because capacity is a multifaceted topic, the recommendations describe an approach, rather than a single indicator, as the most appropriate way to measure it. These recommendations offer an important step forward in thinking about why and how to invest in improving organizational performance, and in capturing the value that capacity development adds to development.

The recommendations are:

- In defining measures for organizational strengthening, performance measures are the most appropriate area of emphasis – generally with performance expectations set jointly with the assisted organization(s). Measurement should be centered on organizational performance.
- Performance should be measured across multiple domains, including adaptive functions, to reflect capacity development investments in both short-term and long-term aspects of performance.
- An organization's performance depends on its fit in a wider local system of actors and on its interrelationships with them. Therefore, we must measure at both organizational and local system levels in order to capture the value of performance change.
- Organizational performance change is pursued in order to affect wider, systemic changes. However, attribution for change is unlikely to be provable. We should trace the credible contribution from organizational to system change with rigor.
- Some ways in which organizational capacity development will affect future performance cannot be anticipated at the start. Therefore, attend to multiple pathways of change and to the unpredicted in order to perceive the full spectrum of results.

Two of these recommendations – to emphasize organizational performance as the metric for success of organizational capacity development investments, and to measure at multiple levels including organization and local system – are echoed as requirements in Agency policy guidance for monitoring.

Consensus: Capacity

What is Meant by Capacity?

Note: Different Levels of Capacity. These recommendations center on measurement of organizational capacity. Capacity exists at several different levels – individual, organization, network, system, and so on. Any organizational capacity must encompass the people within an organization and must be oriented within the local systems in which the organization is embedded. There are ramifications around measurement at other levels that can be inferred from this document, but it does not speak to other levels of capacity directly.

USAID has no single definition of capacity, and deliberately chose not to create one during this process, for two main reasons.
First, this document identifies several fundamental aspects of capacity that should inform its measurement. These fundamental characteristics and their implications are more salient to the recommendations made herein than a specific definition. Second, there are a number of excellent definitions already available and in broad use which serve as better common reference points than a brand-new definition – most pertinently the "Five Capabilities" stemming from a major study by the European Centre for Development Policy and Management (ECDPM), and the definition used in the book Capacity Development in Practice, as well as the commonly cited definitions by the UNDP and OECD:

ECDPM's Five Capabilities: "To achieve its development goals, every organization/system must have five core capabilities: to act and commit; to deliver on development objectives; to relate to external stakeholders; to adapt and self-renew; and to achieve coherence."

Capacity Development in Practice: "Capacity is the ability of a human system to perform, sustain itself, and self-renew."

UNDP Definition: "The process through which individuals, organizations and societies obtain, strengthen and maintain the capabilities to set and achieve their own development objectives over time."

OECD Definition: "Capacity is the ability of people, organizations and society as a whole to manage their affairs successfully. Capacity development is the process whereby people, organizations and society as a whole unleash, strengthen, create, adapt and maintain capacity over time."

Principles of Capacity

- Capacity, at organizational level, cannot be understood without reference to the wider system that surrounds any organization.

Capacity as a concept can only have meaning if it describes the capacity of an organization to perform within its context – the system of other actors that an organization affects and is affected by in carrying out whatever actions it performs. Normative statements of how "organizations of type x should operate" must be grounded in a rich picture of the actual situation in order to support capacity development that maximizes value added. Capacity development approaches should always reference a relevant local system, as it informs the organization's current role, and …

Second, performance measurement must be defined holistically, encompassing both the organization's performance in achieving targeted results and its performance in learning, adapting, and sustaining itself over time. An organization's performance matters in at least two senses – its performance in achieving results, and its performance in adapting and renewing itself in response to its changing context. In order to identify a common language for these different dimensions of performance, the Local Solutions working group is recommending adoption of the IDRC/Universalia framework for organizational performance that is operationalized in the Pact Organizational Performance Index (OPI). The OPI framework, referenced here, comprises four domains: effectiveness, efficiency, relevance, and sustainability. Other offices and units are employing other index indicators or tools. Regardless of the tools or indicators used, ensuring a focus on performance – with attention to performance areas, such as relevance and sustainability, that matter more over time – will enable more effective monitoring.
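To make the multi-domain emphasis concrete, the following is a minimal, hypothetical sketch – not the official OPI instrument; the organization name, score scale, and field names are illustrative assumptions – of how performance scores across the four domains could be recorded and compared between measurement rounds, so that change in relevance and sustainability stays visible alongside shorter-term results:

```python
from dataclasses import dataclass
from typing import Dict

DOMAINS = ("effectiveness", "efficiency", "relevance", "sustainability")

@dataclass
class PerformanceRecord:
    """One measurement round for one organization; scores agreed jointly with it."""
    organization: str
    round_label: str
    scores: Dict[str, int]  # domain -> agreed score (illustrative 0-4 scale)

def domain_change(baseline: PerformanceRecord, follow_up: PerformanceRecord) -> Dict[str, int]:
    """Report change per domain rather than a single collapsed number."""
    return {d: follow_up.scores[d] - baseline.scores[d] for d in DOMAINS}

# Hypothetical illustration only.
baseline = PerformanceRecord(
    "District Health Office", "baseline",
    {"effectiveness": 2, "efficiency": 2, "relevance": 3, "sustainability": 1})
year_two = PerformanceRecord(
    "District Health Office", "year 2",
    {"effectiveness": 3, "efficiency": 2, "relevance": 3, "sustainability": 2})

print(domain_change(baseline, year_two))
# -> {'effectiveness': 1, 'efficiency': 0, 'relevance': 0, 'sustainability': 1}
```

Keeping each domain separate, rather than collapsing the record to a single score, preserves the distinction between achieving targeted results and learning, adapting, and sustaining over time.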
Third, the measurement of organizational performance must be complemented by measures of the wider local system that co-produces the development results of interest. For a given organization, its performance horizons are shaped by the local system around it, and performance measurement depends on observing how it functions within that wider system. And to speak to the value that a given organization's performance improvement may have, one must observe how that role, as well as the wider system, is changing as a result of capacity development supported by USAID.

Any targets of expected performance change should derive from the activity's articulated theory of change for how organizational performance improvement is predicted to affect a wider local system. This requires a clear description, as a baseline, of the roles that given local organizations are playing in local systems. Further, targets for performance change (and the theory of change relating the organization's performance to a relevant local system) should be validated with the partner organization and consensus established around targets.

Core Recommendations

1. Measurement must be centered on organizational performance.
2. Performance should be measured across both achieving targeted results and learning, adapting, and sustaining itself over time.
3. Measurement of organizational performance must be complemented by measures of the wider local system that co-produces the development results of interest.
4. The credible contribution of organizational performance change to local system change will fit a contribution paradigm.
5. The measurement approach should incorporate at least one method of perceiving unpredicted changes in performance and of validating the pathway of predicted changes.

For example, if USAID is supporting improved performance by public organizations providing agricultural extension services, USAID would want to measure both the performance change of those organizations and the performance of the agricultural value chains that those organizations' efforts were intended to improve. Or if USAID is supporting improved performance in budget formation and execution by selected municipalities, we would also want to measure a systems outcome such as the perceived fairness and legitimacy of the state by citizens in the target regions, or improved cost efficiency in service delivery for publicly funded services in the target regions.

Because interrelationships structure the way in which capacity emerges, it is recommended to include at least one systems-level measurement of the interrelationships between actors and how those are changing over time. Measurement of interrelationships can be either qualitative or quantitative, and may not be easy to link with targets, but relationships within the relevant system often serve as a key context indicator to be regularly reviewed and used to inform programming. Some projects have successfully used social network mapping or related techniques to visualize and quantify this type of data, and this seems a practice with high potential to add value to Mission learning. Other tools to measure systems can include wide stakeholder feedback through collection of narratives or polling data; visualization of system dynamics or constituent parts; or indicators of system stocks and flows.
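As one illustration of the social network mapping mentioned above, the sketch below uses hypothetical actors and ties – the actor names and the choice of the networkx library are assumptions, not part of the source document – to quantify how interrelationships in a local system change between a baseline and a later monitoring round:

```python
import networkx as nx  # assumption: networkx is available for network analysis

# Hypothetical relationship data: ties observed between actors in the local system.
baseline_ties = [("Ministry", "NGO_A"), ("NGO_A", "Clinic_1"), ("NGO_A", "Clinic_2")]
round_two_ties = baseline_ties + [("Ministry", "Clinic_1"), ("NGO_B", "Clinic_2")]

def snapshot(ties):
    """Summarize the structure of interrelationships for one measurement round."""
    graph = nx.Graph(ties)
    return {
        "actors": graph.number_of_nodes(),
        "ties": graph.number_of_edges(),
        "density": round(nx.density(graph), 2),       # overall connectedness
        "centrality": nx.degree_centrality(graph),    # which actors broker relationships
    }

for label, ties in (("baseline", baseline_ties), ("round 2", round_two_ties)):
    summary = snapshot(ties)
    print(label, "-", summary["actors"], "actors,", summary["ties"], "ties,",
          "density", summary["density"])
```

Summaries of this kind are generally more useful as context indicators to review over time than as targets, consistent with the caution above about linking relationship measures to targets.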
Fourth, the effect of organizational performance change on local system change will fit a contribution paradigm. Given the complexity of local systems, statements about the linkages from performance change to effects on local systems will necessarily be statements of contribution rather than attribution. USAID can increase the rigor with which confidence is established in the contribution of performance improvement to system change through the use of multiple methods to connect organizational performance and systems change, and through gathering different perspectives on change.

Fifth, the measurement approach should incorporate at least one method of perceiving unpredicted changes in performance and of validating the pathway of change where predicted changes in performance occur. This often requires deductive approaches that trace processes after change has happened. Employing these approaches also adds rigor to assertions of contribution along predicted lines. Because capacity development is an engagement with complexity, initial theories of change should be updated through the validation of pathways of change. Even when performance change is measured where one has supported capacity development, one must gather input to validate that outside support contributed to that performance change. This entails process tracing or other ways of looking backwards at how capacity development support was understood to yield performance change, including multiple perspectives on the same question. And since some performance change is likely to occur in areas where it was not predicted, efforts to understand an outside contribution to performance change should include effort to examine the pathways through which change happened and to look at contributions to unpredicted performance change. Several examples of these types of tools are captured in the Discussion Note on Complexity-Aware Monitoring, and all three blind spots of performance monitoring noted in the Discussion Note are relevant to capacity development.

Scope and Use of This Document

The approach described in this document covers organizational capacity and the principles to apply when measuring its change. It is closely related to efforts to measure wider changes (across a relevant local system) and to issues of ownership and sustainability to which organizational capacity can contribute; however, it does not address those issues directly. Measurement following these recommendations is intended to serve as one part in a chain, and to offer more rigor for speaking to the contribution that Agency efforts to strengthen organizational capacity are making to higher-level, wider system results.

This approach is applicable to any type of organization: public or non-public, for-profit or not-for-profit, formally or informally defined, of any size. Each of those factors may introduce considerations that inform the specifics of monitoring or evaluation, such as issues around data availability or the time and expense of data gathering. Certain types of organization may have specific constraints that affect how their capacity is shaped and expressed. Every organization's capacity is also shaped significantly by the wider systems in which it is embedded. It is important to emphasize that this measurement approach is informed by scholarship and practice related to capacity development in diverse sectors and organization types, and reflects the commonalities and consensus areas across those realms.
It pushes practitioners to move from older mental models of capacity development that articulate best-practice attributes of organizations toward an approach rooted in context and best fit, in keeping with the latest thinking in the discipline.

… motivate, not in the subjective scores or ratings they provide. Attempting to use the same tool to support capacity development and to measure the effect of capacity development introduces a tension into the tool that limits its effectiveness for both purposes. Therefore, it is not appropriate to substitute a capacity development tool for a measurement of organizational capacity or its expression.

Leveraging the Learning

How This Approach Compares to Current Practice

Presently, many Agency units support organizational capacity development, and most of them incorporate portions of the guidance within this approach. Tools and monitoring methods have evolved in recent years, as has the wider policy environment, and these allow a robust measurement approach to capture improvements in performance that often now go unrecognized. It is worth noting at the outset that part of the rationale behind creating a common measurement approach is to better align Agency incentives – what USAID measures in its programming is, by virtue of being measured and made visible to project and activity managers, often what we and our partners perceive as valued: "what counts, matters." It is therefore most useful to highlight where the recommended measurement approach differs from typical Agency practice.

First, much of the capacity development measurement that currently occurs places emphasis on measuring capacity qua capacity rather than measuring performance change, or mixes the two together. Often there is an imported "best practice" normative model for how an organization should perform that is not relevant to the fit between a given organization and its local system. Sometimes the same tool is used to assess risks or to catalyze capacity development as well as to measure capacity change. In either case, these practices introduce perverse incentives, biasing capacity development toward compliance checklists and allowing organizations to "signal" capacity change without truly improving performance.

Second, when performance is measured, the emphasis is often on achieving results without due attention to performance in learning, adapting, and self-renewal. This creates incentives that privilege shorter-term accomplishments and undervalue investments in sustainability. The emphasis on short-term results, and on compliance as opposed to long-term performance, has in some cases been exacerbated by the recent emphasis on aspirational targets for local awards spurred by USAID Forward's Implementation and Procurement Reform (IPR). The focus on longer-term performance and on connections from organizational performance to local systems change is consistent with the shift from IPR to Local Solutions already underway.

Third, in many instances, even where capacity development is pursued, Agency activity and project managers do not measure at both the organizational performance and systems outcome levels. This obscures the logic underlying the capacity development activity and makes it difficult to adjust programming when inputs are not producing predicted outputs and outcomes, because absent a clear theory of change around how each level was expected to affect the next, adaptation is much more difficult.
For example, if the only measure of capacity development investments in a set of hospitals is the number of patients seen after TA provision, and target numbers of patients are not reached, it is difficult to adjust absent metrics around how internal hospital improvements were intended to allow them to see more patients (and why seeing more patients is an appropriate performance measure, given the role of the hospitals in their local system and context).

Fourth, it is not yet common Agency practice to attend to unpredicted changes or to examine the pathways of change that occurred as predicted, as part of either routine monitoring or periodic evaluation. As many important outcomes from capacity change are not predicted in advance, this reduces the perceived effectiveness of capacity development by failing to fully tell the story of what capacity development efforts have achieved. And by not validating the pathways of change that were predicted, USAID Officers miss opportunities to update their theories of change to better reflect the context.

Finally, even where USAID support for organizational capacity development otherwise follows these recommendations, the lack of any common performance indicators makes it difficult to aggregate data or identify patterns beyond the individual activity or project around what capacity development support is yielding what sort of performance change, and what performance improvements are yielding changes of significance in development results.

Uses Within Projects

For any given activity, USAID project and activity designers should first have surfaced the theory of change around how USAID expects capacity development to yield performance improvement. During implementation, staff should review the monitoring data to continually verify or update that theory of change based on what is actually happening. Clearly identifying how the results monitored cause USAID to update its theory of change – and putting more emphasis on an evolving theory of change (and related implementation approach) than on fidelity to the initial theory of change – will greatly facilitate adaptive management of capacity development programming.

Where measurement of organizational performance change is carried out appropriately, again in line with the theory of change laid out in the project design and as updated through implementation, USAID will be able to relate organizational change to measurement at the systems level, and thereby speak with more clarity and rigor about its contributions to achieving and sustaining the ultimate results of interest.

Uses Across USAID

USAID will also be able to apply learning across the discipline of organizational capacity development more broadly – a potential area of great learning whose utility has been undervalued due to differences that have obscured key commonalities across organizational capacity development in different organization types, sectors, and country contexts. Use of one or more shared tools to measure changes in organizational performance is expected to generate much more data from which to identify patterns – even though any such shared tools would be complemented by additional indicators or tools that address particular performance changes specific to an organization and its context.
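As a sketch of the kind of cross-activity pattern-finding such shared indicators could enable – the activities, sectors, and score changes below are invented for illustration, and pandas is an assumed tooling choice rather than anything prescribed by this document:

```python
import pandas as pd  # assumption: pandas is available for tabular aggregation

# Hypothetical performance-change records reported against a shared indicator.
records = pd.DataFrame(
    [
        ("Activity 1", "health",      "sustainability", 1),
        ("Activity 2", "agriculture", "sustainability", 0),
        ("Activity 3", "health",      "effectiveness",  2),
        ("Activity 4", "agriculture", "effectiveness",  1),
    ],
    columns=["activity", "sector", "domain", "score_change"],
)

# Average change by sector and domain: a starting point for asking which kinds of
# capacity development support are associated with which performance improvements.
print(records.groupby(["sector", "domain"])["score_change"].mean())
```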
Having a common language to describe different areas of performance improvement, and a common measurement approach informed by the principles of capacity and capacity development, will enable greater clarity in conversations around what is working, and feed into learning at scale around capacity development.

Annex A: Selected Annotated Bibliography
Annex B: Background and Process to This Document
Annex C: Two Example Project M&E Plans Using This Approach
Annex D: Example Solicitation Language for Activity M&E Plan that Uses This Approach