In this blog post, we share what we learnt from this journey.
Many of us already knew that monitoring and results measurement is a crucial part of good project management. Through the learning expedition, we confirmed that a well-functioning monitoring and results measurement system guides project management and staff at each step of the project. In practice, this means, for example, actually using the system to analyse and decide which sector to work in, to design solutions, to implement them and to evaluate impacts.
We also reaffirmed three key objectives of the system. First, it helps projects improve their contribution to addressing unemployment or income poverty, through better steering based on ‘evidence-based’ decisions. Second, it supports projects to learn more by asking questions such as ‘to what extent do results happen as planned?’ and ‘why?’ Third, it serves to prove ‘value for money’ through upward accountability to taxpayers and downward accountability to target groups (e.g. farmers, young people).
In many cases, we realised that changes happen quickly and strategies need to be adapted continuously. It was clear that monitoring and results measurement needs to adapt to such a working environment. Having access to reliable and timely information through the monitoring and results measurement system is thus crucial.
We also learnt that for projects, in particular those with small budgets and limited human resources, balancing the benefits and costs of designing and implementing the system had been challenging. Designing a monitoring and results measurement system that corresponds to the size and capacities of a project turned out to be one of the most common challenges.
We understood right-sizing to mean making the monitoring and results measurement system ‘manageable’ and ‘fit to realities’, not ‘self-selecting’ by downsizing its basic elements (e.g. baselines, indicators). While the cost of designing and implementing the system is important, it is not the only concern; the challenge is also to set it up to meet the goals of the project without making it cumbersome.
As one manager of a project in Eastern Europe said, “our project is not designed just to manage a system that is more complex and expensive; the system should serve the project”. In short, we learnt that right-sizing means designing and implementing a monitoring and results measurement system that is appropriate in scope and timeframe for achieving measurable impacts.
Allocating sufficient financial resources from the start
We were surprised by the difficulty of estimating the actual costs of designing, implementing and managing an effective monitoring and results measurement system. It is often impossible to assess which tasks serve purely monitoring and results measurement purposes and which serve other management purposes. Many projects, especially smaller ones (e.g. in terms of budget), did not have sufficient funds to hire staff specifically dedicated to the system, to conduct solid surveys/research, or to collect reliable data for all indicators.
The learning expedition provided us with an important lesson on why it is crucial to ensure the system is fully integrated into the project management system rather than isolated from the project cycle. In addition, during the design process, awareness of budgetary needs (and not underestimating them) is important, even though it will not be possible to define all costs in detail at the design stage. Smaller projects were extra careful to prioritise their monitoring and results measurement activities: where to invest depends on a project’s overall goal and the potential scale of the interventions.
Ensuring adequate and capable human resources
We observed that project managers who did not have experience in monitoring and results measurement, or who failed to establish a ‘monitoring and results measurement culture’ early on, were likely to face a certain degree of resistance among staff regarding monitoring and results measurement tasks and duties. At the same time, implementation staff with little experience in monitoring and results measurement often felt that it was not their job, saying that it should be done by a monitoring and results measurement manager or team. This caused various problems during the implementation of the system.
Some of the projects worked to increase appreciation for and understanding of monitoring and results measurement as a management tool that supports staff in their day-to-day work, not just a necessary ‘evil’. This also meant that hiring (temporary) external experts to improve a project’s system was not a substitute for developing good practices across the project team as a whole. A few projects made progress by including clear monitoring and results measurement responsibilities in staff job descriptions, agreeing on a capacity development plan and conducting annual staff performance reviews.
“I decided to put a lot of emphasis on monitoring and results measurement on all types of occasions. Even when it was not directly necessary, I tried to challenge staff with questions such as: How do you plan to make the expected change happen? Is this enough to trigger change? What else could influence the change? I did this to cultivate a good culture of monitoring and results measurement.” Manager of a project in Eastern Europe
Measuring quantitative and qualitative changes in indicators
During stakeholder meetings and field visits, staff observe a lot of qualitative information about the status and progress of their project activities and partners’ performance. This information, however, was not gathered and documented consistently. It was used for decision-making, but not transparently, and it did not contribute to learning and reporting. Many tools already exist to collect quantitative and qualitative data, and projects are becoming increasingly proficient in using them. However, to lower the barrier for people to spend time on data entry and analysis, there is still great interest in making these tools more user-friendly.
To make the best use of ‘observational information’, some projects tried to ensure that staff collect this type of information whenever they go into the field or meet partners. For instance, a basic template that projects used when meeting stakeholders includes a section on observations (while not everything should be recorded, changes in the behaviour of partners are important signs that should be captured).
A related challenge was that projects sometimes struggled to agree on an appropriate level of detail in their monitoring and results measurement system (e.g. results chains). In some cases, results chains or indicators were far too general to collect meaningful information; in others, projects felt pressured to develop very complex results chains at the start. As results chains need to be reviewed regularly in any case, it is helpful to start with less complex ones; more detail can be added as the interventions progress.
Realistically estimating contributions of projects (attribution)
There are also a number of factors – political, economic or environmental – that affect, positively or negatively, how impacts occur. Knowing exactly the share of a project’s contribution is an age-old challenge for all projects.
Through the learning expedition, we understood one main guiding principle for projects: “it is better to be partially correct than completely wrong”. Rather than selecting one method, projects aimed to use a range of tools to collect and analyse the necessary data; information generated by mixed methods helped to establish the validity of the data and the reliability of the measures of change.
“We relied on a combination of qualitative and quantitative methods for collecting and checking information. We used, for example, interviews, participant observations, case studies, focus group discussions and trend analysis with actors such as producers and service providers. As a quantitative method, the project used simple before-after comparisons. Compared with other quantitative methods, the project staff considered this relatively cheaper and less difficult, despite the need for careful design and measurement.” Project staff from Asia