In an increasingly competitive and scrutinised environment, credibility is becoming a defining asset. Organisations that can demonstrate clear, measurable outcomes – supported by robust data and independent validation – will stand apart. However, it is precisely against this rising standard of credibility that a striking disconnect becomes apparent. On one hand, corporate websites across sectors, along with annual reports, sustainability disclosures, social media content, and presentations, are replete with claims of scale, innovation, and transformative potential. On the other hand, beneath this growing emphasis on impact lies a persistent and often overlooked challenge: the gap between what organisations claim to enable and deliver, and what they can credibly demonstrate as impact, remains significant.
A closer look at public-facing materials across industries reveals four recurring gaps in how impact is articulated and evidenced:
- Statements of capability rather than evidence of outcomes and impact
- Emphasis on inputs and outputs instead of outcomes and impact
- A limited number of robust, data-driven analyses and case studies
- Absence of independent assessment of impact
These gaps are not merely weaknesses; they reveal a strategic opportunity for growth. Organisations that invest in social impact assessment – systematically analysing how their interventions shape behaviours, opportunities, and long-term outcomes – move beyond narrative to robust, evidence-based results, strengthening their positioning with governments, investors, and partners while enabling sharper decisions and more sustainable outcomes.
Capability Is Not Impact
Many organisations describe what their programmes, platforms or services enable – such as training delivery, access to finance, research, stakeholder engagement, promotion, advocacy or communication at scale. While these capabilities are important, they do not, in themselves, constitute evidence of impact, nor do they answer questions such as:
- What changed as a result?
- Did decision-making or practices improve?
In the education and continuing professional development sector – with parallels across industries – organisations often highlight that their digital platforms enable remote lesson delivery, increased engagement, training, wider learner reach, and improved access to resources. These are capabilities, not impact. To demonstrate impact, the relevant questions are whether learner attainment has improved, operational practices have changed, curricula have been upgraded, or inequities have narrowed as a result. A useful benchmark is the guidance on impact case studies provided by the NHS in the UK, which explicitly cautions that describing activity, outputs, or positive feedback is not the same as demonstrating impact.
A similar pattern can be observed in the health sector, particularly in relation to patient portals. While public-facing materials typically emphasise their ability to support secure messaging, access to records, and appointment booking, a 2021 systematic review of 47 patient portals across the United States, Canada, the Netherlands, Finland, the United Kingdom, Australia, France, and Sweden revealed various weaknesses in the utilisation and efficiency of these portals. This underscores that claims of capability should not be conflated with evidence of outcomes.
The lesson is clear: capability describes potential, whereas impact requires credible evidence of change. Conflating the two can lead shareholders and decision-makers to overvalue activity and visibility at the expense of real outcomes, resulting in weaker strategic choices, suboptimal allocation of resources, and, over time, placing an organisation's reputation and institutional integrity at risk.
Inputs and Outputs Are Not Impact
A second common gap lies in the over-reliance on inputs and outputs as proxies for impact. Metrics such as the number of users reached, scholarships awarded, professionals trained, trees planted, or services delivered are frequently presented as evidence of success. While these figures are useful for understanding reach and potential scale, they do not answer the most important questions:
- Did behaviours or practices change?
- Did the quality of individuals’ lives or communities’ wellbeing improve?
The microfinance sector offers a telling example. Early success was often measured by loan disbursement volumes and repayment rates. However, various assessments later revealed that access to services or credit alone did not automatically translate into poverty reduction. In Southeast Asia, corporations such as Maybank, Development Bank of Singapore (DBS), PT Bank Central Asia (BCA), as well as development organisations such as Mercy Corps, frequently demonstrate reach (e.g. number of active clients), scale (millions in mobilised savings), portfolio growth, and forward-looking ambitions (such as targeting millions of lives economically impacted by 2030). However, publicly available evidence of direct, long-term poverty impact – such as sustained income gains, reduced deprivation, or improved welfare over time – remains relatively thin.
The message is clear: reach, scale, and access operate at the level of outputs; they do not constitute impact. Presenting outputs as impact can create a false sense of success. While this may leave a favourable impression on the general public or certain stakeholders, it risks encouraging organisations to invest in initiatives that appear effective on paper but fail to deliver meaningful change. Over time, this can undermine credibility, waste resources, and erode stakeholder trust.
Testimonials and Anecdotes Are Not Evidence of Impact
A third challenge is the limited availability of robust, data-driven analysis and case studies. Many organisations present examples of their work – their activities, a few testimonials, many photos of beneficiaries, or events – but these are often thematic summaries rather than evidence-based analyses. Strong case studies should go beyond storytelling. They should include:
- Clear baseline and endline data
- Defined outcome indicators linked to specific interventions
- Where possible, comparative or counterfactual analysis
- Transparent explanation of how change occurred
Impact investors now expect portfolio companies to demonstrate not just activity, but verifiable mid- to long-term results. Frameworks such as IRIS+, developed by the Global Impact Investing Network (GIIN), emerged precisely to bring clarity, comparability, and rigour to impact measurement, meeting investor needs for aggregating and benchmarking impact across portfolios and supporting better resource allocation.
The way forward is clear: while testimonials and anecdotes can illustrate context and human stories, they are insufficient for assessing true impact. Evidence-based analysis and rigorous case studies form the foundation of credibility, enabling corporations to make strategic decisions with confidence and allowing external stakeholders to accurately assess the value of an intervention.
Internal Monitoring and Evaluation Is Not Independent Social Impact Assessment
The last gap is the absence of independent social impact assessment (SIA). When impact claims rely solely on internal monitoring systems, they may raise legitimate concerns around bias, credibility, and methodological robustness. This is particularly important when organisations seek to influence public policy, attract donor funding, or position themselves as system-level actors.
Independent SIA signals corporate maturity, confidence in results, and a willingness to be held accountable. Global health initiatives provide strong examples. Organisations such as Gavi, the Vaccine Alliance and The Global Fund routinely commission external evaluations and independent impact assessments of programme effectiveness, cost-efficiency, and long-term impact. These assessments play a critical role in maintaining donor confidence and guiding strategic decisions.
Towards More Credible Impact Measurement and Reporting
Closing these impact reporting gaps requires a shift in how organisations approach SIA. It is not about collecting more data, but about collecting the right data, and linking it to meaningful change.
A robust and practical approach to externally conducted SIA is guided by several key principles:
- Focus on real-world change: Measurement prioritises behavioural shifts, practice changes, and system-level outcomes.
- Business relevance: Impact metrics align with what matters to investors, partners, and policymakers – not just internal reporting needs.
- Pragmatism: A clear understanding of what can realistically be measured in complex, real-world environments.
This shift marks a move from descriptive reporting to evidence-based impact reporting – where success is not defined by what organisations claim to do, but by the tangible changes they create.
In a landscape of constant scrutiny and intense competition, organisations that can demonstrate measurable, evidence-based outcomes gain a strategic advantage, earning the trust, confidence, and influence of stakeholders over the medium to long term.
At the end of the day, the proof carries more weight than the promise.
Dr Jasmina Kuka, CEO and Founder of W!SE Achievements, brings over 20 years of global experience in assessing and evaluating educational, cultural, and social programmes. Based in her beloved Malaysia, she has conducted numerous social impact assessments of programmes implemented in over 30 countries across four continents. She has delivered analyses and training that have been used to inform effective funding policies, revisit goals and priorities, and provide evidence for more effective decision-making.