This dialogue session took place on January 11, 2018.
Co-hosts: Tech Networks of Boston, TSNE MissionWorks, Essential Partners
Organizer: Deborah Elizabeth Finn, Tech Networks of Boston
Facilitator in chief: Dave Joseph, Essential Partners
Ethnographer: Rachael Stark, RefDesk
Other dialogue crew: Steve Pratt, Impact Catalysts
Session goal: To brainstorm specific best practices for grant makers and nonprofit grant recipients who want to work collaboratively and productively on data and evaluation
This was the third and final dialogue in the series. Each dialogue focused on bringing an equal number of grant makers and nonprofit professionals together as peers for candid conversations on data and evaluation. Two crucial themes emerged: the power imbalance in the grant maker/nonprofit grant recipient relationship, and the complex justice/equity issues inherent in how these two groups use programmatic data. We anticipated the former, but not the latter.
_____________________________________________________________________________________
Takeaways
Shared definition of evaluation, how to collect data.
Build candid, transparent relationship, discuss shared goals.
Be explicit re: cost of doing evaluation, include in grant.
Grantors take grantee’s capacity into account.
More dialogue among and between funders, grantees.
Resources.
Lack of clarity, lack of shared measurements.
All narratives, other forms of data.
Share analytic tools and methods, match researchers and organizations.
Cities and towns have more shared learning spaces – reduce “pounce” – encourage getting to know each other.
Take mutual responsibility for relationship.
Require funding boards to do race/class analysis of outcome they want.
Clarify expectations, transparency at outset.
Create and sustain communities of practice.
Reach shared understanding of resources needed for evaluation, reporting, (?) what is “impact?”
Be flexible as work evolves – don’t be wedded.
At beginning, meet in person, have regular conversations.
Assess needs of both parties at outset.
_____________________________________________________________________________________
Less emphasis on outcome, more on process.
Get to know each other at outset – before getting to data, have conversation.
2-way dialogue.
Admit you don’t know.
Build relationships so you can push back.
Mutually beneficial partnerships.
Org – capacity.
Power dynamic/fear – funders are vulnerable too – dialogue, relationship (same grant officer over the years).
Be transparent as funders – say why we’re collecting info – be flexible – recognize organization’s capacity.
Normalize failure – talk about how it’s ok not to meet all expectations – learning is key.
Race/class.
Fail fest.
_____________________________________________________________________________________
Race and class equity analysis
- Importance of talking about race and class separately, then bringing them together
- Having external facilitators/trainers come in to host dialogues specifically for staff. Maybe separately for boards.
- Sharing your personal class/background and experience – encourage diffusion of (?) with resources, then ask individuals to share with whole staff. Seeing everyone’s unique experiences as complementary strengths.
- Importance of ongoing self-training, short and long term.
- Commitment by organization to ongoing training.
- Creating an organizational “inclusion strategy plan” that crosses all departments (not just one department): an equity lens/framework for everything you do.
- Ask staff to attend seminars and programs that focus on race and class separately and (?), make available for all staff.
- One specifically on class and another for race, or one that provides separate paths for both – with timeline, measurable outcomes, milestones
- Framework must also include:
- Joint learning groups
- Data
- A group that researches what similar orgs are doing and the latest findings in the field and best practices, and creates a syllabus of readings/videos/websites.
- Convenings for multiple foundations
- Set aside funds specifically to create race/class plan(s).
- Build the inclusion plan(s) into the overall strategic plan.
_____________________________________________________________________________________
Notes from small group
- How can funders and nonprofits encourage more peer-based learning and dialogue about evaluation?
What can we do more of?
- Continue to support curated conversations like this one on various topics of interest.
- Support sectoral conversations (e.g. creative youth arts development orgs, land trusts, legal services, etc.) that identify common concerns and interests related to monitoring, evaluation, and learning.
- Funders can convene grantees either by whole portfolios or in clusters for breakfast or lunch dialogues.
- Funders and intermediaries can continue to support topical workshops, trainings, tool-shares, fail-fests, panels, roundtables, etc.
How can we do it better?
- Increase conversations about monitoring, evaluation, and learning outside the transactional grantmaking process.
- Have more of these dialogues together rather than separately as is the norm.
- Keep it simple and focus on nurturing relationships.
- Pay for good facilitation and coordination, e.g., this roundtable.
- Use technology to share tools, contact info, minutes, history, etc.
- Anchor organizations, intermediary organizations, membership organizations, associations, and funders can allocate resources to support this. Notice how AGM and the MA Nonprofit network came around to supporting this after TNB took the risk and led the way. THEY should have been leaders as institutions.
- Funders can be honest brokers and set the table, but so can nonprofits.
- Donate space and facilities like TSNE does.
- Create confidential safe spaces or private caucuses when necessary.
_____________________________________________________________________________________
Building mutually beneficial partnerships
- Multi-year grants
- Clear expectations of evaluation requirement.
- Discussions before grant submissions about capacity and operations
- Preview of reports and applications available freely
- Provide clear description and statement of desired outcomes and missions.
- Continued conversations
- Facilitate dialogue for (?) parties
- Risk assessment/burden assessment
_____________________________________________________________________________________
Funders should take the cost of evaluation into account relative to the size of the grant, so that the cost of evaluating a small grant does not exceed the grant itself.
_____________________________________________________________________________________
Group recommendation
- Could we create a process to agree on definition of key terms related to data and evaluation?
- What would that process look like?
- Who would need to be involved?
- Conversation to clarify terms at the beginning of each relationship.
- Pilot here in New England. Convene.
- Who hosts? National intermediary.
- Who participates?
- Networks of different philanthropists, sectors
- Network of different (?) of NPs.
- Cb
- Evaluators
- Who else?
- Process every group’s voice (?)
_____________________________________________________________________________________
Create spaces for learning
- Convene regularly
- Curated conversations
- Sectoral conversations
- Don’t have to be formal and use significant resources; just create connections between those who can share expertise/resources and those who need info.
- Need safe spaces.
- Funders have the bird’s-eye view and network and can make connections.
- Can feel vulnerable if space is perceived as competitive.
- Good ideas; reminder that personal relationships between funder and nonprofit are important.
- Structure the conversation to allow all to contribute, with equal opportunity to speak.
- Framework to think about relationships with existing funders – results, (?), transition, and improve relationships.
- Kick off with evaluation partners, establish relationships with TA staff, admin/finance, more check-ins in process, clear understanding of roles.
- Conferences – effective format for equitable conversation.
- My responsibility to funders
- What I can do to help them
- Understand, improve relationship
- Great camaraderie, how to replicate
- Relationship important
- Story and data both important
- Capacity building, structure, and learning examining deeper (?) practices of power dynamics, why, how to change
- What would look like to upend dynamic
- Mechanics of evaluation to go through corporate reporting structure – ways to design grants to impact
- Nice to have an opportunity to reflect
- All trying to make change, need each other
- Hear new ideas, have never considered
- Appreciate format
_____________________________________________________________________________________
Q:1
- Establish open lines of communication
- Work with each other, recognize
- More transparency, understanding about what you are being asked and why.
- Have multiple ways of providing information and giving feedback
- Evaluation can be useful for increasing buy-in among stakeholders
- Who cares about this work? Involve them in developing it
- Be very clear about the goal of the program and its evaluation
- Evaluation is a dialogue, two-way between funder and provider
- Ensure evaluation isn’t an afterthought; be clear about the purpose
- Thoughtful introspection
- How can we grow and promote growth
- That it’s a funder requirement can’t be the only reason
- Information should build the field
- It’s about building and growing a relationship
- We want to have a conversation, we want to know what you are learning. Let people talk about what they want to talk about.
- We want integrity.
- We don’t know what we don’t know.
- Get to the why of why we are going through this process
Q:2
What are the major barriers in your view to more effective communication and collaboration regarding evaluation?
- Capacity (staff, time, expertise, and data)
- Power dynamic – who asks the questions and what are they asking? More flexible than first felt.
- Shared understanding of meaning of “evaluation.”
- Collecting and analyzing data.
- Preparation/systems (?)
- Expectations
- Organizational culture and hierarchy
- Understanding of grantor’s strategic goals/lack of transparency
- Fear of risk taking
- Lack of rigor and flexibility
- Telling what you (I) want to hear
- Inertia/way things have always been done
- Learning is not a priority
- Accumulating data vs. analyzing data
- Tools/reliance on templates
Q:3
How do we disrupt the power dynamic?
- Be clear what the goal is
- See the program in action
- Talk with, not talk at – make it a dialogue
- Be outside the box – videos, photos, stories
- Don’t be afraid not to know
- Assume best intentions
- (?) dialogue about why at the outset
- More (listening?) between grantee and funder, but also grantee to grantee
- Grantees learn from each other
- Concrete ways to help each other to learn from each other
- Use evaluation to deepen relationships – (?) of staff is key
- Anxiety exists about collecting this stuff – capital-E Evaluation. Instead go back to the basics: how to develop a hypothesis. Get to fundamental questions.
- What needs to be measured and what can be measured is discussed and agreed upon by both the funder and grantee
- Create a culture of learning. Have an annual FailFest.
- Do “pre-mortem” – how could this go wrong?
- We are all doing the same thing.
- What is the most useful kind of evaluation?
- Funders ask how they can strengthen evaluation capacity.
- Normalize the idea that not everything will be a success. Sometimes our best learning can be from (these notes break off here) ….
_____________________________________________________________________________________
Grantors/Grantees
(?) conversation.
One thing to be more supportive of grantees.
Concrete ideas for grant cycle.
Expand understanding of the relationship.
Insight into nonprofit challenges, what evaluation is useful.
Doesn’t end here.
Just ideas.
I’m grantor and grantee.
Potential challenges.
Grantee perceptions (?)
Sometimes process (?)
How do grantors use data?
Q:1
How to pull data; my predecessor did it differently.
Peace Corps – on the ground, had to understand grantor constraints – (?) multiple ways of feedback better.
Think re: real stakeholders re: evaluation besides grantors and grantees – pulled them (?) (e.g. K-12 ed system) together.
At the front end, grantor was flexible, as was research team – “I think our goals are different” – should have happened at the outset.
Dialogue two way – “we don’t (?) (?) that – that’s not what our program is about. We’ve (?) (?) (?) and when (?) (?) .
If nothing specified up front, evaluation felt like afterthought, (?) (?) at (?) .
When criteria clear up front and part of dialogue defining success from both sides.
My predecessor had fudged numbers – I understood (?) expectations out of whack.
Took a while to right-size.
Grantor was (?) to go long they really thought $ (?) (?) was enough – helps me as a grantor.
100% success not goal – not enough risk.
We’re more qualitative than quantitative – need clear expectations, tailored to size.
Not just a report but two-way processes.
What future should/could be.
We see both sides.
Participatory evaluation.
Don’t just collect data to report to funders but to improve program.
New grantee applicant told us what to do – didn’t get it.
Another grantee said “we don’t expect to get grant 1st year – need to build a relationship first.”
Much info is perfunctory, we’re not using it well.
For (organization name) initiative, ditched data reporting requirements, (?) (?) instead on mid-stream conversation and at end needed to ask (them) to trust us.
We want relationship and process of integrity – we don’t know what we don’t know.
Q:2 Barriers
Capacity – small nonprofit has no evaluation person/department.
Power dynamic – who gets to ask/frame question.
Funder is more flexible than we thought.
Evaluation means different things to different people – need to clarify at the beginning.
Grantmaker needs transparency re: how I got to that, how long I need.
Preparation – rolled out new evaluation process retroactively! Oops! Didn’t track it.
Technical skills missing.
Organizational culture/hierarchy – his new role wasn’t shared with the donor – has to navigate policies (politics?).
I’ll now ask grantees for their input before we set requirements.
Why does funder want this info? How will it be used to make future decisions? Want funders to be transparent – evaluation feels imposed.
Fear – what happens externally if program not as good as we think – both funders and grantees.
Culture and egos – people not used to failing
Need rigor and flexibility both
Talk to right people at right time
Funder needs to give operations support and $; not doing so sets grantees up for failure
Capacity – importance vs. urgency – the former often doesn’t happen because it’s not urgent.
Fear re: poor results – what does it mean for us, our funding, our clients? Natural inclination to see success – need to make room for failure.
Relationships, dialogue mitigate this.
I get to talk to people changing world, some do evaluation well – successful ones prioritize learning, accountability, management.
Power can get negotiated.
They can decline grant.
Leadership – no strategy re: data.
Tools – I like narrative info, NPs want templates. I need to report to my superior and they want template/numbers too!
Fear, power, capacity.
Unreasonable expectations of themselves, not right-sizing.
Lack/fear of risk-taking – maybe new ways to collect/report data, may need to push trustees.
E.g., young person draws picture of themselves before and after program!
Capacity is key – (?) people wearing 6 hats; they love to cut and paste report from another funder.
Black box – why is data important? Will it be used to improve (?)
Q:3 – Strategies, best practices
Assess staff re: what they’re concerned about.
Grantors need to clarify goals.
Send grantees same survey I sent internal staff.
See program in action (direct service, visit schools).
Talk with/not at – what do you think? What ideas do you have?
Stories as good or better than numbers; videos great.
Resist pressure to conform.
Admit “I don’t know.”
Assume best intentions.
Everyone gets unconditional positive regard, as funders I’m aware of power, being clear this is a dialogue, 2-way street, my ideas aren’t sacrosanct.
Ask re: every evaluation – why are we asking? What good will it do for us and for NP?
We may focus more on field than building organizations.
Be honest re: what we can/want to do – ask grantor re: their mission and vision?
More listening – grantees share info w/each other more and learn – not competitive; seek support to build capacity, share logic model, tools, “please steal from each other.”
Have consistency so same person evaluates program every year, if possible -> relationships, stake in program.
Step back and see completely, how to help orgs ask good Qs?
What’s your hypothesis re: “social-emotional learning” (or Topic X)? Appreciate what data you need and (?).
Dialogue at outset mitigates fear, build relationship.
Go to events where you’ll talk to grantors.
Hard when foundation requires only online application – causes distance.
Encourage people at NP to call grant office with questions.
Get face to face more.
FailFest – share things that went badly.
Culture of learning – applaud!
Ask: what is worst thing evaluation could find? No impact? Do harm? Might steer you away from certain grantors or projects. What’s most useful type of evaluation for NP/program at this stage?
Funders ask NP upfront who/how they’ll collect data, so they can strengthen capacity.
I skip workshops because they’re not useful – focus narrowly, help grantors and grantees talk to each other; I like the structure of this program.
Connected conversation.
What do funders do with data? Report to donors, pull together info from many grantees to grab their heart.
Larger funders – look at portfolio (initiatives) > individual grantees.
A lot of data doesn’t get used at all! Waste of (?) time. Gathering it just in case we need it!
System-wide city-wide goal might take small data to report on broader movement.
We use it to tell stories.
Longitudinally – we compare years.
Cluster evaluation – e.g., workforce development, wage rates, duration of program, etc.
Incentives on funder side to talk re: what’s good rather than what’s not working.
Hard to tell (?) you wasted their $1.5 million.
Academics may not publish results that show no impact – deprives us all of an opportunity to learn.
You mainly want org to think re: evaluation.
RCT = Randomized Controlled Trial – can’t work for small org.
Capacity – any good models?
Grantor can build in funding for (?) hr tech asst. to do data/evaluation.
Some funders do good job helping (?) do this.
“Capacity” is buzzword – what does it mean? $, skills, leadership.
Everyone needs to have an evaluation mindset.
Some can do a good job on a (?). Not sure how they do it. Mindset?
What to call this? Holistic? Emotional intelligence? Collective empowerment.
Value of evaluation is the process, not the outcome itself.
Power dynamic.
Funders vulnerable.
Closing – Takeaways
Time frames respected.
Helpful to hear funders ask my Q re: how they use data.
Ability to be more transparent re: what we don’t do well, and ask grantors same Q.
Process is as important as product.
As evaluator, I will ask grantee to tell us how long it took to do it, so we can fine tune.
Create common app for reporting.
Funders more vulnerable, less powerful than we think – let’s go in more open, in same boat.
Rethinking what “capacity” is – what it means for us.
Like structured dialogue – want to do it in (location)!
(?) topic.
I stayed present, didn’t go off in many directions.
Consistency.
FailFest.
Practice makes better.
I’ll know in the shower tomorrow.
I refrained from asking more Qs.
Evaluators are intermediaries – very useful.