Radically Open: Performance Scorecard

July 23, 2024

Part of our commitment to being radically open is transparency. As a field, we have some baseline expectations for what transparency means, like sharing who gets grants and explaining funding criteria and decision-making processes. Those baseline expectations are mostly about transparency in WHAT we are doing.

In addition, we have tried to be transparent about WHY we are doing what we are doing and HOW we are doing at it. For example, it is in our core operating values to “candidly share what we learn with others.” We do this in a variety of ways, including through learning papers like this — in which we explain things we have tried to do, how they went and any lessons learned. We also publicly post reports from grantees and Bush Fellows on our website as “learning logs,” so everyone can benefit from what people are learning.

For this paper, we are focusing on a newer practice for us: publicly reporting our performance on key indicators. We launched our performance scorecard on our website in May 2023. This was a big step for us and, as far as we know, unique among private foundations.

What is it?

Our performance scorecard presents 12 key indicators of how we’re doing. These indicators relate to our own activities and decisions – not the activities or impact of grantees and Fellows. They are organized in three sections:

WHAT WE FUND. This section highlights indicators related to where our grantmaking dollars are going – like the percent of grantmaking to advance racial and/or economic equity and the percent of grantmaking that supports community-led efforts.

HOW WE FUND. This section highlights indicators related to our grantmaking practices – like how quickly we respond to applicants and how people rate the experience of working with us.

HOW WE OPERATE. This section highlights indicators related to our non-grantmaking work – like reporting our investment returns and our efforts to use our non-grant spending to build wealth and address inequities.

For each indicator, we explain “why it is important to us,” “how we are doing,” and “what’s next.”

Why did we do it?

Within philanthropy, it has long been considered a virtue for foundations to NOT talk much about themselves. The idea is that the real story is what grantees are doing — not the foundation itself. While well-intended, the result is that many foundations are black boxes.

We are trying to disrupt this way of working. We are thinking about holding ourselves accountable in the way that funders ask of grantees. We should be able to articulate what good performance looks like in our own work. That means not just sharing what great things our grantees are doing but defining what it means for us, as a funder, to do a good job, and then tracking and reporting on how we are doing. In other words: What does it mean for us to do a good job as a grantmaker? And, by that standard, are we doing a good job?

We believe that answering those questions, and sharing those answers publicly, can make us and our field stronger and strengthen our connection to the communities we serve.

We believe it will make us more of what we want to be.

Our operating values call us to work beyond ourselves, to steward the Foundation’s resources well, and to do more good every year. The scorecard pushes us to live all these values more fully.

Developing the scorecard served as a management tool: it forced us to clarify – for ourselves and for others – what matters most in judging whether we are doing a good job in our work. We were already oriented toward continuous improvement. For example, for all our programs we regularly survey applicants and selectors and refine our programs based on what we hear. We believe we were already holding ourselves to a high standard, but making it all public raises the stakes considerably.

We believe it can help foundations earn and keep public trust.

Once money has been put into a foundation, it does not belong to the donor anymore. It doesn’t belong to the board or the people who work at the foundation. We are just stewards of resources for public benefit. We take this very seriously.

We are in a time of diminished and diminishing trust in institutions — including foundations. After decades in which foundations mostly got the benefit of the doubt that we were doing good, far more questions are now being raised about the purpose and impact of foundations. We believe that we need to up our transparency game. This is important for the credibility and effectiveness of individual foundations, and important for the credibility and effectiveness of our overall field.

How did we do it?

Engaging staff and board. Going public with this data — the good, the bad, and the ugly — was a big deal. It required our staff and board to be aligned and willing to be vulnerable. It was important to make sure staff and board were fully bought in and involved in the process along the way.

Importantly, after some initial work, we made creating the scorecard a foundation priority. Foundation priorities are those activities that board and staff have agreed will be the focus of the whole organization; they are tracked on a strategy dashboard that is regularly updated and reviewed with our board. Through the priority-setting process, we had to make the case to ourselves that this was of high strategic importance. That helped clarify what we were trying to do and why, and made sure we had the focused capacity to get it done.

Ensuring staff were involved along the way included multiple working sessions to share what we heard from external stakeholders, to brainstorm indicators, to consider what “good” performance would be on these indicators, and to talk through how we and others could use the scorecard. We have a standing cross-foundation evaluation team that served as a sounding board and advisory group on the work throughout.

From the board perspective, this project built on a lot of other work we were doing to be more open and responsive to the communities we serve. They could see how this was taking that work to the next level. From a governance perspective, they also want the Bush Foundation to operate at the highest possible standards of accountability and transparency, and they liked that this would push the practices of the field in that direction. The board was supportive and encouraging all along the way. Staff kept them engaged in the work by soliciting ideas for indicators, testing our thinking on what we might include and regularly keeping them updated.

Engaging community. The scorecard is designed for external stakeholders, so we knew we needed to design it with external stakeholders.

One of the very first steps in our process was to interview 20 community and philanthropic leaders, representing various stakeholder perspectives. We got their feedback about the concept, ideas for what we might include, and what issues and concerns were most important to them.

The short story: they wanted to know a lot. People had questions about nearly every aspect of what we do — some of which we thought were already pretty clear. It was eye-opening. People had far more questions than we could incorporate into a scorecard, so we are also creating a complementary “Community Questions” section on our website to answer more of them (to be launched in Fall 2023).

Once we had a starter list of potential indicators, we created a mock-up of the scorecard with the help of an external web development team. We tested the mock-up with people who lived in our region and had some experience with foundation funding — some of whom were familiar with the Bush Foundation and some not. Their feedback led to critical changes in the content and design of the scorecard.

Choosing indicators. There were some things we knew from the outset we wanted to be true of the indicators. We knew we wanted the indicators to reflect work across the Foundation and to show both things that we are doing well and things that are real growth areas for us.

Some things we learned along the way. We recognized the indicators are really illustrations of an operating value or strategy we think is important. For example, we care a lot about bringing a range of perspectives into our decision making. We think about diversity of perspective in a lot of ways — including where people live, what sector they work in, their political affiliation, their experiences related to important issues like poverty, immigration and disability inclusion, and aspects of their personal identity like race and ethnicity, gender, etc. It was hard to have a metric that conveyed our full intent. We decided to focus on race and ethnicity and created a new metric that combined the self-identification of board, staff and community members who participated in grant selection in the past year. Getting the bigger concept clearly articulated and then picking the best metric to represent that concept was often tricky. We had to resist the temptation to add in more indicators and data that would add nuance but be harder to understand. This means that the indicators we chose rarely tell the whole story, but we hope they make the most important points.
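To make the mechanics of a combined metric like that concrete, here is a minimal sketch of pooling self-identification data from board members, staff and community selectors into one breakdown. The record structure, field names and categories are hypothetical, for illustration only; they are not our actual data or systems.

```python
from collections import Counter

# Illustrative only: hypothetical self-identification records for everyone
# involved in grant decisions in the past year (board members, staff and
# community members who served as selectors).
decision_makers = [
    {"group": "board", "race_ethnicity": "Native American"},
    {"group": "staff", "race_ethnicity": "White"},
    {"group": "staff", "race_ethnicity": "Black or African American"},
    {"group": "community selector", "race_ethnicity": "Asian"},
    {"group": "community selector", "race_ethnicity": "White"},
    {"group": "community selector", "race_ethnicity": "Hispanic or Latino"},
]

# Pool all three groups into a single breakdown, since the metric describes
# who participates in grant decisions overall rather than any one group.
counts = Counter(person["race_ethnicity"] for person in decision_makers)
total = sum(counts.values())

for category, count in counts.most_common():
    print(f"{category}: {count}/{total} ({count / total:.0%}) of decision makers")
```

The design choice the sketch illustrates is simply that one combined percentage is easier to read on a public scorecard than three separate breakdowns, even though it hides differences between the groups.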

Some indicators felt obvious for us to include. For example, we had already committed to regularly share the percentage of our funding going to Native people and nations. We had been publishing regular reports to share that data. It was an easy decision to fold this into our foundation scorecard as an important indicator for us.

Other indicators were less obvious. We struggled the most with an indicator related to how our applicants feel our process compares to other foundation application processes. We have asked that question of applicants for years — but the responses were private. We worried people might be annoyed with us for putting that measure out publicly, seeing it as an attempt to toot our own horn or to encourage comparisons with our colleagues in the field at their expense. At the same time, we knew from experience that the comparative framing had been the most helpful to us in getting honest feedback that pushes our own practice, rather than just positive ‘you’re great’ kind of feedback. So we included it. (With some ongoing angst about it.)

Usually, the indicators we used were important to both internal and external stakeholders. There were exceptions though. For example, we believed it was important to share investment returns, even though it was a less frequent suggestion from stakeholders. Since we are stewarding these assets on behalf of communities, we believe that is a critical part of our accountability. When we talked with folks, it was clear that people didn’t really understand that ALL of our revenue comes from our investment returns. So, we felt like including a metric related to investment performance was also an opportunity to explain more about how private foundations operate.

We struggled to figure out the right framework for how the indicators fit together. Our first try was to have a couple indicators for each of our five operating values. While we walked away from this framing, it was a helpful exercise to ensure the indicators matched up with our values. After multiple tries, we landed on these: what we fund, how we fund and how we operate.

Analyzing data. Most of the indicators we chose used data we were already tracking in some form. This means we’d already worked through some of the challenges in getting data that is accurate and meaningful. For example, we invested a lot of time in past years in figuring out how to best track our data related to grantmaking on Native issues and in Native communities and nations. (Candid did a case study on our efforts that you can read here.)

There were some measures we were already tracking but where we don’t yet feel solid about our data practices. For example, we have tracked how much of our giving goes to “community-led” organizations for years but never had much confidence in our methodology or coding. We are transparent on the current scorecard about the shortcomings of our data on this indicator, and we will work to get it better in future versions. The process highlighted the areas in which our data practices were inconsistent, and we have changed some practices and training to ensure better consistency going forward.

And some measures are truly new. For example, we have never systematically tracked applicant response time. It took some work to figure out how to define the measure and put in place a new methodology for tracking and reporting that could be applied toward continuous improvement.
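As a sketch of what defining such a measure can look like, here is a minimal example that computes response time from submission and response dates. The data shape, the sample dates and the use of median and maximum as summary statistics are assumptions for illustration, not our actual methodology or systems.

```python
from datetime import date
from statistics import median

# Illustrative only: hypothetical applications with the date each was
# submitted and the date the applicant first received a response.
applications = [
    {"id": "A-101", "submitted": date(2023, 1, 9), "responded": date(2023, 2, 20)},
    {"id": "A-102", "submitted": date(2023, 1, 16), "responded": date(2023, 3, 6)},
    {"id": "A-103", "submitted": date(2023, 2, 1), "responded": date(2023, 3, 10)},
]

# Define the measure as calendar days from submission to first response.
response_days = [(app["responded"] - app["submitted"]).days for app in applications]

print(f"Median response time: {median(response_days)} days")
print(f"Longest response time: {max(response_days)} days")
```

Even a simple definition like this involves choices (calendar vs. business days, first response vs. final decision) that have to be settled before the number can be tracked consistently year over year.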

We sometimes track data internally in ways that are more complex than made sense to share externally, so we had to figure out how to map what we had in our systems to a simpler way of communicating the data. We wanted the scorecard to be easy to understand through visual presentation of the data, and that pushed us toward more and more simplicity. We also had to work around technological limitations in some of our data systems, which we hope to overcome in time.

This work was driven by different key staff over time — with some staff transitions along the way. Ownership for figuring out the data sat with our talent, learning and evaluation team, working in deep partnership with our program operations team and with the key internal stakeholders on each indicator. It was a big lift to get this done for the first time! We expect that our annual update will be a lot easier.

Explaining ourselves. Beyond getting the data, we had to make sense of it for ourselves and figure out how to help others make sense of it. We needed to explain why each indicator was important to us — including why we picked the particular metric, how we are doing, and what we plan to do next.

This was a bigger effort than we expected. It required us to explain our “why” on a whole lot of things in a more concise and specific way than we had done before. Even more challenging, it required a 2- to 3-sentence reflection on our current performance on each indicator, which meant drawing simpler conclusions than is our usual mode.

We tried to write about the data in a way that feels like the reader is talking to a Bush staff person and not reading a white paper. It was not always easy to keep the conversational tone we wanted while getting enough into the details and the nuance of our methodology to accurately convey what the data actually shows. A public scorecard is as much a communications project as it is a learning and evaluation project — and we worked hard to keep our audience in mind.

In the lead-up to the launch, we had a working group of our communications director, talent, learning and evaluation director, grantmaking vice president and president working through and finalizing all the explanations. Our communications director managed the process, testing the content on different indicators with other key folks in the foundation along the way. As with the data analysis, we hope this part of the process is much easier in future years — when we are updating rather than working from scratch.

What have we learned?

Creating the scorecard can be as much about building internal understanding as external understanding. The process of identifying the indicators and creating the content forced us to clearly articulate the “why” of many of our practices in a new and very concise way. We are already seeing how this can be helpful in ensuring org-wide understanding of the rationale for things we do every day, in onboarding new staff, and in readying staff to answer questions from others about our work.

Make plenty of time for reflection on the data. We had a group of people working on the data and another group working on the narrative related to the data in parallel. If we could do it over again, we would sequence this better by gathering and reviewing the data earlier in the process, allowing for more focused reflection on what the data showed and what we wanted to say about it. Different indicators are extremely important to different staff members, and we could have done more to engage those folks in crafting the scorecard content. It was a lot to come up with content on all the indicators at the same time, and we were more centralized in creating the content — and ensuring consistency across the indicators — than we will be as we update the data in future years. We are looking forward to deep, ongoing learning and reflection as we regularly review the data and our progress.

It is hard to assess “good” performance. For most of our indicators, we don’t have comparative metrics for other foundations. That makes it hard to evaluate whether we are doing well or not. We had a lot of internal debate about whether to set specific targets for each indicator, to be able to say more definitively whether we were nailing it or failing it. We ultimately opted against hard targets — unless we already had a specific goal as part of an organizational strategy. There is a lot of nuance behind some indicators. How long it takes us to respond, for example, is tied to the fact that we get a lot of proposals, which in turn is tied to our conviction around being open and accessible and our commitment to consider every proposal seriously and with care. We decided to just share our data, share how we are thinking about it and invite the conversation.

Good legal counsel is important. We have a very public organizational focus on equity — including racial equity and a commitment to Native people and nations. We always work closely with our lawyers to make sure that we are practicing within the law and are very mindful of ensuring we are available to people of all backgrounds, all across the region. Putting out public indicators on things like vendor diversity requires care in explaining our practices. We were grateful for the help of our attorneys in making sure we were thoughtfully conveying the intent and action behind any indicators related to particular groups of people.

It is an org-wide effort. Developing the performance scorecard involved every part of our organization. By the time we were ready to launch, every team and individual had contributed directly or indirectly to the finished project. This is a good thing insofar as it is part of building better alignment across the Foundation on what’s important and how we’re doing. And, at the same time, that org-wide effort takes time and dedicated capacity from a lot of folks. We adjusted ownership and leadership of the work depending on the different phases of development. And, along the way, transitions of key staff drove home the need to develop bench strength on data and analysis. We believe we have stronger data skills and understanding across the organization now than when we started the project.

Just do it. There are a lot of reasons to worry about creating a public scorecard. It is scary to put your stuff out there. That would always be true but is especially so in a polarized time when people are quick to attack institutions for all sorts of reasons. Getting over this required us to focus on our operating values and our conviction that creating the scorecard would make us more of what we say we want to be. It would be easy to keep perfecting any and all elements of the performance scorecard – narrative, the data we show, how we show the data, etc. Each time we review it, we see something else that could be better. This project has required us to resist perfectionism (which was and is hard!) and to get something out so that the conversation can move outside of our walls. Because our scorecard is web-based, we can keep adjusting and refining as we go.

Conclusion

We are writing this paper as we launch the scorecard. A lot has gone into creating it and we have learned a lot. And … we expect many lessons to come.

We expect to learn a lot more about what is important to our stakeholders. We expect to learn more about how to talk about our work in ways that make sense to people outside of philanthropy and build trust. We expect to learn a lot more about how to best engage our staff in continuous learning and improvement around the scorecard. And we already have a lot of ambitions for improving the scorecard itself. (We are particularly eager to upgrade the data visualization options of our web template.)

We believe that this project is important for being open and transparent — in a time of diminishing trust in foundations. We hope that other foundations will be inspired to do more toward that end, too. We can all support each other and push each other to raise the standards in our field.

If you are interested in learning more about what we did, please don’t hesitate to reach out to us at staff@bushfoundation.org.

Acknowledgements

We drew inspiration from the model of the New York Health Foundation’s annual progress reports — which we learned about in Phil Buchanan’s book “Giving Done Right.”

Thanks to those external stakeholders who gave input that shaped the project.

Bush Foundation staff members who played significant roles in the creation of the scorecard include: Kassira Absar, Stephanie Andrews, Vivian Chow, Justin Christy, Erica Orton, Molly Matheson Gruen, Anita Patel, Jen Ford Reedy, Amanda Rios-Heintz and Kari Ruth. Lots and lots of others also helped make it happen!

Footnotes

1. Our timeline for the work was a lot longer than planned. We started on it in 2020 and then paused to respond to the pandemic and the needs and opportunities created by racial reckoning and uprisings, and then paused again to manage transitions of key staff.

2. Our indicators reflect our Bush strategy and operating model — and we expect that other foundations would choose very different kinds of indicators. At the same time, we hope that we can find more shared indicators to report across our field, so that we can better assess ourselves and learn from best practices.
