
TEACHING NOTE: SI-56 TN

DATE: 06/01/13

TEACHING NOTE FOR


SAND HILL FOUNDATION
Susan Ford served as the president and cofounder of the Sand Hill Foundation, a family
foundation that made grants to organizations that benefited people on the San Francisco
Peninsula. Tom and Susan Ford established the foundation in 1995, reflecting the Fords'
shared passion for giving and community development. The foundation focused on the
environment, education, preservation of open space, youth development and job training.

The Fords were among the original donors of the Teen Success Program, a support group for
teen mothers launched in 1990 by Planned Parenthood Mar Monte (PPMM). The program
encouraged teens not to have a second child and to stay in school, in exchange for $10 per week
and $100 for every 25 weeks of attendance. Facilitator-led Teen Success groups of up to 12 teen
mothers met weekly. Childcare was provided during meetings, and participants could remain in
the groups until they turned 18 or completed high school.

After investing more than $200,000 in the initiative, Susan Ford decided to measure the
effectiveness of the Teen Success Program. Her intention was to validate the program's results
and identify its strengths and improvement opportunities to help it grow. Yet, even though Ford
had developed a positive relationship with Linda Williams, the head of PPMM, she worried that
Williams might feel threatened by her proposal for an assessment of the program's impact. The
evaluation process resulted in tensions that caused both Ford and Williams to reflect upon the
dynamics of the grantor-grantee relationship, as well as the role of evaluation in their future
work.

By 2002, the Teen Success Program was operating in over 20 communities in California and
Nevada and had served 625 teen mothers. That year, PPMM won the Planned Parenthood
Affiliate Excellence Award for services to teens. In mid-2002, PPMM was seeking funding for
another comprehensive Teen Success Program evaluation so that other Planned Parenthood
chapters could potentially replicate the initiative. Looking forward, Williams and Ford hoped to
capitalize on their learning to more constructively engage all stakeholders in the evaluation
process, effectively monitor the program's impact and take action on evaluation results.
Copyright 2013 by the Board of Trustees of the Leland Stanford Junior University. This note was prepared by
Lecturer Laura Arrillaga-Andreessen with assistance from Lauren Wechsler for the sole purpose of aiding
classroom instructors in the use of Sand Hill Foundation, GSB No. SI-56. It provides analysis and questions that are
intended to present alternative approaches to deepening students' comprehension of the business issues presented in
the case and to energize classroom discussion.

Key Facts

Mission: The Sand Hill Foundation's mission was to make grants to organizations that
benefited people on the San Francisco Peninsula, particularly in San Mateo and Santa
Clara Counties.

Grantmaking Focus: The foundation focused on the environment, education, open space
preservation, youth development and job training.

Key Players: Susan Ford, Sand Hill Foundation president and cofounder; Linda Williams,
Planned Parenthood Mar Monte (PPMM) director; Myrna Oliver, director of Teen
Services at PPMM; and Jane Kramer, Teen Success evaluator.

Teen Success Program: The Teen Success Program was a support group for teen mothers
launched in 1990 by PPMM. The Sand Hill Foundation invested more than $200,000 in
PPMM's Teen Success Program between 1990 and 1995. By 1995, the program was
serving 46 teens at five sites, located in Eastside (two groups), Mountain View,
Sunnyvale, Gilroy and Hollister. By 2002, the Teen Success Program had served 625 teen
mothers, sponsored 24 support groups of 12 teen mothers per group per year and operated
in 20 communities.

Teen Success Program Evaluation: According to PPMM, only 4% of Teen Success
participants had become pregnant again, compared to 33% of the general population of
teen mothers. At least 80% of Teen Success graduates had completed high school or
received a GED, compared to an average of 50% of pregnant teens in the general
population. More than one in four Teen Success participants had continued her education
beyond high school. According to Kramer's evaluation, approximately one-third of Teen
Success participants may have left the program because of a pregnancy. Kramer
determined that the true dropout rate due to pregnancy was likely somewhere between
12% and 30%.

Position in Course

This case is intended for use in a course on philanthropic grantmaking or foundation strategy.
The teaching objective is to explore how to manage donor-donee relationships and to develop
effective program measurement strategies, processes and implementation plans. The case
highlights the Sand Hill Foundation's efforts to measure the effectiveness of the Teen Success
Program run by Planned Parenthood Mar Monte (PPMM).

Supplementary Readings

Arrillaga-Andreessen, Laura. Robert Wood Johnson Foundation. Case Study. Stanford Graduate
School of Business. Case No. SI-75. (2007). Web.

Emerson, Jed. Mutual Accountability and the Wisdom of Frank Capra. Foundation News &
Commentary, March/April 2001, pp. 42-46. Web.

Eisenberg, Pablo. We've Got Relationship Problems: How Can We Improve Grantee/Grantor
Relations? Foundation News & Commentary, 40.4, July/August 1999. Web.

Eisenberg, Pablo. Philanthropic Ethics from a Donee Perspective. Foundation News &
Commentary, September/October 1983. Web.

Heifetz, Ronald A., John V. Kania and Mark R. Kramer. Leading Boldly. Stanford Social
Innovation Review (Winter 2004). Web.

Rodin, Judith and Nancy MacPherson. Shared Outcomes. Stanford Social Innovation Review
(Summer 2012). Web.

Twersky, Fay. Foundations Can Learn a Lot From the People They Want to Help. The
Chronicle of Philanthropy. November 13, 2011. Web.

Wilhelm, Ian. Report Cites Grant-Making Officers Who Forge Strong Relationships with
Grantees. Chronicle of Philanthropy. May 2, 2010. Web.

Assignment Questions

Primary Questions:

1. Assess Ford's approach to initiating the Teen Success Program's evaluation. What steps
could Ford and Williams have taken to improve Planned Parenthood Mar Monte's and the
Sand Hill Foundation's mutual learning?
a. Timing for Class: 10 minutes for class discussion.

2. As Williams looks to the future, how could Planned Parenthood Mar Monte improve its
capacity to monitor the Teen Success Program's impact? How could the Sand Hill
Foundation support the program's ability to measure and communicate results?
a. Timing for Class: 10-15 minutes for class discussion, with 5-8 minutes allocated
to each of the two questions posed above.

3. How could Ford use the lessons learned from the Teen Success Program evaluation to inform
her future grantmaking related to teen pregnancy prevention?
a. Timing for Class: 10 minutes for class discussion and brainstorming.

Supplementary Questions:

4. What are the potential issues Susan Ford faces in her desire to measure the Teen Success
Program?
a. Timing for Class: 10 minutes for class discussion.

5. Evaluate Kramer's evaluation methodology, results and recommendations.
a. Timing for Class: 10 minutes for class discussion.

6. Evaluate Ford's management of the evaluation process.
a. Timing for Class: 10 minutes for class discussion.

7. Evaluate PPMM's and Williams's management of the evaluation process and results.
a. Timing for Class: 10 minutes for class discussion.

8. Evaluate PPMM's management of donor relationships.
a. Timing for Class: 7-10 minutes for class discussion.

9. Discuss the general challenges that nonprofits have faced and still face in accountability and
measurement. How has the landscape changed in the last five to seven years? How might
nonprofits improve their management of donor relationships?
a. Timing for Class: 10-15 minutes for class discussion and brainstorming,
allocating approximately 5 minutes to each of the three questions posed above.

Analysis for Primary Questions

1. Assess Ford's approach to initiating the Teen Success Program's evaluation. What steps
could Ford and Williams have taken to improve Planned Parenthood Mar Monte's and the
Sand Hill Foundation's mutual learning?

If Ford and Williams were to approach the evaluation process over again, they could take a
number of steps to improve their mutual learning:

Agree on evaluation in the initial grant contract. Introduce the possibility of an evaluation
during the proposal process rather than surprising the grantee with it partway through the
funding cycle. This way, both parties expect the evaluation from the beginning and can
determine how to maximize mutual learning.

Focus on building a strong grantor-grantee relationship with open communication. Both
parties could invest their energy in creating a culture of open, honest communication that
values evaluation's potential to further continuous improvement. Ford indicated that the
extent to which she fostered a relationship with grantees depended on "if we know the
director and trust them." Williams noted that while she and Ford had been in periodic
communication, they had not engaged in direct discussion about the evaluation.
Cultivating an ongoing, consistent relationship could create a communication channel
through which both parties could address concerns regarding evaluation and thus improve
the mutual learning that results.

Have open conversations about evaluation purposes to create a collaborative culture.
Another way that the grantor can foster open communication and trust is to suggest that
both sides discuss past evaluation experiences. The grantor can share examples of other
evaluations that it has conducted with grantees, highlighting cases where the initial
results might have been indicative of failure but where the foundation and grantees
continued their funding relationships and worked together to learn from the assessment.
This would help build trust in the relationship so that the grantee will not assume that
continued funding depends on a perfect assessment. Additionally, it will help the grantee
understand the grantor's perspective on the importance of evaluation in informing future
decision-making and program strategy.

Discuss the grantee's past evaluation experiences. Ford did her best to convey that the
evaluation outcome would not prevent future Sand Hill Foundation funding. However,
without a more developed dialogue, Williams did not seem to fully believe or internalize
Ford's view. By fostering a transparent relationship that includes conversations about
lessons learned from past evaluations, the grantee will have the opportunity to share
concerns stemming from past experiences or the current situation.

Make as many decisions about the evaluation together as reasonably possible. The
grantor may involve the grantee in as many decisions related to the evaluation as
possible. Grantees could provide input on selecting the external evaluator, defining the
evaluation methodology and identifying what issues or challenges may arise during the
evaluation. Ford commented that in retrospect she would have given Williams a greater
voice in the process, inquiring about the benefits that PPMM hoped to gain from the
evaluation and any top priority questions the organization wished to include. By
approaching the evaluation collaboratively, the grantee is more likely to buy in to
evaluation results rather than approach them defensively or dismiss them as inaccurate,
which was how Williams and her staff initially reacted to the evaluation that Ford
commissioned.

Further integrate the perspectives of the end recipients and the program operators into the
evaluation design and implementation. Group facilitators could visit each other's sessions
to help learn from one another, evaluate session practices and work together to develop a
consistent model based on best practices. As the group facilitators are the leaders working
directly with the teens and have the most direct program knowledge, they could be more
actively involved in shaping the evaluation as well as helping to evaluate the program's
impact. Additionally, the teens themselves could be more proactively involved in
explaining and defining what success looks like for the program.

Commit foundation funding to implement at least some of the evaluation results. Finally,
if the foundation truly envisions the purpose of the evaluation to help grantees better
achieve their objectives, it could consider providing the financial and intellectual capital
necessary to implement at least some of the evaluation recommendations. Some
foundations may be willing to leave this amount open, depending on what the evaluation
determines. Others might stipulate from the outset that they will provide a set amount of
money. Either way, this financial commitment would show how seriously the foundation
believes in both the assessment and the grantee's continued work toward achieving their
shared goals.

2. As Williams looks to the future, how could Planned Parenthood Mar Monte improve its
capacity to monitor the Teen Success Program's impact?

Determine key metrics. As a starting point, PPMM needs to determine what information
it will consistently collect and identify efficient and effective methods to aggregate data
across teen groups. The evaluation indicated particular data that PPMM could capture,
namely detailed reasons and outcomes for teens who leave the program.

Designate one staff member to oversee all data collection. As the evaluation indicated,
PPMM could improve its capacity to monitor the Teen Success Program's impact by
designating one staff member to oversee data collection and management. If multiple
people are monitoring the teens and their groups, there is too much risk that the data
collected will be inconsistent. The organization needs to allocate part of a staff member's
time to oversee the collection and analysis of consistent and reliable data.

Create and support better systems for data collection. In order to collect and assess data,
PPMM could install and leverage an IT system that integrates participant questionnaires,
facilitator observations, emails, online responses and even text messages (since teens may
be more likely to have a cell phone and use text messages to communicate rather than
emails). Such an IT system could aggregate participant data and enable PPMM senior
managers to readily access this data and analyze programs, monitor participant needs and
generate relevant and accurate program statistics.

Streamline and simplify data reporting to funders. Coordinating and streamlining
reporting to different funders could free up staff time to focus on data collection and
program evaluation. If PPMM identified an essential data set to measure program
success, it could share this information with all funders and recommend a common
reporting system. One way to do this would be to create an evaluation dashboard
highlighting progress along key outcome indicators (a minimal sketch follows below).
This would minimize time spent on reporting and enable staff members to focus on
consistent program evaluation and strategy improvements.
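To make the dashboard idea concrete, the sketch below shows one way key outcome indicators could be computed from a shared set of participant records. It is purely illustrative: the field names, indicators and sample records are assumptions for teaching purposes, not PPMM's actual data model or system.

    # A minimal, illustrative sketch (not PPMM's actual system) of how participant
    # records could be aggregated into a small dashboard of key outcome indicators.
    # All field names, indicators and sample records are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Participant:
        site: str
        enrolled_in_school: bool
        second_pregnancy: bool
        exited: bool
        exit_reason: str = ""  # e.g. "moved away", "second pregnancy", "completed program"

    def dashboard(participants):
        """Compute the kind of indicators a common funder-reporting dashboard might show."""
        total = len(participants)
        if total == 0:
            return {}
        active = [p for p in participants if not p.exited]
        return {
            "total_served": total,
            "currently_active": len(active),
            "retention_rate": len(active) / total,
            "in_school_rate": sum(p.enrolled_in_school for p in participants) / total,
            "second_pregnancy_rate": sum(p.second_pregnancy for p in participants) / total,
            "exits_by_reason": {
                reason: sum(1 for p in participants if p.exited and p.exit_reason == reason)
                for reason in {p.exit_reason for p in participants if p.exited}
            },
        }

    # Two illustrative records; a real data set would come from questionnaires,
    # facilitator observations and other channels feeding one shared system.
    records = [
        Participant("Gilroy", enrolled_in_school=True, second_pregnancy=False, exited=False),
        Participant("Sunnyvale", enrolled_in_school=False, second_pregnancy=True, exited=True,
                    exit_reason="second pregnancy"),
    ]
    print(dashboard(records))

Because every funder would receive the same indicators, staff would prepare the data once rather than answering a different set of questions for each donor.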

How could the Sand Hill Foundation support the program's ability to measure and
communicate results?

Sand Hill Foundation support for PPMM's data collection. The Sand Hill Foundation
could support PPMM's ability to measure and communicate results by advocating for the
proposed single data-reporting dashboard. If PPMM still feels that it
needs additional staff dedicated to data collection and program evaluation, the Sand Hill
Foundation could provide financial capital to hire and/or train the data collection staff
member.

Sand Hill Foundation support for PPMM's communication of results. To help PPMM
communicate results to other funders and teen pregnancy-related organizations, the Sand
Hill Foundation could write articles about PPMM's strategy, evaluation and evolution, or
provide PPMM with the resources to do so itself. Additionally, the foundation could seek
out venues for PPMM to meet with other funders and nonprofits working on teen
pregnancy issues in order to advance field-wide learning. In the business case, PPMM
staff indicated that they were so busy implementing the program that they did not have
time to think about evaluating their impact, let alone communicating their results. The
foundation could help by identifying appropriate communication forums, such as field
publications and conferences, and providing the financial and intellectual capital to ensure
that PPMM is represented.

3. How could Ford use the lessons learned from the Teen Success Program evaluation to
inform her future grantmaking related to teen pregnancy prevention?

Lesson from evaluation: Key program strengths were retention, graduation rates and
participant satisfaction. According to the evaluation, "Calculations based on 25 teens that
were reached for follow-up interviews indicate that approximately one-third may leave
the program because of a pregnancy. PPMM's estimates are lower than this
(approximately 4%, or approximately five of 124 teens). The true dropout rate due to
pregnancy is likely to be somewhere between 12% and 30%." The retention of teens
appears to be strong, the number of graduates is impressive and the teens appear to be
extremely satisfied with the program.
Implication for future grantmaking:
o When conducting and releasing a philanthropic program evaluation, it is
important to identify and highlight program strengths in order to motivate
program staff and determine what program elements will be maintained in the
future.
o It is also important to fund an evaluation at a level that enables the data collection
to be sufficiently robust, providing an adequate sample size to test hypotheses and
gain accurate results. For example, a follow-up sample of 25 teens may be too
small, relative to an overall population of hundreds of participants, to represent
Teen Success Program results reliably (see the illustrative calculation after this list).
o It is important to gather benchmarks, where available, to find out whether the funded
intervention is actually improving conditions or merely producing the same outcomes
that would have occurred without the intervention. For example, according to
PPMM (and publicly available data), 33% of the general population of teen
mothers had a second pregnancy within two years of giving birth to their first.
The Teen Success Program could aim to reduce this rate by a certain percentage
and, at the very least, demonstrate that among mothers participating in the Teen
Success Program, the rate of second pregnancy is lower.
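The short calculation below is a hedged illustration, not the method used in Kramer's evaluation. It shows why an estimate based on only 25 follow-up interviews carries wide uncertainty: a simple normal-approximation confidence interval around a one-third estimate spans tens of percentage points at n = 25 and narrows considerably with a larger sample.

    # Illustrative only: uncertainty around a dropout-reason estimate based on a small
    # follow-up sample. The normal-approximation interval and the sample sizes are
    # assumptions for teaching purposes, not the evaluator's actual method.
    import math

    def approx_95ci(p_hat, n):
        """Approximate 95% confidence interval for a proportion (normal approximation)."""
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - 1.96 * se, p_hat + 1.96 * se

    low, high = approx_95ci(1 / 3, 25)    # one-third estimate from 25 follow-up interviews
    print(f"n = 25:  roughly {low:.0%} to {high:.0%}")   # about 15% to 52%

    low, high = approx_95ci(1 / 3, 200)   # the same estimate with a larger sample
    print(f"n = 200: roughly {low:.0%} to {high:.0%}")   # about 27% to 40%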

Lesson from evaluation: PPMM maintained poor records on teens that dropped out of the
program. However, this data was essential to determining how program strategy could be
adapted to retain more program participants.
Implication for future grantmaking:
o Whether initiating a new program or funding an existing one, it is important to
allocate funds for proper and thorough record keeping. Record keeping is critical
to evaluate program impact and identify its strengths and weaknesses. Record
keeping also enables program staff and funders to identify divergences from
anticipated results, uncover problems and adapt program design to better meet
expectations.
o It is also critical to identify upfront the metrics that will be most important to
track in order to demonstrate whether or not a philanthropic program is achieving
its goals. As the most important goal for the Teen Success Program is preventing
a second pregnancy, it is critical to understand why a teen might drop out of the
program (and determine whether or not this was due to a second pregnancy).

Lesson from evaluation: Too many different people conducted data collection. Instead,
one staff member could be assigned to oversee all data collection and aggregation,
focusing particularly on teens who become pregnant while in the program and/or prematurely
drop out of the program. This would improve evaluation clarity, consistency and quality,
rather than having all facilitators track assessment metrics. Nonetheless, all staff
members could be educated about the importance of evaluation and knowledge
management.
Implication for future grantmaking:
o Because of the importance of collecting thorough and reliable data, it is
imperative that one person has oversight responsibility for collecting, managing
and analyzing data. With this structure, data collection could be more consistent
and complete than if everyone independently gathers data according to their own
criteria.
o For future grantmaking, Ford could provide the funding (or partial funding) for a
centralized staff member to collect and analyze data across the program. This cost
could be highlighted in the overall program budget so that other donors that
support and replicate the program fully fund this expense.

Lesson from evaluation: Facilitators' curricula and expectations for program
implementation and performance varied across sites. Program facilitators could receive
more guidance and training about consistent program design in the future.
Implication for future grantmaking:
o When operating a program that involves multiple facilitators, it is important that
everyone receives the same baseline training and works toward the same data
collection goals. Consistency in training and goal setting across groups is critical
for implementing effective, high-quality programs and ensuring that outcome data
can be compared across groups.
o Future grant-funded programs could consider developing a program guide for all
staff members that outlines clear expectations and best practices for the program.

Lesson from evaluation: Organizations like PPMM could use self-assessment to evaluate their
impact and evolve programming to meet their desired outcomes. Ongoing stakeholder
feedback will help verify that the nonprofit's programs are functioning as intended and
will help determine whether current strategies effectively achieve program goals.
Implication for future grantmaking:
o Teen pregnancy prevention programs could share their best practices and work
continuously to improve their offerings so that teens stay in their programs. It is
not realistic for every teen pregnancy prevention program to conduct its own
research. However, these programs could harness the network of similar
organizations to learn about ideas that have been tested and best practices that
have emerged. This would also help to establish benchmarks that PPMM can use
in evaluating its program's success and making strategic updates.

Lesson from evaluation: PPMM needs to determine if the program effectively prevents
teen pregnancy. At the time of the case, the program's impact was unclear. Every teen
pregnancy prevention program could aim to have a clear understanding of the extent (if
at all, and by how much) to which it is preventing teen pregnancy.
Implication for future grantmaking:
o Ultimately, it is important for every teen pregnancy prevention program to
establish clear metrics for success and collect data accordingly. In order to
achieve successful outcomes, teen pregnancy prevention programs should rigorously
evaluate themselves to determine whether their current programs are effective.

Analysis for Supplementary Questions

4. What are the potential issues Susan Ford faces in her desire to measure the Teen Success
Program?

Resistance from Williams due to fear of losing funds.
Williams feeling threatened that Ford might become too involved in PPMM and the
program.
PPMM resource constraints: the evaluation taking up too much time and too many human resources.
Williams's desire to be heavily involved in designing the evaluation, thus biasing the process
and results.
Measurement cost issues.
Difficulty in tracking teens and gathering relevant data.
Challenges in designing a long-term measurement program that transcends one-time use.
Internal versus external evaluation.
Defining inputs, outputs and outcomes to better understand the dependencies created by
the program design and assess the program's overall impact.

5. Evaluate Kramer's evaluation methodology, results and recommendations.

Strengths: Overall, Kramer's evaluation was successful in that it challenged PPMM's
assumptions. Kramer used typical evaluation methods that combined qualitative and
quantitative research. The questionnaires sent to both existing Teen Success participants
and those who had left the program contained targeted and relevant questions.
Appropriately, Kramer focused on the support groups' content, facilitator training and
data collection. She did not necessarily disagree with the support group method but did
suggest additional ways to engage participants. Kramer was correct in suggesting a
reassessment of the Teen Success Program's vision and goals, as the program had
remained unchanged for five years despite uncertainty as to whether it was achieving its
goals. Finally, Kramer correctly pointed to data collection as a serious problem with the
Teen Success Program. Up until that point, the program had not adequately collected
data, which made it difficult to measure its success and identify potential areas for
improvement.

Opportunities for Improvement - Evaluation Methodology: Kramer could have visited the
support groups over a longer period of time to gauge changes in the teens' attitudes and
to experience a greater variety of program curricula, as Oliver had suggested. Kramer
could also have surveyed the teens at the beginning of the support groups and again at a
later date to measure changes in teen attitudes over time.

Opportunities for Improvement - Recommendations for Improved Donor Relations:
Collecting data was critical for donor development and for internal organizational
development and growth. Kramer could have included additional recommendations on
the types of data to share regularly with PPMM's donors and advice for managing donor
relationships. Specifically, Kramer could have provided a standard reporting template,
in place of ad hoc letters, for all of PPMM's donors. This template could include basic
information such as the number of Teen Success support groups, growth since the last
report, number of total participants, percentage of participants currently enrolled in
school, participant pregnancy rate, program improvements, updates on program strategy
and vision, future support groups and target expansion regions (a minimal sketch of such
a template follows below). Kramer could also have provided recommendations on how
PPMM could better manage its donor relationships. Suggestions could focus on
institutionalizing relationships to overcome a natural tendency toward ad hoc interactions.
Examples include regular face-to-face donor-grantee meetings, joint strategy sessions to
solicit feedback and new program ideas, and donor events with Teen Success participants
and alumnae to celebrate program successes.
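As a purely hypothetical illustration of such a template, the sketch below shows how a standardized donor report could be structured. Every field name and value is a placeholder for teaching purposes, not PPMM's actual reporting format or data.

    # Hypothetical sketch of a standardized donor report; all field names and values
    # are placeholders, not PPMM's actual template or figures.
    from dataclasses import dataclass, asdict

    @dataclass
    class DonorReport:
        reporting_period: str
        support_groups: int
        new_groups_since_last_report: int
        total_participants: int
        pct_enrolled_in_school: float
        participant_pregnancy_rate: float
        program_improvements: str
        planned_expansion_regions: str

    report = DonorReport(
        reporting_period="H1 2002",
        support_groups=24,
        new_groups_since_last_report=3,
        total_participants=625,
        pct_enrolled_in_school=0.80,
        participant_pregnancy_rate=0.04,
        program_improvements="Standardized facilitator training curriculum",
        planned_expansion_regions="Additional communities in California and Nevada",
    )
    print(asdict(report))  # the same structure could feed a letter, email update or web dashboard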

6. Evaluate Ford's management of the evaluation process.

It was important that Ford made an effort to measure the effectiveness of the Teen Success
Program. She managed the process well, but she could also make improvements in a few areas.

Room for Improvement

Including evaluation and reporting requirements in the original grant agreement. Ford
could have stated in the original grant agreement both the type of reporting the Sand Hill
Foundation required and the frequency with which grantees would report. After having
engaged in the evaluation, Ford could have also taken the opportunity to require regular
evaluations of the program.

Including the grantee perspective in the evaluation design and implementation. Ford rightly
admitted that she could have, at a minimum, solicited Williams's thoughts on evaluation
methods and potential evaluators. Although Ford's desire to collect unbiased results was
sound, she probably would have benefited from soliciting Williams's opinion and input from
the start. In the end, she may still have selected Kramer to conduct the evaluation, but Williams
and PPMM may not have felt as defensive. If Ford had included Williams in the process
earlier on, Williams may have trusted Ford when Ford emphasized that she and the Sand
Hill Foundation would not pull funds, even if the results were less than satisfactory.
Williams's involvement would also have demonstrated to PPMM that Ford genuinely
believed that the program's success depended on a joint donor-donee effort.

Including more specific goals in the evaluation design. Ford also could have structured the
evaluation more specifically, selecting one of the following: a goals-based evaluation
focused on the program's ability to achieve its predetermined objectives; a process-based
evaluation examining how the program really works and its strengths and weaknesses;
or an outcomes-based evaluation identifying client benefits. Kramer's evaluation was not
specific about its goals and, as a result, the data collection and analysis process, as well
as the overall impact assessment, was less clear and effective than it could have been.
More information on such topics can be found in the Basic Guide to Program Evaluation
(Including Outcomes Evaluation) developed by Dr. Carter McNamara, Authenticity Consulting LLC:
http://www.mapnp.org/library/evaluatn/fnl_eval.htm#anchor1581634

Successful Management

Listening and learning. In terms of evaluation reporting, Ford successfully managed the
process. She served as a sounding board for both the evaluator and PPMM. She listened to
PPMMs issues with the evaluation and remained calm in a potentially relationship-
damaging situation. She also took the opportunity to provide her own suggestions to
Williams on how to improve the Teen Success Program.

Using evaluation as a means for program improvement. Eventually, Williams and PPMM
saw the value of the evaluation and instituted change within the program.

7. Evaluate PPMM's and Williams's management of the evaluation process and results.

Defensive behavior limited the opportunity for learning. Many organizations believe that
an evaluation is about proving a program's success or failure. Williams and PPMM
managed the evaluation process in a manner indicative of a threatened organization. They
felt threatened by the evaluator and openly disagreed with the evaluation results,
overreacting to a study that in fact provided PPMM with reasonable suggestions for
improvement. PPMM could have been more open to the results and less defensive from
the start, as it subsequently admitted.

Program measurement and evaluation could be more institutionalized. To show its
initiative and its desire to continuously improve the Teen Success Program, PPMM could
institute formal program evaluations on a consistent basis. PPMM could also provide
regular progress reports offering updated data on retention and dropout rates for teens in
the program, instead of the periodic letter with random information, inconsistent data and
anecdotes about successful teens. As program and impact assessment is a key part of any
successful fundraising strategy, Williams and Oliver could develop an internal reporting
system that would expand the organization's knowledge base, help PPMM make more
informed decisions and communicate the program's value to current and potential donors.

8. Evaluate PPMM's management of donor relationships.

More formalized and consistent interactions could improve grantee-grantor relationships.
Williams often left a very positive impression on donors. Donors such as Susan Ford
trusted Williams and enjoyed working with her. However, PPMM could develop more
formal relationships with important donors such as Ford. Such relationships can easily
fall by the wayside during busy times, as Williams openly admitted had happened recently.
Williams could take the initiative and set up semi-annual or annual meetings between
PPMM team members and major donors at which PPMM could discuss the Teen Success
Program's progress and future goals. This session could also be a time to solicit donors'
thoughts on future program opportunities and strategic evolution.

Data sharing could improve and streamline donor reporting. As mentioned previously,
PPMM needs to institutionalize reporting for major program funders. The organization
needs to take the initiative and establish a template that includes all relevant information,
so that individual donors no longer ask for different information, which can be time-consuming
and costly to provide. Donors are likely to be impressed by PPMM's initiative and vision,
potentially increasing their donations, and such an approach would give PPMM a
competitive advantage, since few nonprofits currently report in this way.

9. Discuss the general challenges that nonprofits have faced and still face in accountability
and measurement. How has the landscape changed in the past five to seven years? How
might nonprofits improve their management of donor relationships?

In the nonprofit world, accountability is largely self-imposed. No market-driven
mechanisms exist to force nonprofits to demonstrate that the resources they manage and
the strategies they employ are in fact achieving established goals. In the for-profit sector,
investors and stockholders demand results and an understanding of how resources
invested in a company are delivering value. In the nonprofit sector, donors often do not
demand proof of their donations' results and are satisfied merely with feeling good about
giving to a worthy cause. Additionally, donors rarely conduct enough research on
organizations before making a gift, and few even know what might indicate whether or
not a nonprofit is a high performer.

Lack of clear metrics measuring performance. In the for-profit world, a company's value
and performance are ultimately measured by a single consistent metric: the dollar. In the
nonprofit world, no such single measure of value exists. With so many different global
social issues, influenced by myriad internal and external factors, metrics can demonstrate
achievement of social impact in countless ways. In the absence of accepted standard
metrics, the majority of nonprofit leaders and donors independently define the metrics
they believe best measure a nonprofit program's social value.

Lack of business expertise in nonprofit leadership. Due to limited financial rewards and
often demanding working conditions, nonprofits have found it difficult to attract
experienced leaders and employees with solid business acumen. Lack of business
expertise in the nonprofit sector has meant that critical areas such as accountability and
measurement have often been overlooked.

Nonprofit accountability and measurement are now becoming a high priority in nonprofit
management. This is for a variety of reasons, including:
o Nonprofit management as an academic field is growing at the graduate level. This
presence in academia has challenged the status quo and has brought forth a wealth
of new nonprofit strategies and processes for accountability and measurement.
o A new generation of wealthy donors who built their fortunes through competitive
business practices expects high levels of accountability, reporting and results from
the organizations they support.
o Technology now enables nonprofits to track their impact and report results to
donors more easily. As growing numbers of nonprofits do this, more donors are
expecting nonprofits to readily provide feedback on social impact. Donors'
increasing demand for nonprofit transparency could shift both donors' and
nonprofits' expectations. This may result in more frequent and open discussions
of how philanthropic dollars are being used and what impact they are making.

Nonprofits can improve their donor relationship management to promote a culture of
accountability and measurable impact. Examples include:
o Dynamically (in a timely, creative and forthright manner) sharing the latest
impact to date (including outputs and outcomes achieved, as well as money raised
and invested in critical programs and operations) on the nonprofit's website so that
donors and prospective donors can see the progress achieved at any point in time.
o Sending out quarterly or biannual updates via email, mobile message or mail to
proactively let donors know of the current impact created through their collective
donations as well as share lessons learned and future priorities.
o While the entire sector continues to struggle to find good ways to measure social
impact, nonprofits could publicly share the metrics they use to track program
success on their websites or in relevant publications. This would invite others to
further discuss and develop these metrics and share best practices across the
industry. These actions would also provide greater transparency to donors and
build donor confidence in the nonprofit's commitment to continuous
improvement and accountability.

Teaching Approach

The Sand Hill Foundation case study is appropriate for a 45-minute teaching module including
both a lecture and a discussion.

Key themes for discussion include:

Evaluation
Measuring impact
Grantor-grantee relationships
Mutual accountability
Professionalization of the philanthropic sector

Please see the Giving 2.0 website (giving2.com) for Stanford Graduate School of Business
Lecturer Laura Arrillaga-Andreessen's complete portfolio of philanthropy case studies,
teaching notes, frameworks and learning resources that she has created since 2000.
