It’s Getting Better All The Time

09 Apr 2018 11:06 AM

By: Meg Kinney


I have a natural tendency to be at least a little dissatisfied with my work. There’s always some improvement I could have made, some newer, better technique just over the horizon. Over my five years as a prospect researcher, this tendency has manifested itself most in how I approach lists of prospects for development officers (D.O.s). These are longish spreadsheets, usually pulled based on a few criteria from the D.O.: alumni in a certain town, prospects with certain wealth ratings, whatever. Over the years, how my fellow researchers and I present these lists (since we try to help D.O.s focus on top prospects instead of just sending a big, undifferentiated list) has changed pretty dramatically, and there is more change in store. Our evolving approach makes a good case study of how innovation can add up over time.


Ideally, before sending a list of prospects we’d go through each one manually, weeding out people who don’t seem like great prospects and calling attention to the best ones. For larger lists, though, and given our time constraints, this isn’t possible. Instead, we rely on a standardized(ish) approach to help D.O.s prioritize prospects on the list. When I started here, the approach was to narrow a list by certain minimum criteria (like removing prospects with no prior giving or with low capacity) and then guide D.O.s toward the better prospects in this narrowed pool by highlighting (literally, in the Excel spreadsheet) positive indicators. We might go through the giving column and highlight prospects who’d given a lot to the D.O.’s unit, then the affinity column to highlight those with high affinity, and so on. Since the lists we send tend to have a lot of columns, this made them fairly cumbersome to look through, and it wasn’t easy for D.O.s to get a quick sense of who had more good indicators than others on the list.


Shortly after I started, I came up with a macro that would count the highlights, effectively creating a heuristic score for each prospect on a list. What qualifies as a “positive indicator” can vary from list to list: the standards would be much higher on a list of top donors than on a list of Education alumni in the Midwest, for example. The highlight macro is a little complicated, and I’ve since started just using formulas to assign points for certain criteria and then sum them into a score, but the macro was a way to build on past practices rather than overthrow them.
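To make the points-and-sum idea concrete, here’s a minimal sketch written in SQL (the direction our list-building is headed, as you’ll see below). The table name, column names, and thresholds are all hypothetical and would be tuned to the list at hand.

```sql
-- Hypothetical points-and-sum score: each positive indicator is worth
-- one point; the thresholds would change from list to list.
SELECT
    prospect_id,
    CASE WHEN capacity_rating >= 4 THEN 1 ELSE 0 END
  + CASE WHEN affinity_score  >= 3 THEN 1 ELSE 0 END
  + CASE WHEN unit_giving     >  0 THEN 1 ELSE 0 END AS indicator_score
FROM prospects
ORDER BY indicator_score DESC;
```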


Then, I started sending select lists with pivot tables as a way to give D.O.s more power to filter things themselves (by the scores I’d created or otherwise). I had gotten feedback that some D.O.s like to have more control over who they see on a list, so this was a way to give them that control while also presenting helpful summary information, which is particularly useful for larger lists.
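The interactive filtering is Excel’s job, but the summary side of a pivot table is easy to mimic in a query. Here’s a rough sketch of the kind of roll-up a pivot table gives a D.O. at a glance, again with made-up table and column names.

```sql
-- Pivot-style roll-up: prospect counts and total giving by capacity
-- rating. Names are illustrative, not a real schema.
SELECT
    capacity_rating,
    COUNT(*)         AS prospect_count,
    SUM(unit_giving) AS total_unit_giving
FROM prospects
GROUP BY capacity_rating
ORDER BY capacity_rating DESC;
```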


Throughout my time here, but especially in the last two years, we’ve done research and various descriptive analytics projects that have helped us better understand which factors are important in finding a good prospect. What does a major gift prospect look like before they give? What makes a good grateful patient prospect? We’ve been able to incorporate these findings into our scores and filters to better guide D.O.s to good prospects when sending lists.
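As a toy example of the descriptive comparisons involved, you might check how common a couple of indicators are among existing major gift donors versus everyone else. The flag and columns here are hypothetical stand-ins for whatever your database actually tracks.

```sql
-- Toy descriptive comparison: how often do two indicators appear
-- among major gift donors vs. everyone else? (Hypothetical columns.)
SELECT
    is_major_donor,
    AVG(CASE WHEN event_attendance_count > 0 THEN 1.0 ELSE 0.0 END) AS pct_attended_event,
    AVG(CASE WHEN affinity_score >= 3 THEN 1.0 ELSE 0.0 END)        AS pct_high_affinity
FROM prospects
GROUP BY is_major_donor;
```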


More recently, the sophistication of the data points we use to score or filter a list has increased as well. I’ve started learning SQL so that I can pull data that isn’t included in our preexisting reports and be more flexible about what I include. For example, on a list of young alumni I can include an indicator of whether or not they played a sport or participated in a student activity, which we don’t typically include on other lists.
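A pull like that might look something like the sketch below; every table and column name here is a placeholder for the real schema, and the class-year cutoff is arbitrary.

```sql
-- Hypothetical young-alumni pull with a student-activity indicator.
SELECT
    p.prospect_id,
    p.full_name,
    p.class_year,
    CASE WHEN EXISTS (
        SELECT 1
        FROM student_activities a
        WHERE a.prospect_id = p.prospect_id
    ) THEN 'Y' ELSE 'N' END AS sport_or_activity
FROM prospects p
WHERE p.class_year >= 2008
ORDER BY p.class_year DESC;
```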


The future of prospect lists for us holds a couple more big changes. My organization has started using Cognos for reporting, and I’m learning to build reports with it. In the future, I hope to build more nuanced self-service list-building reports that will allow people to filter for capacity, affinity, giving, etc. as they see fit (but guided by our knowledge of which factors are most helpful in looking for good prospects).

We also now have a data analyst who can create much more complex and mathematically sound scores, so in the future we’ll have more scores to help narrow and filter lists. Check out his blog if you’re technically minded. Right now he is working on a big project that will (theoretically) be able to score each prospect in the database against each allocation in the database. That way, he won’t have to create a separate score for giving likelihood to this or that unit or type of allocation; he can just add up or average a prospect’s scores for all the allocations that fit that unit or allocation type. Imagine the kind of on-the-fly list scoring we could do!
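If every prospect eventually has a score for every allocation, a unit-level score really is just an aggregate over the matching allocations. A minimal sketch, assuming a hypothetical allocation_scores table:

```sql
-- Unit-level score from per-allocation scores (hypothetical tables):
-- average each prospect's scores over the allocations in a unit.
SELECT
    s.prospect_id,
    AVG(s.score) AS unit_score
FROM allocation_scores s
JOIN allocations a ON a.allocation_id = s.allocation_id
WHERE a.unit = 'Education'   -- or any unit / allocation type
GROUP BY s.prospect_id
ORDER BY unit_score DESC;
```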


I look forward to finding ever-better ways to deliver research and vet prospects for D.O.s. The volume of our prospect pool and the demands on our time mean that we’ll never be able to hand-pick prospects for every list someone requests, and I think this pressure has spurred, and will continue to spur, innovation in our work.
