Global Oncology Initiative (GO!) is a volunteer organization dedicated to helping improve cancer care worldwide. In this effort, it gathers advocates from many backgrounds -- medical professionals, researchers, health policy specialists, and biomedical engineers -- with the hope that interdisciplinary collaboration can lead to original, impactful developments in cancer education and treatment.
An important component of cancer care is pain reduction, yet despite their relatively low cost, palliative medications are not easily accessible in many parts of the world. The reasons for these inequities in access are complicated, reflecting variation in physician practice, government policies, and historical attitudes towards opiate medications.
To better understand this modern inequity, it is worthwhile to investigate historical variation in access to palliative medication. To that end, Statistics for Social Good partnered with Global Oncology to obtain and visualize opioid consumption data -- the result was the Opioid Atlas. Our hope is that expressive visualizations of this data can both facilitate analysis by palliative care researchers and educate the broader public on the state of opiate access. To reach these audiences, we showcased our work at the 2016 Global Cancer Research Symposium.
Finally, we have made all data and analysis public, and welcome all ideas in improving global cancer care through data.
The Indigo Education Company is a social enterprise that administers its corporate-level, non-cognitive Indigo Assessment surveys in high-school and college classrooms and ensures that the results are accessible and informative for students and educators. In contrast to traditional academic evaluations, the Indigo Assessment places strong emphasis on the social and emotional aspects of student development, highlighting non-academic strengths, behavior styles, and motivators. The reports that Indigo generates from these assessments have served different purposes in different contexts -- students have used them for career guidance and better self-understanding, educators for supporting students, and policy-makers for understanding the underlying data and making appropriate amendments to curricula.
Statistics for Social Good developed an online exploratory data analysis tool to enable quick views of survey results at the school level. Rather than providing a one-time statistical consultation, our objective was to empower educators and Indigo Education Company analysts to develop their own interpretations and insights from the data. The motivation for this approach was that similar questions tend to emerge across different schools and classrooms. Some questions include:
Hence, it would be helpful to provide relevant visualizations and analysis that are easy to regenerate each time new data is collected. Per-question scatterplots and histograms are directed towards the first and second questions, respectively, while population-wide hierarchical clusterings and lasso regressions facilitate investigation of the third. Finally, we have made all code for this tool public, so it can be adapted by different organizations to suit their needs.
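To give a flavor of the regression component, here is a minimal coordinate-descent lasso in NumPy. This is an illustrative sketch only -- the survey data, the regularization strength, and the function names are hypothetical stand-ins, not the tool's actual code:

```python
import numpy as np

def soft_threshold(z, t):
    # Lasso proximal operator: shrink z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso minimizing
    (1/2n) * ||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r, lam * n) / col_sq[j]
    return w

# Hypothetical example: only questions 0 and 3 truly predict the outcome.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
true_w = np.array([2.0, 0.0, 0.0, -3.0, 0.0])
y = X @ true_w + 0.1 * rng.standard_normal(200)
w = lasso_cd(X, y, lam=0.1)
```

The appeal for a regenerable tool is that the L1 penalty drives coefficients of uninformative survey questions to exactly zero, so each new data batch yields a short, interpretable list of predictive questions.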
The Parking and Transportation Services department at Stanford University administers programs to encourage Stanford employees, students, and affiliates to commute to campus in environmentally sustainable ways. These include Caltrain and VTA subsidies, carpool credits, the bicycle program, and the campus Marguerite shuttle. P&TS conducts an annual survey of Stanford commuters to better understand their needs and to inform the creation of new programs, and Stats for Good was enlisted to help in the analysis of survey results.
By combining survey responses with home address information of the respondents, we identified six geographic sub-regions of the San Francisco Bay Area that are differentiated by how residents in those regions typically commute to campus. Fitting latent factor models to the survey responses, we then identified sub-groups of commuters within each region who share many demographic characteristics and/or commute schedules and preferences. Among the identified groups are evening- and night-shift Stanford hospital employees living in the East Bay who indicate a lack of public transit and carpool options, certain San Francisco and San Jose residents (many of whom are students or non-employees) who are ineligible for P&TS Caltrain subsidies and indicate the cost of public transit as a major reason for driving to campus, and residents of towns neighboring Palo Alto who indicate a willingness to bike to campus if there are safer and more convenient bike paths and/or increased financial incentives. Our findings provided guidance to P&TS on possible new incentive programs that are tailored to specific sub-populations of Stanford commuters.
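The project itself fit latent factor models; as a simplified stand-in for how respondents can be grouped by shared survey features, here is a plain k-means clustering in NumPy. The data below is synthetic and the setup is hypothetical, not the actual commuter analysis:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: alternately assign points to the nearest
    center and recompute each center as its cluster mean."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Squared distances from every point to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Hypothetical: two well-separated respondent groups in feature space.
rng = np.random.default_rng(1)
group_a = rng.standard_normal((50, 2))
group_b = rng.standard_normal((50, 2)) + 10.0
X = np.vstack([group_a, group_b])
labels, centers = kmeans(X, k=2)
```

In practice, one would first encode survey responses (or latent factor scores) as numeric features and standardize them before clustering.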
Prediction tools for identifying the patients who will accrue the bulk of spending are crucial for many smaller healthcare organizations to succeed under new payment reforms. However, scant progress has been made in improvement of cost-prediction tools for over a decade. Using population health data for over two million residents of Western Denmark between 2004 and 2011, we aimed to improve prediction of high-cost patients—i.e., the top 10% of patients in a sample based on annual health spending.
We evaluated our prediction models on the most recent year of Danish data. We conducted both a whole-population analysis (N=1,557,950), and, considering only those participants who were not in the top 10% of population health spending in the prior year, a separate “cost bloom” analysis (N=1,402,155). Compared to a widely used diagnosis-based prediction tool developed in the US, our best model achieved a 21% improvement in prediction of high-cost spending at the population level, and a 30% improvement in the prediction of cost bloom spending.
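One common way to score such models is the fraction of true top-10% spenders that also land in the model's predicted top decile. The sketch below illustrates that metric on synthetic data; the study's actual evaluation metric and data are not shown here and may differ:

```python
import numpy as np

def top_decile_capture(spend_true, spend_pred):
    """Fraction of actual top-10% spenders that the model also
    ranks in its predicted top 10%."""
    n = len(spend_true)
    cutoff = max(1, n // 10)
    true_top = set(np.argsort(spend_true)[-cutoff:].tolist())
    pred_top = set(np.argsort(spend_pred)[-cutoff:].tolist())
    return len(true_top & pred_top) / cutoff

# Hypothetical spending vector with distinct values.
spend = np.arange(100, dtype=float)
perfect = top_decile_capture(spend, spend)            # ranks agree exactly
worst = top_decile_capture(spend, spend[::-1].copy()) # ranks fully reversed
```

A perfect ranking captures all true high-cost patients (score 1.0), while a fully reversed ranking captures none (score 0.0), giving an interpretable scale for comparing models.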
We expect our study to inform payers and providers, who need better tools to identify future high-cost patients. More details on our study, which involved researchers at the Center for Biomedical Informatics Research and the Clinical Excellence Research Center, will appear in a forthcoming manuscript.
GreatNonprofits is an online hub for nonprofit reviews that provides valuable feedback for charitable organizations and helps donors, volunteers, and clients learn about organizations matching their interests. GreatNonprofits hopes to use the panoply of review data collected from a diversity of organizations and audiences to help nonprofits better understand and respond to client feedback. To this end, Stats for Good was enlisted to study the relationship between reviewer background and review content.
In particular, we were tasked with investigating the influence of courtesy bias on reviews. That is, do those who depend on non-profit services to meet their basic needs avoid negatively reviewing them? Our analysis of client feedback revealed the opposite effect: low-income food bank and homeless shelter clients rated their non-profit benefactors more harshly than clients of other organizations. Moreover, food bank and homeless shelter clients wrote suggestions as often as other reviewer populations, suggesting that they are no less willing to provide feedback, in contradiction to the courtesy bias hypothesis. To validate our analysis and develop deeper insight, we, along with GreatNonprofits, interviewed several organizations that actively solicit client feedback. Our findings are summarized in the Stanford Social Innovation Review.
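One distribution-free way to test whether two reviewer groups differ in mean rating is a permutation test, sketched below on made-up ratings. This is an illustration of the general technique, not the project's actual test or data:

```python
import numpy as np

def perm_test_mean_diff(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in mean ratings
    between two reviewer groups."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        # Shuffle group membership and recompute the mean difference.
        perm = rng.permutation(pooled)
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Hypothetical ratings: one group clearly harsher than the other.
harsh = np.array([3.0, 2.5, 3.5, 3.0, 2.0, 3.5, 3.0, 2.5] * 5)
lenient = np.array([4.5, 5.0, 4.0, 4.5, 5.0, 4.5, 4.0, 5.0] * 5)
obs, p_value = perm_test_mean_diff(harsh, lenient)
```

The p-value is simply the fraction of relabelings that produce a gap as large as the observed one, which makes no normality assumptions about the rating scale.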
SparkPoint is an anti-poverty organization in the San Francisco Bay Area that helps low-income individuals and families reach financial stability. Clients work together with SparkPoint coaches to design a personalized path towards their financial goals. A central theme of this approach is the freedom to choose from a variety of services, including those that support access to financial planning, social assistance, and educational or employment opportunities.
Statistics for Social Good analyzed several years' worth of client data to help SparkPoint gain a better understanding of client characteristics and outcomes at each of the nine SparkPoint drop-in centers in the Bay Area. Our site-level analysis identified notable differences in the demographics of clients and the distribution of services provided across centers. SparkPoint was particularly interested in studying the effect of a "magic bundle" combining employment and financial budgeting services. To this end, for each of the four SparkPoint outcomes of interest -- debt-to-income ratio, credit score, income, and savings -- we used a combination of data visualization and regression modeling to analyze the improvement over baseline that was achieved by clients receiving the bundled service, compared to those receiving other services.
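A minimal form of the regression component can be sketched as an OLS fit of outcome improvement on a bundled-service indicator plus covariates. The variable names, covariates, and effect sizes below are hypothetical illustrations, not SparkPoint's data:

```python
import numpy as np

def bundle_effect(improvement, got_bundle, covariates):
    """OLS of improvement over baseline on an intercept, a
    bundled-service indicator, and covariates; returns the
    indicator's estimated coefficient."""
    X = np.column_stack([np.ones(len(improvement)), got_bundle, covariates])
    beta, *_ = np.linalg.lstsq(X, improvement, rcond=None)
    return beta[1]

# Hypothetical data: the bundle adds 1.5 points of improvement
# on top of a small covariate (e.g. age) effect.
rng = np.random.default_rng(2)
n = 300
bundle = rng.integers(0, 2, size=n).astype(float)
age = rng.normal(40.0, 10.0, size=n)
improvement = 1.5 * bundle + 0.02 * age + rng.normal(0.0, 0.1, size=n)
estimate = bundle_effect(improvement, bundle, age[:, None])
```

Because the comparison here is observational rather than randomized, such an estimate should be read as descriptive of the clients served, not as a causal effect of the bundle.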