Theore - DARPA Project

We gratefully acknowledge the Defense Advanced Research Projects Agency (DARPA) for its support of our project “Theore: Theory-Driven Curation and Reusable Evaluation of Research Claims in social and behavioral studies”.

Project Summary

The very essence of scientific progress is the systematic accumulation of knowledge, yet experts and practitioners alike now question whether such accumulation remains viable. Critics point to the many conflicting findings on the same research claim in social and behavioral studies, echoing a famous remark by Senator Walter Mondale: “For every study that contains a recommendation, there is another, equally well documented study, challenging the conclusions of the first.” Without the ability to reason collectively about such conflicting findings, we often see tens, even hundreds, of studies examining the same research claim from different angles, only making the picture murkier for a practitioner who wants a firm answer. Further obstructing knowledge accumulation, even social scientists often cannot agree on whether a new finding is expected or unexpected given existing knowledge; as a result, judging the value of a new finding is left to intuition or one’s own life experience, rather than to how it affects the robustness of the existing knowledge base.

Our proposed system addresses this issue with two key innovations: 1) a structured, machine-readable language, called the causal-link graph, for codifying the connections between claims in social and behavioral studies; and 2) an automated robustness calculus algorithm that uses the causal-link graph to reason collectively about the interdependencies between the robustness of different claims. Our approach is grounded in recent advances in machine learning, e.g., the theory of causal inference for building the causal-link graph, and probabilistic graphical models for robustness estimation. If successful, our project will demonstrate the feasibility of using machine learning to reason collectively about potentially conflicting research claims found in the scientific literature.
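To make these two ideas concrete, below is a minimal, illustrative sketch in Python. The Claim and CausalLinkGraph types, the signed link weights, and the propagate_robustness routine are all hypothetical assumptions introduced here for exposition; they are not the project's actual representation or algorithm, which builds on causal inference and probabilistic graphical models rather than the simple damped update shown.

```python
# Illustrative sketch only: names and update rule are assumptions for
# exposition, not the project's actual causal-link graph schema or
# robustness calculus.
from dataclasses import dataclass, field


@dataclass
class Claim:
    claim_id: str
    text: str
    prior: float  # initial robustness estimate in [0, 1]


@dataclass
class CausalLinkGraph:
    claims: dict[str, Claim] = field(default_factory=dict)
    # links[(src, dst)] = weight in [-1, 1]: positive links support the
    # target claim, negative links conflict with it.
    links: dict[tuple[str, str], float] = field(default_factory=dict)

    def add_claim(self, claim: Claim) -> None:
        self.claims[claim.claim_id] = claim

    def add_link(self, src: str, dst: str, weight: float) -> None:
        self.links[(src, dst)] = weight


def propagate_robustness(graph: CausalLinkGraph, rounds: int = 10,
                         damping: float = 0.5) -> dict[str, float]:
    """Toy robustness pass: repeatedly blend each claim's prior with
    evidence flowing in over its links. A robust supporting source pushes
    the target toward 1; a robust conflicting source pushes it toward 0;
    a non-robust source carries little information either way."""
    scores = {cid: c.prior for cid, c in graph.claims.items()}
    for _ in range(rounds):
        updated = {}
        for cid, claim in graph.claims.items():
            incoming = [(src, w) for (src, dst), w in graph.links.items()
                        if dst == cid]
            if not incoming:
                updated[cid] = scores[cid]
                continue
            # Map each incoming link to [0, 1], where 0.5 is "uninformative".
            evidence = sum(0.5 + 0.5 * w * (2.0 * scores[src] - 1.0)
                           for src, w in incoming) / len(incoming)
            updated[cid] = damping * claim.prior + (1.0 - damping) * evidence
        scores = updated
    return scores


if __name__ == "__main__":
    g = CausalLinkGraph()
    g.add_claim(Claim("c1", "Intervention X improves outcome Y", prior=0.6))
    g.add_claim(Claim("c2", "Preregistered replication finds no effect of X",
                      prior=0.9))
    g.add_link("c2", "c1", weight=-0.8)  # c2 conflicts with c1
    print(propagate_robustness(g))  # c1's robustness drops below its prior
```

In the toy example, the robust conflicting replication pulls the first claim's robustness well below its prior, which is the kind of collective, graph-wide adjustment the robustness calculus is meant to perform; the real system would replace this damped averaging with principled inference in a probabilistic graphical model.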