Automating evidence-based medicine

Evidence-based medicine (EBM) aims to inform patient care with the totality of the available relevant evidence. But the sheer volume of published biomedical research makes achieving this goal manually infeasible.

For example, systematic reviews are the cornerstone of EBM and are critical to modern healthcare, informing everything from national health policy to bedside decision-making. But conducting systematic reviews is extremely laborious (and hence expensive): producing a single review requires thousands of person-hours. Moreover, the exponential expansion of the biomedical literature base has imposed an unprecedented burden on reviewers, thus multiplying costs. Researchers can no longer keep up with the primary literature, and this hinders the practice of evidence-based care.

This project develops novel machine learning and natural language processing methods to optimize the practice of EBM, with the aim of facilitating evidence-based care in an era of information overload.
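One concrete instance of such automation is semi-automated citation screening (see the publications below on active learning for this task). The following is a minimal, hypothetical sketch of an uncertainty-sampling loop in pure Python: a toy word-level relevance scorer stands in for the real classifiers used in the project's systems, and all data, names, and weights here are invented for illustration.

```python
# Sketch of uncertainty sampling for citation screening.
# A toy naive-Bayes-style model scores unlabeled abstracts; the
# reviewer is asked to label the abstract the model is least certain
# about, and the model is refit after each new label.

from collections import Counter
import math

def train(labeled):
    """Fit per-word relevance log-odds from (tokens, label) pairs."""
    pos, neg = Counter(), Counter()
    for tokens, relevant in labeled:
        (pos if relevant else neg).update(tokens)
    vocab = set(pos) | set(neg)
    # Laplace smoothing: add one pseudo-count per vocabulary word.
    n_pos = sum(pos.values()) + len(vocab)
    n_neg = sum(neg.values()) + len(vocab)
    return {w: math.log((pos[w] + 1) / n_pos) - math.log((neg[w] + 1) / n_neg)
            for w in vocab}

def p_relevant(model, tokens):
    """Squash summed log-odds into a probability of relevance."""
    score = sum(model.get(w, 0.0) for w in tokens)
    return 1.0 / (1.0 + math.exp(-score))

def most_uncertain(model, unlabeled):
    """Pick the abstract whose predicted P(relevant) is closest to 0.5."""
    return min(unlabeled, key=lambda toks: abs(p_relevant(model, toks) - 0.5))
```

In practice the systems described in the publications below use far richer classifiers (e.g., SVMs and neural models); this sketch only illustrates the screening loop itself.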


Support

DBI-1262442: Collaborative research: ABI Development: Making Advanced Statistical Tools Accessible for Quantitative Research Synthesis and Discovery in Ecology and Evolutionary Biology (National Science Foundation (NSF))
Scaling Evidence-Based Medicine via Automation and Crowdsourcing (Brown University -- Seed Funding)
Semi-automating citation screening: an assessment of machine learning approaches using one year’s worth of human-generated data from the Embase crowdsourcing project (The Cochrane Collaboration)
R01: Semi-Automating Data Extraction for Systematic Reviews (National Institutes of Health (NIH) / National Library of Medicine (NLM))

Related publications

Meta-Analyst: software for meta-analysis of binary, continuous and diagnostic data. BMC Medical Research Methodology. 2009.
Modeling annotation time to reduce workload in comparative effectiveness reviews. ACM International Health Informatics Symposium (IHI). 2010.
Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics. 2010.
Active learning for biomedical citation screening. ACM SIGKDD international conference on Knowledge discovery and data mining. 2010.
Who should label what? instance allocation in multiple expert active learning. SIAM International Conference on Data Mining (SDM). 2011.
Deploying an interactive machine learning system in an evidence-based practice center: abstrackr. ACM SIGHIT International Health Informatics Symposium (IHI). 2011.
Class Imbalance, Redux. International Conference on Data Mining (ICDM). 2011.
The constrained weight space SVM: Learning with ranked features. International Conference on Machine Learning (ICML). 2011.
Machine Learning Health Informatics: Making Better use of Domain Experts. PhD Thesis, Tufts University. 2012.
Toward modernizing the systematic review pipeline in genetics: efficient updating via data mining. Genetics in Medicine. 2012.
Closing the Gap between Methodologists and End-Users: R as a Computational Back-End. Journal of Statistical Software. 2012.
Active Literature Discovery for Scoping Evidence Reviews: How Many Needles are There? KDD Workshop on Data Mining for Healthcare (KDD-DMH). 2013.
Improving Class Probability Estimates for Imbalanced Data. Knowledge and Information Systems. 2014.
Modernizing the systematic review process to inform comparative effectiveness: tools and methods. Journal of Comparative Effectiveness Research. 2013.
Spá: a web-based viewer for text mining in Evidence Based Medicine. The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). 2014.
#CochraneTech: Technology and the Future of Systematic Reviews. Cochrane Library Editorial. 2014.
Automating Risk of Bias Assessment for Clinical Trials. ACM Conference on Bioinformatics, Computational Biology, and Health Informatics. 2014.
Modernizing Evidence Synthesis for Evidence-Based Medicine (Book Chapter). Clinical Decision Support: The Road to Broad Adoption. 2013.
Graph-Sparse LDA: A Topic Model with Structured Sparsity. AAAI Conference on Artificial Intelligence (AAAI). 2015.
RobotReviewer: Evaluation of a System for Automatically Assessing Bias in Clinical Trials. Journal of the American Medical Informatics Association (JAMIA). 2015.
Automating Risk of Bias Assessment for Clinical Trials. Journal of Biomedical and Health Informatics (JBHI). 2015.
Combining Crowd and Expert Labels using Decision Theoretic Active Learning. Conference on Human Computation & Crowdsourcing (HCOMP). 2015.
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. arXiv. 2015.
Improving the Utility of MeSH terms Using the TopicalMeSH Representation. Journal of Biomedical Informatics (JBI). 2016.
Extracting PICO Sentences from Clinical Trial Reports using Supervised Distant Supervision. Journal of Machine Learning Research (JMLR). 2016.
A Correlated Worker Model for Grouped, Imbalanced and Multitask Data. Uncertainty in Artificial Intelligence (UAI). 2016.
Rationale-Augmented Convolutional Neural Networks for Text Classification. Empirical Methods in Natural Language Processing (EMNLP). 2016.
Systematic Review is e-Discovery in Doctor's Clothing. Medical Information Retrieval (MedIR) Workshop at the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2016.
Using Electronic Medical Records and Physician Data to Improve Information Retrieval for Evidence-Based Care. International Conference on Healthcare Informatics (ICHI). 2016.
Leveraging coreference to identify arms in medical abstracts: An experimental study. The International Workshop on Health Text Mining and Information Analysis (Co-Located with EMNLP 2016). 2016.
Retrofitting Word Vectors of MeSH Terms to Improve Semantic Similarity Measures. The International Workshop on Health Text Mining and Information Analysis (Co-Located with EMNLP 2016). 2016.
OpenMEE: Intuitive, open-source software for meta-analysis in ecology and evolutionary biology. Methods in Ecology and Evolution. 2017.
Evaluating Data Abstraction Assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial. Systematic Reviews. 2016.
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization. Association for Computational Linguistics (ACL). 2017.
Learning Interpretable Disentangled Representations of Clinical Abstracts. 2017.
Aggregating and Predicting Sequence Labels from Crowd Annotations. Association for Computational Linguistics (ACL). 2017.
Automating Biomedical Evidence Synthesis: RobotReviewer. Association for Computational Linguistics (ACL). 2017.
Identifying Reports of Randomized Controlled Trials (RCTs) via a Hybrid Machine Learning and Crowdsourcing Approach. Journal of the American Medical Informatics Association (JAMIA). 2017.
An Exploration of Crowdsourcing Citation Screening for Systematic Reviews. Research Synthesis Methods. 2017.
Identifying diagnostic test accuracy publications using a deep model. CLEF eHealth. 2017.
A Neural Candidate-Selector Architecture for Automatic Structured Clinical Text Annotation. International Conference on Information and Knowledge Management (CIKM). 2017.


Detecting verbal irony in online texts

We aim to develop resources and novel computational methods to advance automated irony detection (i.e., identification of the ironic voice in online content). This is a challenging task because the meaning of natural language is not captured by words and syntax alone. Rather, utterances (tweets, sentences in forum posts, etc.) are embedded within a specific context. The ironic voice is an important example of this phenomenon: to appreciate a speaker's intended meaning, it is crucial to first infer whether he or she is being ironic or sincere.

Existing automated approaches to irony detection leverage statistical natural language processing (NLP) and machine learning (ML) methods. These models tend to be relatively 'shallow' in that they operate only over simple, unstructured representations of data. In our view, verbal irony detection is distinctive in that such representations are likely inadequate: context is necessary to discern ironic intent. We aim to demonstrate this empirically and to build models that exploit contextual cues.
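The contrast between 'shallow' and contextual models can be illustrated schematically. The sketch below is purely hypothetical (the cue lexicon, weight, and threshold are invented, and the "community irony rate" is a crude stand-in for the richer contextual signals, such as user embeddings, explored in the publications below); it shows only how a contextual feature can flip a decision that lexical evidence alone would not.

```python
# Sketch: a 'shallow' bag-of-words irony score augmented with one
# contextual cue: a prior reflecting how often irony appears in the
# community where the utterance was posted. All values are toy.

IRONY_CUES = {"sure", "totally", "great", "love"}  # invented lexical cues

def text_score(tokens):
    """Purely lexical evidence of irony (the 'shallow' representation)."""
    return sum(1 for t in tokens if t in IRONY_CUES)

def irony_score(tokens, community_irony_rate):
    """Combine lexical evidence with contextual evidence.
    community_irony_rate in [0, 1] is the fraction of past posts in
    this community judged ironic (a stand-in for richer context)."""
    return text_score(tokens) + 3.0 * community_irony_rate

def is_ironic(tokens, community_irony_rate, threshold=2.0):
    """Threshold the combined score to make a binary call."""
    return irony_score(tokens, community_irony_rate) >= threshold
```

Note that the same utterance can be classified differently depending on its community context, which is precisely the behavior a purely lexical model cannot exhibit.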


Support

Sociolinguistically Informed Natural Language Processing: Automating Irony Detection (Army Research Office (ARO))

Related publications

Computational Irony: A Survey and New Perspectives. Artificial Intelligence Review. 2013.
Humans Require Context to Infer Ironic Intent (so Computers Probably do, too). Association for Computational Linguistics (ACL). 2014.
Can Cognitive Scientists Help Computers Recognize Irony?. Cognitive Science Society (CogSci). 2014.
Sparse, Contextually Informed Models for Irony Detection: Exploiting User Communities, Entities and Sentiment. Association for Computational Linguistics (ACL). 2015.
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. arXiv. 2015.
MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification. North American Chapter of the Association for Computational Linguistics (NAACL). 2016.
Modelling Context with User Embeddings for Sarcasm Detection in Social Media. The SIGNLL Conference on Computational Natural Language Learning (CoNLL). 2016.


Computational Models of Physician-Patient Communication

Physician-patient communication is a key aspect of health care, but it is poorly understood. How can physicians communicate with their patients to effect better health outcomes? To answer this question, we first need to better understand and quantify how physicians currently communicate with patients. To this end, transcripts of physician-patient visits annotated with clinically relevant codes capturing important topics and speech acts can provide vital insight into these interactions by revealing communication patterns. For example, using codes that indicate the topic being discussed, we can quantify the fraction of time spent discussing, e.g., biomedical issues (as opposed to, say, time spent socializing). Such analyses allow us to quantify measures of patient-centeredness in outpatient care.
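The basic analysis just described, computing the share of a visit devoted to each topic code, can be sketched in a few lines. The codes and utterances below are invented for illustration; real transcripts use established coding schemes and many more categories.

```python
# Sketch: given a coded transcript (one topic code per utterance),
# compute the fraction of talk devoted to each code. The codes and
# utterances are invented for illustration.

from collections import Counter

def topic_fractions(coded_utterances):
    """Map each topic code to its share of utterances in the visit."""
    counts = Counter(code for _text, code in coded_utterances)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

visit = [
    ("How are you feeling today?", "socializing"),
    ("The test showed elevated blood pressure.", "biomedical"),
    ("Let's adjust your medication.", "biomedical"),
    ("How's the family?", "socializing"),
]
# topic_fractions(visit) -> {"socializing": 0.5, "biomedical": 0.5}
```

Fractions here are by utterance count; weighting by word count or speaking time is a straightforward variant.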

But labeling transcripts with such codes is tedious and expensive, precluding large-scale analyses. Moreover, existing analyses of these codes are limited: richer models are needed to surface subtle but important communicative patterns. This project aims to develop such models.


Support

Manual and Automatic Analysis of Patients’ Values and Preferences Using Seton HCAHPS Surveys (The Seton Healthcare Family and The University of Texas (UT) at Austin Center for Health and Social Policy (CHASP))

Related publications

A Generative Joint, Additive, Sequential Model of Topics and Speech Acts in Patient-Doctor Communication. Empirical Methods in Natural Language Processing (EMNLP). 2013.
Automatically Annotating Topics in Transcripts of Patient-Provider Interactions via Machine Learning. Medical Decision Making. 2013.
Identifying Differences in Physician Communication Styles with a Log-Linear Transition Component Model. AAAI Conference on Artificial Intelligence (AAAI). 2014.


Social Media & Health

The amount of unstructured health-related information online has exploded (e.g., consumer reviews of physicians; health-related Tweets; media coverage of health stories and associated comments).

This broad project aims to design new models that process and make sense of such information to better understand the health needs and wants of individuals.


Related publications

What Affects Patient (Dis)satisfaction? Analyzing Online Doctor Ratings with a Joint Topic-Sentiment Model. AAAI Workshop on Expanding the Boundaries of Health Informatics Using AI (HIAI). 2013.
A Large-Scale Quantitative Analysis of Latent Factors and Sentiment in Online Doctor Reviews. Journal of the American Medical Informatics Association (JAMIA). 2014.
What Predicts Media Coverage of Health Science Articles?. The International Workshop on the World Wide Web and Public Health Intelligence (W3PHI). 2015.
MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification. North American Chapter of the Association for Computational Linguistics (NAACL). 2016.
A Data-Driven Approach to Characterizing the (Perceived) Newsworthiness of Health Science Articles. JMIR Medical Informatics. 2016.
"Jerk" or "Judgemental"? Patient perceptions of male versus female physicians in online reviews. AAAI Joint Workshop on Health Intelligence (W3PHIAI). 2017.
Quantifying Mental Health from Social Media with Neural User Embeddings. Machine Learning in Health Care (MLHC). 2017.