Elder Research to Sponsor the UVA Link Lab Flash Talk Series
Elder Research was proud to sponsor the University of Virginia Link Lab Flash Talk series. Elder Research is an industry affiliate sponsor of the Link Lab, an interdisciplinary cyber-physical systems research center hosted at UVA. During this one-hour session, PhD students from the Link Lab provided two-minute synopses of their research on difficult problems in Autonomous Systems, Connected Health, Smart Cities, and Hardware for IoT.
John Elder to Present to The Department of Statistical Science at Duke
John Elder will deliver the virtual talk “Top 3 Things I’ve Learned in 3 Decades of Data Science” to students in the Department of Statistical Science at Duke University on February 17, 2021. John says “the three most important analytic innovations [he’s] seen in three decades of extracting useful information from data have to do with Ensemble models, Target Shuffling, and Cognitive Biases.”
Dr. Jennifer Schaff Presents at Partnering with the Public for Biomedical Research Seminar
Dr. Jennifer Schaff joined Larsson Omberg of Sage Bionetworks to present on the “Life Cycle of the mPower Public Researcher Portal,” one of the first large-scale attempts to assess the feasibility of quantifying Parkinson’s disease symptoms and their changes in a real-world setting.
The new seminar series “Partnering with the Public for Biomedical Research” was hosted by the NIH Citizen Science Working Group and sponsored by the NIH Office of Data Science Strategy (ODSS).
Grant Fleming to Speak at the RStudio Global 2021 Conference
Grant Fleming delivered a talk on “Fairness and Data Science: Failures, Factors, and Futures” at the virtual RStudio::Global 2021 Conference.
Session Abstract: In recent years, numerous highly publicized failures in data science have made evident that biases or issues of fairness in training data can sneak into, and be magnified by, our models, leading to harmful, incorrect predictions once the models are deployed into the real world. But what actually constitutes an unfair or biased model, and how can we diagnose and address these issues within our own work? In this talk, I will present a framework for better understanding how issues of fairness overlap with data science, as well as how we can improve our modeling pipelines to make them more interpretable, reproducible, and fair to the groups they are intended to serve. We will explore this new framework together through an analysis of ProPublica’s COMPAS recidivism dataset using the tidymodels, drake, and iml packages.