Predictive Analytics and Machine Learning are revolutionizing decision-making and optimizing business processes in nearly every commercial and government sector. Drawing on decades of practical experience, our speakers make analytics understandable and accessible within your organization.

Need a Keynote Speaker for your event?

Speakers from Elder Research draw on decades of industry and academic experience as they share lessons learned and best practices for managing analytics initiatives at international conferences and corporate events. Elder Research has a range of existing topics available, but the content of these talks can be modified for your audience, or custom talks can be developed to meet the specific requirements of your conference or corporate event.

Please complete the form if you would like to discuss requirements for your speaking engagement.

The Gap: Is It Leadership?

An Executive Overview for Harnessing Analytics Insight

Data Analytics is a hot topic, and deservedly so.  It powers exponential growth in modern behemoths like Google and Facebook but also drives positive transformation in ancient businesses and agencies - helping to cut costs, uncover fraud, discover new markets, etc.  Still, many analytic initiatives are never implemented, though they are complete technical successes; they are proven to work but never given the chance.  What is going wrong? This talk explores the heretical thought that leadership is getting in the way - that leaders often inadvertently nurture organizational inertia that diminishes, or completely eliminates, the chance for success with analytics.  Learn instead how to harness its counter-intuitive insights, as illustrated by tales from the front lines of this emerging field.


The Data Science Revolution in Industry

The hype around “the thinking sciences” — Artificial Intelligence, Machine Learning, and Data Science — is enormous, so it’s tempting to be skeptical of the gains claimed. Still, most of the results are real. The capabilities of Data Science, where models are inductively built from real history, have been growing steadily. I will reveal as examples five of our most interesting fielded solutions from the last two decades, from the diverse worlds of investment timing, credit scoring, drug discovery, medical diagnosis, and gas exploration.

The Greatest Science 

Data science, if judged as a separate science, exceeds its sisters in truth, breadth, and utility.  Data Science finds truth better than any other science; the crisis in replicability of results in the sciences today is largely due to bad data analysis, performed by amateurs.  As for breadth, a data scientist can contribute mightily to a new field with only minor cooperation from a domain expert, whereas the reverse is not so easy.  And for utility, data science can fit empirical behavior to provide a useful model where good theory doesn’t yet exist.  That is, it can predict “what” is likely even when “why” is beyond reach.

But only if we do it right!  The most vital data scientist skill is recognizing analytic hazards.  With that, we become indispensable.

The Top 3 Innovations in Analytics I’ve Seen

The three most important analytic innovations I’ve seen in three decades of extracting useful information from data have to do with: Ensemble models, Target Shuffling, and Cognitive Biases.  Ensembles are sets of competing models that can combine to be more accurate than the best of their components.  They seem to defy the Occam’s Razor tradeoff between complexity and accuracy, yet have led to a new understanding of simplicity.  Target Shuffling is a resampling method that corrects for “p-hacking” or the “vast search effect”, where spurious correlations are often uncovered by modern methods’ ability to try millions of hypotheses.  Target shuffling reveals the true significance of a model, accurately assessing its out-of-sample performance.  Lastly, the increasing understanding of our own Cognitive Biases, and how deeply flawed our reasoning can be, helps reveal how vital projects can be doomed unless we seek out -- and heed -- constructive critique from outside.
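
For readers who want to see the ensemble idea concretely, here is a minimal sketch (not from the talk itself; it assumes scikit-learn and fabricated data) in which a soft-voting ensemble of three competing models is scored against each member alone:

```python
# Hypothetical sketch of a simple averaging ensemble of competing models.
# Data and model choices are illustrative, not from the talk.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

members = [
    ("logistic", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# Score each competing model alone, then the soft-voting combination.
for name, model in members:
    print(name, cross_val_score(model, X, y, cv=5).mean())

ensemble = VotingClassifier(estimators=members, voting="soft")
print("ensemble", cross_val_score(ensemble, X, y, cv=5).mean())
```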

Luck, Skill, or Torture?  How Target Shuffling Can Tell

When you mine past data and find a pattern or quantitative model, to what degree is it real, and to what degree is it chance? Ancient Statistics geniuses devised formulas to answer this for special-case scenarios. Yet, those only apply to handmade analyses where a few hypotheses are considered. But modern predictive analytic algorithms are hypothesis-generating machines, capable of testing millions of "ideas." The best results stumbled over in their vast searches have a great chance of being spurious, leading to failing models molded to noise. The good news is an antidote exists! I will reveal the simple permutation technique of “Target Shuffling”, and illustrate how it has helped in real-world projects - in the stock market, medical research, gas exploration, and baseball.
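
As a rough illustration of the permutation idea (an assumption-laden sketch using scikit-learn and synthetic data, not any of the real projects mentioned), target shuffling can be run by refitting the same model on many randomly shuffled copies of the target and asking how often chance alone matches the real result:

```python
# Hypothetical sketch of target shuffling: permute the target many times,
# refit, and compare chance performance against the real model's score.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))            # many candidate predictors
y = 0.5 * X[:, 0] + rng.normal(size=500)  # one real signal plus noise

model = LinearRegression()
real_score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

shuffled_scores = []
for _ in range(200):
    y_shuffled = rng.permutation(y)       # break any real X-y relationship
    shuffled_scores.append(
        cross_val_score(model, X, y_shuffled, cv=5, scoring="r2").mean()
    )

# Fraction of shuffled runs doing as well as the real model:
# an empirical p-value for "could this result be luck?"
p_value = np.mean(np.array(shuffled_scores) >= real_score)
print(f"real R^2 = {real_score:.3f}, empirical p-value = {p_value:.3f}")
```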

What to Optimize? The Heart of Every Analytics Problem

Every analytics challenge reduces, at its technical core, to optimizing a metric. Product recommendation engines push items to maximize a customer's purchases; fraud detection algorithms flag transactions to minimize losses; and so forth. As modeling and classification (optimization) algorithms improve over time, one could imagine obtaining a solution merely by defining the guiding metric. But are our tools that good? More importantly, are we aiming them in the right direction? I think, too often, the answer is no. I'll argue for clear thinking about what exactly it is we ask our computer assistant to do for us, and recount some illustrative war stories. (Analytic heresy guaranteed.)

Top 5 Technical Tricks to Try when Trapped

There's no better source for tricks of the analytics trade than Dr. John Elder, the established industry leader renowned as an acclaimed training workshop instructor and author -- and well-known for his "Top 10 Data Mining Mistakes" and advanced methods like Target Shuffling. In this webcast, Dr. Elder, who is the Founder of Elder Research, North America's largest pure play consultancy in predictive analytics, will cover his Top Five methods for boosting your practice beyond barriers and gaining stronger results.

Doing Space-Age Analytics with Our Hunter-Gatherer Brains 

Predictive Analytics is so powerful and so useful - everywhere - we are astonished that its widespread adoption has taken so long. Its modest risk and phenomenal return should lead rational actors to cooperatively pool technical and domain expertise to tweak production processes to the benefit of all. And yet, most early projects fail to be implemented - felled by fear, pride, and ignorance.

But we can anticipate those foes! Recall that success requires solving three serious challenges: 1) Convincing experts that their ways can be improved, 2) Discovering new breakthroughs, and 3) Getting front-line users to completely change the way they work. No wonder there is resistance at every stage! 

John argues that it's helpful to have a mental model of the human brain as optimized not for success in our modern life of safety and abundance, but for survival within a small tribal society. With this model, we can better anticipate - and escape - the traps that we idealistic techno-nerds tend to blunder into as we try to bring life-changing fire into the tribal circle.

Can Machines Think?

Data Science has been able to tremendously improve decision accuracy and productivity through capturing patterns from the past to predict the future.  It can be said to learn general principles from experience.  But isn’t this humankind’s greatest distinctive?  As machines master complex tasks once thought to be beyond automation, can they be said to think?  (Eventually, will they take over?  Merge with us?  Be awesome sidekicks, or eliminate us?)  And, meanwhile (setting aside for a moment frets about extinction), what meaningful work will be left for us to do? Come hear musings - dark and light - on the (near) future of life with our creations.

How to Tell if Your Market Timing System Will Work

A New Measure of Model Quality

The most widely used measure of investment performance is the Sharpe ratio, which is simply the “excess” return an investment has achieved (over a “risk-free” rate of return — say, US Treasuries), divided by the investment’s volatility.  It enables different instruments to be compared fairly, and allows one to create an efficient frontier of optimal alternatives at different risk levels.  Yet, the Sharpe ratio really reveals the quality of your returns and not the quality of your strategy.  For instance, just buying and holding bonds — not a novel system — has had a Sharpe > 1.0 for some time.  Quel genius!
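
For concreteness, here is a small sketch of the ratio as described above (the monthly-return series and risk-free rate are made-up illustrations, not real instruments):

```python
# Minimal sketch of the Sharpe ratio: annualized excess return over a
# risk-free rate, divided by the volatility of those excess returns.
import numpy as np

def sharpe_ratio(monthly_returns, risk_free_annual=0.03):
    """Annualized Sharpe ratio from a series of monthly returns."""
    monthly_rf = risk_free_annual / 12.0
    excess = np.asarray(monthly_returns) - monthly_rf
    return np.sqrt(12.0) * excess.mean() / excess.std(ddof=1)

rng = np.random.default_rng(0)
fake_returns = rng.normal(loc=0.007, scale=0.03, size=120)  # 10 years, monthly
print(f"Sharpe ratio: {sharpe_ratio(fake_returns):.2f}")
```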

I will describe a measure I use to evaluate the quality of market timing systems.  It evaluates their information quality, taking into account not only return and volatility, but also the trend of the market being traded and the system's exposure (% in the market vs. out).  Most importantly, the new metric is a better predictor of which timing systems will succeed in the only place that matters:  tomorrow.

Workshop: The Best and the Worst of Predictive Analytics

Predictive Modeling Methods and Common Data Mining Mistakes

Predictive analytics has proven capable of enormous returns across industries – but, with so many core methods for predictive modeling, there are some tough questions that need answering:

  • How do you pick the right one to deliver the greatest impact for your business, as applied over your data?
  • What are the best practices along the way?
  • And how do you avoid the most treacherous pitfalls?

This one-day session surveys standard and advanced methods for predictive modeling. Dr. Elder will describe the key inner workings of leading algorithms, demonstrate their performance with business case studies, compare their merits, and show you how to pick the method and tool best suited to each predictive analytics project. Methods covered include classical regression, decision trees, neural networks, ensemble methods, target shuffling and more. The key to successfully leveraging these methods is to avoid “worst practices”. It's all too easy to go too far in one's analysis and “torture the data until it confesses” or otherwise doom predictive models to fail where they really matter: on new situations.

If you'd like to become a practitioner of predictive analytics – or if you already are and would like to hone your knowledge across methods and best practices – this workshop is for you!

What you will learn:

  • The tremendous value of learning from data
  • How to create valuable predictive models for your business
  • Best Practices by seeing their flip side: Worst Practices

Videos of select keynotes by John Elder:

How Target Shuffling Can Tell if What your Data Says is Real

The Power of Predictive Analytics

The Data Science Revolution in Industry


Big Data & Analytics: What's Here, What's Coming, and What Do I Do?

Data is transforming our society as more information is collected every day. Companies are starting to harness the power of data and apply it to their existing problems and procedures. But what if you decided to change how you do business now that you have more data? This talk describes new possibilities in the data analytics world that could fundamentally change how companies, and even entire business sectors, interact with consumers in the new data-driven age.

Credit Models Are Winning and I'm Keeping Score

Classification scorecards are a powerful way to make predictions because the techniques used in the banking industry emphasize interpretability, predictive power, and ease of deployment. The banking industry has long used credit scoring to determine credit risk—the likelihood a particular loan will be paid back. A scorecard is a common way of displaying the patterns found in a classification model -- typically a logistic regression model -- but to be useful its results must be easy to interpret. The main goal of a credit score and scorecard is to provide a clear and intuitive way of presenting regression model results. This talk briefly discusses what scorecard analysis is and how it can be applied to score almost anything.
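
As a hedged sketch of the scaling idea behind scorecards (the data, base points, and "points to double the odds" convention here are illustrative assumptions, not the talk's own case study), a fitted logistic regression's log-odds can be mapped onto a familiar point scale:

```python
# Hypothetical sketch: convert logistic-regression log-odds into scorecard
# points, where a fixed number of points (PDO) doubles the odds of "good".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=6, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)   # class 1 treated as "good"

# Illustrative scaling convention: 600 points at 50:1 odds, 20 points to double the odds.
pdo, base_points, base_odds = 20.0, 600.0, 50.0
factor = pdo / np.log(2.0)
offset = base_points - factor * np.log(base_odds)

log_odds = model.decision_function(X)   # linear predictor = log-odds of "good"
scores = offset + factor * log_odds
print(f"score range: {scores.min():.0f} to {scores.max():.0f}")
```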

How Good Is Your Forecast?

When predicting across time, the typical methods of evaluating predictions no longer hold. It is not appropriate to take a random hold-out sample of observations from the data set, or even to use a typical k-fold cross-validation structure. Even newer methods of evaluating predictions on cross-sectional data, like target shuffling, should not simply be applied unchanged to data with an inherent temporal structure. How, then, can we determine whether we have a good forecast? This talk highlights the advantages and disadvantages of techniques for evaluating predictions when forecasting future observations. It also discusses possible biases arising from the time structure of the data that should be considered.
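
A minimal sketch of one such technique, rolling-origin (out-of-time) evaluation, assuming scikit-learn's TimeSeriesSplit and a synthetic series rather than the talk's own examples:

```python
# Hypothetical sketch of rolling-origin evaluation: always train on the
# past and test on the future, never a random holdout.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
t = np.arange(300)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(size=300)
X = np.column_stack([t, np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])

errors = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    errors.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print(f"mean out-of-time MAE over 5 folds: {np.mean(errors):.2f}")
```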

Optimize My Stock Portfolio!

Investors diversify risk by investing in more than one stock. These stock portfolios are a collection of assets that each have their own inherent risk. If you know the future risk of each of the assets, you can optimize how much of each asset to keep in the portfolio. The real challenge is trying to evaluate the potential future risk of these assets. Different techniques provide different forecasts, which can drastically change the optimal allocation of assets. This talk presents a case study of portfolio optimization in three different scenarios - historical standard deviation estimation, capital asset pricing model (CAPM), and GARCH-based volatility modeling. The structure and results of these three approaches will be discussed.
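
To make the first of these scenarios concrete, here is a hedged sketch of a long-only minimum-variance allocation built from historical covariance estimates (fabricated returns and scipy's general-purpose optimizer; the CAPM and GARCH variants discussed in the talk are not shown):

```python
# Hypothetical sketch of scenario 1 only: minimum-variance weights from
# the sample covariance of past returns, long-only, fully invested.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_assets, n_days = 4, 750
returns = rng.normal(loc=0.0004, scale=0.01, size=(n_days, n_assets))
cov = np.cov(returns, rowvar=False)              # historical covariance estimate

def portfolio_variance(w):
    return w @ cov @ w

result = minimize(
    portfolio_variance,
    x0=np.full(n_assets, 1.0 / n_assets),        # start from equal weights
    bounds=[(0.0, 1.0)] * n_assets,              # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print("minimum-variance weights:", np.round(result.x, 3))
```

Swapping in a CAPM- or GARCH-based covariance estimate would change only the `cov` input, which is what makes the comparison of the three scenarios interesting.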