Deployment isn’t the finish line for a machine learning (ML) model—it’s just the beginning. In this session from posit::conf 2025, Principal Data Scientist Tom Shafer shares what it really takes to keep ML models running, relevant, and valuable long after they go live.
Drawing on real-world experience deploying and maintaining models in production, Tom shares practical strategies for improving model durability through better governance. While the techniques are grounded in R, many also apply in Python. From building models to writing testable code, he explains why it's important to plan for change from the start.
You’ll learn:
- The importance of thinking broadly about model governance
- How R packages, testing frameworks, and intuitive method design can support maintainability
- How writing modular, testable code makes models easier to monitor and update over time
Whether you’re maintaining your first model or scaling mature pipelines, this talk offers foundational practices to help ensure your models stay production-ready for the long haul.