Clearly, opacity can hinder acceptance, and therefore deployment, of an analytical model. Opaque models are liable to be misconstrued or misapplied, if they are even adopted at all by a potentially skeptical audience. With the explosion of elaborate data science algorithms in the last few decades, complexity has become the new opacity.
This paper explores the broader costs of complexity, along with some heuristics for deciding how good is good enough. It also poses a few questions every data scientist should ask at the outset of a project, so that the model is designed around how it will be adopted and consumed.