Machine learning algorithms that are both interpretable and accurate are essential in applications such as medicine, where errors can have dire consequences. Unfortunately, there is currently a tradeoff between accuracy and interpretability among state-of-the-art machine learning methods. Decision trees and linear models are interpretable and are therefore used extensively throughout medicine. They are, however, consistently outperformed in accuracy by other, less interpretable algorithms, such as ensemble methods and neural networks. Here we present three algorithms that aim to address the tradeoff between interpretability and accuracy: 1) the Additive Tree (AddTree), a novel framework for constructing decision trees with the same architecture as CART but with improved accuracy; 2) the Conditional Super Learner (CSL), an algorithm that selects the best candidate model from a library conditional on the covariates; and 3) Expert Augmented Machine Learning (EAML), an algorithm that automatically extracts clinical priors and combines them with machine-learned models to detect hidden confounders and build robust models with significantly less data. Extensive empirical evidence illustrating the advantages and disadvantages of these three algorithms will be presented, and theoretical results will also be highlighted. Finally, we will use the prediction of hospital mortality for Intensive Care Unit (ICU) patients to illustrate the points discussed throughout the presentation.
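
To make the Conditional Super Learner idea concrete, the following minimal sketch (my illustration, not the authors' exact algorithm) fits a small library of candidate models, labels each training point with the candidate that errs least on it, trains a shallow decision tree to map covariates to that choice, and routes each test point to its selected model:

```python
# Illustrative sketch of a covariate-conditional model selector.
# Assumptions: a two-model library and a depth-2 tree selector are my
# choices for brevity; the actual CSL procedure may differ.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_friedman1(n_samples=600, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Fit a library of candidate models.
library = [
    LinearRegression().fit(X_tr, y_tr),
    RandomForestRegressor(random_state=0).fit(X_tr, y_tr),
]

# 2) For each training point, find which candidate has the smaller error.
sq_errors = np.stack([(m.predict(X_tr) - y_tr) ** 2 for m in library])
best = sq_errors.argmin(axis=0)

# 3) Fit an interpretable selector: covariates -> index of best candidate.
selector = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, best)

# 4) Predict by routing each test point to its selected candidate.
choice = selector.predict(X_te)
preds = np.choose(choice, [m.predict(X_te) for m in library])
print("test MSE:", np.mean((preds - y_te) ** 2))
```

The shallow selector tree keeps the model assignment itself interpretable: one can read off which regions of covariate space are handled by which library member.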