Individual Conditional Expectation (ICE) is a model-agnostic technique for understanding and interpreting complex machine learning models by visualizing, for each individual observation, how predictions change as a single feature varies.
Machine learning models are becoming increasingly prevalent in various applications, making it essential to understand and interpret their behavior. Individual Conditional Expectation (ICE) plots offer a way to visualize the relationship between features and model predictions, providing insights into how a model relies on specific features. ICE plots are model-agnostic and can be applied to any supervised learning algorithm, making them a valuable tool for practitioners.
Recent research has focused on extending ICE plots to provide more quantitative measures of feature impact, such as ICE feature impact, which can be interpreted similarly to linear regression coefficients. Additionally, researchers have introduced in-distribution variants of ICE feature impact to account for out-of-distribution points and measures to characterize feature impact heterogeneity and non-linearity.
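As a rough sketch of the idea (not the exact estimator defined in the papers above), a slope-based impact score can be computed directly from a set of ICE curves:

```python
import numpy as np

def ice_feature_impact(ice_curves, grid):
    """Mean absolute slope of a set of ICE curves.

    ice_curves: array of shape (n_samples, n_grid), one curve per observation.
    grid: the n_grid feature values at which the curves were evaluated.
    The result is in prediction units per feature unit, so it can be read
    roughly like the magnitude of a linear regression coefficient.
    """
    slopes = np.diff(ice_curves, axis=1) / np.diff(grid)  # per-segment slopes
    return float(np.mean(np.abs(slopes)))
```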
Papers on arXiv have explored various aspects of the technique, including uncovering quantitative feature impact from ICE plots, visualizing statistical learning with ICE plots, and developing new visualization tools based on local feature importance (see Further Reading below). These studies have demonstrated the utility of ICE on real-world data and have contributed to the development of more interpretable machine learning models.
Practical applications of ICE include the following (a short code sketch follows the list):
1. Model debugging: ICE plots can help identify issues with a model's predictions, such as overfitting or unexpected interactions between features.
2. Feature selection: By visualizing the impact of individual features on model predictions, ICE plots can guide the selection of important features for model training.
3. Model explanation: ICE plots can be used to explain the behavior of complex models to non-experts, making it easier to build trust in machine learning systems.
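All three uses start from the same basic plot. The sketch below shows one way to generate ICE curves with scikit-learn's PartialDependenceDisplay; the synthetic dataset and gradient-boosting model are illustrative assumptions only:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One ICE curve per sample for feature 0; kind="both" would also
# overlay the average curve (the classical PDP).
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="individual")
plt.show()
```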
A notable software case study is the R package ICEbox, which accompanies the original ICE paper by Goldstein et al. and provides a suite of tools for generating ICE plots and conducting exploratory analysis. The package has been used in a range of applications to better understand and interpret machine learning models.
In conclusion, Individual Conditional Expectation (ICE) is a valuable technique for understanding and interpreting complex machine learning models. By visualizing the relationship between features and predictions, ICE plots provide insights into model behavior and help practitioners build more interpretable and trustworthy machine learning systems.

Further Reading
1. Andrew Yeh, Anhthy Ngo. Bringing a Ruler Into the Black Box: Uncovering Feature Impact from Individual Conditional Expectation Plots. http://arxiv.org/abs/2109.02724v1
2. Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin. Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. http://arxiv.org/abs/1309.6392v2
3. Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl. Visualizing the Feature Importance for Black Box Models. http://arxiv.org/abs/1804.06620v3

Frequently Asked Questions
What is an Individual Conditional Expectation (ICE) plot?
An Individual Conditional Expectation (ICE) plot is a visualization technique used to understand and interpret complex machine learning models. It displays the relationship between a specific feature and the model's predictions for individual data points. By examining these plots, practitioners can gain insights into how a model relies on specific features, identify issues with model predictions, and guide feature selection for model training.
What is an ICE curve?
An ICE curve is a graphical representation of the relationship between a single feature and the model's predictions for a specific data point. In an ICE plot, multiple ICE curves are displayed together, with each curve representing a different data point. This allows for the visualization of how the model's predictions change as the feature value varies for each individual data point, revealing the impact of the feature on the model's predictions.
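In code, an ICE curve is straightforward to compute: sweep one feature over a grid while holding the other features at their observed values. A minimal sketch, assuming `predict` is a function that maps a 2-D feature array to a vector of predictions:

```python
import numpy as np

def ice_curve(predict, x_row, feature_idx, grid):
    """ICE curve for a single observation.

    Repeats the observation once per grid value, overwrites the chosen
    feature with the grid, and returns one prediction per grid value.
    """
    X_rep = np.tile(np.asarray(x_row, dtype=float), (len(grid), 1))
    X_rep[:, feature_idx] = grid   # vary only the feature of interest
    return predict(X_rep)
```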
What is ICE plot in H2O?
H2O is an open-source machine learning platform that provides various tools and algorithms for data analysis. ICE plots in H2O refer to the implementation of Individual Conditional Expectation plots within the H2O platform. These plots can be generated using H2O's built-in functions, allowing users to visualize the relationship between features and model predictions, and gain insights into the behavior of machine learning models built using H2O.
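A hedged sketch of what this can look like in the H2O-3 Python API. Recent H2O-3 releases expose an `ice_plot` method on trained models, but the exact signature may vary by version, and the file path, response name, and column name below are placeholders:

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
train = h2o.import_file("train.csv")           # placeholder dataset path
model = H2OGradientBoostingEstimator()
model.train(y="target", training_frame=train)  # "target" is a placeholder
model.ice_plot(train, column="feature_1")      # ICE curves for one column
```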
What is a classical Partial Dependence Plot?
A classical Partial Dependence Plot (PDP) is a visualization technique that shows the average effect of a single feature on the model's predictions across all data points. It is similar to ICE plots but focuses on the average impact of a feature rather than individual data points. PDPs help in understanding the global relationship between a feature and the model's predictions, while ICE plots provide more granular insights into the local behavior of the model for each data point.
How do ICE plots differ from Partial Dependence Plots?
ICE plots and Partial Dependence Plots (PDPs) are both visualization techniques used to understand the relationship between features and model predictions. The main difference between them is that ICE plots display the impact of a feature on the model's predictions for individual data points, while PDPs show the average effect of a feature across all data points. ICE plots provide more detailed insights into the local behavior of the model, whereas PDPs focus on the global relationship between a feature and the model's predictions.
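The connection is exact: a classical PDP is the pointwise average of the ICE curves. A toy numpy example also shows what averaging can hide:

```python
import numpy as np

# Two toy ICE curves evaluated at three grid points.
ice_curves = np.array([[1.0, 2.0, 3.0],    # prediction rises for this point
                       [3.0, 2.0, 1.0]])   # prediction falls for this one
pdp = ice_curves.mean(axis=0)              # -> array([2., 2., 2.])
# The flat PDP hides the two opposing individual effects that ICE reveals.
```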
How can ICE plots be used for model debugging?
ICE plots can be used for model debugging by visualizing the relationship between features and model predictions for individual data points. By examining these plots, practitioners can identify issues with the model's predictions, such as overfitting or unexpected interactions between features. This information can then be used to refine the model, improve its performance, and ensure that it is making accurate predictions based on the input features.
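A common debugging device is the centered ICE (c-ICE) plot introduced by Goldstein et al., which anchors every curve at the smallest grid value so that heterogeneous effects show up as curves fanning apart. A minimal sketch:

```python
import numpy as np

def centered_ice(ice_curves):
    """Center each ICE curve at its value for the smallest grid point.

    After centering, all curves start at zero; if they fan out as the
    feature grows, the feature's effect depends on other features,
    which often points to interactions worth investigating.
    """
    return ice_curves - ice_curves[:, [0]]   # subtract each curve's left edge
```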
How do ICE plots help in feature selection?
ICE plots help in feature selection by visualizing the impact of individual features on model predictions. By examining the ICE curves for different features, practitioners can identify which features have a significant impact on the model's predictions and which features have little or no impact. This information can guide the selection of important features for model training, leading to more accurate and interpretable machine learning models.
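As a simple heuristic (an assumption of this sketch rather than a standard estimator), features can be ranked by how much their ICE curves move on average; nearly flat curves suggest the model barely uses the feature:

```python
import numpy as np

def rank_features_by_ice_spread(curves_by_feature):
    """curves_by_feature: dict mapping feature name -> array of shape
    (n_samples, n_grid) of ICE curves for that feature.
    Returns feature names sorted by the mean range of their curves."""
    spread = {name: float(np.mean(c.max(axis=1) - c.min(axis=1)))
              for name, c in curves_by_feature.items()}
    return sorted(spread, key=spread.get, reverse=True)
```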
How can ICE plots be used to explain complex models to non-experts?
ICE plots can be used to explain complex models to non-experts by providing a visual representation of the relationship between features and model predictions. By displaying how the model's predictions change as the feature value varies for individual data points, ICE plots make it easier for non-experts to understand the behavior of the model and build trust in machine learning systems. This can be particularly useful when presenting the results of machine learning models to stakeholders who may not have a deep understanding of the underlying algorithms.