{"id":3766,"date":"2024-10-02T15:21:49","date_gmt":"2024-10-02T15:21:49","guid":{"rendered":"https:\/\/techhub.saworks.io\/?p=3766"},"modified":"2025-06-20T14:58:51","modified_gmt":"2025-06-20T14:58:51","slug":"combler-les-lacunes-un-guide-comparatif-des-techniques-dimputation-en-machine-learning","status":"publish","type":"post","link":"https:\/\/techhub.saworks.io\/fr\/combler-les-lacunes-un-guide-comparatif-des-techniques-dimputation-en-machine-learning\/","title":{"rendered":"Combler les lacunes : Un guide comparatif des techniques d&rsquo;imputation en Machine Learning"},"content":{"rendered":"\n<p>Dans notre pr\u00e9c\u00e9dente analyse des mod\u00e8les de r\u00e9gression p\u00e9nalis\u00e9e (comme&nbsp;<strong>Lasso<\/strong>,&nbsp;<strong>Ridge<\/strong>&nbsp;et&nbsp;<strong>ElasticNet<\/strong>), nous avons montr\u00e9 leur efficacit\u00e9 pour g\u00e9rer la&nbsp;<strong>multicolin\u00e9arit\u00e9<\/strong>, permettant d&rsquo;exploiter un \u00e9ventail plus large de caract\u00e9ristiques et d&rsquo;am\u00e9liorer les performances des mod\u00e8les.<\/p>\n\n\n\n<p>Nous abordons aujourd\u2019hui un autre aspect cl\u00e9 du pr\u00e9traitement des donn\u00e9es :&nbsp;<strong>la gestion des valeurs manquantes<\/strong>. Ces lacunes peuvent grandement compromettre la&nbsp;<strong>pr\u00e9cision<\/strong>&nbsp;et la&nbsp;<strong>fiabilit\u00e9<\/strong>&nbsp;des mod\u00e8les si elles ne sont pas trait\u00e9es correctement.<\/p>\n\n\n\n<p>Cet article explore diff\u00e9rentes&nbsp;<strong>strat\u00e9gies d&rsquo;imputation<\/strong>&nbsp;pour traiter les donn\u00e9es manquantes et les int\u00e9grer \u00e0 notre pipeline. 
Cette approche nous permettra d\u2019affiner encore notre&nbsp;<strong>pr\u00e9cision pr\u00e9dictive<\/strong>&nbsp;en r\u00e9int\u00e9grant des caract\u00e9ristiques pr\u00e9c\u00e9demment exclues, tirant ainsi pleinement parti de la richesse de notre jeu de donn\u00e9es.<\/p>\n\n\n\n<p><strong>Commen\u00e7ons sans plus attendre.<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"684\" data-src=\"https:\/\/techhub.saworks.io\/wp-content\/uploads\/2024\/10\/Capture-decran-2025-06-20-143629-1024x684.png\" alt=\"\" class=\"wp-image-3769 lazyload\" data-srcset=\"https:\/\/techhub.saworks.io\/wp-content\/uploads\/2024\/10\/Capture-decran-2025-06-20-143629-1024x684.png 1024w, https:\/\/techhub.saworks.io\/wp-content\/uploads\/2024\/10\/Capture-decran-2025-06-20-143629-300x200.png 300w, https:\/\/techhub.saworks.io\/wp-content\/uploads\/2024\/10\/Capture-decran-2025-06-20-143629-768x513.png 768w, https:\/\/techhub.saworks.io\/wp-content\/uploads\/2024\/10\/Capture-decran-2025-06-20-143629.png 1204w\" data-sizes=\"(max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/684;\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Vue d&rsquo;ensemble<\/strong><\/h3>\n\n\n\n<p>Cet article est divis\u00e9 en trois parties :<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Reconstruction de l&rsquo;imputation manuelle avec SimpleImputer<\/strong><\/li>\n\n\n\n<li><strong>Perfectionnement des techniques d&rsquo;imputation avec IterativeImputer<\/strong><\/li>\n\n\n\n<li><strong>Exploitation des relations de voisinage via l&rsquo;imputation KNN<\/strong><\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Reconstruction de l&rsquo;imputation manuelle avec 
SimpleImputer<\/strong><\/h3>\n\n\n\n<p>In this first part, we revisit and <strong>automate<\/strong> our manual imputation techniques using <code>SimpleImputer<\/code>.<\/p>\n\n\n\n<p>In our earlier analysis of the <strong>Ames Housing<\/strong> dataset, we explored manual strategies tailored to each data type:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Categorical variables<\/strong> (e.g. <code>PoolQC<\/code>): missing values often indicated the absence of a feature (here, the absence of a pool). We replaced them with <code>\"None\"<\/code> to preserve data integrity.<\/li>\n\n\n\n<li><strong>Numeric variables<\/strong>: imputation with the <strong>mean<\/strong> or other statistical methods.<\/li>\n<\/ul>\n\n\n\n<p>Now, with <code>SimpleImputer<\/code> from <strong>scikit-learn<\/strong>, we <strong>automate<\/strong> these processes to gain:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Reproducibility<\/strong><\/li>\n\n\n\n<li><strong>Efficiency<\/strong><\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code># Import the necessary libraries\nimport pandas as pd\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder, FunctionTransformer\nfrom sklearn.linear_model import Lasso, Ridge, ElasticNet\nfrom sklearn.model_selection import cross_val_score\n\n# Load the dataset\nAmes = pd.read_csv('Ames.csv')\n\n# Exclude 'PID' and 'SalePrice' from features and specifically handle the 'Electrical' column\nnumeric_features = Ames.select_dtypes(include=['int64', 'float64']).drop(columns=['PID', 'SalePrice']).columns\ncategorical_features = Ames.select_dtypes(include=['object']).columns.difference(['Electrical'])\nelectrical_feature = ['Electrical']\n\n# Helper function to fill 'None' for missing categorical data\ndef fill_none(X):\n    return X.fillna('None')\n\n# Pipeline for numeric features: impute missing values, then scale\nnumeric_transformer = Pipeline(steps=[\n    ('impute_mean', SimpleImputer(strategy='mean')),\n    ('scaler', StandardScaler())\n])\n\n# Pipeline for general categorical features: fill missing values with 'None', then one-hot encode\ncategorical_transformer = Pipeline(steps=[\n    ('fill_none', FunctionTransformer(fill_none, validate=False)),\n    ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Specific transformer for 'Electrical' using the mode for imputation\nelectrical_transformer = Pipeline(steps=[\n    ('impute_electrical', SimpleImputer(strategy='most_frequent')),\n    ('onehot_electrical', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Combined preprocessor for numeric, general categorical, and electrical data\npreprocessor = ColumnTransformer(transformers=[\n    ('num', numeric_transformer, numeric_features),\n    ('cat', categorical_transformer, categorical_features),\n    ('electrical', electrical_transformer, electrical_feature)\n])\n\n# Target variable and full feature matrix\ny = Ames['SalePrice']\nX = Ames[numeric_features.tolist() + categorical_features.tolist() + electrical_feature]\n\n# Define the model pipelines with preprocessor and regressor\nmodels = {\n    'Lasso': Lasso(max_iter=20000),\n    'Ridge': Ridge(),\n    'ElasticNet': ElasticNet()\n}\n\nresults = {}\nfor name, model in models.items():\n    pipeline = Pipeline(steps=[\n        ('preprocessor', preprocessor),\n        ('regressor', model)\n    ])\n    # Perform cross-validation\n    scores = cross_val_score(pipeline, X, y)\n    results[name] = round(scores.mean(), 4)\n\n# Output the cross-validation scores\nprint('Cross-validation scores with Simple Imputer:', results)<\/code><\/pre>\n\n\n\n<p>The results of this implementation are shown below, illustrating how <strong>simple imputation<\/strong> affects model accuracy and establishing a <strong>baseline<\/strong> for the more sophisticated methods covered later:<\/p>\n\n\n\n<p>Cross-validation scores with Simple Imputer: {'Lasso': 0.9138, 'Ridge': 0.9134, 'ElasticNet': 0.8752}<\/p>\n\n\n\n<p>Replacing the manual methods with a <strong>pipeline approach<\/strong> significantly improves data handling:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Efficiency and fewer errors<\/strong>\n<ul class=\"wp-block-list\">\n<li>Manual imputation is time-consuming and error-prone, especially with complex data.<\/li>\n\n\n\n<li>The pipeline <strong>automates<\/strong> these steps, guaranteeing <strong>consistent<\/strong> transformations and minimizing the risk of errors.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Reusability and integration<\/strong>\n<ul class=\"wp-block-list\">\n<li>Manual methods are hard to reuse.<\/li>\n\n\n\n<li>Pipelines <strong>integrate<\/strong> all of the preprocessing and modeling, making them <strong>easily reusable<\/strong> and compatible with 
model training.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Prevention of data leakage<\/strong>\n<ul class=\"wp-block-list\">\n<li>Manual imputation can inadvertently use test data when computing fill values (e.g., a mean computed over the entire dataset).<\/li>\n\n\n\n<li>Pipelines eliminate this risk through the <strong>fit\/transform<\/strong> methodology, relying <strong>only on the training set<\/strong>.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Flexibility Demonstrated with SimpleImputer<\/strong><\/h3>\n\n\n\n<p>This structure, illustrated here with <code>SimpleImputer<\/code>, offers a <strong>modular<\/strong> approach to preprocessing that adapts to a wide range of imputation strategies. In the following sections, we explore more advanced techniques and their impact on model performance.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Refining Imputation Techniques with IterativeImputer<\/strong><\/h3>\n\n\n\n<p>In this second part, we test <code>IterativeImputer<\/code>, a <strong>more sophisticated<\/strong> imputation method that models each feature with missing values as a <strong>function of the other features<\/strong> (a <em>round-robin<\/em> approach).<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Contrast with simple methods<\/strong><\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Basic approaches (e.g., mean\/median) apply a single <strong>global<\/strong> statistic.<\/li>\n\n\n\n<li><code>IterativeImputer<\/code> uses <strong>regression<\/strong>: each incomplete feature becomes a <strong>dependent variable<\/strong> predicted from the others.<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code># Import the necessary libraries\nimport pandas as pd\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.experimental import enable_iterative_imputer  # This line is needed for IterativeImputer\nfrom sklearn.impute import SimpleImputer, IterativeImputer\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder, FunctionTransformer\nfrom sklearn.linear_model import Lasso, Ridge, ElasticNet\nfrom sklearn.model_selection import cross_val_score\n\n# Load the dataset\nAmes = pd.read_csv('Ames.csv')\n\n# Exclude 'PID' and 'SalePrice' from features and specifically handle the 'Electrical' column\nnumeric_features = Ames.select_dtypes(include=['int64', 'float64']).drop(columns=['PID', 'SalePrice']).columns\ncategorical_features = Ames.select_dtypes(include=['object']).columns.difference(['Electrical'])\nelectrical_feature = ['Electrical']\n\n# Helper function to fill 'None' for missing categorical data\ndef fill_none(X):\n    return X.fillna('None')\n\n# Pipeline for numeric features: iterative imputation, then scale\nnumeric_transformer_advanced = Pipeline(steps=[\n    ('impute_iterative', IterativeImputer(random_state=42)),\n    ('scaler', StandardScaler())\n])\n\n# Pipeline for general categorical features: fill missing values with 'None', then one-hot encode\ncategorical_transformer = Pipeline(steps=[\n    ('fill_none', FunctionTransformer(fill_none, validate=False)),\n    ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Specific transformer for 'Electrical' using the mode for imputation\nelectrical_transformer = Pipeline(steps=[\n    ('impute_electrical', SimpleImputer(strategy='most_frequent')),\n    ('onehot_electrical', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Combined preprocessor for numeric, general categorical, and electrical data\npreprocessor_advanced = ColumnTransformer(transformers=[\n    ('num', numeric_transformer_advanced, numeric_features),\n    ('cat', categorical_transformer, categorical_features),\n    ('electrical', electrical_transformer, electrical_feature)\n])\n\n# Target variable and full feature matrix\ny = Ames['SalePrice']\nX = Ames[numeric_features.tolist() + categorical_features.tolist() + electrical_feature]\n\n# Define the model pipelines with preprocessor and regressor\nmodels = {\n    'Lasso': Lasso(max_iter=20000),\n    'Ridge': Ridge(),\n    'ElasticNet': ElasticNet()\n}\n\nresults_advanced = {}\nfor name, model in models.items():\n    pipeline = Pipeline(steps=[\n        ('preprocessor', preprocessor_advanced),\n        ('regressor', model)\n    ])\n    # Perform cross-validation\n    scores = cross_val_score(pipeline, X, y)\n    results_advanced[name] = round(scores.mean(), 4)\n\n# Output the cross-validation scores for advanced imputation\nprint('Cross-validation scores with Iterative Imputer:', results_advanced)<\/code><\/pre>\n\n\n\n<p>Although the accuracy gains from <strong>IterativeImputer<\/strong> over SimpleImputer are modest, they highlight a crucial aspect of data imputation: the complexity and interdependencies present in a dataset do not automatically guarantee a 
dramatic improvement in performance, even with more sophisticated methods.<\/p>\n\n\n\n<p>Cross-validation scores with Iterative Imputer: {'Lasso': 0.9142, 'Ridge': 0.9135, 'ElasticNet': 0.8746}<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Leveraging Neighborhood Relationships with KNN Imputation<\/strong><\/h3>\n\n\n\n<p>In this final part, we explore <strong>KNNImputer<\/strong>, a method that imputes missing values using the <strong>mean of the k nearest neighbors<\/strong> found in the training set.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Theoretical foundation<\/strong><\/h4>\n\n\n\n<p>This approach rests on the assumption that <strong>similar data points<\/strong> lie close together in feature space. It proves particularly effective on datasets where that assumption holds.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Preferred use cases<\/strong><\/h4>\n\n\n\n<p>KNN imputation is most powerful in scenarios where data points that <strong>share similar characteristics<\/strong> are also likely to have <strong>similar responses or attributes<\/strong>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Import the necessary libraries\nimport pandas as pd\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer, KNNImputer\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder, FunctionTransformer\nfrom sklearn.linear_model import Lasso, Ridge, ElasticNet\nfrom sklearn.model_selection import cross_val_score\n\n# Load the dataset\nAmes = pd.read_csv('Ames.csv')\n\n# Exclude 'PID' and 'SalePrice' from features and specifically handle the 'Electrical' column\nnumeric_features = Ames.select_dtypes(include=['int64', 'float64']).drop(columns=['PID', 'SalePrice']).columns\ncategorical_features = Ames.select_dtypes(include=['object']).columns.difference(['Electrical'])\nelectrical_feature = ['Electrical']\n\n# Helper function to fill 'None' for missing categorical data\ndef fill_none(X):\n    return X.fillna('None')\n\n# Pipeline for numeric features: k-nearest-neighbors imputation, then scale\nnumeric_transformer_knn = Pipeline(steps=[\n    ('impute_knn', KNNImputer(n_neighbors=5)),\n    ('scaler', StandardScaler())\n])\n\n# Pipeline for general categorical features: fill missing values with 'None', then one-hot encode\ncategorical_transformer = Pipeline(steps=[\n    ('fill_none', FunctionTransformer(fill_none, validate=False)),\n    ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Specific transformer for 'Electrical' using the mode for imputation\nelectrical_transformer = Pipeline(steps=[\n    ('impute_electrical', SimpleImputer(strategy='most_frequent')),\n    ('onehot_electrical', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Combined preprocessor for numeric, general categorical, and electrical data\npreprocessor_knn = ColumnTransformer(transformers=[\n    ('num', numeric_transformer_knn, numeric_features),\n    ('cat', categorical_transformer, categorical_features),\n    ('electrical', electrical_transformer, electrical_feature)\n])\n\n# Target variable and full feature matrix\ny = Ames['SalePrice']\nX = Ames[numeric_features.tolist() + categorical_features.tolist() + electrical_feature]\n\n# Define the model pipelines with preprocessor and regressor\nmodels = {\n    'Lasso': Lasso(max_iter=20000),\n    'Ridge': Ridge(),\n    'ElasticNet': ElasticNet()\n}\n\nresults_knn = {}\nfor name, model in models.items():\n    pipeline = Pipeline(steps=[\n        ('preprocessor', preprocessor_knn),\n        ('regressor', model)\n    ])\n    # Perform cross-validation\n    scores = cross_val_score(pipeline, X, y)\n    results_knn[name] = round(scores.mean(), 4)\n\n# Output the cross-validation scores for KNN imputation\nprint('Cross-validation scores with KNN Imputer:', results_knn)<\/code><\/pre>\n\n\n\n<p>The results obtained with KNNImputer show a <strong>very slight improvement<\/strong> over those of SimpleImputer and IterativeImputer:<\/p>\n\n\n\n<p>Cross-validation scores with KNN Imputer: {'Lasso': 0.9146, 'Ridge': 0.9138, 'ElasticNet': 0.8748}<\/p>\n\n\n\n<p>This slight improvement suggests that, for some datasets, KNNImputer&rsquo;s <strong>proximity-based<\/strong> approach, which accounts for <strong>the similarity between data points<\/strong>, can be more effective at:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Capturing<\/strong> the underlying structure of the data<\/li>\n\n\n\n<li><strong>Preserving<\/strong> its intrinsic relationships<\/li>\n\n\n\n<li>Potentially <strong>improving the accuracy<\/strong> of predictions<\/li>\n<\/ul>\n\n\n\n<p>Retrieved from: <a href=\"https:\/\/machinelearningmastery.com\/filling-the-gaps-a-comparative-guide-to-imputation-techniques-in-machine-learning\/\"> 
https:\/\/machinelearningmastery.com\/filling-the-gaps-a-comparative-guide-to-imputation-techniques-in-machine-learning\/<\/a><\/p>\n\n\n\n<p>Author: Vinod Chugani<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In our previous look at penalized regression models (such as Lasso, Ridge, and ElasticNet), we showed how effective they are at handling multicollinearity, letting us exploit a broader [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":3767,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[109],"tags":[110],"class_list":["post-3766","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ia-aa","tag-ia"],"_links":{"self":[{"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/posts\/3766"}],"collection":[{"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/comments?post=3766"}],"version-history":[{"count":0,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/posts\/3766\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/media\/3767"}],"wp:attachment":[{"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/media?parent=3766"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/categories?post=3766"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techhub.saworks.io\/fr\/wp-json\/wp\/v2\/tags?post=3766"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true
}]}}
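
Appendix: the behavioral difference between the three imputers discussed in the article can be seen on a tiny two-column frame. This is a minimal sketch on invented toy data (the `area`/`rooms` values below are hypothetical and not from the Ames dataset); it only illustrates how each imputer fills the same missing cell.

```python
# Minimal sketch: how each imputer fills the same missing cell.
# Toy data (hypothetical): 'area' grows linearly with 'rooms';
# the last 'area' value is missing.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer, KNNImputer

X = pd.DataFrame({
    'area':  [50.0, 60.0, 70.0, 80.0, np.nan],
    'rooms': [2.0, 3.0, 4.0, 5.0, 6.0],
})

for name, imputer in [
    ('SimpleImputer (mean)', SimpleImputer(strategy='mean')),
    ('IterativeImputer', IterativeImputer(random_state=42)),
    ('KNNImputer (k=2)', KNNImputer(n_neighbors=2)),
]:
    filled = imputer.fit_transform(X)
    print(f'{name}: {filled[-1, 0]:.2f}')

# SimpleImputer ignores 'rooms' and fills the column mean (65.00);
# KNNImputer averages the two rows nearest in 'rooms' (75.00);
# IterativeImputer regresses 'area' on 'rooms' and extrapolates near 90.
```

Only the iterative imputer exploits the linear relationship between the columns, which is exactly the round-robin behavior described in the second part of the article.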