This page lists articles in peer-reviewed scientific journals, conference papers, and preprints. [BibTeX]

Journals: 27   Conferences: 16   Preprints: 9

Selected

Interpretable ensembles of hyper-rectangles as base models (2023)
A new computationally simple approach for implementing neural networks with output hard constraints (2023)
Neural Attention Forests: Transformer-Based Forest Improvement (2023)
Multi-attention multiple instance learning (2022)
Interpretable machine learning with an ensemble of gradient boosting machines (2021)
A weighted random survival forest (2019)

All

Journals

  1. BENK: The Beran Estimator with Neural Kernels for Estimating the Heterogeneous Treatment Effect (2024) [DOI]   

  2. LARF: Two-Level Attention-Based Random Forests with a Mixture of Contamination Models (2023) [DOI]   

  3. Attention-like feature explanation for tabular data (2023) [DOI]  [GitHub] 

  4. Heterogeneous Treatment Effect with Trained Kernels of the Nadaraya–Watson Regression (2023) [DOI]   

  5. Attention and self-attention in random forests (2023) [DOI]  [GitHub]  [arXiv]

  6. Multiple Instance Learning with Trainable Soft Decision Tree Ensembles (2023) [DOI]   

  7. Attention-Based Random Forests and the Imprecise Pari-Mutuel Model (2023) [DOI]

  8. Interpretable ensembles of hyper-rectangles as base models (2023) [DOI]  [GitHub]  [arXiv]

  9. Flexible deep forest classifier with multi-head attention (2023) [DOI]  [GitHub] 

  10. A new computationally simple approach for implementing neural networks with output hard constraints (2023) [DOI]  [GitHub]  [arXiv]

  11. Random Survival Forests Incorporated By The Nadaraya-Watson Regression (2022) [DOI]   

  12. Improved Anomaly Detection by Using the Attention-Based Isolation Forest (2022) [DOI]   

  13. An Extension of the Neural Additive Model for Uncertainty Explanation of Machine Learning Survival Models (2022) [DOI]   

  14. Multi-attention multiple instance learning (2022) [DOI]  [GitHub]  [arXiv]

  15. SurvNAM: The machine learning survival model explanation (2022) [DOI]   

  16. Ensembles of random SHAPs (2022) [DOI]   

  17. Attention-based random forest and contamination model (2022) [DOI]   

  18. Uncertainty Interpretation of the Machine Learning Survival Model Predictions (2021) [DOI]   

  19. Deep Gradient Boosting For Regression Problems (2021) [DOI]   

  20. A Generalized Stacking for Implementing Ensembles of Gradient Boosting Machines (2021) [DOI]   

  21. Counterfactual explanation of machine learning survival models (2021) [DOI]   

  22. Interpretable machine learning with an ensemble of gradient boosting machines (2021) [DOI]  [GitHub]  [arXiv]

  23. An Adaptive Weighted Deep Survival Forest (2020) [DOI]   

  24. Estimation of Personalized Heterogeneous Treatment Effects Using Concatenation and Augmentation of Feature Vectors (2020) [DOI]   

  25. A new adaptive weighted deep forest and its modifications (2020) [DOI]   

  26. Deep Forest as a framework for a new class of machine-learning models (2019) [DOI]   

  27. A weighted random survival forest (2019) [DOI]  [GitHub] 

Conferences

  1. Robust Models of Distance Metric Learning by Interval-Valued Training Data (2023) [DOI]   

  2. Modifications of SHAP for Local Explanation of Function-Valued Predictions Using the Divergence Measures (2023) [DOI]   

  3. GBMILs: Gradient Boosting Models for Multiple Instance Learning (2023) [DOI]   

  4. Neural Attention Forests: Transformer-Based Forest Improvement (2023) [DOI]  [GitHub]  [arXiv]

  5. Random Forests with Attentive Nodes (2022) [DOI]   

  6. Multiple Instance Learning through Explanation by Using a Histopathology Example (2022) [DOI]   

  7. An Approach for the Robust Machine Learning Explanation Based on Imprecise Statistical Models (2022) [DOI]   

  8. AGBoost: Attention-based modification of gradient boosting machine (2022) [DOI]   

  9. The Deep Survival Forest and Elastic-Net-Cox Cascade Models as Extensions of the Deep Forest (2021) [DOI]   

  10. Combining an autoencoder and a variational autoencoder for explaining the machine learning model predictions (2021) [DOI]   

  11. Semi-supervised Learning for Medical Image Segmentation (2021) [DOI]   

  12. Gradient Boosting Machine with Partially Randomized Decision Trees (2021) [DOI]   

  13. Adaptive weighted deep survival forest (2020)

  14. A Deep Forest Improvement by Using Weighted Schemes (2019) [DOI]   

  15. Segmentation of three-dimensional medical images based on classification algorithms (2018)

  16. Level fixation and fast contour propagation algorithms for semi-automatic segmentation of medical images (2017)

Preprints

  1. Dual feature-based and example-based explanation methods (2024) [arXiv]

  2. Multiple Instance Learning with Trainable Decision Tree Ensembles (2023) [arXiv]

  3. Interpretable Ensembles of Hyper-Rectangles as Base Models (2023) [arXiv]

  4. A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints (2023) [GitHub]  [arXiv]

  5. SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator (2023) [arXiv]

  6. SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models (2023) [arXiv]

  7. BENK: The Beran Estimator with Neural Kernels for Estimating the Heterogeneous Treatment Effect (2022) [DOI]   

  8. An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data (2021) [arXiv]

  9. An adaptive weighted deep forest classifier (2019) [arXiv]