This page lists publications in peer-reviewed journals, conference proceedings, and preprints. [BibTeX]

Journals: 31 · Conferences: 17 · Preprints: 10

Selected

An Explicit Concept-Based Approach for Incorporating Expert Rules into Machine Learning Models 2024
Imposing Star-Shaped Hard Constraints on the Neural Network Output 2024
Interpretable ensembles of hyper-rectangles as base models 2023
A new computationally simple approach for implementing neural networks with output hard constraints 2023
Neural Attention Forests: Transformer-Based Forest Improvement 2023
Multi-attention multiple instance learning 2022
Interpretable machine learning with an ensemble of gradient boosting machines 2021
A weighted random survival forest 2019

All

Journals

  1. Predictive models and dynamics of estimates of applied tasks characteristics using machine learning methods (2024) [DOI]   

  2. Interpretation methods for machine learning models in the framework of survival analysis with censored data: a brief overview (2024) [DOI]   

  3. BENK: The Beran Estimator with Neural Kernels for Estimating the Heterogeneous Treatment Effect (2024) [DOI] [arXiv]

  4. SurvBeX: an explanation method of the machine learning survival models based on the Beran estimator (2024) [DOI] [arXiv]

  5. Imposing Star-Shaped Hard Constraints on the Neural Network Output (2024) [DOI]  [GitHub] 

  6. LARF: Two-Level Attention-Based Random Forests with a Mixture of Contamination Models (2023) [DOI]   

  7. Attention-like feature explanation for tabular data (2023) [DOI]  [GitHub] 

  8. Heterogeneous Treatment Effect with Trained Kernels of the Nadaraya–Watson Regression (2023) [DOI]   

  9. Attention and self-attention in random forests (2023) [DOI] [GitHub] [arXiv]

  10. Multiple Instance Learning with Trainable Soft Decision Tree Ensembles (2023) [DOI]   

  11. Attention-Based Random Forests and the Imprecise Pari-Mutuel Model (2023) [DOI]

  12. Interpretable ensembles of hyper-rectangles as base models (2023) [DOI] [GitHub] [arXiv]

  13. Flexible deep forest classifier with multi-head attention (2023) [DOI]  [GitHub] 

  14. A new computationally simple approach for implementing neural networks with output hard constraints (2023) [DOI] [GitHub] [arXiv]

  15. Random Survival Forests Incorporated By The Nadaraya-Watson Regression (2022) [DOI]   

  16. Improved Anomaly Detection by Using the Attention-Based Isolation Forest (2022) [DOI]   

  17. An Extension of the Neural Additive Model for Uncertainty Explanation of Machine Learning Survival Models (2022) [DOI]   

  18. Multi-attention multiple instance learning (2022) [DOI] [GitHub] [arXiv]

  19. SurvNAM: The machine learning survival model explanation (2022) [DOI]   

  20. Ensembles of random SHAPs (2022) [DOI]   

  21. Attention-based random forest and contamination model (2022) [DOI]   

  22. Uncertainty Interpretation of the Machine Learning Survival Model Predictions (2021) [DOI]   

  23. Deep Gradient Boosting For Regression Problems (2021) [DOI]   

  24. A Generalized Stacking for Implementing Ensembles of Gradient Boosting Machines (2021) [DOI]   

  25. Counterfactual explanation of machine learning survival models (2021) [DOI]   

  26. Interpretable machine learning with an ensemble of gradient boosting machines (2021) [DOI] [GitHub] [arXiv]

  27. An Adaptive Weighted Deep Survival Forest (2020) [DOI]   

  28. Estimation of Personalized Heterogeneous Treatment Effects Using Concatenation and Augmentation of Feature Vectors (2020) [DOI]   

  29. A new adaptive weighted deep forest and its modifications (2020) [DOI]   

  30. Deep Forest as a framework for a new class of machine-learning models (2019) [DOI]   

  31. A weighted random survival forest (2019) [DOI]  [GitHub] 

Conferences

  1. An Explicit Concept-Based Approach for Incorporating Expert Rules into Machine Learning Models (2024) [DOI]  [GitHub] 

  2. Robust Models of Distance Metric Learning by Interval-Valued Training Data (2023) [DOI]   

  3. Modifications of SHAP for Local Explanation of Function-Valued Predictions Using the Divergence Measures (2023) [DOI]   

  4. GBMILs: Gradient Boosting Models for Multiple Instance Learning (2023) [DOI]   

  5. Neural Attention Forests: Transformer-Based Forest Improvement (2023) [DOI] [GitHub] [arXiv]

  6. Random Forests with Attentive Nodes (2022) [DOI]   

  7. Multiple Instance Learning through Explanation by Using a Histopathology Example (2022) [DOI]   

  8. An Approach for the Robust Machine Learning Explanation Based on Imprecise Statistical Models (2022) [DOI]   

  9. AGBoost: Attention-based modification of gradient boosting machine (2022) [DOI]   

  10. The Deep Survival Forest and Elastic-Net-Cox Cascade Models as Extensions of the Deep Forest (2021) [DOI]   

  11. Combining an autoencoder and a variational autoencoder for explaining the machine learning model predictions (2021) [DOI]   

  12. Semi-supervised Learning for Medical Image Segmentation (2021) [DOI]   

  13. Gradient Boosting Machine with Partially Randomized Decision Trees (2021) [DOI]   

  14. Adaptive weighted deep survival forest (2020)

  15. A Deep Forest Improvement by Using Weighted Schemes (2019) [DOI]   

  16. Segmentation of three-dimensional medical images based on classification algorithms (2018)

  17. Level fixation and fast contour propagation algorithms for semi-automatic segmentation of medical images (2017)

Preprints

  1. Dual feature-based and example-based explanation methods (2024) [arXiv]

  2. Generating Survival Interpretable Trajectories and Data (2024) [GitHub] [arXiv]

  3. FI-CBL: A Probabilistic Method for Concept-Based Learning with Expert Rules (2024) [GitHub] [arXiv]

  4. Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning (2024) [GitHub] [arXiv]

  5. Multiple Instance Learning with Trainable Decision Tree Ensembles (2023) [arXiv]

  6. Interpretable Ensembles of Hyper-Rectangles as Base Models (2023) [arXiv]

  7. A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints (2023) [GitHub] [arXiv]

  8. SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models (2023) [arXiv]

  9. An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data (2021) [arXiv]

  10. An adaptive weighted deep forest classifier (2019) [arXiv]