[**Paper Link**](https://pubs.acs.org/doi/pdf/10.1021/acs.jcim.2c01564) [**Supporting Information**](https://pubs.acs.org/doi/suppl/10.1021/acs.jcim.2c01564/suppl_file/ci2c01564_si_001.pdf)

## Keywords
- RF, GB, SVM, XGBoost
- GCN, GAT, Attentive FP
- Transfer learning
- Shapley additive explanations (SHAP)
- Matched molecular pair analysis (MMPA)

## Transfer Learning
- Pre-trained on computational data
- Fine-tuned on the target dataset

Other examples of pre-training:
- ChemNet
- MG-BERT
- K-BERT

## Hyperparameter Optimization
- Grid search: RF, GB, SVM, XGBoost
- Tree-structured Parzen Estimators (TPE): GCN, GAT, Attentive FP
- Early stopping (training halts if the validation metric has not improved in 10 successive epochs)

## Explainability
- SHAP (for XGBoost)
- Attention mechanism (for GAT)
- MMPA

## References
- [ChemNet](https://arxiv.org/pdf/1712.02734.pdf)
- [MG-BERT](https://pubmed.ncbi.nlm.nih.gov/33951729/)
- [K-BERT](https://academic.oup.com/bib/article/23/3/bbac131/6570013)
- [SHAP](https://arxiv.org/pdf/1905.04610.pdf)
- [MMPA](https://www.knime.com/knime-analytics-platform)
- [MMPA in RDKit](https://zhuanlan.zhihu.com/p/389763022)
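
The early-stopping rule noted above (halt if no improvement in 10 successive epochs) can be sketched in plain Python. This is a minimal illustration, not the paper's code; the `EarlyStopper` class name and `min_delta` parameter are assumptions for the sketch, and it assumes a validation loss to be minimized.

```python
class EarlyStopper:
    """Signals a stop when the monitored loss has not improved
    for `patience` successive epochs (10 in the paper's setup).
    Illustrative sketch, not the authors' implementation."""

    def __init__(self, patience: int = 10, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta          # minimum decrease that counts as improvement
        self.best = float("inf")            # best validation loss seen so far
        self.bad_epochs = 0                 # epochs since the last improvement

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Usage: feed the validation loss after each epoch.
stopper = EarlyStopper(patience=10)
losses = [0.9, 0.8, 0.7] + [0.7] * 12    # improvement, then a plateau
for epoch, loss in enumerate(losses, start=1):
    if stopper.step(loss):
        print(f"stopped at epoch {epoch}")  # stops 10 epochs into the plateau
        break
```

The same patience counter works for a metric to be maximized by negating it before calling `step`.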