
Dongxu Li
Articles
-
2 weeks ago | pubs.rsc.org | Jinyu Sun | Dongxu Li | Jie Jenny Zou | Xiaoxiao Tan
BiBERTa: A Self-Supervised Framework for Accelerating the Discovery of Stable Organic Photovoltaic Materials
The discovery of high-performance organic photovoltaic materials remains a time-consuming and resource-intensive process due to the combinatorial complexity of donor-acceptor pairs and the limited availability of experimental data.
-
Jan 9, 2025 | nature.com | Xiaorui Su | Pengwei Hu | Dongxu Li | Bowei Zhao | Lun Hu | Thomas Herget | +2 more
Graph representation learning has been leveraged to identify cancer genes from biological networks. However, its applicability is limited by insufficient interpretability and generalizability under integrative network analysis. Here we report the development of an interpretable and generalizable transformer-based model that accurately predicts cancer genes by leveraging graph representation learning and the integration of multi-omics data with the topologies of homogeneous and heterogeneous networks of biological interactions. The model allows for the interpretation of the respective importance of multi-omic and higher-order structural features, achieves state-of-the-art performance in the prediction of cancer genes across biological networks (including networks of interactions between miRNA and proteins, transcription factors and proteins, and transcription factors and miRNA) in pan-cancer and cancer-specific scenarios, and predicted 57 cancer-gene candidates (including three genes that had not been identified by other models) among 4,729 unlabelled genes across 8 pan-cancer datasets. The model's interpretability and generalization may facilitate the understanding of gene-related regulatory mechanisms and the discovery of new cancer genes.
An interpretable transformer-based model leveraging graph representation learning accurately predicts cancer genes across homogeneous and heterogeneous pan-cancer networks of biological interactions.
-
Nov 4, 2024 | nature.com | Dongxu Li | Baoshan Li | Qi Li | Yueming Wang | Mei Yang | Mingshuo Han
In farming scenarios, cattle identification has become a key issue for the development of precision farming. In precision livestock farming, single-feature recognition methods are prone to misjudgment in complex scenarios where multiple cattle obscure each other during drinking and feeding. This paper proposes a decision-level identification method based on the multi-feature fusion of cattle faces, muzzle patterns, and ear tags. The method uses the SOLO algorithm to segment images and employs the FaceNet and PP-OCRv4 networks to extract features for the cattle's faces, muzzle patterns, and ear tags. These features are compared with the ground truth, and the Top-3 matches are extracted. The cattle IDs corresponding to these matches are then one-hot encoded to serve as the final input to the decision layer, and various ensemble strategies are used to optimize the model. The results show that the multimodal decision-fusion method achieves a recognition accuracy of 95.74%, 1.4% higher than the best traditional unimodal recognition accuracy, and a verification rate of 94.72%, 10.65% higher than the best traditional unimodal verification rate. These results demonstrate that the multi-feature fusion recognition method has significant advantages in drinking and feeding farm environments, providing an efficient and reliable solution for precise identification and management of cattle on farms and significantly improving recognition accuracy and stability.
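The decision-level fusion step described in this abstract can be illustrated with a minimal sketch: each modality (face, muzzle pattern, ear tag) proposes its Top-3 candidate IDs, these are encoded as rank-weighted one-hot vectors, and a weighted-sum ensemble picks the final ID. All names, weights, and the herd size below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of decision-level multi-feature fusion.
# NUM_CATTLE, the rank weighting, and the modality weights are assumptions.
import numpy as np

NUM_CATTLE = 10  # size of the herd's ID vocabulary (assumed)

def one_hot_top3(top3_ids, num_classes=NUM_CATTLE):
    """Encode a modality's Top-3 candidate IDs as a rank-weighted multi-hot vector."""
    v = np.zeros(num_classes)
    for rank, cattle_id in enumerate(top3_ids):
        v[cattle_id] = 3 - rank  # rank 0 -> weight 3, rank 1 -> 2, rank 2 -> 1
    return v

def fuse(face_top3, muzzle_top3, tag_top3, weights=(1.0, 1.0, 1.0)):
    """Weighted-sum ensemble over the three modalities' encoded votes."""
    votes = (weights[0] * one_hot_top3(face_top3)
             + weights[1] * one_hot_top3(muzzle_top3)
             + weights[2] * one_hot_top3(tag_top3))
    return int(np.argmax(votes))  # final cattle ID

# Example: face and muzzle agree on ID 4; the ear tag is misread as ID 9.
print(fuse([4, 2, 7], [4, 7, 1], [9, 4, 0]))  # -> 4
```

Because the ensemble aggregates evidence across modalities, a single occluded or misread feature (here the ear tag) is outvoted, which is the robustness benefit the abstract reports for crowded drinking and feeding scenes.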