Leveraging transformers-based language models in proteome bioinformatics

Research output: Contribution to journal › Review article › peer-review

17 Citations (Scopus)

Abstract

In recent years, the rapid growth of biological data has increased interest in using bioinformatics to analyze and interpret this data. Proteomics, which studies the structure, function, and interactions of proteins, is a crucial area of bioinformatics. Using natural language processing (NLP) techniques in proteomics is an emerging field that combines machine learning and text mining to analyze biological data. Recently, transformer-based NLP models have gained significant attention for their ability to process variable-length input sequences in parallel, using self-attention mechanisms to capture long-range dependencies. In this review paper, we discuss the recent advancements in transformer-based NLP models in proteome bioinformatics and examine their advantages, limitations, and potential applications to improve the accuracy and efficiency of various tasks. Additionally, we highlight the challenges and future directions of using these models in proteome bioinformatics research. Overall, this review provides valuable insights into the potential of transformer-based NLP models to revolutionize proteome bioinformatics.
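The self-attention mechanism the abstract refers to can be made concrete with a minimal sketch. The example below is illustrative only, not code from the reviewed paper: it assumes a toy per-residue embedding matrix and identity Q/K/V projections, and shows how scaled dot-product attention compares every sequence position with every other in a single matrix product, which is what lets transformers handle variable-length sequences in parallel and capture long-range dependencies.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) embeddings, e.g. one vector per amino acid."""
    d_model = x.shape[-1]
    # A real transformer uses learned Q, K, V projections and multiple heads;
    # identity projections keep this sketch short.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d_model)              # (seq_len, seq_len) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # context-mixed representations

# Toy "protein": 5 residues embedded in 8 dimensions.
protein = np.random.default_rng(0).normal(size=(5, 8))
print(self_attention(protein).shape)  # (5, 8)
```

Because the attention weights are computed for all residue pairs at once, distant residues can influence each other's representations directly, rather than through many recurrent steps.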
Original language: English
Article number: 2300011
Journal: Proteomics
Volume: 23
Issue number: 23-24
DOIs
Publication status: Published - Dec 2023

ASJC Scopus subject areas

  • Biochemistry
  • Molecular Biology
