2023
Zafeiropoulos, Haris; Beracochea, Martin; Ninidakis, Stelios; Exter, Katrina; Potirakis, Antonis; De Moro, Gianluca; Richardson, Lorna; Corre, Erwan; Machado, João; Pafilis, Evangelos; Kotoulas, Georgios; Santi, Ioulia; Finn, Robert D; Cox, Cymon J; Pavloudi, Christina. metaGOflow: a workflow for the analysis of marine Genomic Observatories shotgun metagenomics data. Journal Article. GigaScience, 12, giad078, 2023, ISSN: 2047-217X. DOI: 10.1093/gigascience/giad078. URLs: https://imbbc.hcmr.gr/wp-content/uploads/2023/10/2023-Zafeiropoulos-GiGa-57.pdf ; https://academic.oup.com/gigascience/article/doi/10.1093/gigascience/giad078/7321054
Abstract. Background: Genomic Observatories (GOs) are sites of long-term scientific study that undertake regular assessments of genomic biodiversity. The European Marine Omics Biodiversity Observation Network (EMO BON) is a network of GOs that conduct regular biological community samplings to generate environmental and metagenomic data of microbial communities from designated marine stations around Europe. The development of an effective workflow is essential for analysing the EMO BON metagenomic data in a timely and reproducible manner. Findings: Based on the established MGnify resource, we developed metaGOflow. metaGOflow supports the fast inference of taxonomic profiles from GO-derived data based on ribosomal RNA genes, and their functional annotation using the raw reads. Thanks to Research Object Crate (RO-Crate) packaging, relevant metadata about the sample under study, and the details of the bioinformatics analysis it has been subjected to, are inherited by the data product, while its modular implementation allows the workflow to be run partially. The analysis of 2 EMO BON samples and 1 Tara Oceans sample was performed as a use case. Conclusions: metaGOflow is an efficient and robust workflow that scales to the needs of projects producing big metagenomic data, such as EMO BON. It highlights how containerization technologies, along with modern workflow languages and metadata packaging approaches, can support the needs of researchers dealing with ever-increasing volumes of biological data. Although initially oriented to the needs of EMO BON, metaGOflow is a flexible and easy-to-use workflow that can be broadly used for one-sample-at-a-time analysis of shotgun metagenomics data.
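The Research Object Crate packaging mentioned in the abstract follows the RO-Crate specification, in which a JSON-LD file named ro-crate-metadata.json at the root of the crate describes the packaged data files and contextual entities (e.g., the workflow run and sample metadata). As a minimal, hypothetical sketch (not code from the metaGOflow repository; the directory name is invented), the following Python snippet lists the entities recorded in such a file for a finished run:

```python
import json
from pathlib import Path

# Hypothetical path to a result crate; any RO-Crate directory contains an
# "ro-crate-metadata.json" file at its root, as defined by the RO-Crate spec.
crate_dir = Path("results_crate")

with open(crate_dir / "ro-crate-metadata.json") as fh:
    metadata = json.load(fh)

# The JSON-LD "@graph" lists the root dataset, the data files, and
# contextual entities such as the workflow and the sample described.
for entity in metadata.get("@graph", []):
    print(entity.get("@id"), "-", entity.get("@type"))
```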
2021
Zafeiropoulos, Haris; Gioti, Anastasia; Ninidakis, Stelios; Potirakis, Antonis; Paragkamian, Savvas; Angelova, Nelina; Antoniou, Aglaia; Danis, Theodoros; Kaitetzidou, Eliza; Kasapidis, Panagiotis; Kristoffersen, Jon Bent; Papadogiannis, Vasileios; Pavloudi, Christina; Ha, Quoc Viet; Lagnel, Jacques; Pattakos, Nikos; Perantinos, Giorgos; Sidirokastritis, Dimitris; Vavilis, Panagiotis; Kotoulas, Georgios; Manousaki, Tereza; Sarropoulou, Elena; Tsigenopoulos, Costas S; Arvanitidis, Christos; Magoulas, Antonios; Pafilis, Evangelos. 0s and 1s in marine molecular research: a regional HPC perspective. Journal Article. GigaScience, 10(8), giab053, 2021, ISSN: 2047-217X. DOI: 10.1093/gigascience/giab053. URLs: https://imbbc.hcmr.gr/wp-content/uploads/2021/08/2021-Zafeiropoulos-GiGa-63.pdf ; https://academic.oup.com/gigascience/article/doi/10.1093/gigascience/giab053/6353916
Abstract. High-performance computing (HPC) systems have become indispensable for modern marine research, providing support to an increasing number and diversity of users. Paired with the impetus that high-throughput methods have given to key areas such as non-model organism studies, their operation continuously evolves to meet the corresponding computational challenges. Here, we present a Tier 2 (regional) HPC facility that has been operating for over a decade at the Institute of Marine Biology, Biotechnology, and Aquaculture of the Hellenic Centre for Marine Research in Greece. Strategic choices made in design and upgrades aimed to strike a balance between depth (the need for a few high-memory nodes) and breadth (a larger number of slimmer nodes), as dictated by the idiosyncrasy of the supported research. A qualitative analysis of the computational requirements of that research revealed the diversity of marine fields, methods, and approaches adopted to translate data into knowledge. In addition, the hardware and software architectures, usage statistics, policy, and user management aspects of the facility are presented. Drawing upon the last decade's experience across the different levels of operation of the Institute of Marine Biology, Biotechnology, and Aquaculture HPC facility, a number of lessons are presented; these have shaped the facility's future directions in light of emerging distribution technologies (e.g., containers) and Research Infrastructure evolution. In combination with detailed knowledge of facility usage and its upcoming upgrade, future collaborations in marine research and beyond are envisioned.
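The "depth versus breadth" balance described above (a few high-memory nodes alongside many slimmer ones) can be pictured with a small toy example. The sketch below is purely illustrative: node names, memory sizes, and the routing rule are invented and do not reflect the facility's actual scheduler or configuration.

```python
# Toy illustration of routing jobs between "fat" (high-memory) and "slim" nodes.
# All numbers and names are invented for the example.
FAT_NODES = {"fat-1": 512, "fat-2": 512}               # GB RAM each
SLIM_NODES = {f"slim-{i}": 128 for i in range(1, 21)}  # many smaller nodes

def route(job_mem_gb):
    """Return a node name that can host a job with the requested memory."""
    pool = FAT_NODES if job_mem_gb > 128 else SLIM_NODES
    for node, mem in pool.items():
        if mem >= job_mem_gb:
            return node
    return None  # no node is large enough

print(route(400))  # e.g., a genome assembly -> a high-memory node
print(route(16))   # e.g., read mapping      -> a slim node
```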
2020
Zafeiropoulos, Haris; Viet, Ha Quoc; Vasileiadou, Katerina; Potirakis, Antonis; Arvanitidis, Christos; Topalis, Pantelis; Pavloudi, Christina; Pafilis, Evangelos. PEMA: a flexible Pipeline for Environmental DNA Metabarcoding Analysis of the 16S/18S ribosomal RNA, ITS, and COI marker genes. Journal Article. GigaScience, 9(3), giaa022, 2020, ISSN: 2047-217X. DOI: 10.1093/gigascience/giaa022. URLs: https://doi.org/10.1093/gigascience/giaa022 ; https://academic.oup.com/gigascience/article-pdf/9/3/giaa022/32894405/giaa022.pdf
Abstract. Environmental DNA and metabarcoding allow the identification of a mixture of species and launch a new era in bio- and eco-assessment. Many steps are required to obtain taxonomically assigned matrices from raw data. For most of these, a plethora of tools are available; each tool's execution parameters need to be tailored to reflect each experiment's idiosyncrasy. Adding to this complexity, the computational capacity of high-performance computing systems is frequently required for such analyses. To address these difficulties, bioinformatic pipelines need to combine state-of-the-art technologies and algorithms with an easy-to-get-set-use framework, allowing researchers to tune each study. Software containerization technologies ease the sharing and running of software packages across operating systems and thus strongly facilitate pipeline development and usage. Likewise, programming languages specialized for big-data pipelines incorporate features like roll-back checkpoints and on-demand partial pipeline execution. PEMA is a containerized assembly of key metabarcoding analysis tools that requires low effort in setting up, running, and customizing to researchers' needs. Based on third-party tools, PEMA performs read pre-processing, (molecular) operational taxonomic unit clustering, amplicon sequence variant inference, and taxonomy assignment for 16S and 18S ribosomal RNA, as well as ITS and COI marker gene data. Owing to its simplified parameterization and checkpoint support, PEMA allows users to explore alternative algorithms for specific steps of the pipeline without the need for a complete re-execution. PEMA was evaluated against both mock communities and previously published datasets and achieved results of comparable quality. A high-performance computing-based approach was used to develop PEMA; however, it can be run on personal computers as well. PEMA's time-efficient performance and good results will allow it to be used for accurate environmental DNA metabarcoding analysis, thus enhancing the applicability of next-generation biodiversity assessment studies.
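The checkpoint support and on-demand partial execution mentioned above can be illustrated with a minimal, hypothetical sketch (this is not PEMA's implementation; step names, file names, and the placeholder actions are invented): a step is skipped when its output file already exists, so a re-run resumes from the first missing output.

```python
from pathlib import Path

# Hypothetical, simplified checkpoint logic: each step declares an output file,
# and a step is skipped if that output (its checkpoint) is already present.
def run_step(name, output, action):
    if Path(output).exists():
        print(f"[skip] {name}: checkpoint {output} found")
        return
    print(f"[run ] {name}")
    action()
    Path(output).touch()  # stand-in for the step writing its real output

steps = [
    ("read pre-processing",  "trimmed_reads.fastq", lambda: None),
    ("OTU clustering",       "otu_table.tsv",       lambda: None),
    ("taxonomy assignment",  "taxonomy.tsv",        lambda: None),
]

for name, output, action in steps:
    run_step(name, output, action)
```

Deleting, say, otu_table.tsv and re-running would re-execute only the clustering and downstream steps; this is the kind of behaviour that lets users swap the algorithm for one step without a full re-run.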