High Performance Computing (HPC)
High Performance Computing is an essential tool for analyzing data that requires high computing capacity. It is used for computational modeling and simulation of problems that, due to their complexity, would require thousands or millions of hours of processing on an ordinary desktop computer.
High Performance Computing integrates a set of tools and techniques for distributing and executing jobs on a supercomputer, which in our case is a cluster of powerful servers linked through a dedicated network and sharing a common file system via NFS.
Parallel and distributed job execution can be achieved using specifications such as the Message Passing Interface (MPI), which defines how processes communicate with each other by sending and receiving messages.
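To illustrate the send/receive model that MPI defines, here is a minimal sketch. Note this is an assumption-laden stand-in: real MPI programs would use an MPI library (for example mpi4py in Python, or MPI from C); Python's standard `multiprocessing` module is used here only to show the pattern of workers exchanging messages.

```python
# Sketch of the message-passing model. Real MPI codes use an MPI library;
# here multiprocessing.Pipe stands in to show the send/receive pattern.
from multiprocessing import Process, Pipe

def worker(rank, conn):
    # Each "rank" receives its chunk of work, processes it,
    # and sends the partial result back to the coordinator.
    chunk = conn.recv()
    conn.send((rank, sum(chunk)))
    conn.close()

def main():
    data = list(range(8))          # work to distribute
    n_workers = 2
    chunks = [data[i::n_workers] for i in range(n_workers)]
    parents, procs = [], []
    for rank, chunk in enumerate(chunks):
        parent, child = Pipe()
        p = Process(target=worker, args=(rank, child))
        p.start()
        parent.send(chunk)         # send a message to the worker
        parents.append(parent)
        procs.append(p)
    results = dict(parent.recv() for parent in parents)  # receive replies
    for p in procs:
        p.join()
    return sum(results.values())

if __name__ == "__main__":
    print(main())  # prints 28, the sum of 0..7
```

The same structure (scatter work, compute locally, gather results) is what an MPI program expresses with calls such as `MPI_Send`/`MPI_Recv`, but across processes running on different cluster nodes.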
Uses of High Performance Computing:
The enormous computing resources available can be applied to very diverse research projects, including the design of new drugs, molecular dynamics, phylogenetic studies, and climate forecasting, among others.
In the field of Bioinformatics and Life Sciences, it allows us to carry out the processing associated with next-generation sequencing (NGS), which involves volumes of data that traditional computer systems cannot handle or, if they can, only at a high cost in computing time.
The system is designed for general-purpose applications, so it is capable of incorporating any type of scientific computing tool. However, a set of applications has already been installed based on the main research projects of the BioCubaFarma centers that are already connected. Among them we can mention:
- GROMACS (molecular dynamics)
- Rosetta Commons
- Mathematica
- MATLAB
- Freesurfer (neuroimaging processing)
- Python libraries for analysis of large volumes of data
- NAMD
- Weather Research and Forecasting (WRF)
On this heterogeneous, general-purpose, distributed-memory platform, a wide range of scientific computing applications can be run on demand; our specialists compile them according to the characteristics of each computing node. When researchers need a new application, our specialists provide an evaluation (parallelism profile) and the best optimization options, in order to achieve the most efficient processing in the shortest possible time.
The system is built on free software: CentOS 8 is used as the operating system, and Sun Grid Engine serves as the resource manager for the scheduling and orderly dispatch of jobs. A monitoring website (http://ganglia.biocubafarma.cu) lets users see the status of their jobs and their resource consumption in real time.
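With Sun Grid Engine, users describe a job in a small batch script and submit it with `qsub`. The sketch below is hypothetical: the queue name (`main.q`) and parallel-environment name (`mpi`) are assumptions for illustration and will differ on the actual cluster; `$NSLOTS` is the standard SGE variable holding the number of allocated slots.

```shell
#!/bin/bash
# Hypothetical SGE submission script. Queue ("main.q") and parallel
# environment ("mpi") names are assumed, not the cluster's real config.
#$ -N gromacs_run        # job name shown in qstat
#$ -cwd                  # run the job from the current working directory
#$ -q main.q             # target queue (assumed name)
#$ -pe mpi 16            # request 16 slots in an MPI parallel environment
#$ -o gromacs_run.out    # file for standard output
#$ -e gromacs_run.err    # file for standard error

# Launch the application across the allocated slots
mpirun -np "$NSLOTS" gmx_mpi mdrun -deffnm production
```

The job would be submitted with `qsub job.sh` and its state checked with `qstat`, while the Ganglia website shows node-level resource consumption.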
Get to know what we do up close and see our professionalism and dedication for yourself. We will show you our experience in deploying our services and products.