On this site you will find information about how researchers can get access to HPC resources, either on AU's own facilities or through national and international HPC collaborations.
AU offers access to national HPC resources in collaboration with DeiC (Danish e-Infrastructure Cooperation).
This collaboration means that researchers at AU can apply for access to the national HPC resources under DeiC.
Moreover, AU itself has access to a number of HPC resources that we make available to our researchers.
HPC stands for High-Performance Computing. The term covers a range of different machines capable of calculations and data handling beyond the scope of a normal PC. In other words, HPC is a research tool that makes formerly impossible projects possible.
If your project meets the criteria described here, you can apply for HPC resources.
DeiC has four different HPC types at its disposal. The HPC types are offered by a number of consortia to which the universities are parties. The HPC types are categorised according to the hardware they consist of, as well as the demands the system makes of the user. Here we will go through the different types of HPC you can apply for.
For further information and details, visit the EuroCC Knowledge Pool through this link.
DeiC Interactive HPC
This type is easy to use for new users and for educational purposes.
The primary target group for Interactive HPC is users who want an HPC system as close as possible to a normal computer. You access the computer through a browser and can use a large number of pre-installed programmes, just like on your own PC. Because Interactive HPC is so simple to use, it is ideal for new users and students. At the same time, the many available programmes offer experienced users a wealth of opportunities.
For further information see https://interactivehpc.dk/#/.
DeiC Throughput HPC
Ideal for small and medium-sized tasks that use big data/files, and for analyses of sensitive personal data.
DeiC Throughput HPC offers a traditional HPC setup that you access via an SSH connection to a Linux server. From here, you can run jobs through a queuing system (e.g. Slurm), which handles resource allocation and job execution. The system can handle large quantities of data with a high level of security and is ideal for parallelisable programmes. All in all, you have a large amount of control over your jobs on Throughput HPC, which is why it can take longer for new users to learn how to use it.
For more information about the system and how to gain access, see https://www.deic.dk/en/Supercomputing/Instructions-and-Guides/How-to-get-access-to-HPC-Type-2
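The SSH-plus-queue workflow described above can be sketched with a minimal Slurm batch script. This is a generic illustration only: the resource values, script name, and the commented-out workload are placeholders, and the exact options and partitions depend on the specific Throughput system, so consult that system's documentation before use.

```shell
#!/bin/bash
# job.sh - minimal example of a Slurm batch script.
# All values below are illustrative placeholders.
#SBATCH --job-name=my-analysis    # name shown in the queue
#SBATCH --ntasks=1                # a single task
#SBATCH --cpus-per-task=4         # four CPU cores for that task
#SBATCH --mem=8G                  # memory for the whole job
#SBATCH --time=01:00:00           # wall-clock limit (hh:mm:ss)

echo "Job running on $(hostname)"
# srun my_program --input data.csv   # replace with the actual workload
```

After connecting to the login node over SSH, you would typically submit the script with `sbatch job.sh` and monitor it with `squeue --me`; Slurm then allocates the requested cores and memory and runs the script on a compute node when resources become available.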
DeiC Large Memory HPC
Used for large matrix problems or other operations that require large memory capacity and a relatively small number of computational cores.
Like Throughput HPC, Large Memory HPC uses a setup with access to a Linux server through an SSH connection. The difference is that the computer behind Large Memory HPC offers fewer CPU cores, each with access to large amounts of fast memory. The system is designed for programmes that are limited by memory capacity or latency, as well as programmes that cannot be effectively parallelised.
You can find documentation for the computer at https://docs.hpc-type3.sdu.dk/.
DeiC Accelerated HPC
Primarily a development facility intended to prepare Danish researchers for the next generation of supercomputers.
As GPUs have increased the possibilities for extremely parallelisable programmes over the past few years, research is now being carried out into other types of hardware that can accelerate specific operations. Accelerated HPC offers the opportunity to test those kinds of solutions with a view to the HPC of the future.
The project is a collaboration between ITU, RUC and KU, and the facility is being developed and will be operated by KU.
Accelerated HPC is still being built and is not accepting applications at this time.
LUMI Capability HPC
Capability HPC is the European pre-exascale supercomputer, LUMI. LUMI stands for "Large Unified Modern Infrastructure", and it is housed in the CSC datacentre in Kajaani, Finland.
LUMI offers a setup similar to that of Throughput HPC, but with state-of-the-art hardware. Specifically, the computer's nodes are connected to the memory, the storage, and each other in a way that minimises latency during communication. This means that LUMI can handle calculations that exceed the capacity of Throughput HPC due to latency or memory constraints.
LUMI is financed 50% by the EuroHPC Joint Undertaking and 50% by the consortium countries, Denmark among them. Denmark has access to 3% of the LUMI resources, and AU has access to its own share of this, which is allocated by the HPC Forum through recurring calls.
You can read more about LUMI here: https://www.lumi-supercomputer.eu/
You can apply for HPC resources for both small and large projects. The two kinds of projects each have their own application process.
A small project (or a ”sandbox project”) is defined as a project that demands less than:
You apply by emailing firstname.lastname@example.org, with a brief description of the following:
|5th of January||Call published by DeiC: H1-2023 Call for applications for access to the e-resources|
|5th of March||Deadline for application to DeiC via email: email@example.com|
The resources allocated through the current call will be ready for use on the 1st of July, 2023.
There is no deadline for small projects, so you can always send an application.
The local HPC Front Office will handle your application and respond within 1-2 weeks.
Applications are assessed on research quality, the applicant's qualifications, feasibility, and publication and dissemination of the results.
Applications that are judged not to fulfil the criteria will be rejected.
Upon rejection, the applicant is informed of the possibility of applying for resources in the "sandbox".
|5th of January - 5th of March||Applications to DeiC are open via email: firstname.lastname@example.org.|
|15th of June||DeiC has completed its assessment of the applications. Applicants, universities, and HPC centres are briefed about the result. Resources are expected to be ready for use on the 1st of July.|
|16th of June - 16th of July||AU is free to allocate a portion of the HPC resources itself. These resources are granted to AU applicants who did not receive resources from DeiC, provided the researchers in question apply to the HPC Forum. The Forum handles these applications at its first meeting after the DeiC results are announced.|
Of the national HPC resources DeiC coordinates, 50% are allocated locally at the individual universities, while 45% are allocated following calls. The final 5% constitute the so-called "sandbox", which is used for testing computational methods by both new and experienced users.
The "sandbox" is allocated by DeiC's head of HPC, Eske Christiansen. You can read more about how to apply for sandbox resources here: link.
Should you decide to apply for access to the 45% of resources allocated following calls, your application will be assessed by the e-Resource Committee, a national allocation committee convened by DeiC with representatives from all the Danish universities and scientific fields. There are two annual calls, in February (with a deadline in April) and August (with a deadline in October).
The individual university is responsible for allocating its own resources among the researchers.
At AU, it has been decided that 50% of the resources are reserved for an "AU sandbox". This is reserved for projects that need relatively little computing capacity.