Submit a proposal
The terms of use for the HPI AI Service Centre can be found on the project description page.
Proposal Information
Title of Proposal*
Abstract*
Please describe the idea of your proposed project. Max. 350 characters.
Categories of your Proposal*
Machine Learning
Other
Generative AI
Data Science
Predictive Analytics
Natural Language Processing
Computer Vision
Healthcare
Explainable & Trustworthy AI
Recommendation Systems
Please select all categories that apply to your proposal.
Optional PDF or TXT file (max. 2.0 MiB)
This abstract will be used to present the current research at the HPI AI Service Centre (i.e. published on our website). You may provide an extended abstract with further insight into the proposed pilot project (PDF format, max. 2.0 MiB), e.g. your approach to the project topic, a project plan, and references to prior practical experience or related publications.
Organization
Please fill in the information about your organization.
Organization Name*
Department
Address*
Country*
Website*
Principal Investigator
This is the person with main responsibility for the use of the resources (e.g. a professor, managing director, or head of department).
Title
First Name*
Last Name*
Email*
If you are working on your project with another organization, please provide the main contact person at your project partner.
Partner Main Contact
The main contact of your project partner.
Title
First Name*
Last Name*
Email*
Project Partner Details: Organization Name, Address
Requested Resources
Training Cluster: NVIDIA H100
---------
1 GPU
2 GPUs
4 GPUs
8 GPUs
12 GPUs
16 GPUs
20 GPUs
24 GPUs
The NVIDIA H100, based on the Hopper architecture, is a flagship GPU for AI and high-performance computing. It is the successor of the A100 and the basis of our training cluster.
Inference Cluster: NVIDIA A30
---------
2 GPUs
4 GPUs
The NVIDIA A30 is a data center GPU designed for AI inference, training, and high-performance computing. It can be used for diverse workloads such as conversational AI, recommendation systems, and scientific simulations. It is the basis of our inference cluster.
ARM Server: L40S
---------
yes
no
The NVIDIA L40S is an enhanced version of the L40, featuring increased performance and scalability for AI, rendering, and immersive graphics.
Edge Computing: NVIDIA Jetson
---------
yes
no
NVIDIA Jetson is a family of edge AI platforms for robotics, IoT, and embedded systems, built around an ARM-architecture central processing unit (CPU). Jetson is a low-power system designed to accelerate machine learning applications and to simulate environments for edge devices.
Data Storage Size
---------
< 250 GB
< 500 GB
< 750 GB
< 1 TB
< 10 TB
>= 10 TB
Exclusive resource access
---------
yes
no
If you need exclusive access to a resource, please contact us via email at least two weeks in advance.
VM M:
2 vCPU, 2 GB RAM, 20 GB HDD
VM L:
4 vCPU, 4 GB RAM, 40 GB HDD
VM XL:
8 vCPU, 8 GB RAM, 80 GB HDD
GPU Server: L40
The NVIDIA L40 is a professional GPU optimized for graphics, AI, and rendering workloads.
Captcha
Submit