The group uses modern concepts of High Performance Computing (HPC) to develop seismic data processing methods. Pre-Stack Pro is pre-stack seismic analysis software that combines pre-stack visualization, processing, and interpretation in one powerful platform.
ALOMA is a fault-tolerant runtime system that allows seismic applications to run efficiently on large distributed systems. Seismic depth migration algorithms compute images of the Earth's subsurface from measured and pre-processed seismic reflection data.
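At its core, depth migration is a summation of recorded amplitudes along diffraction traveltime curves. The following is a minimal zero-offset Kirchhoff-style sketch in Python with a constant velocity — a textbook illustration only, not ALOMA's or Pre-Stack Pro's actual implementation:

```python
import numpy as np

def kirchhoff_migrate(data, dt, dx, velocity, nz, dz):
    """Toy zero-offset Kirchhoff migration by diffraction summation.

    data     : (nt, nx) array of zero-offset traces
    velocity : constant medium velocity (a simplifying assumption)
    Returns an (nz, nx) depth image.
    """
    nt, nx = data.shape
    xs = np.arange(nx) * dx                # receiver positions
    cols = np.arange(nx)
    image = np.zeros((nz, nx))
    for iz in range(nz):
        z = (iz + 1) * dz                  # image-point depth
        for ix in range(nx):
            x = ix * dx                    # image-point lateral position
            # two-way traveltime along the diffraction hyperbola
            t = 2.0 * np.sqrt(z**2 + (xs - x)**2) / velocity
            it = np.round(t / dt).astype(int)
            ok = it < nt                   # discard samples past the record
            image[iz, ix] = data[it[ok], cols[ok]].sum()
    return image
```

Energy recorded along a hyperbola in the data collapses back to a point in the image; production codes replace the constant-velocity traveltime with ray tracing or wavefield extrapolation through a velocity model.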
The Green by IT group works on the research and development of new technologies for the innovative use of renewable energy. The mySmartGrid infrastructure offers flexible components for energy information systems.
These range from measurement components to home automation systems. EMOS, a project for the energy management of rental housing with open-source smart meters, makes it possible for tenants to improve their own room climate. PVCAST is a forecasting service that generates a seven-day forecast of a photovoltaic plant's production.
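PVCAST's actual model is not described here. As a toy illustration of the simplest kind of baseline such a service could start from, a weighted-persistence forecast (function name, weights, and numbers are all assumptions) might look like:

```python
def forecast_pv_daily(recent_daily_kwh, horizon_days=7):
    """Toy persistence baseline for daily PV yield (NOT PVCAST's model):
    weight recent days exponentially, newest first, and project that
    baseline over the forecast horizon."""
    weights = [0.5 ** i for i in range(len(recent_daily_kwh))]
    baseline = sum(w * v for w, v in zip(weights, recent_daily_kwh)) / sum(weights)
    return [round(baseline, 2)] * horizon_days
```

A real forecasting service additionally folds in numerical weather prediction as well as the plant's capacity and orientation, which is what makes a genuine seven-day forecast possible.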
And don't miss the biggest and best summary table I have ever created, on the last page. In case you missed part one, you can find it here. While you get a huge bang for the buck from today's processors, somehow you have to get the data to and from them, and many a CPU cycle is wasted waiting for that data block. Read on to learn how to feed your data appetite.
Moreover, some applications have fairly benign IO requirements, while others need really large amounts of IO. Regardless of your IO requirements, you will need some type of file system for your cluster.
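How large "really large amounts of IO" gets is easy to underestimate. A back-of-the-envelope sizing for checkpoint traffic — all numbers below are illustrative assumptions, not measurements — shows why:

```python
def required_bandwidth_gb_s(checkpoint_tb, write_window_min):
    """Aggregate file-system write bandwidth (decimal GB/s) needed to
    flush one checkpoint of `checkpoint_tb` terabytes within the
    allowed window of `write_window_min` minutes."""
    return checkpoint_tb * 1e12 / (write_window_min * 60.0) / 1e9

# A 1 TB checkpoint that must land within 5 minutes needs roughly
# 3.3 GB/s of sustained aggregate bandwidth -- far beyond what a
# single local disk delivers, hence the need for a parallel file system.
```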
Originally, I had wanted to update the original article; however, the updates became so large that it is really an entirely new article. To ease the changeover to and everyday use of a cluster, it is not enough to provide user interfaces and libraries: these must also integrate smoothly into existing and emerging clusters.
An intuitive software environment on the clusters increases usability for all users. By using container images, we quickly and easily provide the software required for a wide variety of applications and meet users' needs without overloading the operating system of the compute nodes. Since the software is located in the image, it can be managed and updated through the image. For most users, working with a graphical user interface is much more familiar than the command line of a Linux system.
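As a sketch of this approach — base image, package choices, and entry point are all assumptions, not the group's actual images — a minimal Dockerfile that keeps the user-facing software stack inside the image rather than on the node OS could look like:

```dockerfile
# Hypothetical application image: the scientific stack lives entirely
# in the image, so the compute node's OS stays minimal.
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 python3-pip \
 && rm -rf /var/lib/apt/lists/*
RUN pip3 install --no-cache-dir numpy jupyterlab
# The web front end starts from inside the image, not from the host.
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--no-browser"]
```

Updating the stack then means rebuilding and redistributing one image instead of touching every compute node.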
With web-based front ends like Jupyter Notebooks or Theia, users are not forced to install extra software on their operating system to access the cluster. We have simplified the process to such an extent that the user only needs to specify the number of GPUs and compute nodes required; Carme takes care of the rest.
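Carme hides the underlying resource request from the user. On clusters driven by a batch scheduler such as Slurm — an assumption here, the text does not name the scheduler — the hand-written equivalent of "2 nodes, 4 GPUs each" would be a job script along these lines:

```shell
#!/bin/bash
# Hand-written sketch of a "2 nodes, 4 GPUs per node" request
# (Slurm assumed; a tool like Carme generates this for you).
#SBATCH --job-name=interactive-notebook
#SBATCH --nodes=2
#SBATCH --gres=gpu:4
#SBATCH --time=02:00:00
msg="running on ${SLURM_NNODES:-2} nodes"
echo "$msg"
```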
Thanks to BeeGFS, the parallel file system developed in-house, data can be made available quickly and effectively during a running simulation. With the help of monitoring tools such as Zabbix, the cluster administrator can see GPU, CPU, memory, and network utilization and share this information with users through diagrams.
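A monitoring pipeline of this kind typically samples `nvidia-smi` on each node and forwards the values to the server. A small helper that turns `nvidia-smi` CSV output into input lines for `zabbix_sender -i -` might look like this — the `gpu.util[N]` item keys are a local naming convention assumed for illustration, not a Zabbix built-in:

```python
def parse_gpu_utilization(csv_text):
    """Parse the output of
    `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits`
    into a list of per-GPU utilization percentages."""
    return [int(line.strip()) for line in csv_text.splitlines() if line.strip()]

def zabbix_sender_lines(host, utilizations):
    """Format '<host> <key> <value>' lines as read by `zabbix_sender -i -`.
    The gpu.util[N] key scheme is an assumption for this sketch."""
    return [f"{host} gpu.util[{i}] {u}" for i, u in enumerate(utilizations)]
```

For example, `parse_gpu_utilization("87\n12\n")` yields `[87, 12]`, and the formatted lines can be piped straight into `zabbix_sender` on each node.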