Wang
What is High Performance Computing?
High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.
https://www.usgs.gov/core-science-systems/sas/arc/about/what-high-performance-computing
K8S Tools Sharing
Kubecost Core Architecture Overview
Kustomize
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.
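Customization is driven by a `kustomization.yaml` file that references the original manifests and declares the changes to layer on top. A minimal sketch, assuming a plain `deployment.yaml` sits in the same directory (the file names and label values here are hypothetical):

```yaml
# kustomization.yaml — overlay settings applied on top of the untouched base YAML
resources:
  - deployment.yaml      # original manifest, left template-free and usable as is
namePrefix: staging-     # prepend "staging-" to every resource name
commonLabels:
  env: staging           # add this label to all resources and selectors
```

Running `kustomize build .` (or `kubectl apply -k .`) emits the customized manifests without ever editing `deployment.yaml` itself.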
Alluxio
Data Locality: Bring your data close to compute. Make your data local to compute workloads for Spark caching, Presto caching, Hive caching, and more.
Data Accessibility: Make your data accessible. Whether it sits on-prem or in the cloud, HDFS or S3, make your files and objects accessible in many different ways.
Data On-Demand: Make your data as elastic as compute. Effortlessly orchestrate your data for compute in any cloud, even if data is spread across multiple clouds.
https://www.alluxio.io/
Slurm
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
https://slurm.schedmd.com/
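The three functions above (allocating nodes, launching work on them, and queueing pending jobs) come together in a batch script submitted with `sbatch`. A minimal sketch, assuming a partition named `debug` exists on the cluster (partition names vary per site):

```bash
#!/bin/bash
#SBATCH --job-name=hello          # job name shown by squeue
#SBATCH --nodes=2                 # allocate 2 compute nodes for the job's duration
#SBATCH --ntasks-per-node=4      # run 4 tasks on each allocated node
#SBATCH --time=00:10:00           # walltime limit; job waits in the queue until resources free up
#SBATCH --partition=debug         # partition name is an assumption; check sinfo on your cluster

srun hostname                     # srun starts and monitors the parallel job on the allocated nodes
```

Submitting this with `sbatch job.sh` places it in the pending queue; Slurm arbitrates contention and starts it once the requested nodes are available, which maps directly onto the three key functions described above.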