High-Performance Computing Applications in Edge Devices and Data Centers
Non-stationary equipment will increasingly use machine learning to adapt to changing environments. Examples include on-board analytics on drones, in-field diagnostics with medical instrumentation, and autonomous decision-making by factory robots. Power-efficient machine learning on these platforms requires specialized computing cores with multiple processing nodes and distributed embedded memory caches. To keep such designs flexible while maintaining high manufacturing yield, a heterogeneous mix of computing architectures will be implemented as chiplets integrated into a single package.
In more traditional high-performance computing applications, such as data centers and self-driving vehicles, computational density and power efficiency are critical concerns. Although power is less constrained here than at the edge, overall power costs and thermal management challenges continue to rise as the computational load of modern AI and machine learning grows. In these systems, the adoption of chiplets is driven primarily by the need for computational density, and power delivery and thermal management become key facets of the resulting highly integrated designs.
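To make the density and thermal argument concrete, the sketch below estimates package-level power density for a heterogeneous chiplet assembly. It is illustrative only: the Chiplet class, the package_power_density helper, and every area and power figure are hypothetical placeholders chosen to show the arithmetic, not values drawn from any particular product or from this overview.

```python
from dataclasses import dataclass

@dataclass
class Chiplet:
    """Hypothetical chiplet description: name, silicon area, and typical power draw."""
    name: str
    area_mm2: float
    power_w: float

def package_power_density(chiplets: list[Chiplet]) -> float:
    """Aggregate power density (W/cm^2) over the combined silicon area of all chiplets."""
    total_power_w = sum(c.power_w for c in chiplets)
    total_area_cm2 = sum(c.area_mm2 for c in chiplets) / 100.0  # 100 mm^2 = 1 cm^2
    return total_power_w / total_area_cm2

# Hypothetical heterogeneous package: CPU, ML accelerator, cache, and I/O chiplets.
package = [
    Chiplet("cpu",        area_mm2=150.0, power_w=95.0),
    Chiplet("ml_accel",   area_mm2=400.0, power_w=300.0),
    Chiplet("sram_cache", area_mm2=100.0, power_w=20.0),
    Chiplet("io_die",     area_mm2=80.0,  power_w=15.0),
]

print(f"Package power density: {package_power_density(package):.1f} W/cm^2")
```

Even with these placeholder numbers, the combined draw of a few hundred watts over a few square centimeters of silicon illustrates why power delivery and heat removal dominate the design of highly integrated chiplet packages.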