BrandPost: Use More Cores to Speed Execution


by Frank

It seems obvious, right? All those cores in that CPU, ready and waiting for your elegant, threaded, parallelized code to blast off, reducing execution time to a small fraction of what it used to be. Or that's the thought process. In reality, threading and parallelizing code can be challenging, which is why Intel has developed libraries to do the heavy lifting for you. One in particular that every C++ developer should have is the Threading Building Blocks (TBB) library, which provides a broad range of features not only for using multiple cores on a single Intel CPU but also for heterogeneous programming – distributing work across differing systems such as Intel Core, Xeon, and Xeon Phi, and even Arm and Power architecture CPUs, all of which can participate.

Requiring no special compiler support, this C++ library can make the seemingly impossible a reality, supporting compute-intensive workloads like finite element analysis, AI and automation, and medical applications by automagically mapping logical parallelism onto threads – making the best use of CPU resources with the least work on your part.

Unlike other threading packages, TBB lets you specify logical parallelism instead of threads, and includes generic parallel algorithms, concurrent containers, a scalable memory allocator, work-stealing task scheduler, and low-level synchronization primitives.
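As a minimal sketch of what that looks like in practice (assuming a recent TBB or oneTBB installation and a C++11 compiler; exact headers and build flags vary by version), the example below uses tbb::parallel_for over a blocked_range to scale every element of a vector. You describe the work to be done; TBB's work-stealing scheduler decides how to split the range and which threads run each chunk.

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> data(1'000'000, 1.0f);

    // Express the logical parallelism: "apply this to every index in [0, size)".
    // TBB splits the blocked_range into chunks and steals work between threads
    // to keep all cores busy; no explicit thread management is needed.
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0f;
        });

    std::cout << "data[0] = " << data[0] << "\n";  // prints 2
    return 0;
}

The same pattern extends to parallel_reduce, parallel_pipeline, and the concurrent containers, so the code you write stays focused on the algorithm rather than on thread bookkeeping.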

Intel offers TBB for free, and in an open-source version too. If you want priority support and a seamless out-of-the-box experience, you can get TBB as part of the Intel® Parallel Studio XE and Intel® System Studio suites, which include direct interaction with Intel engineers, responsive help with your technical questions, the ability to learn from other experts via community product forums, and access to a huge self-help library built from decades of experience. The result? Future-proof code that will scale with your system.

When big problems need more horsepower than a single machine can deliver, cluster computing may be the answer. In that case, you can extend your reach to cores on other systems with the help of Intel's MPI Library.

Designed for distributed-memory clusters, Intel® MPI Library supports Linux and Windows platforms running C, C++, or Fortran code, and leverages the MPI standard to scale your code across a compute cluster interconnected with a high-speed fabric, such as those built with Intel® Omni-Path Architecture. The result can be a parallel-computing performance boost built on an industry standard that scales forward as your platforms expand and take advantage of the latest multicore CPUs.
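To give a sense of the programming model, here is a minimal sketch using only standard MPI calls (nothing specific to Intel's implementation): it launches one process per rank and has each one report its position in the job. With Intel MPI you would typically build it with the mpiicpc compiler wrapper and launch it with mpirun, though exact tooling varies by version.

#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                 // start the MPI runtime for this process

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's ID within the job
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes in the job

    std::cout << "Hello from rank " << rank << " of " << size << "\n";

    MPI_Finalize();                         // shut down the MPI runtime
    return 0;
}

From there, ranks exchange data with point-to-point calls like MPI_Send and MPI_Recv or collectives like MPI_Bcast, which is how the same source scales from one node to an entire cluster.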

Like TBB, the Intel® MPI Library is available for free, or you can sign up for a license to get priority support.

How many cores is enough? You tell me.
