Computing On Many-Cores [call for collaboration]

Six years ago, I graduated from the University of Perpignan Via Domitia in France, where I worked with the DALI team of the LIRMM research laboratory. The team is now continuing its microarchitecture research in the Computing On Many Cores project.

This project presents an alternative way to parallelize programs, better suited to many-core processors than current OS/API-based approaches such as OpenMP and MPI. The method relies on parallelizing hardware and an adapted programming style, which together free and capture the Instruction Level Parallelism (ILP). A many-core design is presented in which cores are multithreaded and able to fork new threads. The programming style is based on functions: the hardware creates a concurrent thread at each function call. The programming style and the hardware together free the ILP by eliminating the architectural dependences between a call and its continuation after return.

The DALI group illustrates the method on a sum reduction, a matrix multiplication and a sort. They measure the ILP of the parallel runs and show that it is high enough to feed thousands of cores, because it increases with data size. They compare the method to pthread parallelization, showing that (i) parallel execution is deterministic, (ii) thread management is cheap, (iii) the parallelism is implicit, and (iv) the method parallelizes both functions and loops. Implicit parallelism makes parallel code easy to write and read; deterministic parallel execution makes parallel code easy to debug.

If you are interested in working on this project (ideas, collaboration, resources, ...), please contact the DALI team:

Dali Lab

Team working on the Computing On Many Cores project: Bernard Goossens, David Parello, Katarzyna Porada and Djallal Rahmoune

Papers: "Toward a Core Design to Distribute an Execution on a Manycore Processor" and "Parallel Locality and Parallelization Quality" by Bernard Goossens et al.


