Meetings
Meeting of 25 January 2007
The meeting will take place at the ENS de Lyon, in amphitheater B. Getting to the ENS.
Schedule
- 09H30 -> 10H00 : Welcome
- : Random generation of graphs for benchmarking, by Matthieu Perotin (LI-tours)
- : Random generation of combinatorial structures, by Jean-Marc Vincent (LIG)
- 11H30 -> 13H00 : Lunch
- : Scheduling on heterogeneous machines for makespan and reliability, by Emmanuel Jeannot (LORIA) and Erik Saule (LIG)
- : Mapping pipeline skeletons on heterogeneous platforms, by Anne Benoit and Yves Robert (LIP)
- : Break
- : Performance of truthful algorithms for scheduling selfish tasks, by Fanny Pascual (LIG)
- : Matrix multiplication on master-worker platforms, by Jean-Francois Pineau (LIP)
- 16H00 -> 17H00 : Discussion
Details
Welcome
Random generation of graphs for benchmarking
- Slide Show (PDF)
Random generation of combinatorial structures
Lunch
Scheduling on heterogeneous machines for makespan and reliability
- Slide Show (PDF)
Mapping pipeline skeletons on heterogeneous platforms
Mapping applications onto parallel platforms is a challenging problem that becomes even more difficult when platforms are heterogeneous, which is nowadays the standard assumption. A high-level approach to parallel programming not only eases the application developer's task, but also provides additional information that can help realize an efficient mapping of the application.
In this talk, we discuss the mapping of pipeline skeletons onto different types of platforms: fully homogeneous platforms with identical processors and interconnection links; communication-homogeneous platforms, with identical links but processors of different speeds; and finally, heterogeneous platforms. We assume that a pipeline stage must be mapped onto a single processor, and we establish new theoretical complexity results for two different mapping policies: a mapping can be either one-to-one (a processor is assigned at most one stage) or interval-based (a processor is assigned an interval of consecutive stages). We provide several efficient polynomial heuristics for the most important policy/platform combination, namely interval-based mappings on communication-homogeneous platforms.
- Slide Show (PDF)
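As a minimal sketch of the interval-based policy described above (not taken from the talk, and deliberately ignoring communication costs and processor speed differences), the following dynamic program partitions a pipeline into at most p intervals of consecutive stages so as to minimize the period, i.e. the largest interval load. The stage weights `w` and the example values are made up for illustration.

```python
# Illustrative sketch: interval-based mapping of a pipeline onto p
# identical processors, ignoring communication costs. Stage j has
# computation weight w[j]; an interval mapping partitions the stages
# into at most p intervals of consecutive stages, and the period is
# the largest interval load.

from functools import lru_cache

def best_period(w, p):
    """Minimal achievable period when splitting stages w[0..n-1]
    into at most p intervals of consecutive stages."""
    n = len(w)
    prefix = [0]
    for x in w:
        prefix.append(prefix[-1] + x)   # prefix sums for O(1) interval loads

    @lru_cache(maxsize=None)
    def solve(i, k):
        # minimal period for stages i..n-1 using at most k processors
        if i == n:
            return 0.0
        if k == 1:
            return prefix[n] - prefix[i]  # one processor takes everything left
        best = float("inf")
        for j in range(i + 1, n + 1):     # first interval = stages i..j-1
            load = prefix[j] - prefix[i]
            best = min(best, max(load, solve(j, k - 1)))
        return best

    return solve(0, p)

print(best_period([2, 4, 1, 3, 5], 3))  # -> 6, e.g. intervals [2,4] [1,3] [5]
```

The heuristics discussed in the talk target the harder communication-homogeneous case; this sketch only conveys what an interval-based mapping optimizes.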
Break
Performance of truthful algorithms for scheduling selfish tasks
This talk deals with problems which fall into the domain of selfish scheduling: a protocol is in charge of building a schedule for a set of tasks without directly knowing their lengths. The protocol obtains this information from the agents who control the tasks. The aim of each agent is to minimize the completion time of her task, while the protocol tries to minimize the maximal completion time. When an agent reports the length of her task, she is aware of what the others bid and also of the protocol's algorithm. An agent can therefore bid a false value in order to optimize her individual objective function. With erroneous information, even the most efficient algorithm may produce unreasonable solutions. An algorithm is truthful if it prevents the selfish agents from lying about the length of their tasks. The central question in this talk is: "How efficient can a truthful algorithm be?" We study the problem of scheduling selfish tasks on parallel identical machines. Without considering side payments, our goal is to give a picture of the performance under the condition of truthfulness.
- Slide Show (PPT)
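To make the incentive issue above concrete, here is a toy simulation (my own illustration, not from the talk; the talk's precise model may differ). Tasks are placed greedily on the least-loaded of m identical machines in order of decreasing *declared* length, LPT-style, but each task really occupies the machine for its *true* length. An agent can then finish earlier by overstating her length, which is exactly the kind of manipulation a truthful algorithm must rule out. All task lengths and the agent index are made-up example values.

```python
# Illustrative sketch: why list scheduling by decreasing declared length
# (LPT) is not truthful. The protocol orders tasks by their *declared*
# lengths, but each task really runs for its *true* length; an agent's
# cost is her own completion time.

def lpt_completion(true_len, bids, agent, m):
    """Completion time of `agent` when tasks are placed greedily on the
    least-loaded of m machines, in order of decreasing bids."""
    order = sorted(range(len(bids)), key=lambda t: -bids[t])
    load = [0.0] * m
    finish = {}
    for t in order:
        i = min(range(m), key=lambda j: load[j])  # least-loaded machine
        load[i] += true_len[t]                    # real execution time
        finish[t] = load[i]
    return finish[agent]

true_len = [4, 3, 3, 2, 1]   # task 4 is the selfish agent (true length 1)

honest = lpt_completion(true_len, bids=[4, 3, 3, 2, 1], agent=4, m=2)
lying  = lpt_completion(true_len, bids=[4, 3, 3, 2, 9], agent=4, m=2)
print(honest, lying)  # -> 7.0 1.0: overbidding moved the agent first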
Matrix multiplication on master-worker platforms
This work is aimed at designing efficient parallel matrix-product algorithms for heterogeneous master-worker platforms. There are three key hypotheses that render our work original and innovative:
- We assume that all matrix files originate from, and must be returned to, the master. The master distributes both data and computations to the workers (while in ScaLAPACK, input and output matrices are initially distributed among participating resources). Typically, our approach is useful in the context of speeding up MATLAB or SCILAB clients running on a server.
- We target fully heterogeneous platforms, where computational resources have different computing powers. Also, the workers are connected to the master by links of different capacities. This framework is realistic when deploying the application from the server, which is responsible for enrolling authorized resources.
- Because we investigate the parallelization of large problems, we cannot assume that full matrix panels can be stored in the worker memories and re-used for subsequent updates. The amount of memory available in each worker is expressed as a given number $m_i$ of buffers, where a buffer can store a square block of matrix elements. The size of these square blocks is chosen so as to harness the power of Level 3 BLAS routines.
- Slide Show (PDF)
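To illustrate the buffer model in the last hypothesis (my own sketch, not the algorithm from the talk), the snippet below shows a worker computing one square block of C = A x B with only three block buffers: the C block stays resident while blocks of A and B are streamed in one at a time. The block size q would in practice be chosen so each block update is a Level 3 BLAS call; plain Python loops stand in for it here, and all names are illustrative.

```python
# Illustrative sketch: a worker with a few square-block buffers computing
# one block of C = A x B. The C block stays resident; one block of A and
# one block of B are streamed in per step, so 3 buffers suffice whatever
# the matrix size. Matrices are n x n with n = n_blocks * q.

def matmul_block(A, B, i, j, n_blocks, q):
    """Compute block C[i][j] of the product using three block buffers."""
    c_buf = [[0.0] * q for _ in range(q)]                     # resident C buffer
    for k in range(n_blocks):
        a_buf = [row[k*q:(k+1)*q] for row in A[i*q:(i+1)*q]]  # stream in A(i,k)
        b_buf = [row[j*q:(j+1)*q] for row in B[k*q:(k+1)*q]]  # stream in B(k,j)
        for r in range(q):                                    # block update:
            for c in range(q):                                # C(i,j) += A(i,k) B(k,j)
                c_buf[r][c] += sum(a_buf[r][t] * b_buf[t][c] for t in range(q))
    return c_buf
```

With $m_i$ buffers a worker can instead keep several C blocks resident and amortize each streamed A or B block over more updates, which is the trade-off the talk's algorithms exploit.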