Merge branch 'master' of https://gitlab.inria.fr/starpu/starpu
Maxime Gonthier committed Apr 3, 2024
2 parents a7b808a + 8a373cc commit 03d42ce
Showing 15 changed files with 103 additions and 67 deletions.
10 changes: 5 additions & 5 deletions configure.ac
@@ -159,18 +159,18 @@ AC_MSG_RESULT($enable_prof_tool)

###############################################################################
# #
# Hierarchical dags support #
# Recursive tasks support #
# #
###############################################################################

AC_ARG_ENABLE(bubble, [AS_HELP_STRING([--enable-bubble],
[build the hierarchical dags (a.k.a bubble) support])],
[build the recursive tasks (a.k.a bubble) support])],
enable_bubble=$enableval, enable_bubble=no)

AC_MSG_CHECKING([for hierarchical dags - a.k.a bubble - support])
AC_MSG_CHECKING([for recursive tasks - a.k.a bubble - support])

if test x$enable_bubble = xyes; then
AC_DEFINE(STARPU_BUBBLE, [1], [Define this to enable hierarchical dags support])
AC_DEFINE(STARPU_BUBBLE, [1], [Define this to enable recursive tasks support])
fi

AM_CONDITIONAL([STARPU_BUBBLE], [test "x$enable_bubble" = "xyes"])
@@ -4548,7 +4548,7 @@ AC_MSG_NOTICE([
Native fortran support: $enable_build_fortran
Native MPI fortran support: $use_mpi_fort
Support for multiple linear regression models: $support_mlr
Hierarchical dags support: $enable_bubble
Recursive tasks support: $enable_bubble
JULIA enabled: $enable_julia
])

6 changes: 3 additions & 3 deletions doc/doxygen/chapters/api/bubble_support.doxy
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2017-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2017-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
@@ -18,8 +18,8 @@
* The file is empty but necessary to define the group API_Bubble
*/

/*! \defgroup API_Bubble Hierarchical Dags
/*! \defgroup API_Bubble Recursive Tasks

\brief API for Hierarchical DAGS
\brief API for Recursive Tasks

*/
14 changes: 7 additions & 7 deletions doc/doxygen/chapters/starpu_extensions/bubble.doxy
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2017-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2017-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
@@ -14,7 +14,7 @@
* See the GNU Lesser General Public License in COPYING.LGPL for more details.
*/

/*! \page HierarchicalDAGS Hierarchical DAGS
/*! \page RecursiveTasks Recursive Tasks

The STF model has the intrinsic limitation of supporting static task
graphs only, which leads to potential submission overhead and to a
@@ -23,11 +23,11 @@ heterogeneous systems.

To address these problems, we have extended the STF model to enable
tasks subgraphs at runtime. We refer to these tasks as
<em>hierarchical tasks</em>. This approach allows for a more dynamic
<em>recursive tasks</em>. This approach allows for a more dynamic
task graph. This allows to dynamically adapt the granularity to meet
the optimal size of the targeted computing resource.

<em>Hierarchical tasks</em> are tasks that can transform themselves into
<em>Recursive tasks</em> are tasks that can transform themselves into
a new task-graph dynamically at runtime. Programmers submit a coarse
version of the DAG, called the bubbles graph, which represents the
general shape of the application tasks graph. The execution of this
@@ -42,7 +42,7 @@ tasks.

\section BubblesExamples An Example

In order to understand the hierarchical tasks model, an example of
In order to understand the recursive tasks model, an example of
"bubblification" is shown here. We start from a simple example,
multiplying the elements of a vector.

@@ -168,7 +168,7 @@ int vector_bubble()

The full example is available in the file <c>bubble/tests/vector/vector.c</c>.

To define a hierarchical task, one needs to define the fields
To define a recursive task, one needs to define the fields
starpu_codelet::bubble_func and starpu_codelet::bubble_gen_dag_func.

The field starpu_codelet::bubble_func is a pointer function which will
@@ -188,7 +188,7 @@ as parameter the task being checked, and the value specified with
::STARPU_BUBBLE_FUNC_ARG.

When executed, the function starpu_codelet::bubble_gen_dag_func will be
given as parameter the task being turned into a hierarchical task and
given as parameter the task being turned into a recursive task and
the value specified with ::STARPU_BUBBLE_GEN_DAG_FUNC_ARG.

An example involving these functions is in <c>bubble/tests/basic/brec.c</c>. And more examples are available in <c>bubble/tests/basic/*.c</c>.
6 changes: 3 additions & 3 deletions doc/doxygen/chapters/starpu_extensions/extensions_intro.doxy
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2009-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
@@ -85,8 +85,8 @@ can transparently be run using StarPU, by givings unified access
to every available OpenCL device.
</li>
<li>
We propose a hierarchical tasks model in Chapter \ref
HierarchicalDAGS to enable tasks subgraphs at runtime for a more
We propose a recursive tasks model in Chapter \ref
RecursiveTasks to enable tasks subgraphs at runtime for a more
dynamic task graph.
</li>
<li>
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2009-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2020 Federal University of Rio Grande do Sul (UFRGS)
*
* StarPU is free software; you can redistribute it and/or modify
@@ -559,7 +559,7 @@ Enable LLVM OpenMP Support (\ref OMPLLVM)
<dd>
\anchor enable-bubble
\addindex __configure__--enable-bubble
Enable Hierarchical dags support (\ref HierarchicalDAGS)
Enable recursive tasks support (\ref RecursiveTasks)

<dt>--enable-parallel-worker</dt>
<dd>
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2009-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
@@ -76,7 +76,7 @@ The documentation chapters include
<li> \ref FaultTolerance
<li> \ref FFTSupport
<li> \ref SOCLOpenclExtensions
<li> \ref HierarchicalDAGS
<li> \ref RecursiveTasks
<li> \ref ParallelWorker
<li> \ref InteroperabilitySupport
<li> \ref SimGridSupport
8 changes: 4 additions & 4 deletions doc/doxygen/refman.tex
@@ -273,10 +273,10 @@ \chapter{SOCL OpenCL Extensions}
\hypertarget{SOCLOpenclExtensions}{}
\input{SOCLOpenclExtensions}

\chapter{Hierarchical DAGS}
\label{HierarchicalDAGS}
\hypertarget{HierarchicalDAGS}{}
\input{HierarchicalDAGS}
\chapter{Recursive Tasks}
\label{RecursiveTasks}
\hypertarget{RecursiveTasks}{}
\input{RecursiveTasks}

\chapter{Parallel Workers}
\label{ParallelWorker}
10 changes: 5 additions & 5 deletions doc/doxygen_web_extensions/refman.tex
@@ -1,6 +1,6 @@
% StarPU --- Runtime system for heterogeneous multicore architectures.
%
% Copyright (C) 2013-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
% Copyright (C) 2013-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
% Copyright (C) 2013 Simon Archipoff
%
% StarPU is free software; you can redistribute it and/or modify
@@ -103,10 +103,10 @@ \chapter{SOCL OpenCL Extensions}
\hypertarget{SOCLOpenclExtensions}{}
\input{SOCLOpenclExtensions}

\chapter{Hierarchical DAGS}
\label{HierarchicalDAGS}
\hypertarget{HierarchicalDAGS}{}
\input{HierarchicalDAGS}
\chapter{Recursive Tasks}
\label{RecursiveTasks}
\hypertarget{RecursiveTasks}{}
\input{RecursiveTasks}

\chapter{Parallel Workers}
\label{ParallelWorker}
20 changes: 10 additions & 10 deletions include/starpu_task.h
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009-2023 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2009-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2011 Télécom-SudParis
* Copyright (C) 2016 Uppsala University
*
@@ -210,13 +210,13 @@ typedef void (*starpu_opencl_func_t)(void **, void *);
typedef void (*starpu_max_fpga_func_t)(void **, void *);

/**
@ingroup API_Bubble Hierarchical Dags
@ingroup API_Bubble Recursive Tasks
Bubble decision function
*/
typedef int (*starpu_bubble_func_t)(struct starpu_task *t, void *arg);
typedef int (*starpu_bubble_func_t)(struct starpu_task *, void *);

/**
@ingroup API_Bubble Hierarchical Dags
@ingroup API_Bubble Recursive Tasks
Bubble DAG generation function
*/
typedef void (*starpu_bubble_gen_dag_func_t)(struct starpu_task *t, void *arg);
@@ -1423,31 +1423,31 @@ struct starpu_task
#endif

/**
When using hierarchical dags, the job identifier of the
When using recursive tasks, the job identifier of the
bubble task which created the current task
*/
unsigned long bubble_parent;

/**
When using hierarchical dags, a pointer to the bubble
When using recursive tasks, a pointer to the bubble
decision function
*/
starpu_bubble_func_t bubble_func;

/**
When using hierarchical dags, a pointer to an argument to
When using recursive tasks, a pointer to an argument to
be given when calling the bubble decision function
*/
void *bubble_func_arg;

/**
When using hierarchical dags, a pointer to the bubble
When using recursive tasks, a pointer to the bubble
DAG generation function
*/
starpu_bubble_gen_dag_func_t bubble_gen_dag_func;

/**
When using hierarchical dags, a pointer to an argument to
When using recursive tasks, a pointer to an argument to
be given when calling the bubble DAG generation function
*/
void *bubble_gen_dag_func_arg;
@@ -1538,7 +1538,7 @@ struct starpu_task
starpu_codelet::nbuffers, or starpu_task::nbuffers if the former is
\ref STARPU_VARIABLE_NBUFFERS.
*/
#define STARPU_TASK_GET_NBUFFERS(task) ((unsigned)((task)->cl->nbuffers == STARPU_VARIABLE_NBUFFERS ? ((task)->nbuffers) : ((task)->cl->nbuffers)))
#define STARPU_TASK_GET_NBUFFERS(task) ((unsigned) ( ((task)->cl) ? (((task)->cl->nbuffers == STARPU_VARIABLE_NBUFFERS) ? ((task)->nbuffers) : ((task)->cl->nbuffers)) : (task)->nbuffers))

/**
Return the \p i -th data handle of \p task. If \p task is defined
21 changes: 17 additions & 4 deletions mpi/tests/mpi_data_cpy.c
@@ -46,12 +46,19 @@ int main(int argc, char **argv)
int ret;
int value = 0;
starpu_data_handle_t *data;
struct starpu_conf conf;
int mpi_init;
int i;

MPI_INIT_THREAD(&argc, &argv, MPI_THREAD_SERIALIZED, &mpi_init);

ret = starpu_mpi_init_conf(&argc, &argv, mpi_init, MPI_COMM_WORLD, NULL);
starpu_conf_init(&conf);
starpu_conf_noworker(&conf);
conf.ncpus = -1;
conf.nmpi_ms = -1;
conf.ntcpip_ms = -1;

ret = starpu_mpi_init_conf(&argc, &argv, mpi_init, MPI_COMM_WORLD, &conf);
STARPU_CHECK_RETURN_VALUE(ret, "starpu_mpi_init_conf");

starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);
@@ -71,7 +78,8 @@ int main(int argc, char **argv)
{
int j;

starpu_mpi_task_insert(MPI_COMM_WORLD, &mycodelet, STARPU_RW, data[i%size], STARPU_VALUE, &rank, sizeof(rank), 0);
ret = starpu_mpi_task_insert(MPI_COMM_WORLD, &mycodelet, STARPU_RW, data[i%size], STARPU_VALUE, &rank, sizeof(rank), 0);
if (ret == -ENODEV) goto enodev;

for(j = 0; j<size; j++)
{
@@ -80,13 +88,18 @@
}

starpu_task_wait_for_all();

enodev:
for(i=0; i<size; i++)
{
starpu_data_unregister(data[i]);
}

FPRINTF_MPI(stderr, "value after calculation: %d (expected %d)\n", value, INC_COUNT);
STARPU_ASSERT_MSG(value == INC_COUNT, "[rank %d] value %d is not the expected value %d\n", rank, value, INC_COUNT);
if (ret == 0)
{
FPRINTF_MPI(stderr, "value after calculation: %d (expected %d)\n", value, INC_COUNT);
STARPU_ASSERT_MSG(value == INC_COUNT, "[rank %d] value %d is not the expected value %d\n", rank, value, INC_COUNT);
}

starpu_mpi_shutdown();

14 changes: 10 additions & 4 deletions src/core/perfmodel/energy_model.c
@@ -1,6 +1,6 @@
/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2008-2022 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2008-2024 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
@@ -146,15 +146,18 @@ int starpu_energy_start(int workerid STARPU_ATTRIBUTE_UNUSED, enum starpu_worker
#ifdef HAVE_NVMLDEVICEGETTOTALENERGYCONSUMPTION
case STARPU_CUDA_WORKER:
{
if (!_starpu_nvmlDeviceGetHandleByIndex || !_starpu_nvmlDeviceGetTotalEnergyConsumption)
return -1;

STARPU_ASSERT_MSG(workerid != -1, "For CUDA GPUs we measure each GPU separately, please specify a worker\n");
int devid = starpu_worker_get_devid(workerid);
int ret = nvmlDeviceGetHandleByIndex_v2(devid, &device);
int ret = _starpu_nvmlDeviceGetHandleByIndex(devid, &device);
if (ret != NVML_SUCCESS)
{
_STARPU_DISP("Could not get CUDA device %d from nvml\n", devid);
return -1;
}
ret = nvmlDeviceGetTotalEnergyConsumption(device, &energy_begin);
ret = _starpu_nvmlDeviceGetTotalEnergyConsumption(device, &energy_begin);
if (ret != NVML_SUCCESS)
{
_STARPU_DISP("Could not measure energy used by CUDA device %d\n", devid);
@@ -225,8 +228,11 @@ int starpu_energy_stop(struct starpu_perfmodel *model, struct starpu_task *task,
#ifdef HAVE_NVMLDEVICEGETTOTALENERGYCONSUMPTION
case STARPU_CUDA_WORKER:
{
if (!_starpu_nvmlDeviceGetTotalEnergyConsumption)
return -1;

STARPU_ASSERT_MSG(workerid != -1, "For CUDA GPUs we measure each GPU separately, please specify a worker\n");
int ret = nvmlDeviceGetTotalEnergyConsumption(device, &energy_end);
int ret = _starpu_nvmlDeviceGetTotalEnergyConsumption(device, &energy_end);
if (ret != NVML_SUCCESS)
return -1;
energy = (energy_end - energy_begin) / 1000.;
4 changes: 3 additions & 1 deletion src/core/perfmodel/perfmodel_bus.c
@@ -51,6 +51,8 @@
#ifdef STARPU_HAVE_HWLOC
#include <hwloc.h>
#ifdef STARPU_HAVE_NVML_H
#define nvmlDeviceGetPciInfo _starpu_nvmlDeviceGetPciInfo
#define nvmlDeviceGetUUID _starpu_nvmlDeviceGetUUID
#include <hwloc/nvml.h>
#endif
#ifndef HWLOC_API_VERSION
@@ -2090,7 +2092,7 @@ static hwloc_obj_t get_hwloc_cuda_obj(hwloc_topology_t topology, unsigned devid)
#if defined(STARPU_HAVE_NVML_H) && !defined(STARPU_USE_CUDA0) && !defined(STARPU_USE_CUDA1)
nvmlDevice_t nvmldev = _starpu_cuda_get_nvmldev(&props);

if (nvmldev && _starpu_nvmlDeviceGetIndex)
if (nvmldev && _starpu_nvmlDeviceGetIndex && _starpu_nvmlDeviceGetPciInfo && _starpu_nvmlDeviceGetUUID)
{
unsigned int index;
if (_starpu_nvmlDeviceGetIndex(nvmldev, &index) == NVML_SUCCESS)
