diff --git a/.github/workflows/linting.yml b/.github/workflows/linting.yml
index 46cf268483..bdeab2abdd 100644
--- a/.github/workflows/linting.yml
+++ b/.github/workflows/linting.yml
@@ -17,4 +17,4 @@ on:
jobs:
call-workflow-passing-data:
name: Documentation
- uses: ROCm/rocm-docs-core/.github/workflows/linting.yml@develop
+ uses: ROCm/rocm-docs-core/.github/workflows/linting.yml@local_spellcheck
diff --git a/.gitignore b/.gitignore
index 6bdb3a4030..b918919f24 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,5 @@
.*
+!.spellcheck.local.yaml
!.gitignore
*.o
*.exe
diff --git a/.spellcheck.local.yaml b/.spellcheck.local.yaml
new file mode 100644
index 0000000000..b82d3b9374
--- /dev/null
+++ b/.spellcheck.local.yaml
@@ -0,0 +1,10 @@
+matrix:
+- name: Markdown
+ sources:
+ - []
+- name: reST
+ sources:
+ - []
+- name: Cpp
+ sources:
+ - [ 'include/hip/*' ]
diff --git a/.wordlist.txt b/.wordlist.txt
index b3b8686678..2746b336b7 100644
--- a/.wordlist.txt
+++ b/.wordlist.txt
@@ -7,16 +7,19 @@ APUs
AQL
AXPY
asm
-Asynchrony
+asynchrony
backtrace
Bitcode
bitcode
bitcodes
+blockDim
+blockIdx
builtins
Builtins
CAS
clr
compilable
+constexpr
coroutines
Ctx
cuBLASLt
@@ -51,6 +54,7 @@ FNUZ
fp
gedit
GPGPU
+gridDim
GROMACS
GWS
hardcoded
@@ -87,6 +91,7 @@ iteratively
Lapack
latencies
libc
+libhipcxx
libstdc
lifecycle
linearizing
@@ -97,6 +102,7 @@ makefile
Malloc
malloc
MALU
+maxregcount
MiB
memset
multicore
@@ -118,6 +124,7 @@ overindexing
oversubscription
overutilized
parallelizable
+parallelized
pixelated
pragmas
preallocated
@@ -125,6 +132,7 @@ preconditioners
predefining
prefetched
preprocessor
+printf
profilers
PTX
PyHIP
@@ -149,10 +157,12 @@ sinewave
SOMA
SPMV
structs
+struct's
SYCL
syntaxes
texel
texels
+threadIdx
tradeoffs
templated
toolkits
@@ -167,5 +177,6 @@ unregister
upscaled
variadic
vulkan
+warpSize
WinGDB
zc
diff --git a/LICENSE.txt b/LICENSE.txt
index 797310b44b..a8d7060d44 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1,4 +1,4 @@
-Copyright (c) 2008 - 2024 Advanced Micro Devices, Inc.
+Copyright (c) 2008 - 2025 Advanced Micro Devices, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -17,4 +17,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
-
diff --git a/README.md b/README.md
index 610b2a89c7..32031be961 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,9 @@ Key features include:
New projects can be developed directly in the portable HIP C++ language and can run on either NVIDIA or AMD platforms. Additionally, HIP provides porting tools which make it easy to port existing CUDA codes to the HIP layer, with no loss of performance as compared to the original CUDA application. HIP is not intended to be a drop-in replacement for CUDA, and developers should expect to do some manual coding and performance tuning work to complete the port.
+> [!NOTE]
+> The published documentation is available at [HIP documentation](https://rocm.docs.amd.com/projects/HIP/en/latest/index.html) in an organized, easy-to-read format, with search and a table of contents. The documentation source files reside in the `HIP/docs` folder of this GitHub repository. As with all ROCm projects, the documentation is open source. For more information on contributing to the documentation, see [Contribute to ROCm documentation](https://rocm.docs.amd.com/en/latest/contribute/contributing.html).
+
## DISCLAIMER
The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions, and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard versionchanges, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. Any computer system has risks of security vulnerabilities that cannot be completely prevented or mitigated.AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes.THIS INFORMATION IS PROVIDED ‘AS IS.” AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS, OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO ANY PERSON FOR ANY RELIANCE, DIRECT, INDIRECT, SPECIAL, OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
@@ -124,19 +127,9 @@ provides source portability to either platform. HIP provides the _hipcc_ compi
## Examples and Getting Started
-* A sample and [blog](https://github.com/ROCm/hip-tests/tree/develop/samples/0_Intro/square) that uses any of [HIPIFY](https://github.com/ROCm/HIPIFY/blob/amd-staging/README.md) tools to convert a simple app from CUDA to HIP:
-
- ```shell
- cd samples/01_Intro/square
- # follow README / blog steps to hipify the application.
- ```
-
-* Guide to [Porting a New Cuda Project](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_porting_guide.html#porting-a-new-cuda-project)
-
-## More Examples
+* The [ROCm-examples](https://github.com/ROCm/rocm-examples) repository includes many examples with explanations that help users get started with HIP, and also provides advanced examples for HIP and its libraries.
-The GitHub repository [HIP-Examples](https://github.com/ROCm/HIP-Examples) contains a hipified version of benchmark suite.
-Besides, there are more samples in Github [HIP samples](https://github.com/ROCm/hip-tests/tree/develop/samples), showing how to program with different features, build and run.
+* HIP's documentation includes a guide for [Porting a New Cuda Project](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_porting_guide.html#porting-a-new-cuda-project).
## Tour of the HIP Directories
diff --git a/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio
new file mode 100644
index 0000000000..2ea9376cf3
--- /dev/null
+++ b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.drawio
@@ -0,0 +1,274 @@
diff --git a/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg
new file mode 100644
index 0000000000..fe52799858
--- /dev/null
+++ b/docs/data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg
@@ -0,0 +1,2 @@
diff --git a/docs/faq.rst b/docs/faq.rst
index 2867cf06e7..5e67ec465d 100644
--- a/docs/faq.rst
+++ b/docs/faq.rst
@@ -47,7 +47,7 @@ The :doc:`HIP API documentation ` describes each API and
its limitations, if any, compared with the equivalent CUDA API.
The kernel language features are documented in the
-:doc:`/reference/cpp_language_extensions` page.
+:doc:`/how-to/hip_cpp_language_extensions` page.
Relation to other GPGPU frameworks
==================================
diff --git a/docs/how-to/debugging_env.rst b/docs/how-to/debugging_env.rst
index 7b3204143d..b3544a967f 100644
--- a/docs/how-to/debugging_env.rst
+++ b/docs/how-to/debugging_env.rst
@@ -1,3 +1,7 @@
+.. meta::
+ :description: Debug environment variables for HIP.
+ :keywords: AMD, ROCm, HIP, debugging, Environment variables, ROCgdb
+
.. list-table::
:header-rows: 1
:widths: 35,14,51
diff --git a/docs/how-to/hip_cpp_language_extensions.rst b/docs/how-to/hip_cpp_language_extensions.rst
new file mode 100644
index 0000000000..6b18bd01e3
--- /dev/null
+++ b/docs/how-to/hip_cpp_language_extensions.rst
@@ -0,0 +1,922 @@
+.. meta::
+ :description: This chapter describes the built-in variables and functions that
+ are accessible from HIP kernels and HIP's C++ support. It's
+ intended for users who are familiar with CUDA kernel syntax and
+ want to learn how HIP differs from CUDA.
+ :keywords: AMD, ROCm, HIP, CUDA, c++ language extensions, HIP functions
+
+################################################################################
+HIP C++ language extensions
+################################################################################
+
+HIP extends the C++ language with additional features designed for programming
+heterogeneous applications. These extensions mostly relate to the kernel
+language, but some can also be applied to host functionality.
+
+********************************************************************************
+HIP qualifiers
+********************************************************************************
+
+Function-type qualifiers
+================================================================================
+
+HIP introduces three different function qualifiers to mark functions for
+execution on the device or the host, and also adds new qualifiers to control
+inlining of functions.
+
+.. _host_attr:
+
+__host__
+--------------------------------------------------------------------------------
+
+The ``__host__`` qualifier is used to specify functions for execution
+on the host. It is implicitly assumed for any function that has no
+``__host__``, ``__device__`` or ``__global__`` qualifier, so existing C++
+functions remain valid host code.
+
+You can't combine ``__host__`` with ``__global__``.
+
+__device__
+--------------------------------------------------------------------------------
+
+The ``__device__`` qualifier is used to specify functions for execution on the
+device. They can only be called from other ``__device__`` functions or from
+``__global__`` functions.
+
+You can combine it with the ``__host__`` qualifier and mark functions
+``__host__ __device__``. In this case, the function is compiled for the host and
+the device. Note that these functions can't use the HIP built-ins (e.g.,
+:ref:`threadIdx.x ` or :ref:`warpSize `), as
+they are not available on the host. If you need to use HIP grid coordinate
+functions, you can pass the necessary coordinate information as an argument.
+
+__global__
+--------------------------------------------------------------------------------
+
+Functions marked ``__global__`` are executed on the device and are referred to
+as kernels. Their return type must be ``void``. Kernels have a special launch
+mechanism, and have to be launched from the host.
+
+There are some restrictions on the parameters of kernels. Kernels can't:
+
+* have a parameter of type ``std::initializer_list`` or ``va_list``
+* have a variable number of arguments
+* use references as parameters
+* use parameters that have different sizes in host and device code, e.g. ``long double`` arguments, or structs containing ``long double`` members
+* use struct-type arguments whose layout differs between host and device code
+
+Kernels can have variadic template parameters, but only one parameter pack,
+which must be the last item in the template parameter list.
+
+.. note::
+ Unlike CUDA, HIP does not support dynamic parallelism, meaning that kernels
+ cannot be called from the device.
+
+Calling __global__ functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The launch mechanism for kernels differs from ordinary function calls: a launch
+requires an additional configuration that specifies the grid and block
+dimensions (i.e. the number of threads to launch), the amount of dynamic shared
+memory per block, and the stream on which to execute the kernel.
+
+Kernels are called using the triple chevron ``<<<>>>`` syntax known from CUDA,
+but HIP also supports the ``hipLaunchKernelGGL`` macro.
+
+When using ``hipLaunchKernelGGL``, the first five configuration parameters must
+be:
+
+* ``symbol kernelName``: The name of the kernel you want to launch. To support
+ template kernels that contain several template parameters separated by commas,
+ use the ``HIP_KERNEL_NAME`` macro to wrap the template instantiation
+ (:doc:`HIPIFY ` inserts this automatically).
+* ``dim3 gridDim``: 3D grid dimensions specifying the number of blocks to
+ launch.
+* ``dim3 blockDim``: 3D block dimensions specifying the number of threads in
+ each block.
+* ``size_t dynamicShared``: The amount of additional shared dynamic memory to
+ allocate per block.
+* ``hipStream_t``: The stream on which to run the kernel. A value of ``0``
+ corresponds to the default stream.
+
+The kernel arguments are listed after the configuration parameters.
+
+.. code-block:: cpp
+
+ #include <hip/hip_runtime.h>
+ #include <iostream>
+
+ #define HIP_CHECK(expression) \
+ { \
+ const hipError_t err = expression; \
+ if(err != hipSuccess){ \
+ std::cerr << "HIP error: " << hipGetErrorString(err) \
+ << " at " << __LINE__ << "\n"; \
+ } \
+ }
+
+ // Performs a simple initialization of an array with the thread's index variables.
+ // This function is only available in device code.
+ __device__ void init_array(float * const a, const unsigned int arraySize){
+ // globalIdx uniquely identifies a thread in a 1D launch configuration.
+ const int globalIdx = threadIdx.x + blockIdx.x * blockDim.x;
+ // Each thread initializes a single element of the array.
+ if(globalIdx < arraySize){
+ a[globalIdx] = globalIdx;
+ }
+ }
+
+ // Rounds a value up to the next multiple.
+ // This function is available in host and device code.
+ __host__ __device__ constexpr int round_up_to_nearest_multiple(int number, int multiple){
+ return (number + multiple - 1)/multiple;
+ }
+
+ __global__ void example_kernel(float * const a, const unsigned int N)
+ {
+ // Initialize array.
+ init_array(a, N);
+ // Perform additional work:
+ // - work with the array
+ // - use the array in a different kernel
+ // - ...
+ }
+
+ int main()
+ {
+ constexpr int N = 100000000; // problem size
+ constexpr int blockSize = 256; // configurable block size
+
+ // number of blocks needed for the given problem size
+ constexpr int gridSize = round_up_to_nearest_multiple(N, blockSize);
+
+ float *a;
+ // allocate memory on the GPU
+ HIP_CHECK(hipMalloc(&a, sizeof(*a) * N));
+
+ std::cout << "Launching kernel." << std::endl;
+ example_kernel<<<gridSize, blockSize>>>(a, N);
+ // make sure kernel execution is finished by synchronizing. The CPU can also
+ // execute other instructions during that time
+ HIP_CHECK(hipDeviceSynchronize());
+ std::cout << "Kernel execution finished." << std::endl;
+
+ HIP_CHECK(hipFree(a));
+ }
+
+Inline qualifiers
+--------------------------------------------------------------------------------
+
+HIP adds the ``__noinline__`` and ``__forceinline__`` function qualifiers.
+
+``__noinline__`` is a hint to the compiler not to inline the function, whereas
+``__forceinline__`` forces the compiler to inline the function. These qualifiers
+can be applied to both ``__host__`` and ``__device__`` functions.
+
+``__noinline__`` and ``__forceinline__`` cannot be combined.
+
+__launch_bounds__
+--------------------------------------------------------------------------------
+
+GPU multiprocessors have a fixed pool of resources (primarily registers and
+shared memory) which are shared by the actively running warps. Using more
+resources per thread can increase executed instructions per cycle but reduces
+the resources available for other warps and may therefore limit the occupancy,
+i.e. the number of warps that can be executed simultaneously. Thus GPUs have to
+balance resource usage between instruction- and thread-level parallelism.
+
+``__launch_bounds__`` allows the application to provide hints that influence the
+resource usage (primarily registers) of the generated code. It is a function
+attribute that must be attached to a ``__global__`` function:
+
+.. code-block:: cpp
+
+ __global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_WARPS_PER_EXECUTION_UNIT)
+ kernel_name(/*args*/);
+
+The ``__launch_bounds__`` parameters are explained in the following sections:
+
+MAX_THREADS_PER_BLOCK
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This parameter is a guarantee from the programmer that the kernel will not be
+launched with more threads than ``MAX_THREADS_PER_BLOCK``.
+
+If no ``__launch_bounds__`` are specified, ``MAX_THREADS_PER_BLOCK`` is
+the maximum block size supported by the device (see
+:doc:`../reference/hardware_features`). Reducing ``MAX_THREADS_PER_BLOCK``
+allows the compiler to use more resources per thread than an unconstrained
+compilation. This might however reduce the amount of blocks that can run
+concurrently on a CU, thereby reducing occupancy and trading thread-level
+parallelism for instruction-level parallelism.
+
+``MAX_THREADS_PER_BLOCK`` is particularly useful when the compiler would
+otherwise be constrained by register usage to support large block sizes that
+are never actually used at launch time.
+
+The compiler can only use these hints to manage register usage, and does not
+automatically reduce shared memory usage. Compilation fails if the compiler
+cannot generate code that satisfies the launch bounds.
+
+On NVCC this parameter maps to the ``.maxntid`` PTX directive.
+
+When launching kernels, HIP validates the launch configuration to make sure the
+requested block size is not larger than ``MAX_THREADS_PER_BLOCK`` and returns
+an error if it is.
+
+If :doc:`AMD_LOG_LEVEL <./logging>` is set, detailed information will be shown
+in the error log message, including the launch configuration of the kernel and
+the specified ``__launch_bounds__``.
+
+MIN_WARPS_PER_EXECUTION_UNIT
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This parameter specifies the minimum number of warps that must be able to run
+concurrently on an execution unit.
+``MIN_WARPS_PER_EXECUTION_UNIT`` is optional and defaults to 1 if not specified.
+Since active warps compete for the same fixed pool of resources, the compiler
+must constrain the resource usage of the warps. This option gives a lower
+bound on the occupancy of the kernel.
+
+From this parameter, the compiler derives the maximum number of registers that
+can be used in the kernel:
+:math:`\frac{\text{available registers}}{\text{MIN_WARPS_PER_EXECUTION_UNIT}}`,
+though there might be further architecture-specific restrictions.
+
+The available registers per Compute Unit are listed in
+:doc:`rocm:reference/gpu-arch-specs`. Beware that these values are per Compute
+Unit, not per Execution Unit. On AMD GPUs a Compute Unit consists of 4 Execution
+Units, also known as SIMDs, each with their own register file. For more
+information see :doc:`../understand/hardware_implementation`.
+:cpp:struct:`hipDeviceProp_t` also has a field ``executionUnitsPerMultiprocessor``.
+
+Porting from CUDA __launch_bounds__
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+CUDA also defines a ``__launch_bounds__`` qualifier which works similar to HIP's
+implementation, however it uses different parameters:
+
+.. code-block:: cpp
+
+ __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR)
+
+The first parameter is the same as HIP's implementation, but
+``MIN_BLOCKS_PER_MULTIPROCESSOR`` must be converted to
+``MIN_WARPS_PER_EXECUTION_UNIT``, which uses warps and execution units rather
+than blocks and multiprocessors. This conversion is performed automatically by
+:doc:`HIPIFY `, or can be done manually with the following
+equation.
+
+.. code-block:: cpp
+
+ MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / warpSize
+
+Directly controlling the warps per execution unit makes it easier to reason
+about the occupancy, unlike with blocks, where the occupancy depends on the
+block size.
+
+The use of execution units rather than multiprocessors also provides support for
+architectures with multiple execution units per multiprocessor. For example, the
+AMD GCN architecture has 4 execution units per multiprocessor.
+
+maxregcount
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+Unlike ``nvcc``, ``amdclang++`` does not support the ``--maxregcount`` option.
+Instead, users are encouraged to use the ``__launch_bounds__`` directive since
+the parameters are more intuitive and portable than micro-architecture details
+like registers. The directive allows per-kernel control.
+
+Memory space qualifiers
+================================================================================
+
+HIP adds qualifiers to specify the memory space in which the variables are
+located.
+
+Generally, variables allocated in host memory are not directly accessible within
+device code, while variables allocated in device memory are not directly
+accessible from the host code. More details on this can be found in
+:ref:`unified_memory`.
+
+__device__
+--------------------------------------------------------------------------------
+
+Variables marked with ``__device__`` reside in device memory. This qualifier
+can be combined with one of the following qualifiers, each of which also
+implies ``__device__``.
+
+By default, such variables can only be accessed from threads on the device. To
+access them from the host, query the address and size using
+:cpp:func:`hipGetSymbolAddress` and :cpp:func:`hipGetSymbolSize`, and copy data
+with :cpp:func:`hipMemcpyToSymbol` or :cpp:func:`hipMemcpyFromSymbol`.
+
+__constant__
+--------------------------------------------------------------------------------
+
+Variables marked with ``__constant__`` reside in device memory. Variables in
+that address space are routed through the constant cache, but that address space
+has a limited logical size.
+This memory space is read-only from within kernels and can only be set by the
+host before kernel execution.
+
+To benefit from the constant cache, these variables need a specific access
+pattern: the access has to be uniform within a warp, otherwise the accesses
+are serialized.
+
+The constant cache reduces the pressure on the other caches and may enable
+higher throughput and lower latency accesses.
+
+To set the ``__constant__`` variables the host must copy the data to the device
+using :cpp:func:`hipMemcpyToSymbol`, for example:
+
+.. code-block:: cpp
+
+ __constant__ int const_array[8];
+
+ void set_constant_memory(){
+ int host_data[8] {1,2,3,4,5,6,7,8};
+
+ hipMemcpyToSymbol(const_array, host_data, sizeof(int) * 8);
+
+ // call kernel that accesses const_array
+ }
+
+__shared__
+--------------------------------------------------------------------------------
+
+Variables marked with ``__shared__`` are only accessible by threads within the
+same block and have the lifetime of that block. They are usually backed by
+on-chip shared memory, providing fast access to all threads within a block,
+which makes them well suited for sharing data among a block's threads.
+
+Shared memory can be allocated statically within the kernel, but then its size
+has to be known at compile time.
+
+To size shared memory at runtime instead, declare the variable ``extern`` and
+specify the required amount of dynamic shared memory in the kernel launch
+configuration. Statically allocated shared memory does not count toward this
+parameter.
+
+.. code-block:: cpp
+
+ #include <hip/hip_runtime.h>
+
+ extern __shared__ int shared_array[];
+
+ __global__ void kernel(){
+ // initialize shared memory
+ shared_array[threadIdx.x] = threadIdx.x;
+ // use shared memory - synchronize to make sure, that all threads of the
+ // block see all changes to shared memory
+ __syncthreads();
+ }
+
+ int main(){
+ // the amount of dynamic shared memory depends on the configurable block size
+ constexpr int blockSize = 256;
+ constexpr int sharedMemSize = blockSize * sizeof(int);
+ constexpr int gridSize = 2;
+
+ kernel<<<gridSize, blockSize, sharedMemSize>>>();
+ }
+
+__managed__
+--------------------------------------------------------------------------------
+
+The ``__managed__`` qualifier makes the marked variable available on the
+device and on the host. For more details, see :ref:`unified_memory`.
+
+__restrict__
+--------------------------------------------------------------------------------
+
+The ``__restrict__`` keyword tells the compiler that the associated memory
+pointer does not alias with any other pointer in the function. This can help the
+compiler perform better optimizations. For best results, every pointer passed to
+a function should use this keyword.
+
+********************************************************************************
+Built-in constants
+********************************************************************************
+
+HIP defines some special built-in constants for use in device code.
+
+These built-ins are not implicitly defined by the compiler; include the
+``hip_runtime.h`` header to use them.
+
+Index built-ins
+================================================================================
+
+Kernel code can use these identifiers to distinguish between the different
+threads and blocks within a kernel.
+
+These built-ins are of type ``dim3``. They are constant for each thread, differ
+between threads or blocks, and are initialized at kernel launch.
+
+blockDim and gridDim
+--------------------------------------------------------------------------------
+
+``blockDim`` and ``gridDim`` contain the sizes specified at kernel launch.
+``blockDim`` contains the number of threads in the x-, y- and z-dimensions of a
+block of threads. Similarly, ``gridDim`` contains the number of blocks in the
+grid.
+
+.. _thread_and_block_idx:
+
+threadIdx and blockIdx
+--------------------------------------------------------------------------------
+
+``threadIdx`` and ``blockIdx`` can be used to identify the threads and blocks
+within the kernel.
+
+``threadIdx`` identifies the thread within a block, meaning its values are
+within ``0`` and ``blockDim.{x,y,z} - 1``. Likewise ``blockIdx`` identifies the
+block within the grid, and its values are within ``0`` and ``gridDim.{x,y,z} - 1``.
+
+A globally unique thread identifier in a three-dimensional grid can be
+calculated using the following code:
+
+.. code-block:: cpp
+
+ (threadIdx.x + blockIdx.x * blockDim.x) +
+ (threadIdx.y + blockIdx.y * blockDim.y) * gridDim.x * blockDim.x +
+ (threadIdx.z + blockIdx.z * blockDim.z) * gridDim.x * blockDim.x * gridDim.y * blockDim.y
+
+.. _warp_size:
+
+warpSize
+================================================================================
+
+The ``warpSize`` constant contains the number of threads per warp for the given
+target device. It can differ between architectures, and on RDNA architectures
+it can even differ between kernel launches, depending on whether they run in
+CU or WGP mode. See the
+:doc:`hardware features <../reference/hardware_features>` for more
+information.
+
+Since ``warpSize`` can differ between devices, it cannot be assumed to be a
+compile-time constant on the host. It has to be queried using
+:cpp:func:`hipDeviceGetAttribute` or :cpp:func:`hipGetDeviceProperties`, e.g.:
+
+.. code-block:: cpp
+
+ int val;
+ hipDeviceGetAttribute(&val, hipDeviceAttributeWarpSize, deviceId);
+
+.. note::
+
+ ``warpSize`` should not be assumed to be a specific value in portable HIP
+ applications. NVIDIA devices return 32 for this variable; AMD devices return
+ 64 for gfx9 and 32 for gfx10 and above. While code that assumes a ``warpSize``
+ of 32 can run on devices with a ``warpSize`` of 64, it only utilizes half of
+ the compute resources.
+
+********************************************************************************
+Vector types
+********************************************************************************
+
+These types are not automatically provided by the compiler. The
+``hip_vector_types.h`` header, which is also included by ``hip_runtime.h``, has
+to be included to use these types.
+
+Fundamental vector types
+================================================================================
+
+Fundamental vector types derive from the `fundamental C++ integral and
+floating-point types `_. These
+types are defined in ``hip_vector_types.h``, which is included by
+``hip_runtime.h``.
+
+All vector types can be created with ``1``, ``2``, ``3`` or ``4`` elements. The
+corresponding type is the name of the fundamental type with the number of
+elements ``i`` appended, for example ``float3``.
+
+All vector types support a constructor function of the form
+``make_<type_name>()``. For example,
+``float3 make_float3(float x, float y, float z)`` creates a vector of type
+``float3`` with value ``(x,y,z)``.
+The elements of the vectors can be accessed using their members ``x``, ``y``,
+``z``, and ``w``.
+
+.. code-block:: cpp
+
+ double2 d2_vec = make_double2(2.0, 4.0);
+ double first_elem = d2_vec.x;
+
+HIP supports vectors created from the following fundamental types:
+
+.. list-table::
+ :widths: 50 50
+
+ *
+ - **Integral Types**
+ -
+ *
+ - ``char``
+ - ``uchar``
+ *
+ - ``short``
+ - ``ushort``
+ *
+ - ``int``
+ - ``uint``
+ *
+ - ``long``
+ - ``ulong``
+ *
+ - ``longlong``
+ - ``ulonglong``
+ *
+ - **Floating-Point Types**
+ -
+ *
+ - ``float``
+ -
+ *
+ - ``double``
+ -
+
+.. _dim3:
+
+dim3
+================================================================================
+
+``dim3`` is a special three-dimensional unsigned integer vector type that is
+commonly used to specify grid and group dimensions for kernel launch
+configurations.
+
+Its constructor accepts up to three arguments. The unspecified dimensions are
+initialized to 1.
+
+********************************************************************************
+Built-in device functions
+********************************************************************************
+
+.. _memory_fence_instructions:
+
+Memory fence instructions
+================================================================================
+
+HIP does not enforce strict ordering on memory operations. The order in which
+memory accesses are executed is not necessarily the order in which other
+threads observe these changes, so it cannot be assumed that data written by one
+thread is visible to another thread without synchronization.
+
+Memory fences are a way to enforce a sequentially consistent order on memory
+operations: all writes to memory made before a memory fence are observed by all
+threads after the fence. The scope of a fence depends on which specific memory
+fence function is called.
+
+HIP supports ``__threadfence()``, ``__threadfence_block()`` and
+``__threadfence_system()``:
+
+* ``__threadfence_block()`` orders memory accesses for all threads within a thread block.
+* ``__threadfence()`` orders memory accesses for all threads on a device.
+* ``__threadfence_system()`` orders memory accesses for all threads in the system, making writes to memory visible to other devices and the host.
+
+.. _synchronization_functions:
+
+Synchronization functions
+================================================================================
+
+Synchronization functions cause all threads in a group to wait at a
+synchronization point until all threads have reached it. These functions
+implicitly include a :ref:`memory fence <memory_fence_instructions>`, thereby
+ensuring visibility of memory accesses for the threads in the group.
+
+The ``__syncthreads()`` function comes in different versions.
+
+``void __syncthreads()`` simply synchronizes the threads of a block. The other
+versions additionally evaluate a predicate:
+
+``int __syncthreads_count(int predicate)`` returns the number of threads for
+which the predicate evaluates to non-zero.
+
+``int __syncthreads_and(int predicate)`` returns non-zero if the predicate
+evaluates to non-zero for all threads.
+
+``int __syncthreads_or(int predicate)`` returns non-zero if any of the
+predicates evaluates to non-zero.
+
+The Cooperative Groups API offers options to synchronize threads on a
+developer-defined set of thread groups. For further information, check the
+:ref:`Cooperative Groups API reference ` or the
+:ref:`Cooperative Groups section in the programming guide
+`.
+
+Math functions
+================================================================================
+
+HIP-Clang supports a set of math operations that are callable from the device.
+HIP supports most of the device functions supported by CUDA. These are described
+on the :ref:`Math API page `.
+
+Texture functions
+================================================================================
+
+The supported texture functions are listed in ``texture_fetch_functions.h`` and
+``texture_indirect_functions.h`` header files in the
+`HIP-AMD backend repository `_.
+
+Texture functions are not supported on some devices. In device code, the macro
+``__HIP_NO_IMAGE_SUPPORT`` is defined as 1 on devices that do not support
+texture functions. In host runtime code, you can query the device attribute
+``hipDeviceAttributeImageSupport`` to check if texture functions are supported.
+
+Surface functions
+================================================================================
+
+The supported surface functions are listed on the :ref:`Surface object reference
+page `.
+
+Timer functions
+================================================================================
+
+HIP provides device functions to read a high-resolution timer from within the
+kernel.
+
+The following functions count the cycles on the device, where the rate varies
+with the actual frequency.
+
+.. code-block:: cpp
+
+ clock_t clock()
+ long long int clock64()
+
+.. note::
+
+ ``clock()`` and ``clock64()`` do not work properly on AMD RDNA3 (GFX11) graphics processors.
+
+The difference between the returned values represents the cycles used.
+
+.. code-block:: cpp
+
+ __global__ void kernel(){
+ long long int start = clock64();
+ // kernel code
+ long long int stop = clock64();
+ long long int cycles = stop - start;
+ }
+
+``long long int wall_clock64()`` returns the wall clock time on the device, with a constant, fixed frequency.
+The frequency is device dependent and can be queried using:
+
+.. code-block:: cpp
+
+ int wallClkRate = 0; // in kilohertz
+ hipDeviceGetAttribute(&wallClkRate, hipDeviceAttributeWallClockRate, deviceId);
+
+.. _atomic functions:
+
+Atomic functions
+================================================================================
+
+Atomic functions are read-modify-write (RMW) operations, whose result is
+visible to all other threads at the scope of the atomic operation once the
+operation completes.
+
+If multiple instructions from different devices or threads target the same
+memory location, the instructions are serialized in an undefined order.
+
+Atomic operations in kernels can operate on block scope (i.e. shared memory),
+device scope (global memory), or system scope (system memory), depending on
+:doc:`hardware support <../reference/hardware_features>`.
+
+The listed functions are also available with the ``_system`` (e.g.
+``atomicAdd_system``) suffix, operating on system scope, which includes host
+memory and other GPUs' memory. The functions without suffix operate on shared
+or global memory on the executing device, depending on the memory space of the
+variable.
+
+HIP supports the following atomic operations, where ``TYPE`` is one of ``int``,
+``unsigned int``, ``unsigned long``, ``unsigned long long``, ``float`` or
+``double``, while ``INTEGER`` is ``int``, ``unsigned int``, ``unsigned long``,
+``unsigned long long``:
+
+.. list-table:: Atomic operations
+
+ * - ``TYPE atomicAdd(TYPE* address, TYPE val)``
+
+ * - ``TYPE atomicSub(TYPE* address, TYPE val)``
+
+ * - ``TYPE atomicMin(TYPE* address, TYPE val)``
+ * - ``long long atomicMin(long long* address, long long val)``
+
+ * - ``TYPE atomicMax(TYPE* address, TYPE val)``
+ * - ``long long atomicMax(long long* address, long long val)``
+
+ * - ``TYPE atomicExch(TYPE* address, TYPE val)``
+
+ * - ``TYPE atomicCAS(TYPE* address, TYPE compare, TYPE val)``
+
+ * - ``INTEGER atomicAnd(INTEGER* address, INTEGER val)``
+
+ * - ``INTEGER atomicOr(INTEGER* address, INTEGER val)``
+
+ * - ``INTEGER atomicXor(INTEGER* address, INTEGER val)``
+
+ * - ``unsigned int atomicInc(unsigned int* address)``
+
+ * - ``unsigned int atomicDec(unsigned int* address)``
+
+Unsafe floating-point atomic operations
+--------------------------------------------------------------------------------
+
+Some HIP devices support fast atomic operations on floating-point values. For
+example, ``atomicAdd`` on single- or double-precision floating-point values may
+generate a hardware instruction that is faster than emulating the atomic
+operation using an atomic compare-and-swap (CAS) loop.
+
+On some devices, fast atomic instructions can produce results that differ from
+the version implemented with atomic CAS loops. For example, some devices
+will use different rounding or denormal modes, and some devices produce
+incorrect answers if fast floating-point atomic instructions target fine-grained
+memory allocations.
+
+The HIP-Clang compiler offers compile-time options to control the generation of
+unsafe atomic instructions. By default the compiler does not generate unsafe
+instructions. This is the same behavior as with the ``-mno-unsafe-fp-atomics``
+compilation flag. The ``-munsafe-fp-atomics`` flag indicates to the compiler
+that all floating-point atomic function calls are allowed to use an unsafe
+version, if one exists. For example, on some devices, this flag indicates to the
+compiler that no floating-point ``atomicAdd`` function can target fine-grained
+memory. These options are applied globally for the entire compilation.
+
+HIP provides special functions that override the global compiler option for safe
+or unsafe atomic functions.
+
+The ``safe`` prefix always generates safe atomic operations, even when
+``-munsafe-fp-atomics`` is used, whereas ``unsafe`` always generates fast atomic
+instructions, even when ``-mno-unsafe-fp-atomics`` is used. The following table lists
+the safe and unsafe atomic functions, where ``FLOAT_TYPE`` is either ``float``
+or ``double``.
+
+.. list-table:: AMD specific atomic operations
+
+ * - ``FLOAT_TYPE unsafeAtomicAdd(FLOAT_TYPE* address, FLOAT_TYPE val)``
+
+ * - ``FLOAT_TYPE safeAtomicAdd(FLOAT_TYPE* address, FLOAT_TYPE val)``
+
+.. _warp-cross-lane:
+
+Warp cross-lane functions
+================================================================================
+
+Threads in a warp are referred to as ``lanes`` and are numbered from ``0`` to
+``warpSize - 1``. Warp cross-lane functions cooperate across all lanes in a
+warp. AMD GPUs guarantee that all warp lanes are executed in lockstep, whereas
+NVIDIA GPUs that support Independent Thread Scheduling might require additional
+synchronization, or the use of the ``__sync`` variants.
+
+Note that different devices can have different warp sizes. You should query the
+:ref:`warpSize ` in portable code and not assume a fixed warp size.
+
+All mask values returned or accepted by these built-ins are 64-bit unsigned
+integer values, even when compiled for a device with 32 threads per warp. On
+such devices the higher bits are unused. CUDA code ported to HIP requires
+changes to ensure that the correct type is used.
+
+Note that the ``__sync`` variants are made available in ROCm 6.2, but disabled by
+default to help with the transition to 64-bit masks. They can be enabled by
+setting the preprocessor macro ``HIP_ENABLE_WARP_SYNC_BUILTINS``. These built-ins
+will be enabled unconditionally in the next ROCm release. Wherever possible, the
+implementation includes a static assert to check that the program source uses
+the correct type for the mask.
+
+The ``__sync`` variants require a 64-bit unsigned integer mask argument that
+specifies the lanes of the warp that will participate. Each participating thread
+must have its own bit set in its mask argument, and all active threads specified
+in any mask argument must execute the same call with the same mask, otherwise
+the result is undefined.
+
+.. _warp_vote_functions:
+
+Warp vote and ballot functions
+--------------------------------------------------------------------------------
+
+.. code-block:: cpp
+
+ int __all(int predicate)
+ int __any(int predicate)
+ unsigned long long __ballot(int predicate)
+ unsigned long long __activemask()
+
+ int __all_sync(unsigned long long mask, int predicate)
+ int __any_sync(unsigned long long mask, int predicate)
+ unsigned long long __ballot_sync(unsigned long long mask, int predicate)
+
+You can use ``__any`` and ``__all`` to get a summary view of the predicates evaluated by the
+participating lanes.
+
+* ``__any()``: Returns 1 if the predicate is non-zero for any participating lane, otherwise it returns 0.
+
+* ``__all()``: Returns 1 if the predicate is non-zero for all participating lanes, otherwise it returns 0.
+
+To determine if the target platform supports the any/all instruction, you can
+query the ``hasWarpVote`` device property on the host or use the
+``HIP_ARCH_HAS_WARP_VOTE`` compiler definition in device code.
+
+``__ballot`` returns a bit mask containing the 1-bit predicate value from each
+lane. The nth bit of the result contains the bit contributed by the nth lane.
+
+``__activemask()`` returns a bit mask of currently active warp lanes. The nth
+bit of the result is 1 if the nth lane is active.
+
+Note that the ``__ballot`` and ``__activemask`` built-ins in HIP have a 64-bit return
+value (unlike the 32-bit value returned by the CUDA built-ins). Code ported from
+CUDA should be adapted to support the larger warp sizes that the HIP version
+requires.
+
+Applications can test whether the target platform supports the ``__ballot`` or
+``__activemask`` instructions using the ``hasWarpBallot`` device property in host
+code or the ``HIP_ARCH_HAS_WARP_BALLOT`` macro defined by the compiler for device
+code.
+
+Warp match functions
+--------------------------------------------------------------------------------
+
+.. code-block:: cpp
+
+ unsigned long long __match_any(T value)
+ unsigned long long __match_all(T value, int *pred)
+
+ unsigned long long __match_any_sync(unsigned long long mask, T value)
+ unsigned long long __match_all_sync(unsigned long long mask, T value, int *pred)
+
+``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or
+double precision floating point type.
+
+``__match_any`` returns a bit mask where the n-th bit is set to 1 if the n-th
+lane has the same ``value`` as the current lane, and 0 otherwise.
+
+``__match_all`` returns a bit mask in which the bits of the participating
+lanes are set to 1 if all those lanes have the same ``value``, and 0 otherwise.
+The predicate ``pred`` is set to true if all participating threads have the
+same ``value``, and false otherwise.
+
+Warp shuffle functions
+--------------------------------------------------------------------------------
+
+.. code-block:: cpp
+
+ T __shfl (T var, int srcLane, int width=warpSize);
+ T __shfl_up (T var, unsigned int delta, int width=warpSize);
+ T __shfl_down (T var, unsigned int delta, int width=warpSize);
+ T __shfl_xor (T var, int laneMask, int width=warpSize);
+
+ T __shfl_sync (unsigned long long mask, T var, int srcLane, int width=warpSize);
+ T __shfl_up_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
+ T __shfl_down_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
+ T __shfl_xor_sync (unsigned long long mask, T var, int laneMask, int width=warpSize);
+
+``T`` can be a 32-bit integer type, 64-bit integer type or a single precision or
+double precision floating point type.
+
+The warp shuffle functions exchange values between threads within a warp.
+
+The optional ``width`` argument specifies the size of the subgroups the warp
+can be divided into for sharing the variables.
+It has to be a power of two smaller than or equal to ``warpSize``. If it is
+smaller than ``warpSize``, the warp is split into separate subgroups, each
+indexed from 0 to ``width - 1`` as if it were its own entity, and only the
+lanes within a subgroup participate in the shuffle. The lane index in the
+subgroup is given by ``laneIdx % width``.
+
+The different shuffle functions behave as follows:
+
+``__shfl``
+ The thread reads the value from the lane specified in ``srcLane``.
+
+``__shfl_up``
+ The thread reads ``var`` from lane ``laneIdx - delta``, thereby "shuffling"
+ the values of the lanes of the warp "up". If the resulting source lane is out
+ of range, the thread returns its own ``var``.
+
+``__shfl_down``
+ The thread reads ``var`` from lane ``laneIdx + delta``, thereby "shuffling"
+ the values of the lanes of the warp "down". If the resulting source lane is
+ out of range, the thread returns its own ``var``.
+
+``__shfl_xor``
+ The thread reads ``var`` from lane ``laneIdx xor lane_mask``. If ``width`` is
+ smaller than ``warpSize``, the threads can read values from subgroups before
+ the current subgroup. If it tries to read values from later subgroups, the
+ function returns the ``var`` of the calling thread.
+
+Warp matrix functions
+--------------------------------------------------------------------------------
+
+Warp matrix functions allow a warp to cooperatively operate on small matrices
+that have elements spread over lanes in an unspecified manner.
+
+HIP does not support warp matrix types or functions.
+
+Cooperative groups functions
+================================================================================
+
+You can use cooperative groups to synchronize groups of threads across thread
+blocks. They also provide a way of communicating between these groups.
+
+For further information, check the :ref:`Cooperative Groups API reference
+` or the :ref:`Cooperative Groups programming
+guide `.
diff --git a/docs/how-to/hip_porting_guide.md b/docs/how-to/hip_porting_guide.md
index bc3a2deda9..a6027d4801 100644
--- a/docs/how-to/hip_porting_guide.md
+++ b/docs/how-to/hip_porting_guide.md
@@ -1,3 +1,9 @@
+
+
+
+
+
+
# HIP porting guide
In addition to providing a portable C++ programming environment for GPUs, HIP is designed to ease
@@ -373,7 +379,9 @@ run hipcc when appropriate.
### ``warpSize``
-Code should not assume a warp size of 32 or 64. See [Warp Cross-Lane Functions](https://rocm.docs.amd.com/projects/HIP/en/latest/reference/cpp_language_extensions.html#warp-cross-lane-functions) for information on how to write portable wave-aware code.
+Code should not assume a warp size of 32 or 64. See the
+:ref:`HIP language extension for warpSize ` for information on how
+to write portable wave-aware code.
### Kernel launch with group size > 256
diff --git a/docs/how-to/hip_rtc.md b/docs/how-to/hip_rtc.md
index b96c069cb2..0bf3a56570 100644
--- a/docs/how-to/hip_rtc.md
+++ b/docs/how-to/hip_rtc.md
@@ -1,3 +1,9 @@
+
+
+
+
+
+
# Programming for HIP runtime compiler (RTC)
HIP lets you compile kernels at runtime with the `hiprtc*` APIs.
diff --git a/docs/how-to/hip_runtime_api.rst b/docs/how-to/hip_runtime_api.rst
index 65c89a60ed..f76851e078 100644
--- a/docs/how-to/hip_runtime_api.rst
+++ b/docs/how-to/hip_runtime_api.rst
@@ -40,6 +40,7 @@ Here are the various HIP Runtime API high level functions:
* :doc:`./hip_runtime_api/initialization`
* :doc:`./hip_runtime_api/memory_management`
* :doc:`./hip_runtime_api/error_handling`
+* :doc:`./hip_runtime_api/asynchronous`
* :doc:`./hip_runtime_api/cooperative_groups`
* :doc:`./hip_runtime_api/hipgraph`
* :doc:`./hip_runtime_api/call_stack`
diff --git a/docs/how-to/hip_runtime_api/asynchronous.rst b/docs/how-to/hip_runtime_api/asynchronous.rst
new file mode 100644
index 0000000000..81769da48e
--- /dev/null
+++ b/docs/how-to/hip_runtime_api/asynchronous.rst
@@ -0,0 +1,534 @@
+.. meta::
+ :description: This topic describes asynchronous concurrent execution in HIP
+ :keywords: AMD, ROCm, HIP, asynchronous concurrent execution, asynchronous, async, concurrent, concurrency
+
+.. _asynchronous_how-to:
+
+*******************************************************************************
+Asynchronous concurrent execution
+*******************************************************************************
+
+Asynchronous concurrent execution is important for efficient parallelism and
+resource utilization, with techniques such as overlapping computation and data
+transfer, managing concurrent kernel execution with streams on single or
+multiple devices, or using HIP graphs.
+
+Streams and concurrent execution
+===============================================================================
+
+All asynchronous APIs, such as kernel execution, data movement, and
+potentially data allocation or freeing, operate in the context of device
+streams.
+
+Streams are FIFO buffers of commands to execute in order on a given device.
+Commands which enqueue tasks on a stream all return promptly and the task is
+executed asynchronously. Multiple streams can point to the same device and
+those streams might be fed from multiple concurrent host-side threads. Multiple
+streams tied to the same device are not guaranteed to execute their commands in
+order.
+
+Managing streams
+-------------------------------------------------------------------------------
+
+Streams enable the overlap of computation and data transfer, ensuring
+continuous GPU activity. By enabling tasks to run concurrently within the same
+GPU or across different GPUs, streams improve performance and throughput in
+high-performance computing (HPC).
+
+To create a stream, the following functions are used, each defining a handle
+to the newly created stream:
+
+- :cpp:func:`hipStreamCreate`: Creates a stream with default settings.
+- :cpp:func:`hipStreamCreateWithFlags`: Creates a stream, with specific
+ flags, listed below, enabling more control over stream behavior:
+
+ - ``hipStreamDefault``: creates a default stream suitable for most
+ operations. The default stream is a blocking stream.
+ - ``hipStreamNonBlocking``: creates a non-blocking stream, allowing
+ concurrent execution of operations. It ensures that tasks can run
+ simultaneously without waiting for each other to complete, thus improving
+ overall performance.
+
+- :cpp:func:`hipStreamCreateWithPriority`: Allows creating a stream with a
+ specified priority, enabling prioritization of certain tasks.
+
+The :cpp:func:`hipStreamSynchronize` function is used to block the calling host
+thread until all previously submitted tasks in a specified HIP stream have
+completed. It ensures that all operations in the given stream, such as kernel
+executions or memory transfers, are finished before the host thread proceeds.
+
+.. note::
+
+ If the :cpp:func:`hipStreamSynchronize` function input stream is 0 (or the
+ default stream), it waits for all operations in the default stream to
+ complete.
+
+Concurrent execution between host and device
+-------------------------------------------------------------------------------
+
+Concurrent execution between the host (CPU) and device (GPU) allows the CPU to
+perform other tasks while the GPU is executing kernels. Kernels are launched
+asynchronously using ``hipLaunchKernelGGL`` or the triple chevron syntax with a stream,
+enabling the CPU to continue executing other code while the GPU processes the
+kernel. Similarly, memory operations like :cpp:func:`hipMemcpyAsync` are
+performed asynchronously, allowing data transfers between the host and device
+without blocking the CPU.
+
+Concurrent kernel execution
+-------------------------------------------------------------------------------
+
+Concurrent execution of multiple kernels on the GPU allows different kernels to
+run simultaneously to maximize GPU resource usage. Managing dependencies
+between kernels is crucial for ensuring correct execution order. This can be
+achieved using :cpp:func:`hipStreamWaitEvent`, which allows a kernel to wait
+for a specific event before starting execution.
+
+Independent kernels can only run concurrently if there are enough registers
+and shared memory for the kernels. To enable concurrent kernel executions, the
+developer may have to reduce the block size of the kernels. The kernel runtimes
+can be misleading for concurrent kernel runs, which is why it is good practice
+during optimization to check the trace files to see whether one kernel is
+blocking another while they run in parallel. For more information about
+application tracing, check :doc:`rocprofiler:/how-to/using-rocprof`.
+
+When running kernels in parallel, the execution time can increase due to
+contention for shared resources. This is because multiple kernels may attempt
+to access the same GPU resources simultaneously, leading to delays.
+
+Executing multiple kernels concurrently is only beneficial under specific conditions. It
+is most effective when the kernels do not fully utilize the GPU's resources. In
+such cases, overlapping kernel execution can improve overall throughput and
+efficiency by keeping the GPU busy without exceeding its capacity.
+
+Overlap of data transfer and kernel execution
+===============================================================================
+
+One of the primary benefits of asynchronous operations and multiple streams is
+the ability to overlap data transfer with kernel execution, leading to better
+resource utilization and improved performance.
+
+Asynchronous execution is particularly advantageous in iterative processes. For
+instance, if a kernel is initiated, it can be efficient to prepare the input
+data simultaneously, provided that this preparation does not depend on the
+kernel's execution. Such iterative data transfer and kernel execution overlap
+can be found in the :ref:`async_example`.
+
+Querying device capabilities
+-------------------------------------------------------------------------------
+
+Some AMD HIP-enabled devices can perform asynchronous memory copy operations to
+or from the GPU concurrently with kernel execution. Applications can query this
+capability by checking the ``asyncEngineCount`` device property. Devices with
+an ``asyncEngineCount`` greater than zero support concurrent data transfers.
+Additionally, if host memory is involved in the copy, it should be page-locked
+to ensure optimal performance. Page-locking (or pinning) host memory increases
+the bandwidth between the host and the device, reducing the overhead associated
+with data transfers. For more details, see the :ref:`host_memory` page.
+
+Asynchronous memory operations
+-------------------------------------------------------------------------------
+
+Asynchronous memory operations do not block the host while copying data and,
+when used with multiple streams, allow data to be transferred between the host
+and device while kernels are executed on the same GPU. Using operations like
+:cpp:func:`hipMemcpyAsync` or :cpp:func:`hipMemcpyPeerAsync`, developers can
+initiate data transfers without waiting for the previous operation to complete.
+This overlap of computation and data transfer ensures that the GPU is not idle
+while waiting for data. :cpp:func:`hipMemcpyPeerAsync` enables data transfers
+between different GPUs, facilitating multi-GPU communication.
+
+The :ref:`async_example` includes launching kernels in one stream while
+performing data transfers in another. This technique is especially useful in
+applications with large data sets that need to be processed quickly.
+
+Concurrent data transfers with intra-device copies
+-------------------------------------------------------------------------------
+
+Devices that support the ``concurrentKernels`` property can perform
+intra-device copies concurrently with kernel execution. Additionally, devices
+that support the ``asyncEngineCount`` property can perform data transfers to
+or from the GPU simultaneously with kernel execution. Intra-device copies can
+be initiated using standard memory copy functions with destination and source
+addresses residing on the same device.
+
+Synchronization, event management and synchronous calls
+===============================================================================
+
+Synchronization and event management are important for coordinating tasks and
+ensuring correct execution order, and synchronous calls are necessary for
+maintaining data consistency.
+
+Synchronous calls
+-------------------------------------------------------------------------------
+
+Synchronous calls ensure task completion before moving to the next operation.
+For example, :cpp:func:`hipMemcpy` for data transfers waits for completion
+before returning control to the host. Similarly, synchronous kernel launches
+are used when immediate completion is required. When a synchronous function is
+called, control is not returned to the host thread before the device has
+completed the requested task. The behavior of the host thread (whether to
+yield, block, or spin) can be specified using :cpp:func:`hipSetDeviceFlags` with
+appropriate flags. Understanding when to use synchronous calls is important for
+managing execution flow and avoiding data races.
+
+Events for synchronization
+-------------------------------------------------------------------------------
+
+By creating an event with :cpp:func:`hipEventCreate` and recording it with
+:cpp:func:`hipEventRecord`, developers can synchronize operations across
+streams, ensuring correct task execution order. :cpp:func:`hipEventSynchronize`
+lets the application wait for an event to complete before proceeding with the next
+operation.
+
+Programmatic dependent launch and synchronization
+-------------------------------------------------------------------------------
+
+While CUDA supports programmatic dependent launches allowing a secondary kernel
+to start before the primary kernel finishes, HIP achieves similar functionality
+using streams and events. By employing :cpp:func:`hipStreamWaitEvent`, it is
+possible to manage the execution order without explicit hardware support. This
+mechanism allows a secondary kernel to launch as soon as the necessary
+conditions are met, even if the primary kernel is still running.
+
+.. _async_example:
+
+Example
+-------------------------------------------------------------------------------
+
+The following examples show the difference between sequential calls,
+asynchronous calls, and asynchronous calls with ``hipEvents``.
+
+.. figure:: ../../data/how-to/hip_runtime_api/asynchronous/sequential_async_event.svg
+ :alt: Compare the different calls
+ :align: center
+
+The example code:
+
+.. tab-set::
+
+ .. tab-item:: Sequential
+
+ .. code-block:: cpp
+
+ #include <hip/hip_runtime.h>
+ #include <iostream>
+ #include <vector>
+
+ #define HIP_CHECK(expression) \
+ { \
+ const hipError_t status = expression; \
+ if(status != hipSuccess){ \
+ std::cerr << "HIP error " \
+ << status << ": " \
+ << hipGetErrorString(status) \
+ << " at " << __FILE__ << ":" \
+ << __LINE__ << std::endl; \
+ } \
+ }
+
+ // GPU Kernels
+ __global__ void kernelA(double* arrayA, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayA[x] += 1.0;}
+ };
+ __global__ void kernelB(double* arrayA, double* arrayB, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayB[x] += arrayA[x] + 3.0;}
+ };
+
+ int main()
+ {
+ constexpr int numOfBlocks = 1 << 20;
+ constexpr int threadsPerBlock = 1024;
+ constexpr int numberOfIterations = 50;
+ // The array size is chosen to balance kernel execution time against the memory copies
+ constexpr size_t arraySize = 1U << 25;
+ double *d_dataA;
+ double *d_dataB;
+
+ double initValueA = 0.0;
+ double initValueB = 2.0;
+
+ std::vector<double> vectorA(arraySize, initValueA);
+ std::vector<double> vectorB(arraySize, initValueB);
+ // Allocate device memory
+ HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA)));
+ HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB)));
+ for(int iteration = 0; iteration < numberOfIterations; iteration++)
+ {
+ // Host to Device copies
+ HIP_CHECK(hipMemcpy(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice));
+ HIP_CHECK(hipMemcpy(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice));
+ // Launch the GPU kernels
+ hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, 0, d_dataA, arraySize);
+ hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, 0, d_dataA, d_dataB, arraySize);
+ // Device to Host copies
+ HIP_CHECK(hipMemcpy(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost));
+ HIP_CHECK(hipMemcpy(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), hipMemcpyDeviceToHost));
+ }
+ // Wait for all operations to complete
+ HIP_CHECK(hipDeviceSynchronize());
+
+ // Verify results
+ const double expectedA = (double)numberOfIterations;
+ const double expectedB =
+ initValueB + (3.0 * numberOfIterations) +
+ (expectedA * (expectedA + 1.0)) / 2.0;
+ bool passed = true;
+ for(size_t i = 0; i < arraySize; ++i){
+ if(vectorA[i] != expectedA){
+ passed = false;
+ std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << " at index: " << i << std::endl;
+ break;
+ }
+ if(vectorB[i] != expectedB){
+ passed = false;
+ std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << " at index: " << i << std::endl;
+ break;
+ }
+ }
+
+ if(passed){
+ std::cout << "Sequential execution completed successfully." << std::endl;
+ }else{
+ std::cerr << "Sequential execution failed." << std::endl;
+ }
+
+ // Cleanup
+ HIP_CHECK(hipFree(d_dataA));
+ HIP_CHECK(hipFree(d_dataB));
+
+ return 0;
+ }
+
+ .. tab-item:: Asynchronous
+
+ .. code-block:: cpp
+
+ #include <hip/hip_runtime.h>
+ #include <iostream>
+ #include <vector>
+
+ #define HIP_CHECK(expression) \
+ { \
+ const hipError_t status = expression; \
+ if(status != hipSuccess){ \
+ std::cerr << "HIP error " \
+ << status << ": " \
+ << hipGetErrorString(status) \
+ << " at " << __FILE__ << ":" \
+ << __LINE__ << std::endl; \
+ } \
+ }
+
+ // GPU Kernels
+ __global__ void kernelA(double* arrayA, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayA[x] += 1.0;}
+ };
+ __global__ void kernelB(double* arrayA, double* arrayB, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayB[x] += arrayA[x] + 3.0;}
+ };
+
+ int main()
+ {
+ constexpr int numOfBlocks = 1 << 20;
+ constexpr int threadsPerBlock = 1024;
+ constexpr int numberOfIterations = 50;
+ // The array size is chosen to balance kernel execution time against the memory copies
+ constexpr size_t arraySize = 1U << 25;
+ double *d_dataA;
+ double *d_dataB;
+
+ double initValueA = 0.0;
+ double initValueB = 2.0;
+
+        std::vector<double> vectorA(arraySize, initValueA);
+        std::vector<double> vectorB(arraySize, initValueB);
+ // Allocate device memory
+ HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA)));
+ HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB)));
+ // Create streams
+ hipStream_t streamA, streamB;
+ HIP_CHECK(hipStreamCreate(&streamA));
+ HIP_CHECK(hipStreamCreate(&streamB));
+ for(unsigned int iteration = 0; iteration < numberOfIterations; iteration++)
+ {
+ // Stream 1: Host to Device 1
+ HIP_CHECK(hipMemcpyAsync(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice, streamA));
+ // Stream 2: Host to Device 2
+ HIP_CHECK(hipMemcpyAsync(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice, streamB));
+ // Stream 1: Kernel 1
+ hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamA, d_dataA, arraySize);
+            // Wait until kernelA on streamA has finished, because kernelB reads d_dataA
+            HIP_CHECK(hipStreamSynchronize(streamA));
+ // Stream 2: Kernel 2
+ hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamB, d_dataA, d_dataB, arraySize);
+            // Stream 1: Device to Host 1 (after Kernel 1)
+ HIP_CHECK(hipMemcpyAsync(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost, streamA));
+            // Stream 2: Device to Host 2 (after Kernel 2)
+            HIP_CHECK(hipMemcpyAsync(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), hipMemcpyDeviceToHost, streamB));
+            // Wait for streamB, so the next iteration doesn't overwrite d_dataA while kernelB might still read it
+            HIP_CHECK(hipStreamSynchronize(streamB));
+        }
+ // Wait for all operations in both streams to complete
+ HIP_CHECK(hipStreamSynchronize(streamA));
+ HIP_CHECK(hipStreamSynchronize(streamB));
+ // Verify results
+ double expectedA = (double)numberOfIterations;
+ double expectedB =
+ initValueB + (3.0 * numberOfIterations) +
+ (expectedA * (expectedA + 1.0)) / 2.0;
+ bool passed = true;
+ for(size_t i = 0; i < arraySize; ++i){
+ if(vectorA[i] != expectedA){
+ passed = false;
+ std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << " at index: " << i << std::endl;
+ break;
+ }
+ if(vectorB[i] != expectedB){
+ passed = false;
+ std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << " at index: " << i << std::endl;
+ break;
+ }
+ }
+ if(passed){
+ std::cout << "Asynchronous execution completed successfully." << std::endl;
+ }else{
+ std::cerr << "Asynchronous execution failed." << std::endl;
+ }
+
+ // Cleanup
+ HIP_CHECK(hipStreamDestroy(streamA));
+ HIP_CHECK(hipStreamDestroy(streamB));
+ HIP_CHECK(hipFree(d_dataA));
+ HIP_CHECK(hipFree(d_dataB));
+
+ return 0;
+ }
+
+ .. tab-item:: hipStreamWaitEvent
+
+ .. code-block:: cpp
+
+        #include <hip/hip_runtime.h>
+        #include <iostream>
+        #include <vector>
+
+ #define HIP_CHECK(expression) \
+ { \
+ const hipError_t status = expression; \
+ if(status != hipSuccess){ \
+ std::cerr << "HIP error " \
+ << status << ": " \
+ << hipGetErrorString(status) \
+ << " at " << __FILE__ << ":" \
+ << __LINE__ << std::endl; \
+ } \
+ }
+
+ // GPU Kernels
+ __global__ void kernelA(double* arrayA, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayA[x] += 1.0;}
+        }
+ __global__ void kernelB(double* arrayA, double* arrayB, size_t size){
+ const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+ if(x < size){arrayB[x] += arrayA[x] + 3.0;}
+        }
+
+ int main()
+ {
+ constexpr int numOfBlocks = 1 << 20;
+ constexpr int threadsPerBlock = 1024;
+ constexpr int numberOfIterations = 50;
+            // Keep the array size moderate, so the kernel runtime isn't negligible compared to the memory copies
+ constexpr size_t arraySize = 1U << 25;
+ double *d_dataA;
+ double *d_dataB;
+ double initValueA = 0.0;
+ double initValueB = 2.0;
+
+        std::vector<double> vectorA(arraySize, initValueA);
+        std::vector<double> vectorB(arraySize, initValueB);
+ // Allocate device memory
+ HIP_CHECK(hipMalloc(&d_dataA, arraySize * sizeof(*d_dataA)));
+ HIP_CHECK(hipMalloc(&d_dataB, arraySize * sizeof(*d_dataB)));
+ // Create streams
+ hipStream_t streamA, streamB;
+ HIP_CHECK(hipStreamCreate(&streamA));
+ HIP_CHECK(hipStreamCreate(&streamB));
+ // Create events
+ hipEvent_t event, eventA, eventB;
+ HIP_CHECK(hipEventCreate(&event));
+ HIP_CHECK(hipEventCreate(&eventA));
+ HIP_CHECK(hipEventCreate(&eventB));
+ for(unsigned int iteration = 0; iteration < numberOfIterations; iteration++)
+ {
+ // Stream 1: Host to Device 1
+ HIP_CHECK(hipMemcpyAsync(d_dataA, vectorA.data(), arraySize * sizeof(*d_dataA), hipMemcpyHostToDevice, streamA));
+ // Stream 2: Host to Device 2
+ HIP_CHECK(hipMemcpyAsync(d_dataB, vectorB.data(), arraySize * sizeof(*d_dataB), hipMemcpyHostToDevice, streamB));
+ // Stream 1: Kernel 1
+ hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamA, d_dataA, arraySize);
+ // Record event after the GPU kernel in Stream 1
+ HIP_CHECK(hipEventRecord(event, streamA));
+ // Stream 2: Wait for event before starting Kernel 2
+ HIP_CHECK(hipStreamWaitEvent(streamB, event, 0));
+ // Stream 2: Kernel 2
+ hipLaunchKernelGGL(kernelB, dim3(numOfBlocks), dim3(threadsPerBlock), 0, streamB, d_dataA, d_dataB, arraySize);
+            // Stream 1: Device to Host 1 (after Kernel 1)
+ HIP_CHECK(hipMemcpyAsync(vectorA.data(), d_dataA, arraySize * sizeof(*vectorA.data()), hipMemcpyDeviceToHost, streamA));
+ // Stream 2: Device to Host 2 (after Kernel 2)
+ HIP_CHECK(hipMemcpyAsync(vectorB.data(), d_dataB, arraySize * sizeof(*vectorB.data()), hipMemcpyDeviceToHost, streamB));
+            // Record events and let each stream wait for the other one,
+            // so both streams have finished the iteration before the next one starts
+            HIP_CHECK(hipEventRecord(eventA, streamA));
+            HIP_CHECK(hipEventRecord(eventB, streamB));
+            HIP_CHECK(hipStreamWaitEvent(streamA, eventB, 0));
+            HIP_CHECK(hipStreamWaitEvent(streamB, eventA, 0));
+        }
+        // Wait on the host for all operations in both streams to complete
+        HIP_CHECK(hipStreamSynchronize(streamA));
+        HIP_CHECK(hipStreamSynchronize(streamB));
+ // Verify results
+ double expectedA = (double)numberOfIterations;
+ double expectedB =
+ initValueB + (3.0 * numberOfIterations) +
+ (expectedA * (expectedA + 1.0)) / 2.0;
+ bool passed = true;
+ for(size_t i = 0; i < arraySize; ++i){
+ if(vectorA[i] != expectedA){
+ passed = false;
+                std::cerr << "Validation failed! Expected " << expectedA << " got " << vectorA[i] << " at index: " << i << std::endl;
+ break;
+ }
+ if(vectorB[i] != expectedB){
+ passed = false;
+                std::cerr << "Validation failed! Expected " << expectedB << " got " << vectorB[i] << " at index: " << i << std::endl;
+ break;
+ }
+ }
+ if(passed){
+ std::cout << "Asynchronous execution with events completed successfully." << std::endl;
+ }else{
+ std::cerr << "Asynchronous execution with events failed." << std::endl;
+ }
+
+ // Cleanup
+ HIP_CHECK(hipEventDestroy(event));
+ HIP_CHECK(hipEventDestroy(eventA));
+ HIP_CHECK(hipEventDestroy(eventB));
+ HIP_CHECK(hipStreamDestroy(streamA));
+ HIP_CHECK(hipStreamDestroy(streamB));
+ HIP_CHECK(hipFree(d_dataA));
+ HIP_CHECK(hipFree(d_dataB));
+
+ return 0;
+ }
+
+HIP Graphs
+===============================================================================
+
+HIP graphs offer an efficient alternative to the standard method of launching
+GPU tasks via streams. Comprising nodes for operations and edges for
+dependencies, HIP graphs reduce kernel launch overhead and provide a high-level
+abstraction for managing dependencies and synchronization. By representing
+sequences of kernels and memory operations as a single graph, they simplify
+complex workflows and enhance performance, particularly for applications with
+intricate dependencies and multiple execution stages.
+For more details, see the :ref:`how_to_HIP_graph` documentation.
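+
+As a minimal sketch (reusing ``kernelA``, ``d_dataA``, ``arraySize`` and the
+launch configuration from the stream examples above; the calls shown are the
+standard HIP graph APIs), work submitted to a stream can be captured into a
+graph once and then relaunched with low overhead:
+
+.. code-block:: cpp
+
+    hipGraph_t graph;
+    hipGraphExec_t graphExec;
+
+    // Capture the work enqueued to the stream instead of executing it
+    HIP_CHECK(hipStreamBeginCapture(stream, hipStreamCaptureModeGlobal));
+    hipLaunchKernelGGL(kernelA, dim3(numOfBlocks), dim3(threadsPerBlock), 0, stream, d_dataA, arraySize);
+    HIP_CHECK(hipStreamEndCapture(stream, &graph));
+
+    // Instantiate once, then launch the whole graph repeatedly
+    HIP_CHECK(hipGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0));
+    for(unsigned int iteration = 0; iteration < numberOfIterations; ++iteration){
+        HIP_CHECK(hipGraphLaunch(graphExec, stream));
+    }
+    HIP_CHECK(hipStreamSynchronize(stream));
+
+    HIP_CHECK(hipGraphExecDestroy(graphExec));
+    HIP_CHECK(hipGraphDestroy(graph));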
diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst
index 32f3fd7d58..13fba386bb 100644
--- a/docs/how-to/hip_runtime_api/memory_management/device_memory.rst
+++ b/docs/how-to/hip_runtime_api/memory_management/device_memory.rst
@@ -1,52 +1,285 @@
.. meta::
:description: This chapter describes the device memory of the HIP ecosystem
ROCm software.
- :keywords: AMD, ROCm, HIP, device memory
+ :keywords: AMD, ROCm, HIP, GPU, device memory, global, constant, texture, surface, shared
.. _device_memory:
-*******************************************************************************
+********************************************************************************
Device memory
-*******************************************************************************
+********************************************************************************
-Device memory exists on the device, e.g. on GPUs in the video random access
-memory (VRAM), and is accessible by the kernels operating on the device. Recent
-architectures use graphics double data rate (GDDR) synchronous dynamic
-random-access memory (SDRAM) such as GDDR6, or high-bandwidth memory (HBM) such
-as HBM2e. Device memory can be allocated as global memory, constant, texture or
-surface memory.
+Device memory is random access memory that is physically located on a GPU. In
+general its bandwidth is an order of magnitude higher than that of the RAM
+available to the host. That high bandwidth is only available to on-device
+accesses; accesses from the host or other devices have to go over a special
+interface, usually the PCIe bus or the AMD Infinity Fabric, which is
+considerably slower.
+
+On certain architectures like APUs, the GPU and CPU share the same physical
+memory.
+
+There is also a special on-chip memory, the local data share, directly
+accessible to the :ref:`compute units `, which can be used for shared
+memory.
+
+The physical device memory can back several different memory spaces in HIP,
+as described in the following sections.
Global memory
================================================================================
-Read-write storage visible to all threads on a given device. There are
-specialized versions of global memory with different usage semantics which are
-typically backed by the same hardware, but can use different caching paths.
+Global memory is the general read-write memory visible to all threads on a
+given device. Since variables located in global memory have to be marked with
+the ``__device__`` qualifier, this memory space is also referred to as device
+memory.
+
+Without explicit copies, it can only be accessed by the threads within a
+kernel operating on the device. However, :ref:`unified_memory` can be used to
+let the runtime manage this, if desired.
+
+Allocating global memory
+--------------------------------------------------------------------------------
+
+This memory needs to be explicitly allocated.
+
+It can be allocated from the host via the :ref:`HIP runtime memory management
+functions ` like :cpp:func:`hipMalloc`, or can be
+defined using the ``__device__`` qualifier on variables.
+
+It can also be allocated within a kernel using ``malloc`` or ``new``.
+The specified amount of memory is allocated by each thread that executes the
+instructions. The recommended way to allocate the memory depends on the use
+case. If the memory is intended to be shared between the threads of a block, it
+is generally beneficial to allocate one large block of memory, due to the way
+the memory is accessed.
+
+.. note::
+   Memory allocated within a kernel can only be freed within kernels, not by
+   host-side HIP runtime functions like :cpp:func:`hipFree`. Conversely, device
+   memory allocated on the host, for example with :cpp:func:`hipMalloc`, can
+   not be freed within a kernel.
+
+
+The following example shows how memory allocated within a kernel by a single
+thread can be shared with all threads of its block. If the device memory is
+only needed for communication between the threads in a single block,
+:ref:`shared_memory` is usually the better option, but it is also limited in
+size.
+
+.. code-block:: cpp
+
+   __global__ void kernel_memory_allocation(){
+       // The pointer is stored in shared memory, so that all
+       // threads of the block can access it
+       __shared__ int *memory;
+
+       size_t blockSize = blockDim.x;
+       constexpr size_t elementsPerThread = 1024;
+       if(threadIdx.x == 0){
+           // allocate memory in one contiguous block
+           memory = new int[blockSize * elementsPerThread];
+       }
+       __syncthreads();
+
+       // load the pointer into a thread-local variable to avoid
+       // unnecessary accesses to shared memory
+       int *localPtr = memory;
+
+       // work with the allocated memory, e.g. initialization
+       for(size_t i = 0; i < elementsPerThread; ++i){
+           // access in a contiguous way
+           localPtr[i * blockSize + threadIdx.x] = static_cast<int>(i);
+       }
+
+       // synchronize to make sure no thread accesses the memory before it is freed
+       __syncthreads();
+       if(threadIdx.x == 0){
+           delete[] memory;
+       }
+   }
+
+Copying between device and host
+--------------------------------------------------------------------------------
+
+When not using :ref:`unified_memory`, memory has to be explicitly copied between
+the device and the host, using the HIP runtime API.
+
+.. code-block:: cpp
+
+ size_t elements = 1 << 20;
+ size_t size_bytes = elements * sizeof(int);
+
+ // allocate host and device memory
+ int *host_pointer = new int[elements];
+ int *device_input, *device_result;
+ HIP_CHECK(hipMalloc(&device_input, size_bytes));
+ HIP_CHECK(hipMalloc(&device_result, size_bytes));
+
+ // copy from host to the device
+ HIP_CHECK(hipMemcpy(device_input, host_pointer, size_bytes, hipMemcpyHostToDevice));
+
+ // Use memory on the device, i.e. execute kernels
+
+ // copy from device to host, to e.g. get results from the kernel
+ HIP_CHECK(hipMemcpy(host_pointer, device_result, size_bytes, hipMemcpyDeviceToHost));
+
+ // free memory when not needed any more
+ HIP_CHECK(hipFree(device_result));
+ HIP_CHECK(hipFree(device_input));
+ delete[] host_pointer;
Constant memory
================================================================================
-Read-only storage visible to all threads on a given device. It is a limited
-segment backed by device memory with queryable size. It needs to be set by the
-host before kernel execution. Constant memory provides the best performance
-benefit when all threads within a warp access the same address.
+Constant memory is read-only storage visible to all threads on a given device.
+It is a limited segment backed by device memory, which takes a different caching
+route than normal device memory accesses. It needs to be set by the host before
+kernel execution.
+
+In order to get the highest bandwidth from the constant memory, all threads of
+a warp have to access the same memory address. If they access different
+addresses, the accesses get serialized and the bandwidth is therefore reduced.
+
+Using constant memory
+--------------------------------------------------------------------------------
+
+Constant memory can not be allocated dynamically; its size has to be specified
+at compile time. If the values are not known at compile time, the host has to
+set them before launching the kernel that accesses the constant memory.
+
+.. code-block:: cpp
+
+ constexpr size_t const_array_size = 32;
+ __constant__ double const_array[const_array_size];
+
+ void set_constant_memory(double* values){
+ hipMemcpyToSymbol(const_array, values, const_array_size * sizeof(double));
+ }
+
+   __global__ void kernel_using_const_memory(double* array){
+       const size_t x = threadIdx.x + blockDim.x * blockIdx.x;
+       int warpIdx = threadIdx.x / warpSize;
+       // uniform access of warps to const_array for best performance
+       array[x] *= const_array[warpIdx];
+   }
Texture memory
================================================================================
-Read-only storage visible to all threads on a given device and accessible
-through additional APIs. Its origins come from graphics APIs, and provides
-performance benefits when accessing memory in a pattern where the
-addresses are close to each other in a 2D representation of the memory.
+Texture memory is special read-only memory visible to all threads on a given
+device and accessible through additional APIs. It originates from graphics
+APIs and provides performance benefits when accessing memory in a pattern where
+the addresses are close to each other in a 2D or 3D representation of the
+memory. It also provides additional features like filtering and addressing for
+out-of-bounds accesses, which are further explained in :ref:`texture_fetching`.
+
+The texture cache was originally also meant to take pressure off global memory
+and the other caches. However, on modern GPUs that support textures, the L1
+cache and texture cache are combined, so the main purpose is to make use of the
+texture-specific features.
+
+To find out whether textures are supported on a device, query
+:cpp:enumerator:`hipDeviceAttributeImageSupport`.
+
+Using texture memory
+--------------------------------------------------------------------------------
+
+Textures are more complex than just a region of memory, so their layout has to
+be specified. They are represented by ``hipTextureObject_t`` and created using
+:cpp:func:`hipCreateTextureObject`.
-The :ref:`texture management module ` of the HIP
-runtime API reference contains the functions of texture memory.
+The underlying memory is a 1D, 2D or 3D ``hipArray_t``, which needs to be
+allocated using :cpp:func:`hipMallocArray`.
+
+On the device side, texture objects are accessed using the ``tex1D/2D/3D``
+functions.
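+
+As a rough host-side sketch (assuming a 2D texture of ``float`` values; the
+variables ``width`` and ``height`` are placeholders for the texture extents):
+
+.. code-block:: cpp
+
+    // Allocate the underlying array
+    hipArray_t array;
+    hipChannelFormatDesc channelDesc = hipCreateChannelDesc<float>();
+    HIP_CHECK(hipMallocArray(&array, &channelDesc, width, height));
+
+    // Describe the resource backing the texture
+    hipResourceDesc resDesc{};
+    resDesc.resType = hipResourceTypeArray;
+    resDesc.res.array.array = array;
+
+    // Describe how the texture is sampled
+    hipTextureDesc texDesc{};
+    texDesc.filterMode = hipFilterModeLinear;
+    texDesc.addressMode[0] = hipAddressModeClamp;
+    texDesc.addressMode[1] = hipAddressModeClamp;
+    texDesc.readMode = hipReadModeElementType;
+
+    hipTextureObject_t texObj;
+    HIP_CHECK(hipCreateTextureObject(&texObj, &resDesc, &texDesc, nullptr));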
+
+The texture management functions can be found in the :ref:`Texture management
+API reference `.
+
+A full example for how to use textures can be found in the `ROCm texture
+management example `_.
Surface memory
================================================================================
-A read-write version of texture memory, which can be useful for applications
-that require direct manipulation of 1D, 2D, or 3D hipArray_t.
+A read-write version of texture memory. It is created in the same way as a
+texture, but with :cpp:func:`hipCreateSurfaceObject`.
+
+Since surfaces are also cached in the read-only texture cache, the changes
+written back to the surface can't be observed in the same kernel. A new kernel
+has to be launched in order to see the updated surface.
+
+The corresponding functions are listed in the :ref:`Surface object API reference
+`.
+
+.. _shared_memory:
+
+Shared memory
+================================================================================
+
+Shared memory is read-write memory that is only visible to the threads within a
+block. It is allocated per thread block and has to be either statically
+allocated at compile time or dynamically allocated when launching the kernel,
+but not during kernel execution. Its general use case is to share variables
+between the threads within a block, but it can also be used as scratch pad
+memory.
+
+Shared memory is not backed by the same physical memory as the other address
+spaces. It is on-chip memory local to the :ref:`compute units
+`, providing low-latency, high-bandwidth access,
+comparable to the L1 cache. It is however limited in size, and as it is
+allocated per block, can restrict how many blocks can be scheduled to a compute
+unit concurrently, thereby potentially reducing occupancy.
+
+An overview of the size of the local data share (LDS), which backs shared
+memory, is given in the
+:doc:`GPU hardware specifications `.
+
+Allocate shared memory
+--------------------------------------------------------------------------------
+
+Shared memory can be allocated dynamically by declaring an
+``extern __shared__`` array, whose size is set at kernel launch. The array can
+then be accessed in the kernel.
+
+.. code-block:: cpp
+
+ extern __shared__ int dynamic_shared[];
+ __global__ void kernel(int array1SizeX, int array1SizeY, int array2Size){
+       // at least (array1SizeX * array1SizeY + array2Size) * sizeof(int) bytes of
+       // dynamic shared memory need to be allocated when the kernel is launched
+ int* array1 = dynamic_shared;
+       // array1 is interpreted as a 2D array of the following size:
+ int array1Size = array1SizeX * array1SizeY;
+
+ int* array2 = &(array1[array1Size]);
+
+ if(threadIdx.x < array1SizeX && threadIdx.y < array1SizeY){
+ // access array1 with threadIdx.x + threadIdx.y * array1SizeX
+ }
+ if(threadIdx.x < array2Size){
+           // access array2 with threadIdx.x
+ }
+ }
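+
+The amount of dynamic shared memory needed by the kernel above is passed as the
+third launch parameter (a sketch; ``numberOfBlocks``, ``threadsPerBlock`` and
+``stream`` are assumed to be defined elsewhere):
+
+.. code-block:: cpp
+
+    size_t sharedBytes = (array1SizeX * array1SizeY + array2Size) * sizeof(int);
+    hipLaunchKernelGGL(kernel, dim3(numberOfBlocks), dim3(threadsPerBlock),
+                       sharedBytes, stream,
+                       array1SizeX, array1SizeY, array2Size);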
+
+A more in-depth example on dynamically allocated shared memory can be found in
+the `ROCm dynamic shared example
+`_.
+
+To statically allocate shared memory, just declare it in the kernel. The memory
+is allocated per block, not per thread. If the kernel requires more shared
+memory than is available to the architecture, the compilation fails.
+
+.. code-block:: cpp
+
+ __global__ void kernel(){
+ __shared__ int array[128];
+ __shared__ double result;
+ }
+
+A more in-depth example on statically allocated shared memory can be found in
+the `ROCm shared memory example
+`_.
-The :ref:`surface objects module ` of HIP runtime API
-contains the functions for creating, destroying and reading surface memory.
\ No newline at end of file
diff --git a/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst b/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst
index a7f2873dd5..646d8afca6 100644
--- a/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst
+++ b/docs/how-to/hip_runtime_api/memory_management/device_memory/texture_fetching.rst
@@ -5,56 +5,67 @@
.. _texture_fetching:
-*******************************************************************************
+********************************************************************************
Texture fetching
-*******************************************************************************
-
-`Textures <../../../../doxygen/html/group___texture.html>`_ are more than just a buffer
-interpreted as a 1D, 2D, or 3D array.
-
-As textures are associated with graphics, they are indexed using floating-point
-values. The index can be in the range of [0 to size-1] or [0 to 1].
-
-Depending on the index, texture sampling or texture addressing is performed,
-which decides the return value.
-
-**Texture sampling**: When a texture is indexed with a fraction, the queried
-value is often between two or more texels (texture elements). The sampling
-method defines what value to return in such cases.
-
-**Texture addressing**: Sometimes, the index is outside the bounds of the
-texture. This condition might look like a problem but helps to put a texture on
-a surface multiple times or to create a visible sign of out-of-bounds indexing,
-in computer graphics. The addressing mode defines what value to return when
-indexing a texture out of bounds.
-
-The different sampling and addressing modes are described in the following
-sections.
-
-Here is the sample texture used in this document for demonstration purposes. It
+********************************************************************************
+
+Textures give access to specialized hardware on GPUs that is usually used in
+graphics processing. In particular, textures use a different way of accessing
+their underlying device memory. Memory accesses to textures are routed through
+a special read-only texture cache, that is optimized for logical spatial
+locality, e.g. locality in 2D grids. This can also benefit certain algorithms
+used in GPGPU computing, when the access pattern is the same as used when
+accessing normal textures.
+
+Additionally, textures can be indexed using floating-point values. This is used
+in graphics applications to interpolate between neighboring values of a texture.
+Depending on whether the texture coordinates are normalized, the index can be
+in the range of ``0`` to ``size - 1`` or ``0`` to ``1``. Textures also have a
+way of handling
+out-of-bounds accesses.
+
+Depending on the value of the index, :ref:`texture filtering `
+or :ref:`texture addressing ` is performed.
+
+Here is the example texture used in this document for demonstration purposes. It
is 2x2 texels and indexed in the [0 to 1] range.
.. figure:: ../../../../data/how-to/hip_runtime_api/memory_management/textures/original.png
:width: 150
- :alt: Sample texture
+ :alt: Example texture
:align: center
Texture used as example
-Texture sampling
-===============================================================================
+In HIP, texture objects are of type :cpp:struct:`hipTextureObject_t` and created
+using :cpp:func:`hipCreateTextureObject`.
+
+For a full list of available texture functions see the :ref:`HIP texture API
+reference `.
+
+A code example for how to use textures can be found in the `ROCm texture
+management example `_.
+
+.. _texture_filtering:
-Texture sampling handles the usage of fractional indices. It is the method that
-describes, which nearby values will be used, and how they are combined into the
-resulting value.
+Texture filtering
+================================================================================
-The various texture sampling methods are discussed in the following sections.
+Texture filtering handles the usage of fractional indices. When the index is a
+fraction, the queried value lies between two or more texels (texture elements),
+depending on the dimensionality of the texture. The filtering method defines how
+to interpolate between these values.
+
+The filter modes are specified in :cpp:enumerator:`hipTextureFilterMode`.
+
+The various texture filtering methods are discussed in the following sections.
.. _texture_fetching_nearest:
-Nearest point sampling
+Nearest point filtering
-------------------------------------------------------------------------------
+This filter mode corresponds to ``hipFilterModePoint``.
+
In this method, the modulo of index is calculated as:
``tex(x) = T[floor(x)]``
@@ -70,22 +81,24 @@ of the nearest texel.
.. figure:: ../../../../data/how-to/hip_runtime_api/memory_management/textures/nearest.png
:width: 300
- :alt: Texture upscaled with nearest point sampling
+ :alt: Texture upscaled with nearest point filtering
:align: center
- Texture upscaled with nearest point sampling
+ Texture upscaled with nearest point filtering
.. _texture_fetching_linear:
Linear filtering
-------------------------------------------------------------------------------
+This filter mode corresponds to ``hipFilterModeLinear``.
+
The linear filtering method does a linear interpolation between values. Linear
interpolation is used to create a linear transition between two values. The
formula used is ``(1-t)P1 + tP2`` where ``P1`` and ``P2`` are the values and
``t`` is within the [0 to 1] range.
-In the case of texture sampling the following formulas are used:
+In the case of linear texture filtering the following formulas are used:
* For one dimensional textures: ``tex(x) = (1-α)T[i] + αT[i+1]``
* For two dimensional textures: ``tex(x,y) = (1-α)(1-β)T[i,j] + α(1-β)T[i+1,j] + (1-α)βT[i,j+1] + αβT[i+1,j+1]``
@@ -95,7 +108,7 @@ Where x, y, and, z are the floating-point indices. i, j, and, k are the integer
indices and, α, β, and, γ values represent how far along the sampled point is on
the three axes. These values are calculated by these formulas: ``i = floor(x')``, ``α = frac(x')``, ``x' = x - 0.5``, ``j = floor(y')``, ``β = frac(y')``, ``y' = y - 0.5``, ``k = floor(z')``, ``γ = frac(z')`` and ``z' = z - 0.5``
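The one dimensional case can be illustrated host-side in plain C++ (a simplified sketch using clamp addressing; ``tex1DLinearHost`` is a made-up name for this example, not a HIP API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Host-side illustration of 1D linear texture filtering:
// tex(x) = (1 - alpha) * T[i] + alpha * T[i + 1]
// with x' = x - 0.5, i = floor(x'), alpha = frac(x').
// Integer indices are clamped to the texel range (address mode clamp).
double tex1DLinearHost(const std::vector<double>& T, double x) {
    const double xShifted = x - 0.5;
    const double iFloor   = std::floor(xShifted);
    const double alpha    = xShifted - iFloor; // fractional part of x'
    const long   last     = static_cast<long>(T.size()) - 1;
    const long   i  = std::min(std::max(static_cast<long>(iFloor), 0L), last);
    const long   i1 = std::min(std::max(static_cast<long>(iFloor) + 1, 0L), last);
    return (1.0 - alpha) * T[i] + alpha * T[i1];
}
```

For a two-texel texture ``{0.0, 1.0}``, sampling at ``x = 1.0`` (halfway between the texel centers at ``0.5`` and ``1.5``) returns the interpolated value ``0.5``.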
-This following image shows a texture stretched out to a 4x4 pixel quad, but
+The following image shows a texture stretched out to a 4x4 pixel quad, but
still indexed in the [0 to 1] range. The in-between values are interpolated
between the neighboring texels.
@@ -106,12 +119,18 @@ between the neighboring texels.
Texture upscaled with linear filtering
+.. _texture_addressing:
+
Texture addressing
===============================================================================
-Texture addressing mode handles the index that is out of bounds of the texture.
-This mode describes which values of the texture or a preset value to use when
-the index is out of bounds.
+The texture addressing modes are specified in
+:cpp:enumerator:`hipTextureAddressMode`.
+
+The texture addressing mode handles out-of-bounds accesses to the texture. This
+can be used in graphics applications to e.g. repeat a texture on a surface
+multiple times in various ways or create visible signs of out-of-bounds
+indexing.
The following sections describe the various texture addressing methods.
@@ -120,8 +139,10 @@ The following sections describe the various texture addressing methods.
Address mode border
-------------------------------------------------------------------------------
-In this method, the texture fetching returns a border value when indexing out of
-bounds. The border value must be set before texture fetching.
+This addressing mode is set using ``hipAddressModeBorder``.
+
+This addressing mode returns a border value when indexing out of bounds. The
+border value must be set before texture fetching.
The following image shows the texture on a 4x4 pixel quad, indexed in the
[0 to 3] range. The out-of-bounds values are the border color, which is yellow.
@@ -141,6 +162,8 @@ the addressing begins.
Address mode clamp
-------------------------------------------------------------------------------
+This addressing mode is set using ``hipAddressModeClamp``.
+
This mode clamps the index between [0 to size-1]. Due to this, when indexing
out-of-bounds, the values on the edge of the texture repeat. The clamp mode is
the default addressing mode.
@@ -164,6 +187,8 @@ the addressing begins.
Address mode wrap
-------------------------------------------------------------------------------
+This addressing mode is set using ``hipAddressModeWrap``.
+
Wrap mode addressing is only available for normalized texture coordinates. In
this addressing mode, the fractional part of the index is used:
@@ -189,6 +214,8 @@ the addressing begins.
Address mode mirror
-------------------------------------------------------------------------------
+This addressing mode is set using ``hipAddressModeMirror``.
+
Similar to the wrap mode the mirror mode is only available for normalized
texture coordinates and also creates a repeating image, but mirroring the
neighboring instances.
diff --git a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst
index 5c55035151..c253416928 100644
--- a/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst
+++ b/docs/how-to/hip_runtime_api/memory_management/unified_memory.rst
@@ -111,8 +111,7 @@ allocator can be used.
❌: **Unsupported**
:sup:`1` Works only with ``XNACK=1`` and kernels with HMM support. First GPU
-access causes recoverable page-fault. For more details, visit `GPU memory
-`_.
+access causes recoverable page-fault.
.. _unified memory allocators:
@@ -144,8 +143,7 @@ GPUs, it is essential to configure the environment variable ``XNACK=1`` and use
a kernel that supports `HMM
`_. Without this
configuration, the behavior will be similar to that of systems without HMM
-support. For more details, visit
-`GPU memory `_.
+support.
The table below illustrates the expected behavior of managed and unified memory
functions on ROCm and CUDA, both with and without HMM support.
diff --git a/docs/how-to/kernel_language_cpp_support.rst b/docs/how-to/kernel_language_cpp_support.rst
new file mode 100644
index 0000000000..e5ad9c733f
--- /dev/null
+++ b/docs/how-to/kernel_language_cpp_support.rst
@@ -0,0 +1,209 @@
+.. meta::
+ :description: This chapter describes HIP's kernel language's C++ support.
+ :keywords: AMD, ROCm, HIP, C++ support
+
+################################################################################
+Kernel language C++ support
+################################################################################
+
+The HIP host API can be compiled with any conforming C++ compiler, as long as no
+kernel launch is present in the code.
+
+To compile device code and include kernel launches, a compiler with full HIP
+support is needed, such as ``amdclang++``. For more information, see :doc:`ROCm
+compilers `.
+
+In host code, all modern C++ standards supported by the compiler can be used.
+Device code compilation also supports all C++ standards in general, but with
+some restrictions. The biggest restriction is the reduced support for the C++
+standard library in device code, as standard library functions are only
+compiled for the host by default. An exception to this is ``constexpr``
+functions, which are resolved at compile time and so can be used in device
+code. There are ongoing efforts to implement C++ standard library
+functionality with `libhipcxx `_.
+
+********************************************************************************
+Supported kernel language C++ features
+********************************************************************************
+
+This section describes HIP's kernel language C++ feature support for the
+different versions of the C++ standard.
+
+General C++ features
+===============================================================================
+
+Exception handling
+-------------------------------------------------------------------------------
+
+An important difference between host and device code C++ support is exception
+handling. In device code, exceptions aren't available due to the hardware
+architecture, so device code must use return codes to handle errors.
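+
+For example, a kernel can record a failure in device-visible memory for the
+host to check after the launch. This is a minimal sketch; the kernel and flag
+names are illustrative:
+
+.. code-block:: cpp
+
+   // Sketch: report errors through memory instead of exceptions.
+   __global__ void checked_copy(const float* in, float* out,
+                                int* error_flag, size_t n)
+   {
+       const size_t i = threadIdx.x + blockIdx.x * blockDim.x;
+       if (i < n) {
+           if (in[i] < 0.0f) {
+               *error_flag = 1; // record the failure instead of throwing
+               return;
+           }
+           out[i] = in[i];
+       }
+   }
+
+   // After the launch, the host copies error_flag back and inspects it.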
+
+Assertions
+--------------------------------------------------------------------------------
+
+The ``assert`` function is supported in device code. Assertions are used for
+debugging purposes: when the input expression evaluates to zero, execution is
+stopped. HIP provides its own implementation of ``assert`` for use in device
+code in ``hip/hip_runtime.h``.
+
+.. code-block:: cpp
+
+ void assert(int input)
+
+HIP also provides the function ``abort()``, which can be used to terminate the
+application when terminal failures are detected. It is implemented using the
+``__builtin_trap()`` function.
+
+This function produces a similar effect as CUDA's ``asm("trap")``. However, in
+HIP, ``abort()`` terminates the entire application, while in CUDA,
+``asm("trap")`` only terminates the current kernel and the application
+continues to run.
+
+printf
+--------------------------------------------------------------------------------
+
+``printf`` is supported in device code, and can be used just like in host code.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   __global__ void run_printf() { printf("Hello World\n"); }
+
+   int main() {
+       run_printf<<<dim3(1), dim3(1), 0, 0>>>();
+   }
+
+Device-side dynamic global memory allocation
+--------------------------------------------------------------------------------
+
+Device code can use ``new`` or ``malloc`` to dynamically allocate global
+memory on the device, and ``delete`` or ``free`` to deallocate global memory.
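+
+A minimal sketch of per-thread device-side allocation (the kernel name is
+illustrative; device-side allocation is comparatively slow and best kept out
+of hot loops):
+
+.. code-block:: cpp
+
+   __global__ void scratch_kernel()
+   {
+       // Each thread allocates and frees its own scratch buffer in
+       // global memory.
+       int* scratch = new int[16];
+       if (scratch != nullptr) {
+           scratch[0] = static_cast<int>(threadIdx.x);
+           delete[] scratch;
+       }
+   }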
+
+Classes
+--------------------------------------------------------------------------------
+
+Classes work on both host and device side, with some constraints on the device
+side.
+
+Member functions with the appropriate qualifiers can be called in host and
+device code, and the corresponding overload is executed.
+
+``virtual`` member functions are also supported. However, calling a virtual
+member function from the host on an object that was created on the device, or
+the other way around, is undefined behaviour.
+
+The ``__host__``, ``__device__``, ``__managed__``, ``__shared__`` and
+``__constant__`` memory space qualifiers cannot be applied to member variables.
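+
+For example, a simple class can offer member functions usable on both sides
+(a sketch; the class itself is illustrative):
+
+.. code-block:: cpp
+
+   class Vector3 {
+   public:
+       __host__ __device__ Vector3(float x, float y, float z)
+           : x_(x), y_(y), z_(z) {}
+
+       // Callable from both host and device code.
+       __host__ __device__ float dot(const Vector3& other) const {
+           return x_ * other.x_ + y_ * other.y_ + z_ * other.z_;
+       }
+
+   private:
+       // Member variables cannot carry memory space qualifiers such as
+       // __device__ or __shared__.
+       float x_, y_, z_;
+   };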
+
+C++11 support
+===============================================================================
+
+``constexpr``
+  Fully supported in device code. ``constexpr`` functions are implicitly
+  ``__host__ __device__``, so standard library functions that are marked
+  ``constexpr`` can be used in device code. ``constexpr`` variables can be
+  used in both host and device code.
+
+Lambdas
+  Lambdas are implicitly marked ``__host__ __device__``. To make them
+  executable only on the host or only on the device, they can be explicitly
+  marked like any other function. There are restrictions on variable capture,
+  however: host- and device-specific variables can only be accessed from the
+  other side by explicitly copying them. Accessing a captured variable by
+  reference, when the variable is not located on the executing device or
+  host, causes undefined behaviour.
+
+Polymorphic function wrappers
+  HIP does not support the polymorphic function wrapper ``std::function``.
+
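+The ``constexpr`` and lambda behavior can be illustrated with ordinary C++.
+The code below compiles as plain host code; under HIP, the same functions are
+implicitly ``__host__ __device__`` and therefore also usable in device code
+(a sketch, not a HIP-specific API):
+
+.. code-block:: cpp
+
+   #include <cassert>
+
+   // Under HIP, constexpr functions are implicitly __host__ __device__,
+   // so this function could also be called from a kernel.
+   constexpr int square(int x) { return x * x; }
+   static_assert(square(4) == 16, "resolved at compile time");
+
+   int main() {
+       // Under HIP, lambdas are implicitly __host__ __device__ as well.
+       // Capture by value; capturing by reference across the host/device
+       // boundary is undefined behaviour.
+       const int offset = 2;
+       auto add_offset = [offset](int v) { return v + offset; };
+       assert(add_offset(square(3)) == 11);
+       return 0;
+   }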
+
+C++14 support
+===============================================================================
+
+All `C++14 language features `_ are
+supported.
+
+C++17 support
+===============================================================================
+
+All `C++17 language features `_ are
+supported.
+
+C++20 support
+===============================================================================
+
+Most `C++20 language features `_ are
+supported, but some restrictions apply. Coroutines are not available in device
+code.
+
+********************************************************************************
+Compiler features
+********************************************************************************
+
+Pragma unroll
+================================================================================
+
+The unroll pragma for unrolling loops with a compile-time constant trip count
+is supported:
+
+.. code-block:: cpp
+
+ #pragma unroll 16 /* hint to compiler to unroll next loop by 16 */
+ for (int i=0; i<16; i++) ...
+
+.. code-block:: cpp
+
+ #pragma unroll 1 /* tell compiler to never unroll the loop */
+ for (int i=0; i<16; i++) ...
+
+.. code-block:: cpp
+
+ #pragma unroll /* hint to compiler to completely unroll next loop. */
+ for (int i=0; i<16; i++) ...
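+
+Unrolling is only a code-generation hint; it never changes results. A
+complete, compilable example (plain C++ with no HIP constructs; compilers
+that don't recognize the pragma simply ignore it):
+
+.. code-block:: cpp
+
+   #include <cassert>
+
+   int sum16(const int* data) {
+       int sum = 0;
+       // Hint to the compiler to fully unroll this fixed-trip-count loop.
+       #pragma unroll
+       for (int i = 0; i < 16; i++) {
+           sum += data[i];
+       }
+       return sum;
+   }
+
+   int main() {
+       int data[16];
+       for (int i = 0; i < 16; i++) data[i] = i;
+       assert(sum16(data) == 120); // 0 + 1 + ... + 15 = 120
+       return 0;
+   }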
+
+Inline assembly
+================================================================================
+
+GCN ISA inline assembly can be included in device code.
+
+Note, however, that inline assembly should be used carefully. For more
+information, see the
+:doc:`Inline ASM statements section of amdclang`.
+
+A short example program including inline assembly can be found in
+`HIP inline_assembly sample
+`_.
+
+For information on what special AMD GPU hardware features are available
+through assembly, please refer to the `ISA manuals of the corresponding
+architecture
+`_.
+
+Kernel compilation
+================================================================================
+
+``hipcc`` supports compiling C++/HIP kernels to binary code objects. The file
+extension for these binaries is usually ``.co``, short for code object. The
+following command builds a code object using ``hipcc``:
+
+.. code-block:: bash
+
+ hipcc --genco --offload-arch=[TARGET GPU] [INPUT FILE] -o [OUTPUT FILE]
+
+ [TARGET GPU] = GPU architecture
+ [INPUT FILE] = Name of the file containing source code
+ [OUTPUT FILE] = Name of the generated code object file
+
+For an example of how to use these code object files, refer to the `HIP
+module_api sample
+`_.
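+
+A code object produced this way can be loaded and launched at runtime through
+the HIP module API. A hedged sketch (error checking omitted; the file name
+``kernel.co`` and kernel name ``my_kernel`` are placeholders):
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   hipModule_t module;
+   hipFunction_t function;
+   hipModuleLoad(&module, "kernel.co");
+   hipModuleGetFunction(&function, module, "my_kernel");
+
+   // Launch a kernel that takes no arguments; for kernels with
+   // arguments, pass an array of pointers as kernelParams.
+   hipModuleLaunchKernel(function,
+                         1, 1, 1,   // grid dimensions
+                         64, 1, 1,  // block dimensions
+                         0,         // dynamic shared memory bytes
+                         nullptr,   // stream (default stream)
+                         nullptr,   // kernelParams
+                         nullptr);  // extra
+   hipModuleUnload(module);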
+
+Architecture-specific code
+================================================================================
+
+``amdclang++`` defines ``__gfx*__`` macros based on the GPU architecture being
+compiled for. These macros can be used to include architecture-specific code.
+Refer to the sample in `HIP gpu_arch sample
+`_.
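+
+For example, device code can provide an architecture-specific path (the
+``__gfx90a__`` macro shown here is one instance of the ``__gfx*__`` pattern;
+the function itself is illustrative):
+
+.. code-block:: cpp
+
+   __device__ float halve(float x)
+   {
+   #if defined(__gfx90a__)
+       // Specialized path compiled only for gfx90a targets.
+       return x * 0.5f;
+   #else
+       // Generic fallback for all other architectures.
+       return x / 2.0f;
+   #endif
+   }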
diff --git a/docs/how-to/performance_guidelines.rst b/docs/how-to/performance_guidelines.rst
index d71c646657..33dbbb4af4 100644
--- a/docs/how-to/performance_guidelines.rst
+++ b/docs/how-to/performance_guidelines.rst
@@ -3,6 +3,8 @@
developers optimize the performance of HIP-capable GPU architectures.
:keywords: AMD, ROCm, HIP, CUDA, performance, guidelines
+.. _how_to_performance_guidelines:
+
*******************************************************************************
Performance guidelines
*******************************************************************************
@@ -32,12 +34,14 @@ reveal and efficiently provide as much parallelism as possible. The parallelism
can be performed at the application level, device level, and multiprocessor
level.
+.. _application_parallel_execution:
+
Application level
--------------------------------------------------------------------------------
To enable parallel execution of the application across the host and devices, use
-asynchronous calls and streams. Assign workloads based on efficiency: serial to
-the host or parallel to the devices.
+:ref:`asynchronous calls and streams `. Assign workloads
+based on efficiency: serial to the host or parallel to the devices.
For parallel workloads, when threads belonging to the same block need to
synchronize to share data, use :cpp:func:`__syncthreads()` (see:
diff --git a/docs/index.md b/docs/index.md
index 7b3f3bc513..eb2eb1e6da 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,6 +30,8 @@ The HIP documentation is organized into the following categories:
* [Debugging with HIP](./how-to/debugging)
* {doc}`./how-to/logging`
* {doc}`./how-to/hip_runtime_api`
+* {doc}`./how-to/hip_cpp_language_extensions`
+* {doc}`./how-to/kernel_language_cpp_support`
* [HIP porting guide](./how-to/hip_porting_guide)
* [HIP porting: driver API guide](./how-to/hip_porting_driver_api)
* {doc}`./how-to/hip_rtc`
@@ -41,8 +43,6 @@ The HIP documentation is organized into the following categories:
* [HIP runtime API](./reference/hip_runtime_api_reference)
* [HSA runtime API for ROCm](./reference/virtual_rocr)
-* [C++ language extensions](./reference/cpp_language_extensions)
-* [C++ language support](./reference/cpp_language_support)
* [HIP math API](./reference/math_api)
* [HIP environment variables](./reference/env_variables)
* [Comparing syntax for different APIs](./reference/terms)
@@ -55,8 +55,7 @@ The HIP documentation is organized into the following categories:
:::{grid-item-card} Tutorial
* [HIP basic examples](https://github.com/ROCm/rocm-examples/tree/develop/HIP-Basic)
-* [HIP examples](https://github.com/ROCm/HIP-Examples)
-* [HIP test samples](https://github.com/ROCm/hip-tests/tree/develop/samples)
+* [HIP examples](https://github.com/ROCm/rocm-examples)
* [SAXPY tutorial](./tutorial/saxpy)
* [Reduction tutorial](./tutorial/reduction)
* [Cooperative groups tutorial](./tutorial/cooperative_groups_tutorial)
diff --git a/docs/install/build.rst b/docs/install/build.rst
index 4f8f8bf505..64deba241b 100644
--- a/docs/install/build.rst
+++ b/docs/install/build.rst
@@ -238,4 +238,4 @@ Run HIP
=================================================
After installation and building HIP, you can compile your application and run.
-A simple example is `square sample `_.
+Simple examples can be found in the `ROCm-examples repository `_.
diff --git a/docs/reference/cpp_language_extensions.rst b/docs/reference/cpp_language_extensions.rst
deleted file mode 100644
index 09a6d8f5dc..0000000000
--- a/docs/reference/cpp_language_extensions.rst
+++ /dev/null
@@ -1,1209 +0,0 @@
-.. meta::
- :description: This chapter describes the built-in variables and functions that are accessible from the
- HIP kernel. It's intended for users who are familiar with CUDA kernel syntax and want to
- learn how HIP differs from CUDA.
- :keywords: AMD, ROCm, HIP, CUDA, c++ language extensions, HIP functions
-
-********************************************************************************
-C++ language extensions
-********************************************************************************
-
-HIP provides a C++ syntax that is suitable for compiling most code that commonly appears in
-compute kernels (classes, namespaces, operator overloading, and templates). HIP also defines other
-language features that are designed to target accelerators, such as:
-
-* A kernel-launch syntax that uses standard C++ (this resembles a function call and is portable to all
- HIP targets)
-* Short-vector headers that can serve on a host or device
-* Math functions that resemble those in ``math.h``, which is included with standard C++ compilers
-* Built-in functions for accessing specific GPU hardware capabilities
-
-.. note::
-
- This chapter describes the built-in variables and functions that are accessible from the HIP kernel. It's
- intended for users who are familiar with CUDA kernel syntax and want to learn how HIP differs from
- CUDA.
-
-Features are labeled with one of the following keywords:
-
-* **Supported**: HIP supports the feature with a CUDA-equivalent function
-* **Not supported**: HIP does not support the feature
-* **Under development**: The feature is under development and not yet available
-
-Function-type qualifiers
-========================================================
-
-``__device__``
------------------------------------------------------------------------
-
-Supported ``__device__`` functions are:
-
- * Run on the device
- * Called from the device only
-
-You can combine ``__device__`` with the host keyword (:ref:`host_attr`).
-
-``__global__``
------------------------------------------------------------------------
-
-Supported ``__global__`` functions are:
-
- * Run on the device
- * Called (launched) from the host
-
-HIP ``__global__`` functions must have a ``void`` return type.
-
-HIP doesn't support dynamic-parallelism, which means that you can't call ``__global__`` functions from
-the device.
-
-.. _host_attr:
-
-``__host__``
------------------------------------------------------------------------
-
-Supported ``__host__`` functions are:
-
- * Run on the host
- * Called from the host
-
-You can combine ``__host__`` with ``__device__``; in this case, the function compiles for the host and the
-device. Note that these functions can't use the HIP grid coordinate functions (e.g., ``threadIdx.x``). If
-you need to use HIP grid coordinate functions, you can pass the necessary coordinate information as
-an argument.
-
-You can't combine ``__host__`` with ``__global__``.
-
-HIP parses the ``__noinline__`` and ``__forceinline__`` keywords and converts them into the appropriate
-Clang attributes.
-
-Calling ``__global__`` functions
-=============================================================
-
-`__global__` functions are often referred to as *kernels*. When you call a global function, you're
-*launching a kernel*. When launching a kernel, you must specify an execution configuration that includes the
-grid and block dimensions. The execution configuration can also include other information for the launch,
-such as the amount of additional shared memory to allocate and the stream where you want to execute the
-kernel.
-
-HIP introduces a standard C++ calling convention (``hipLaunchKernelGGL``) to pass the run
-configuration to the kernel. However, you can also use the CUDA ``<<< >>>`` syntax.
-
-When using ``hipLaunchKernelGGL``, your first five parameters must be:
-
- * ``symbol kernelName``: The name of the kernel you want to launch. To support template kernels
- that contain ``","``, use the ``HIP_KERNEL_NAME`` macro (HIPIFY tools insert this automatically).
- * ``dim3 gridDim``: 3D-grid dimensions that specify the number of blocks to launch.
- * ``dim3 blockDim``: 3D-block dimensions that specify the number of threads in each block.
- * ``size_t dynamicShared``: The amount of additional shared memory that you want to allocate
- when launching the kernel (see :ref:`shared-variable-type`).
- * ``hipStream_t``: The stream where you want to run the kernel. A value of ``0`` corresponds to the
- NULL stream (see :ref:`synchronization_functions`).
-
-You can include your kernel arguments after these parameters.
-
-.. code-block:: cpp
-
- // Example hipLaunchKernelGGL pseudocode:
- __global__ void MyKernel(float *A, float *B, float *C, size_t N)
- {
- ...
- }
-
- MyKernel<<>> (a,b,c,n);
-
- // Alternatively, you can launch the kernel using:
- // hipLaunchKernelGGL(MyKernel, dim3(gridDim), dim3(groupDim), 0/*dynamicShared*/, 0/*stream), a, b, c, n);
-
-You can use HIPIFY tools to convert CUDA launch syntax to ``hipLaunchKernelGGL``. This includes the
-conversion of optional ``<<< >>>`` arguments into the five required ``hipLaunchKernelGGL``
-parameters.
-
-.. note::
-
- HIP doesn't support dimension sizes of :math:`gridDim * blockDim \ge 2^{32}` when launching a kernel.
-
-.. _kernel-launch-example:
-
-Kernel launch example
-==========================================================
-
-.. code-block:: cpp
-
- // Example showing device function, __device__ __host__
- // <- compile for both device and host
- #include
- // Example showing device function, __device__ __host__
- __host__ __device__ float PlusOne(float x) // <- compile for both device and host
- {
- return x + 1.0;
- }
-
- __global__ void MyKernel (const float *a, const float *b, float *c, unsigned N)
- {
- const int gid = threadIdx.x + blockIdx.x * blockDim.x; // <- coordinate index function
- if (gid < N) {
- c[gid] = a[gid] + PlusOne(b[gid]);
- }
- }
-
- void callMyKernel()
- {
- float *a, *b, *c; // initialization not shown...
- unsigned N = 1000000;
- const unsigned blockSize = 256;
- const int gridSize = (N + blockSize - 1)/blockSize;
-
- MyKernel<<>> (a,b,c,N);
- // Alternatively, kernel can be launched by
- // hipLaunchKernelGGL(MyKernel, dim3(gridSize), dim3(blockSize), 0, 0, a,b,c,N);
- }
-
-Variable type qualifiers
-========================================================
-
-``__constant__``
------------------------------------------------------------------------------
-
-The host writes constant memory before launching the kernel. This memory is read-only from the GPU
-while the kernel is running. The functions for accessing constant memory are:
-
-* ``hipGetSymbolAddress()``
-* ``hipGetSymbolSize()``
-* ``hipMemcpyToSymbol()``
-* ``hipMemcpyToSymbolAsync()``
-* ``hipMemcpyFromSymbol()``
-* ``hipMemcpyFromSymbolAsync()``
-
-.. note::
-
- Add ``__constant__`` to a template can lead to undefined behavior. Refer to `HIP Issue #3201 `_ for details.
-
-.. _shared-variable-type:
-
-``__shared__``
------------------------------------------------------------------------------
-
-To allow the host to dynamically allocate shared memory, you can specify ``extern __shared__`` as a
-launch parameter.
-
-.. note::
-
- Prior to the HIP-Clang compiler, dynamic shared memory had to be declared using the
- ``HIP_DYNAMIC_SHARED`` macro in order to ensure accuracy. This is because using static shared
- memory in the same kernel could've resulted in overlapping memory ranges and data-races. The
- HIP-Clang compiler provides support for ``extern __shared_`` declarations, so ``HIP_DYNAMIC_SHARED``
- is no longer required.
-
-``__managed__``
------------------------------------------------------------------------------
-
-Managed memory, including the ``__managed__`` keyword, is supported in HIP combined host/device
-compilation.
-
-``__restrict__``
------------------------------------------------------------------------------
-
-``__restrict__`` tells the compiler that the associated memory pointer not to alias with any other pointer
-in the kernel or function. This can help the compiler generate better code. In most use cases, every
-pointer argument should use this keyword in order to achieve the benefit.
-
-Built-in variables
-====================================================
-
-Coordinate built-ins
------------------------------------------------------------------------------
-
-The kernel uses coordinate built-ins (``thread*``, ``block*``, ``grid*``) to determine the coordinate index
-and bounds for the active work item.
-
-Built-ins are defined in ``amd_hip_runtime.h``, rather than being implicitly defined by the compiler.
-
-Coordinate variable definitions for built-ins are the same for HIP and CUDA. For example: ``threadIdx.x``,
-``blockIdx.y``, and ``gridDim.y``. The products ``gridDim.x * blockDim.x``, ``gridDim.y * blockDim.y``, and
-``gridDim.z * blockDim.z`` are always less than ``2^32``.
-
-Coordinate built-ins are implemented as structures for improved performance. When used with
-``printf``, they must be explicitly cast to integer types.
-
-``warpSize``
------------------------------------------------------------------------------
-The ``warpSize`` variable type is ``int``. It contains the warp size (in threads) for the target device.
-``warpSize`` should only be used in device functions that develop portable wave-aware code.
-
-.. note::
-
- NVIDIA devices return 32 for this variable; AMD devices return 64 for gfx9 and 32 for gfx10 and above.
-
-Vector types
-====================================================
-
-The following vector types are defined in ``hip_runtime.h``. They are not automatically provided by the
-compiler.
-
-Short vector types
---------------------------------------------------------------------------------------------
-
-Short vector types derive from basic integer and floating-point types. These structures are defined in
-``hip_vector_types.h``. The first, second, third, and fourth components of the vector are defined by the
-``x``, ``y``, ``z``, and ``w`` fields, respectively. All short vector types support a constructor function of the
-form ``make_()``. For example, ``float4 make_float4(float x, float y, float z, float w)`` creates
-a vector with type ``float4`` and value ``(x,y,z,w)``.
-
-HIP supports the following short vector formats:
-
-* Signed Integers:
-
- * ``char1``, ``char2``, ``char3``, ``char4``
- * ``short1``, ``short2``, ``short3``, ``short4``
- * ``int1``, ``int2``, ``int3``, ``int4``
- * ``long1``, ``long2``, ``long3``, ``long4``
- * ``longlong1``, ``longlong2``, ``longlong3``, ``longlong4``
-
-* Unsigned Integers:
-
- * ``uchar1``, ``uchar2``, ``uchar3``, ``uchar4``
- * ``ushort1``, ``ushort2``, ``ushort3``, ``ushort4``
- * ``uint1``, ``uint2``, ``uint3``, ``uint4``
- * ``ulong1``, ``ulong2``, ``ulong3``, ``ulong4``
- * ``ulonglong1``, ``ulonglong2``, ``ulonglong3``, ``ulonglong4``
-
-* Floating Points:
-
- * ``float1``, ``float2``, ``float3``, ``float4``
- * ``double1``, ``double2``, ``double3``, ``double4``
-
-.. _dim3:
-
-dim3
---------------------------------------------------------------------------------------------
-
-``dim3`` is a three-dimensional integer vector type that is commonly used to specify grid and group
-dimensions.
-
-The dim3 constructor accepts between zero and three arguments. By default, it initializes unspecified
-dimensions to 1.
-
-.. code-block:: cpp
-
- typedef struct dim3 {
- uint32_t x;
- uint32_t y;
- uint32_t z;
-
- dim3(uint32_t _x=1, uint32_t _y=1, uint32_t _z=1) : x(_x), y(_y), z(_z) {};
- };
-
-.. _memory_fence_instructions:
-
-Memory fence instructions
-====================================================
-
-HIP supports ``__threadfence()`` and ``__threadfence_block()``. If you're using ``threadfence_system()`` in the HIP-Clang path, you can use the following workaround:
-
-#. Build HIP with the ``HIP_COHERENT_HOST_ALLOC`` environment variable enabled.
-#. Modify kernels that use ``__threadfence_system()`` as follows:
-
- * Ensure the kernel operates only on fine-grained system memory, which should be allocated with
- ``hipHostMalloc()``.
- * Remove ``memcpy`` for all allocated fine-grained system memory regions.
-
-.. _synchronization_functions:
-
-Synchronization functions
-====================================================
-
-Synchronization functions causes all threads in the group to wait at this synchronization point, and for all shared and global memory accesses by the threads to complete, before running synchronization. This guarantees the visibility of accessed data for all threads in the group.
-
-The ``__syncthreads()`` built-in function is supported in HIP. The ``__syncthreads_count(int)``,
-``__syncthreads_and(int)``, and ``__syncthreads_or(int)`` functions are under development.
-
-The Cooperative Groups API offer options to do synchronization on a developer defined set of thread groups. For further information, check :ref:`Cooperative Groups API ` or :ref:`Cooperative Groups how to `.
-
-Math functions
-====================================================
-
-HIP-Clang supports a set of math operations that are callable from the device.
-HIP supports most of the device functions supported by CUDA. These are described
-on :ref:`Math API page `.
-
-Texture functions
-===============================================
-
-The supported texture functions are listed in ``texture_fetch_functions.h`` and
-``texture_indirect_functions.h`` header files in the
-`HIP-AMD backend repository `_.
-
-Texture functions are not supported on some devices. To determine if texture functions are supported
-on your device, use ``Macro __HIP_NO_IMAGE_SUPPORT == 1``. You can query the attribute
-``hipDeviceAttributeImageSupport`` to check if texture functions are supported in the host runtime
-code.
-
-Surface functions
-===============================================
-
-The supported surface functions are located on :ref:`Surface object reference
-page `.
-
-Timer functions
-===============================================
-
-To read a high-resolution timer from the device, HIP provides the following built-in functions:
-
-* Returning the incremental counter value for every clock cycle on a device:
-
- .. code-block:: cpp
-
- clock_t clock()
- long long int clock64()
-
- The difference between the values that are returned represents the cycles used.
-
-* Returning the wall clock count at a constant frequency on the device:
-
- .. code-block:: cpp
-
- long long int wall_clock64()
-
- This can be queried using the HIP API with the ``hipDeviceAttributeWallClockRate`` attribute of the
- device in HIP application code. For example:
-
- .. code-block:: cpp
-
- int wallClkRate = 0; //in kilohertz
- HIPCHECK(hipDeviceGetAttribute(&wallClkRate, hipDeviceAttributeWallClockRate, deviceId));
-
- Where ``hipDeviceAttributeWallClockRate`` is a device attribute. Note that wall clock frequency is a
- per-device attribute.
-
- Note that ``clock()`` and ``clock64()`` do not work properly on AMD RDNA3 (GFX11) graphic processors.
-
-.. _atomic functions:
-
-Atomic functions
-===============================================
-
-Atomic functions are run as read-modify-write (RMW) operations that reside in global or shared
-memory. No other device or thread can observe or modify the memory location during an atomic
-operation. If multiple instructions from different devices or threads target the same memory location,
-the instructions are serialized in an undefined order.
-
-To support system scope atomic operations, you can use the HIP APIs that contain the ``_system`` suffix.
-For example:
-
-* ``atomicAnd``: This function is atomic and coherent within the GPU device running the function
-
-* ``atomicAnd_system``: This function extends the atomic operation from the GPU device to other CPUs and GPU devices in the system.
-
-HIP supports the following atomic operations.
-
-.. list-table:: Atomic operations
-
- * - **Function**
- - **Supported in HIP**
- - **Supported in CUDA**
-
- * - ``int atomicAdd(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicAdd_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicAdd(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicAdd_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicAdd(unsigned long long* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicAdd_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``float atomicAdd(float* address, float val)``
- - ✓
- - ✓
-
- * - ``float atomicAdd_system(float* address, float val)``
- - ✓
- - ✓
-
- * - ``double atomicAdd(double* address, double val)``
- - ✓
- - ✓
-
- * - ``double atomicAdd_system(double* address, double val)``
- - ✓
- - ✓
-
- * - ``float unsafeAtomicAdd(float* address, float val)``
- - ✓
- - ✗
-
- * - ``float safeAtomicAdd(float* address, float val)``
- - ✓
- - ✗
-
- * - ``double unsafeAtomicAdd(double* address, double val)``
- - ✓
- - ✗
-
- * - ``double safeAtomicAdd(double* address, double val)``
- - ✓
- - ✗
-
- * - ``int atomicSub(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicSub_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicSub(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicSub_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``int atomicExch(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicExch_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicExch(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicExch_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicExch(unsigned long long int* address,unsigned long long int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicExch_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicExch_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``float atomicExch(float* address, float val)``
- - ✓
- - ✓
-
- * - ``int atomicMin(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicMin_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicMin(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicMin_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicMin(unsigned long long* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``int atomicMax(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicMax_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicMax(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicMax_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicMax(unsigned long long* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicInc(unsigned int* address)``
- - ✗
- - ✓
-
- * - ``unsigned int atomicDec(unsigned int* address)``
- - ✗
- - ✓
-
- * - ``int atomicCAS(int* address, int compare, int val)``
- - ✓
- - ✓
-
- * - ``int atomicCAS_system(int* address, int compare, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicCAS(unsigned int* address,unsigned int compare,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicCAS_system(unsigned int* address, unsigned int compare, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicCAS(unsigned long long* address,unsigned long long compare,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicCAS_system(unsigned long long* address, unsigned long long compare, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``int atomicAnd(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicAnd_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicAnd(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicAnd_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicAnd(unsigned long long* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicAnd_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``int atomicOr(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicOr_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicOr(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicOr_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicOr_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicOr(unsigned long long int* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicOr_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
- * - ``int atomicXor(int* address, int val)``
- - ✓
- - ✓
-
- * - ``int atomicXor_system(int* address, int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicXor(unsigned int* address,unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned int atomicXor_system(unsigned int* address, unsigned int val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicXor(unsigned long long* address,unsigned long long val)``
- - ✓
- - ✓
-
- * - ``unsigned long long atomicXor_system(unsigned long long* address, unsigned long long val)``
- - ✓
- - ✓
-
-Unsafe floating-point atomic RMW operations
-----------------------------------------------------------------------------------------------------------------
-Some HIP devices support fast atomic RMW operations on floating-point values. For example,
-``atomicAdd`` on single- or double-precision floating-point values may generate a hardware RMW
-instruction that is faster than emulating the atomic operation using an atomic compare-and-swap
-(CAS) loop.
-
-On some devices, fast atomic RMW instructions can produce results that differ from the same
-functions implemented with atomic CAS loops. For example, some devices will use different rounding
-or denormal modes, and some devices produce incorrect answers if fast floating-point atomic RMW
-instructions target fine-grained memory allocations.
-
-The HIP-Clang compiler offers a compile-time option that lets you choose fast, but potentially
-unsafe, atomic instructions for your code. On devices that support these instructions, you can pass
-the ``-munsafe-fp-atomics`` option. This flag tells the compiler that every floating-point atomic
-function call is allowed to use an unsafe version, if one exists. For example, on some devices, this
-flag asserts to the compiler that no floating-point ``atomicAdd`` call targets fine-grained
-memory.
-
-If you want to avoid unsafe floating-point atomic RMW operations, use the
-``-mno-unsafe-fp-atomics`` option. Because the compiler does not produce unsafe floating-point
-atomic RMW instructions by default, this option is not strictly required; however, passing it
-explicitly is good practice.
-
-When you pass ``-munsafe-fp-atomics`` or ``-mno-unsafe-fp-atomics`` to the compiler's command line,
-the option is applied globally for the entire compilation. Note that if some of the atomic RMW function
-calls cannot safely use the faster floating-point atomic RMW instructions, you must use
-``-mno-unsafe-fp-atomics`` in order to ensure that your atomic RMW function calls produce correct
-results.
-
-HIP has four extra functions that you can use to more precisely control which floating-point atomic
-RMW functions produce unsafe atomic RMW instructions:
-
-* ``float unsafeAtomicAdd(float* address, float val)``
-* ``double unsafeAtomicAdd(double* address, double val)`` (Always produces fast atomic RMW
- instructions on devices that have them, even when ``-mno-unsafe-fp-atomics`` is used)
-* ``float safeAtomicAdd(float* address, float val)``
-* ``double safeAtomicAdd(double* address, double val)`` (Always produces safe atomic RMW
- operations, even when ``-munsafe-fp-atomics`` is used)
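As an illustrative sketch (the kernel name and buffers are hypothetical, and the code assumes a device that supports fast floating-point atomics), a kernel can mix the two variants explicitly:

```cpp
#include <hip/hip_runtime.h>

// Hypothetical kernel: accumulates into two totals. The coarse-grained total
// may use the fast hardware RMW instruction; the fine-grained total always
// uses the safe CAS-based implementation, regardless of -munsafe-fp-atomics.
__global__ void accumulate(float* coarse_total, float* fine_total, const float* vals) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    unsafeAtomicAdd(coarse_total, vals[i]); // fast, but unsafe on fine-grained memory
    safeAtomicAdd(fine_total, vals[i]);     // always produces a safe atomic RMW
}
```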
-
-.. _warp-cross-lane:
-
-Warp cross-lane functions
-========================================================
-
-Threads in a warp are referred to as ``lanes`` and are numbered from ``0`` to ``warpSize - 1``.
-Warp cross-lane functions operate across all lanes in a warp. The hardware guarantees that all warp
-lanes will execute in lockstep, so additional synchronization is unnecessary, and the instructions
-use no shared memory.
-
-Note that NVIDIA and AMD devices have different warp sizes. Use the ``warpSize`` built-in in your
-portable code to query the warp size.
-
-.. tip::
- Be sure to review HIP code generated from the CUDA path to ensure that it doesn't assume a
- ``waveSize`` of 32. "Wave-aware" code that assumes a ``waveSize`` of 32 can run on a wave-64
- machine, but it only utilizes half of the machine's resources.
-
-To get the default warp size of a GPU device, use ``hipGetDeviceProperties`` in your host functions.
-
-.. code-block:: cpp
-
-   hipDeviceProp_t props;
-   hipGetDeviceProperties(&props, deviceID);
-   int w = props.warpSize;
- // implement portable algorithm based on w (rather than assume 32 or 64)
-
-Only use ``warpSize`` built-ins in device functions, and don't assume ``warpSize`` to be a compile-time
-constant.
-
-Note that assembly kernels may be built for a warp size that is different from the default.
-All mask values either returned or accepted by these builtins are 64-bit
-unsigned integer values, even when compiled for a wave-32 device, where all the
-higher bits are unused. CUDA code ported to HIP requires changes to ensure that
-the correct type is used.
-
-Note that the ``__sync`` variants are made available in ROCm 6.2, but disabled by
-default to help with the transition to 64-bit masks. They can be enabled by
-setting the preprocessor macro ``HIP_ENABLE_WARP_SYNC_BUILTINS``. These builtins
-will be enabled unconditionally in the next ROCm release. Wherever possible, the
-implementation includes a static assert to check that the program source uses
-the correct type for the mask.
-
-.. _warp_vote_functions:
-
-Warp vote and ballot functions
--------------------------------------------------------------------------------------------------------------
-
-.. code-block:: cpp
-
- int __all(int predicate)
- int __any(int predicate)
- unsigned long long __ballot(int predicate)
- unsigned long long __activemask()
-
- int __all_sync(unsigned long long mask, int predicate)
- int __any_sync(unsigned long long mask, int predicate)
- unsigned long long __ballot_sync(unsigned long long mask, int predicate)
-
-You can use ``__any`` and ``__all`` to get a summary view of the predicates evaluated by the
-participating lanes.
-
-* ``__any()``: Returns 1 if the predicate is non-zero for any participating lane, otherwise it returns 0.
-
-* ``__all()``: Returns 1 if the predicate is non-zero for all participating lanes, otherwise it returns 0.
-
-To determine if the target platform supports the any/all instruction, you can use the ``hasWarpVote``
-device property or the ``HIP_ARCH_HAS_WARP_VOTE`` compiler definition.
-
-``__ballot`` returns a bit mask containing the 1-bit predicate value from each
-lane. The nth bit of the result contains the 1 bit contributed by the nth warp
-lane.
-
-``__activemask()`` returns a bit mask of currently active warp lanes. The nth bit
-of the result is 1 if the nth warp lane is active.
-
-Note that the ``__ballot`` and ``__activemask`` builtins in HIP have a 64-bit return
-value (unlike the 32-bit value returned by the CUDA builtins). Code ported from
-CUDA should be adapted to support the larger warp sizes that the HIP version
-requires.
-
-Applications can test whether the target platform supports the ``__ballot`` or
-``__activemask`` instructions using the ``hasWarpBallot`` device property in host
-code or the ``HIP_ARCH_HAS_WARP_BALLOT`` macro defined by the compiler for device
-code.
-
-The ``_sync`` variants require a 64-bit unsigned integer mask argument that
-specifies the lanes in the warp that will participate in cross-lane
-communication with the calling lane. Each participating thread must have its own
-bit set in its mask argument, and all active threads specified in any mask
-argument must execute the same call with the same mask, otherwise the result is
-undefined.
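As an illustrative sketch (the kernel name and output buffers are hypothetical), the vote and ballot builtins can summarize a predicate across a warp:

```cpp
#include <hip/hip_runtime.h>

// Each lane votes on whether its value is positive; lane 0 records the results.
__global__ void vote_example(const int* data, unsigned long long* ballot_out,
                             int* any_out, int* all_out) {
    int lane = threadIdx.x % warpSize;
    int pred = data[threadIdx.x] > 0;
    unsigned long long ballot = __ballot(pred); // bit n holds lane n's predicate
    if (lane == 0) {
        *ballot_out = ballot;
        *any_out = __any(pred); // 1 if any lane's predicate is non-zero
        *all_out = __all(pred); // 1 if every lane's predicate is non-zero
    }
}
```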
-
-Warp match functions
--------------------------------------------------------------------------------------------------------------
-
-.. code-block:: cpp
-
- unsigned long long __match_any(T value)
- unsigned long long __match_all(T value, int *pred)
-
- unsigned long long __match_any_sync(unsigned long long mask, T value)
- unsigned long long __match_all_sync(unsigned long long mask, T value, int *pred)
-
-``T`` can be a 32-bit integer type, a 64-bit integer type, or a single-precision or
-double-precision floating-point type.
-
-``__match_any`` returns a bit mask containing a 1-bit for every participating lane
-if and only if that lane has the same value in ``value`` as the current lane, and
-a 0-bit for all other lanes.
-
-``__match_all`` returns a bit mask containing a 1-bit for every participating lane
-if and only if they all have the same value in ``value`` as the current lane, and
-a 0-bit for all other lanes. The predicate ``pred`` is set to true if and only if
-all participating threads have the same value in ``value``.
-
-The ``_sync`` variants require a 64-bit unsigned integer mask argument that
-specifies the lanes in the warp that will participate in cross-lane
-communication with the calling lane. Each participating thread must have its own
-bit set in its mask argument, and all active threads specified in any mask
-argument must execute the same call with the same mask, otherwise the result is
-undefined.
-
-Warp shuffle functions
--------------------------------------------------------------------------------------------------------------
-
-The default width is ``warpSize`` (see :ref:`warp-cross-lane`). Half-float shuffles are not supported.
-
-.. code-block:: cpp
-
- T __shfl (T var, int srcLane, int width=warpSize);
- T __shfl_up (T var, unsigned int delta, int width=warpSize);
- T __shfl_down (T var, unsigned int delta, int width=warpSize);
- T __shfl_xor (T var, int laneMask, int width=warpSize);
-
- T __shfl_sync (unsigned long long mask, T var, int srcLane, int width=warpSize);
- T __shfl_up_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
- T __shfl_down_sync (unsigned long long mask, T var, unsigned int delta, int width=warpSize);
- T __shfl_xor_sync (unsigned long long mask, T var, int laneMask, int width=warpSize);
-
-``T`` can be a 32-bit integer type, a 64-bit integer type, or a single-precision or
-double-precision floating-point type.
-
-The ``_sync`` variants require a 64-bit unsigned integer mask argument that
-specifies the lanes in the warp that will participate in cross-lane
-communication with the calling lane. Each participating thread must have its own
-bit set in its mask argument, and all active threads specified in any mask
-argument must execute the same call with the same mask, otherwise the result is
-undefined.
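A common use of the shuffle builtins is a warp-level reduction. This sketch (the function name is illustrative) halves the communication distance each step with ``__shfl_down``:

```cpp
#include <hip/hip_runtime.h>

// Reduces one value per lane to a single sum held by lane 0 of the warp.
__device__ float warp_reduce_sum(float v) {
    for (int offset = warpSize / 2; offset > 0; offset /= 2) {
        v += __shfl_down(v, offset); // pull the value from the lane `offset` higher
    }
    return v; // only lane 0 holds the complete sum
}
```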
-
-Cooperative groups functions
-==============================================================
-
-You can use cooperative groups to synchronize groups of threads. Cooperative groups also provide a
-way of communicating between groups of threads at a granularity that is different from the block.
-
-HIP supports the following kernel language cooperative groups types and functions:
-
-.. list-table:: Cooperative groups functions
-
- * - **Function**
- - **Supported in HIP**
- - **Supported in CUDA**
-
- * - ``void thread_group.sync();``
- - ✓
- - ✓
-
- * - ``unsigned thread_group.size();``
- - ✓
- - ✓
-
- * - ``unsigned thread_group.thread_rank()``
- - ✓
- - ✓
-
- * - ``bool thread_group.is_valid();``
- - ✓
- - ✓
-
- * - ``grid_group this_grid()``
- - ✓
- - ✓
-
- * - ``void grid_group.sync()``
- - ✓
- - ✓
-
- * - ``unsigned grid_group.size()``
- - ✓
- - ✓
-
- * - ``unsigned grid_group.thread_rank()``
- - ✓
- - ✓
-
- * - ``bool grid_group.is_valid()``
- - ✓
- - ✓
-
- * - ``multi_grid_group this_multi_grid()``
- - ✓
- - ✓
-
- * - ``void multi_grid_group.sync()``
- - ✓
- - ✓
-
- * - ``unsigned multi_grid_group.size()``
- - ✓
- - ✓
-
- * - ``unsigned multi_grid_group.thread_rank()``
- - ✓
- - ✓
-
- * - ``bool multi_grid_group.is_valid()``
- - ✓
- - ✓
-
- * - ``unsigned multi_grid_group.num_grids()``
- - ✓
- - ✓
-
- * - ``unsigned multi_grid_group.grid_rank()``
- - ✓
- - ✓
-
- * - ``thread_block this_thread_block()``
- - ✓
- - ✓
-
- * - ``void thread_block.sync()``
- - ✓
- - ✓
-
- * - ``unsigned thread_block.size()``
- - ✓
- - ✓
-
- * - ``unsigned thread_block.thread_rank()``
- - ✓
- - ✓
-
- * - ``bool thread_block.is_valid()``
- - ✓
- - ✓
-
- * - ``dim3 thread_block.group_index()``
- - ✓
- - ✓
-
- * - ``dim3 thread_block.thread_index()``
- - ✓
- - ✓
-
-For further information, check :ref:`Cooperative Groups API ` or :ref:`Cooperative Groups how to `.
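A short sketch of the block-level API above (using the ``cooperative_groups`` namespace from ``hip/hip_cooperative_groups.h``; the kernel name is illustrative):

```cpp
#include <hip/hip_runtime.h>
#include <hip/hip_cooperative_groups.h>

namespace cg = cooperative_groups;

// Each thread writes its rank, then the block synchronizes before continuing.
__global__ void block_example(unsigned int* out) {
    cg::thread_block block = cg::this_thread_block();
    out[block.thread_rank()] = block.thread_rank();
    block.sync(); // equivalent to __syncthreads() for the whole block
}
```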
-
-Warp matrix functions
-============================================================
-
-Warp matrix functions allow a warp to cooperatively operate on small matrices that have elements
-spread over lanes in an unspecified manner.
-
-HIP does not support kernel language warp matrix types or functions.
-
-.. list-table:: Warp matrix functions
-
- * - **Function**
- - **Supported in HIP**
- - **Supported in CUDA**
-
- * - ``void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned lda)``
- - ✗
- - ✓
-
- * - ``void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned lda, layout_t layout)``
- - ✗
- - ✓
-
- * - ``void store_matrix_sync(T* mptr, fragment<...> &a, unsigned lda, layout_t layout)``
- - ✗
- - ✓
-
- * - ``void fill_fragment(fragment<...> &a, const T &value)``
- - ✗
- - ✓
-
- * - ``void mma_sync(fragment<...> &d, const fragment<...> &a, const fragment<...> &b, const fragment<...> &c , bool sat)``
- - ✗
- - ✓
-
-Independent thread scheduling
-============================================================
-
-Certain architectures that support CUDA allow threads to progress independently of each other. This
-independent thread scheduling makes intra-warp synchronization possible.
-
-HIP does not support this type of scheduling.
-
-Profiler Counter Function
-============================================================
-
-The CUDA ``__prof_trigger()`` instruction is not supported.
-
-Assert
-============================================================
-
-The ``assert`` function is supported in HIP.
-It is used for debugging: when the input expression evaluates to zero, execution is stopped.
-
-.. code-block:: cpp
-
- void assert(int input)
-
-There are two implementations of the assert function, depending on where it is used:
-
-- the host version of ``assert``, which is defined in ``assert.h``
-- the device version of ``assert``, which is implemented in ``hip/hip_runtime.h``
-
-Include ``assert.h`` to use ``assert``. For ``assert`` to work in both device and host functions, include ``"hip/hip_runtime.h"``.
-
-HIP provides the function ``abort()`` which can be used to terminate the application when terminal failures are detected. It is implemented using the ``__builtin_trap()`` function.
-
-This function produces an effect similar to using ``asm("trap")`` in CUDA code.
-
-.. note::
-
- In HIP, the function terminates the entire application, while in CUDA, ``asm("trap")`` only terminates the dispatch and the application continues to run.
-
-
-``printf``
-============================================================
-
-The ``printf`` function is supported in HIP.
-The following is a simple example that prints information from the kernel.
-
-.. code-block:: cpp
-
-   #include <hip/hip_runtime.h>
-
-   __global__ void run_printf() { printf("Hello World\n"); }
-
-   int main() {
-       run_printf<<<dim3(1), dim3(1), 0, 0>>>();
-       hipDeviceSynchronize(); // wait so the device output is flushed
-   }
-
-
-Device-Side Dynamic Global Memory Allocation
-============================================================
-
-Device-side dynamic global memory allocation is under development. HIP now includes a preliminary
-implementation of ``malloc`` and ``free`` that can be called from device functions.
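A minimal sketch of this preliminary interface (the kernel name is hypothetical, and the code assumes device-side ``malloc``/``free`` are available on the target):

```cpp
#include <hip/hip_runtime.h>

// Each thread allocates a small scratch buffer, uses it, and frees it.
__global__ void scratch_example(int* out) {
    int* scratch = (int*)malloc(4 * sizeof(int));
    if (scratch != nullptr) { // device malloc can fail; always check
        for (int i = 0; i < 4; ++i) {
            scratch[i] = threadIdx.x + i;
        }
        out[threadIdx.x] = scratch[3];
        free(scratch);
    }
}
```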
-
-``__launch_bounds__``
-============================================================
-
-GPU multiprocessors have a fixed pool of resources (primarily registers and shared memory) which are shared by the actively running warps. Using more resources can increase IPC of the kernel but reduces the resources available for other warps and limits the number of warps that can be simultaneously running. Thus GPUs have a complex relationship between resource usage and performance.
-
-``__launch_bounds__`` allows the application to provide usage hints that influence the resources (primarily registers) used by the generated code. It is a function attribute that must be attached to a ``__global__`` function:
-
-.. code-block:: cpp
-
- __global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_WARPS_PER_EXECUTION_UNIT)
- MyKernel(hipGridLaunch lp, ...)
- ...
-
-``__launch_bounds__`` supports two parameters:
-
-- MAX_THREADS_PER_BLOCK - The programmer guarantees that the kernel will be launched with at most MAX_THREADS_PER_BLOCK threads. (On NVCC this maps to the ``.maxntid`` PTX directive.) If no launch bounds are specified, MAX_THREADS_PER_BLOCK is the maximum block size supported by the device (typically 1024 or larger). Specifying MAX_THREADS_PER_BLOCK less than the maximum effectively allows the compiler to use more resources than a default unconstrained compilation that supports all possible block sizes at launch time. The threads-per-block is the product of ``blockDim.x * blockDim.y * blockDim.z``.
-- MIN_WARPS_PER_EXECUTION_UNIT - Directs the compiler to minimize resource usage so that the requested number of warps can be simultaneously active on a multiprocessor. Since active warps compete for the same fixed pool of resources, the compiler must reduce the resources required by each warp (primarily registers). MIN_WARPS_PER_EXECUTION_UNIT is optional and defaults to 1 if not specified. Specifying a MIN_WARPS_PER_EXECUTION_UNIT greater than the default 1 effectively constrains the compiler's resource usage.
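For example (a hypothetical kernel, with values chosen for illustration), a kernel that is always launched with at most 256 threads per block and should keep at least two warps active per execution unit can declare:

```cpp
#include <hip/hip_runtime.h>

// The launch bounds promise a block size of at most 256 threads and ask the
// compiler to keep resource usage low enough for 2 simultaneous warps per EU.
__global__ void __launch_bounds__(256, 2) scale(float* data, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= factor;
}
```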
-
-When a kernel is launched with HIP APIs, for example ``hipModuleLaunchKernel()``, HIP validates that the kernel dimension size does not exceed the specified launch bounds.
-If it does, HIP returns a launch failure. If AMD_LOG_LEVEL is set to an appropriate value (for details, refer to ``docs/markdown/hip_logging.md``), the error log message includes the
-launch parameters of the kernel dimension size, the launch bounds, and the name of the faulting kernel. This helps identify the faulting kernel, and the kernel dimension size and launch bounds values assist in debugging such failures.
-
-Compiler Impact
---------------------------------------------------------------------------------------------
-
-The compiler uses these parameters as follows:
-
-- The compiler uses the hints only to manage register usage, and does not automatically reduce shared memory or other resources.
-- Compilation fails if the compiler cannot generate a kernel that meets the requirements of the specified launch bounds.
-- From MAX_THREADS_PER_BLOCK, the compiler derives the maximum number of warps per block that can be used at launch time. Values of MAX_THREADS_PER_BLOCK less than the default allow the compiler to use a larger pool of registers: each warp uses registers, and this hint constrains the launch to a warps-per-block size that is less than the maximum.
-- From MIN_WARPS_PER_EXECUTION_UNIT, the compiler derives a maximum number of registers that can be used by the kernel (to meet the required number of simultaneously active blocks). If MIN_WARPS_PER_EXECUTION_UNIT is 1, then the kernel can use all registers supported by the multiprocessor.
-- The compiler ensures that the registers used in the kernel are less than both allowed maximums, typically by spilling registers (to shared or global memory), or by using more instructions.
-- The compiler may use heuristics to increase register usage, or may simply be able to avoid spilling. MAX_THREADS_PER_BLOCK is particularly useful in this case, since it allows the compiler to use more registers and avoid situations where the compiler constrains the register usage (potentially spilling) to meet the requirements of a large block size that is never used at launch time.
-
-CU and EU Definitions
---------------------------------------------------------------------------------------------
-
-A compute unit (CU) is responsible for executing the waves of a work-group. It is composed of one or more execution units (EU) which are responsible for executing waves. An EU can have enough resources to maintain the state of more than one executing wave. This allows an EU to hide latency by switching between waves in a similar way to symmetric multithreading on a CPU. In order to allow the state for multiple waves to fit on an EU, the resources used by a single wave have to be limited. Limiting such resources can allow greater latency hiding, but can result in having to spill some register state to memory. This attribute allows an advanced developer to tune the number of waves that are capable of fitting within the resources of an EU. It can be used to ensure at least a certain number will fit to help hide latency, and can also be used to ensure no more than a certain number will fit to limit cache thrashing.
-
-Porting from CUDA ``__launch_bounds``
---------------------------------------------------------------------------------------------
-
-CUDA defines a ``__launch_bounds`` which is also designed to control occupancy:
-
-.. code-block:: cpp
-
- __launch_bounds(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_MULTIPROCESSOR)
-
-The second parameter of ``__launch_bounds`` must be converted to the format used by HIP ``__launch_bounds__``, which uses warps and execution units rather than blocks and multiprocessors (this conversion is performed automatically by the HIPIFY tools):
-
-.. code-block:: cpp
-
- MIN_WARPS_PER_EXECUTION_UNIT = (MIN_BLOCKS_PER_MULTIPROCESSOR * MAX_THREADS_PER_BLOCK) / 32
-
-The key differences in the interface are:
-
-- Warps (rather than blocks): The developer is trying to tell the compiler to control resource utilization to guarantee some number of active warps per EU for latency hiding. Specifying active warps in terms of blocks appears to hide the micro-architectural details of the warp size, but makes the interface more confusing since the developer ultimately needs to compute the number of warps to obtain the desired level of control.
-- Execution units (rather than multiprocessors): The use of execution units rather than multiprocessors provides support for architectures with multiple execution units per multiprocessor. For example, the AMD GCN architecture has 4 execution units per multiprocessor. The ``hipDeviceProps`` structure has a field ``executionUnitsPerMultiprocessor``.
-
-Platform-specific coding techniques such as ``#ifdef`` can be used to specify different launch bounds for NVCC and HIP-Clang platforms, if desired.
-
-``maxregcount``
---------------------------------------------------------------------------------------------
-
-Unlike NVCC, HIP-Clang does not support the ``--maxregcount`` option. Instead, you are encouraged to use the ``__launch_bounds__`` directive, since its parameters are more intuitive and portable than
-micro-architecture details like registers, and the directive allows per-kernel control rather than applying to an entire file. ``__launch_bounds__`` works on both HIP-Clang and NVCC targets.
-
-Asynchronous Functions
-============================================================
-
-References for the supported asynchronous functions are located on the following pages:
-
-* :ref:`stream_management_reference`
-* :ref:`stream_ordered_memory_allocator_reference`
-* :ref:`peer_to_peer_device_memory_access_reference`
-* :ref:`memory_management_reference`
-* :ref:`external_resource_interoperability_reference`
-
-Register Keyword
-============================================================
-
-The ``register`` keyword is deprecated in C++ and is silently ignored by both NVCC and HIP-Clang. You can pass the ``-Wdeprecated-register`` option to the compiler to enable warning messages about its use.
-
-Pragma Unroll
-============================================================
-
-Unrolling with a bound that is known at compile time is supported. For example:
-
-.. code-block:: cpp
-
- #pragma unroll 16 /* hint to compiler to unroll next loop by 16 */
- for (int i=0; i<16; i++) ...
-
-.. code-block:: cpp
-
- #pragma unroll 1 /* tell compiler to never unroll the loop */
- for (int i=0; i<16; i++) ...
-
-.. code-block:: cpp
-
- #pragma unroll /* hint to compiler to completely unroll next loop. */
- for (int i=0; i<16; i++) ...
-
-In-Line Assembly
-============================================================
-
-GCN ISA In-line assembly is supported.
-
-The ROCm compiler has some usage limitations for inline assembly support. Refer to `Inline ASM statements `_ for details.
-
-You can find background resources on `how to use inline assembly `_ for any use of inline assembly features.
-
-A short example program including an inline assembly statement can be found at `inline asm tutorial `_.
-
-For further use of special AMD GPU hardware features that are available through assembly, refer to the ISA manual for `AMDGPU usage `_, which lists AMD GCN architectures from gfx906 to RDNA 3.5.
-
-C++ Support
-============================================================
-
-The following C++ features are not supported:
-
-* Run-time type information (RTTI)
-* ``try``/``catch``
-
-Partially supported features:
-
-* Virtual functions
-
-Virtual functions are not supported if objects containing virtual function tables are passed between GPUs of different offload architectures, for example, between gfx906 and gfx1030. Otherwise, virtual functions are supported.
-
-Kernel Compilation
-============================================================
-
-``hipcc`` supports compiling C++/HIP kernels to binary code objects.
-The file extension for a binary code object is ``.co``, which stands for code object. The following command builds a code object using ``hipcc``.
-
-.. code-block:: bash
-
- hipcc --genco --offload-arch=[TARGET GPU] [INPUT FILE] -o [OUTPUT FILE]
-
- [TARGET GPU] = GPU architecture
- [INPUT FILE] = Name of the file containing kernels
- [OUTPUT FILE] = Name of the generated code object file
-
-.. note::
-
-   When using binary code objects, note that the number of arguments to the kernel differs between the HIP-Clang and NVCC paths. Refer to the `HIP module_api sample `_ for differences in the arguments to be passed to the kernel.
-
-gfx-arch-specific-kernel
-============================================================
-
-Clang-defined ``__gfx*__`` macros can be used to execute GPU-architecture-specific code inside the kernel. Refer to the sample in `HIP 14_gpu_arch sample `_.
diff --git a/docs/reference/cpp_language_support.rst b/docs/reference/cpp_language_support.rst
deleted file mode 100644
index 1635258ccf..0000000000
--- a/docs/reference/cpp_language_support.rst
+++ /dev/null
@@ -1,171 +0,0 @@
-.. meta::
- :description: This chapter describes the C++ support of the HIP ecosystem
- ROCm software.
- :keywords: AMD, ROCm, HIP, C++
-
-*******************************************************************************
-C++ language support
-*******************************************************************************
-
-The ROCm platform enables the power of combined C++ and HIP (Heterogeneous-computing
-Interface for Portability) code. This code is compiled with a ``clang`` or ``clang++``
-compiler. The official compilers support the HIP platform, or you can use the
-``amdclang`` or ``amdclang++`` compilers included in the ROCm installation, which are
-wrappers for the official versions.
-
-The source code is compiled according to the ``C++03``, ``C++11``, ``C++14``, ``C++17``,
-and ``C++20`` standards, along with HIP-specific extensions, but is subject to
-restrictions. The key restriction is the reduced support of the standard library in device
-code. This is because, by default, a function is considered to run on the host,
-except for ``constexpr`` functions, which can run on the host and device as well.
-
-.. _language_modern_cpp_support:
-
-Modern C++ support
-===============================================================================
-
-C++ is considered a modern programming language as of C++11. This section describes how
-HIP supports these new C++ features.
-
-C++11 support
--------------------------------------------------------------------------------
-
-The C++11 standard introduced many new features. These features are supported in HIP host
-code, with some notable omissions on the device side. The rule of thumb is that
-``constexpr`` functions work on the device and the rest don't. This means that some important
-functionality, such as ``std::function``, is missing on the device. The standard library
-wasn't designed with HIP in mind, so the device-side support works as-is.
-
-Certain features have restrictions and clarifications. For example, any function using
-the ``constexpr`` qualifier or the new initializer lists, ``std::move`` or
-``std::forward`` features is implicitly considered to have the ``__host__`` and
-``__device__`` execution space specifier. Also, ``constexpr`` variables that are static
-members or namespace scoped can be used from both host and device, but only for read
-access. Dereferencing a static ``constexpr`` outside its specified execution space causes
-an error.
-
-Lambdas are supported, but there are some extensions and restrictions on their usage. For
-more information, see the `Extended lambdas`_ section below.
-
-C++14 support
--------------------------------------------------------------------------------
-
-The C++14 language features are supported.
-
-C++17 support
--------------------------------------------------------------------------------
-
-All C++17 language features are supported.
-
-C++20 support
--------------------------------------------------------------------------------
-
-All C++20 language features are supported, but extensions and restrictions apply. C++20
-introduced coroutines and modules, which fundamentally changed how programs are written.
-HIP doesn't support these features. However, ``consteval`` functions can be called from
-host and device, even if specified for host use only.
-
-The three-way comparison operator (spaceship operator ``<=>``) works with host and device
-code.
-
-.. _language_restrictions:
-
-Extensions and restrictions
-===============================================================================
-
-In addition to the deviations from the standard, there are some general extensions and
-restrictions to consider.
-
-Global functions
--------------------------------------------------------------------------------
-
-Functions that serve as an entry point for device execution are called kernels and are
-specified with the ``__global__`` qualifier. To call a kernel function, use the triple
-chevron operator: ``<<< >>>``. Kernel functions must have a ``void`` return type. These
-functions can't:
-
-* have a ``constexpr`` specifier
-* have a parameter of type ``std::initializer_list`` or ``va_list``
-* use an rvalue reference as a parameter
-* use parameters that have different sizes in host and device code, e.g. ``long double`` arguments, or structs containing ``long double`` members
-* use struct-type arguments that have a different layout in host and device code
-
-Kernels can have variadic template parameters, but only one parameter pack, which must be
-the last item in the template parameter list.
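For instance (an illustrative kernel, not a library API), a single trailing parameter pack is accepted:

```cpp
#include <hip/hip_runtime.h>

// One parameter pack, placed last in the template parameter list.
// Instantiate with at least one pack argument, e.g. fill_first<float, int>.
template <typename T, typename... Args>
__global__ void fill_first(T* out, Args... args) {
    if (threadIdx.x == 0) {
        T converted[] = {static_cast<T>(args)...};
        out[0] = converted[0];
    }
}
```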
-
-Device space memory specifiers
--------------------------------------------------------------------------------
-
-HIP includes device space memory specifiers to indicate whether a variable is allocated
-in host or device memory and how its memory should be allocated. HIP supports the
-``__device__``, ``__shared__``, ``__managed__``, and ``__constant__`` specifiers.
-
-The ``__device__`` and ``__constant__`` specifiers define global variables, which are
-allocated within global memory on the HIP devices. The only difference is that
-``__constant__`` variables can't be changed after allocation. The ``__shared__``
-specifier allocates the variable within shared memory, which is available for all threads
-in a block.
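A short sketch of these specifiers together (the variable and kernel names are illustrative):

```cpp
#include <hip/hip_runtime.h>

__device__ float device_scale = 2.0f; // mutable, lives in device global memory
__constant__ float coefficients[4];   // read-only after it is set from the host

__global__ void apply(float* data) {
    __shared__ float tile[64];        // visible to all threads in the block
    int i = threadIdx.x;
    tile[i] = data[i] * device_scale + coefficients[i % 4];
    __syncthreads();                  // make the shared writes visible block-wide
    data[i] = tile[i];
}
```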
-
-The ``__managed__`` variable specifier creates global variables that are initially
-undefined and unaddressed within the global symbol table. The HIP runtime allocates
-managed memory and defines the symbol when it loads the device binary. A managed variable
-can be accessed in both device and host code.
-
-It's important to know where a variable is stored because it is only available from
-certain locations. Generally, variables allocated in the host memory are not accessible
-from the device code, while variables allocated in the device memory are not directly
-accessible from the host code. Dereferencing a pointer to device memory on the host
-results in a segmentation fault. Accessing device variables in host code should be done
-through kernel execution or HIP functions like ``hipMemcpyToSymbol``.
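A short sketch of the four specifiers together (the variable names and the ``smooth`` kernel are illustrative):

```cpp
#include <hip/hip_runtime.h>

__device__ float d_scale;        // global memory on the device
__constant__ float c_coeffs[4];  // read-only from device code
__managed__ int m_counter;       // accessible from host and device code

__global__ void smooth(float* data) {
    __shared__ float tile[256];  // visible to all threads in the block
    tile[threadIdx.x] = data[threadIdx.x] * d_scale;
    __syncthreads();
    data[threadIdx.x] = tile[threadIdx.x] * c_coeffs[0];
}

// Host code initializes device-side symbols through the runtime rather
// than by dereferencing device pointers:
//   float coeffs[4] = {1.0f, 0.5f, 0.25f, 0.125f};
//   hipMemcpyToSymbol(HIP_SYMBOL(c_coeffs), coeffs, sizeof(coeffs));
```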
-
-Exception handling
--------------------------------------------------------------------------------
-
-An important difference between the host and device code is exception handling.
-Exceptions aren't available in device code due to the hardware architecture, so device
-code must use return codes to handle errors.
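One common pattern is to report errors through an output flag that the host inspects after the launch. This is a sketch with illustrative names, not a prescribed HIP mechanism:

```cpp
#include <hip/hip_runtime.h>

// Device code cannot throw, so errors are reported through data instead.
__global__ void safeDivide(const float* num, const float* den, float* out,
                           int* errorFlag, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) {
        return;
    }
    if (den[i] == 0.0f) {
        *errorFlag = 1;  // record the error; the host checks this flag
        return;
    }
    out[i] = num[i] / den[i];
}
```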
-
-Kernel parameters
--------------------------------------------------------------------------------
-
-There are some restrictions on kernel function parameters. They cannot be passed by
-reference, because these functions are called from the host but run on the device. Also,
-a variable number of arguments is not allowed.
-
-Classes
--------------------------------------------------------------------------------
-
-Classes work on both the host and device side, but there are some constraints.
-``static`` member functions can't be ``__global__``. ``virtual`` member functions work,
-but a ``virtual`` function must not be called from the host if the parent object was
-created on the device, or the other way around, because this behavior is undefined.
-Another minor restriction is that ``__device__`` variables at global scope must have
-trivial constructors.
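A short sketch of these rules (``Vec3`` is an illustrative type, not part of HIP):

```cpp
#include <hip/hip_runtime.h>

struct Vec3 {
    float x, y, z;

    // Marked for both sides: callable from host and device code.
    __host__ __device__ float dot(const Vec3& other) const {
        return x * other.x + y * other.y + z * other.z;
    }
};

// A global-scope __device__ variable must have a trivial constructor;
// a plain aggregate such as Vec3 qualifies.
__device__ Vec3 d_direction;
```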
-
-Polymorphic function wrappers
--------------------------------------------------------------------------------
-
-HIP doesn't support the polymorphic function wrapper ``std::function``, which was
-introduced in C++11.
-
-Extended lambdas
--------------------------------------------------------------------------------
-
-HIP supports lambdas, which by default work as expected.
-
-Lambdas have implicit ``__host__ __device__`` attributes, so they can be executed by
-both host and device code and behave the way you would expect. To make a lambda callable
-only by host or device code, add the ``__host__`` or ``__device__`` attribute. The only
-restriction is that host variables can only be accessed on the device through capture by
-value; capturing them by reference causes undefined behavior.
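A sketch of the capture rule (the names are illustrative, and compiler support for the attribute-on-lambda syntax may vary):

```cpp
#include <hip/hip_runtime.h>

void hostSide(float* devData, size_t n) {
    float factor = 2.0f;  // host variable

    // Capture by value ([=]): 'factor' is copied into the lambda, so the
    // device sees a private copy. Capturing by reference ([&]) would make
    // the device dereference a host address: undefined behavior.
    auto scaleOp = [=] __device__ (float x) { return x * factor; };

    // scaleOp could now be passed to a kernel or a device algorithm
    // (illustrative usage; not a complete program).
    (void)devData; (void)n; (void)scaleOp;
}
```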
-
-Inline namespaces
--------------------------------------------------------------------------------
-
-Inline namespaces are supported, but with a few exceptions. The following entities can't
-be declared in namespace scope within an inline unnamed namespace:
-
-* ``__managed__``, ``__device__``, ``__shared__`` and ``__constant__`` variables
-* ``__global__`` functions and function templates
-* variables with surface or texture type
diff --git a/docs/reference/terms.md b/docs/reference/terms.md
index ea2b9d96ab..713bf6eb81 100644
--- a/docs/reference/terms.md
+++ b/docs/reference/terms.md
@@ -1,3 +1,9 @@
+
+
+
+
+
+
# Table comparing syntax for different compute APIs
|Term|CUDA|HIP|OpenCL|
diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in
index ba90d82efd..04e1ce18a6 100644
--- a/docs/sphinx/_toc.yml.in
+++ b/docs/sphinx/_toc.yml.in
@@ -49,12 +49,15 @@ subtrees:
- file: how-to/hip_runtime_api/memory_management/virtual_memory
- file: how-to/hip_runtime_api/memory_management/stream_ordered_allocator
- file: how-to/hip_runtime_api/error_handling
- - file: how-to/hip_runtime_api/cooperative_groups
- - file: how-to/hip_runtime_api/hipgraph
- file: how-to/hip_runtime_api/call_stack
+ - file: how-to/hip_runtime_api/asynchronous
+ - file: how-to/hip_runtime_api/hipgraph
+ - file: how-to/hip_runtime_api/cooperative_groups
- file: how-to/hip_runtime_api/multi_device
- file: how-to/hip_runtime_api/opengl_interop
- file: how-to/hip_runtime_api/external_interop
+ - file: how-to/hip_cpp_language_extensions
+ - file: how-to/kernel_language_cpp_support
- file: how-to/hip_porting_guide
- file: how-to/hip_porting_driver_api
- file: how-to/hip_rtc
@@ -106,10 +109,6 @@ subtrees:
- file: doxygen/html/annotated
- file: doxygen/html/files
- file: reference/virtual_rocr
- - file: reference/cpp_language_extensions
- title: C++ language extensions
- - file: reference/cpp_language_support
- title: C++ language support
- file: reference/math_api
- file: reference/env_variables
- file: reference/terms
@@ -124,10 +123,8 @@ subtrees:
entries:
- url: https://github.com/ROCm/rocm-examples/tree/develop/HIP-Basic
title: HIP basic examples
- - url: https://github.com/ROCm/HIP-Examples
+ - url: https://github.com/ROCm/rocm-examples
title: HIP examples
- - url: https://github.com/ROCm/hip-tests/tree/develop/samples
- title: HIP test samples
- file: tutorial/saxpy
- file: tutorial/reduction
- file: tutorial/cooperative_groups_tutorial
diff --git a/docs/sphinx/requirements.in b/docs/sphinx/requirements.in
index 0a0f69fc45..0bd422bee5 100644
--- a/docs/sphinx/requirements.in
+++ b/docs/sphinx/requirements.in
@@ -1,2 +1,2 @@
-rocm-docs-core[api_reference]==1.10.0
+rocm-docs-core[api_reference]==1.13.0
sphinxcontrib.doxylink
diff --git a/docs/sphinx/requirements.txt b/docs/sphinx/requirements.txt
index b23b89a21d..bfbafa055a 100644
--- a/docs/sphinx/requirements.txt
+++ b/docs/sphinx/requirements.txt
@@ -116,7 +116,7 @@ requests==2.32.3
# via
# pygithub
# sphinx
-rocm-docs-core[api-reference]==1.10.0
+rocm-docs-core[api-reference]==1.13.0
# via -r requirements.in
six==1.16.0
# via python-dateutil
diff --git a/docs/understand/compilers.rst b/docs/understand/compilers.rst
index 12273f800d..53512e76e5 100644
--- a/docs/understand/compilers.rst
+++ b/docs/understand/compilers.rst
@@ -96,5 +96,6 @@ Static libraries
ar rcsD libHipDevice.a hipDevice.o
hipcc libHipDevice.a test.cpp -fgpu-rdc -o test.out
-For more information, see `HIP samples host functions `_
-and `device functions `_.
+A full example for this can be found in the ROCm examples; see the examples for
+`static host libraries `_
+or `static device libraries `_.
diff --git a/docs/what_is_hip.rst b/docs/what_is_hip.rst
index d5af8d5937..0e4d0560d2 100644
--- a/docs/what_is_hip.rst
+++ b/docs/what_is_hip.rst
@@ -95,5 +95,5 @@ language features that are designed to target accelerators, such as:
* Math functions that resemble those in ``math.h``, which is included with standard C++ compilers
* Built-in functions for accessing specific GPU hardware capabilities
-For further details, check :doc:`C++ language extensions `
-and :doc:`C++ language support `.
+For further details, check :doc:`HIP C++ language extensions `
+and :doc:`Kernel language C++ support `.
diff --git a/include/hip/hip_runtime_api.h b/include/hip/hip_runtime_api.h
index 14599522a8..d29ca8dfe6 100644
--- a/include/hip/hip_runtime_api.h
+++ b/include/hip/hip_runtime_api.h
@@ -1836,13 +1836,15 @@ hipError_t hipInit(unsigned int flags);
*
* @param [out] driverVersion driver version
*
+ * The HIP driver version is returned in the format:
+ * HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH.
+ *
* @returns #hipSuccess, #hipErrorInvalidValue
*
- * @warning The HIP feature set does not correspond to an exact CUDA SDK driver revision.
- * This function always set *driverVersion to 4 as an approximation though HIP supports
- * some features which were introduced in later CUDA SDK revisions.
- * HIP apps code should not rely on the driver revision number here and should
- * use arch feature flags to test device capabilities or conditional compilation.
+ * @warning The HIP driver version does not correspond to an exact CUDA driver revision.
+ * On the AMD platform, the API returns the HIP driver version, while on the NVIDIA platform,
+ * it calls the corresponding CUDA runtime API and returns the CUDA driver version.
+ * There is no mapping or correlation between the HIP driver version and the CUDA driver version.
*
* @see hipRuntimeGetVersion
*/
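The encoding above can be unpacked with plain integer arithmetic on the host. This is a sketch; ``decodeHipVersion`` is an illustrative helper, not part of the HIP API.

```cpp
// Decode a HIP driver version packed as
// MAJOR * 10000000 + MINOR * 100000 + PATCH
// (the helper name is illustrative, not part of the HIP API).
struct HipVersion {
    int major;
    int minor;
    int patch;
};

inline HipVersion decodeHipVersion(int packed) {
    HipVersion v;
    v.major = packed / 10000000;
    v.minor = (packed / 100000) % 100;
    v.patch = packed % 100000;
    return v;
}
```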
@@ -6379,7 +6381,7 @@ hipError_t hipExtLaunchKernel(const void* function_address, dim3 numBlocks, dim3
*
* @returns #hipSuccess, #hipErrorInvalidValue, #hipErrorNotSupported, #hipErrorOutOfMemory
*
- * @note 3D liner filter isn't supported on GFX90A boards, on which the API @p hipCreateTextureObject will
+ * @note 3D linear filter isn't supported on GFX90A boards, on which the API @p hipCreateTextureObject will
* return hipErrorNotSupported.
*
*/