Built-in Utilities¶
Memory Pools¶
Memory allocation (e.g. in the form of the pyopencl.Buffer() constructor) can be expensive if used frequently. For example, code based on pyopencl.array.Array can easily run into this issue because a fresh memory area is allocated for each intermediate result. Memory pools are a remedy for this problem based on the observation that often many of the block allocations are of the same sizes as previously used ones.
Then, instead of fully returning the memory to the system and incurring the associated reallocation overhead, the pool holds on to the memory and uses it to satisfy future allocations of similarly-sized blocks. The pool reacts appropriately to out-of-memory conditions as long as all memory allocations are made through it. Allocations performed from outside of the pool may run into spurious out-of-memory conditions due to the pool owning much or all of the available memory.
There are two flavors of allocators and memory pools:
- Buffer-based (see Buffer-based Allocators and Memory Pools below)
- SVM-based (see SVM-Based Allocators and Memory Pools below)
pyopencl.array.Array instances can be used with memory pools in a straightforward manner:
mem_pool = pyopencl.tools.MemoryPool(pyopencl.tools.ImmediateAllocator(queue))
a_dev = cl_array.arange(queue, 2000, dtype=np.float32, allocator=mem_pool)
Likewise, SVM-based allocators are directly usable with pyopencl.array.Array.
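As an illustrative sketch (assuming an OpenCL 2.0 device with SVM support, with the context chosen via pyopencl.create_some_context()), an SVM-based allocator can be passed to pyopencl.array functions in the same way; the queue argument matters for safe deallocation, as explained under SVMAllocator below:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
import pyopencl.tools as cl_tools

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# SVM-based allocator; passing queue orders deallocation with respect
# to previously enqueued work (see the warning under SVMAllocator).
svm_alloc = cl_tools.SVMAllocator(ctx, queue=queue)
a_dev = cl_array.arange(queue, 2000, dtype=np.float32, allocator=svm_alloc)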
Buffer-based Allocators and Memory Pools¶
- class pyopencl.tools.PooledBuffer¶
An object representing a MemoryPool-based allocation of Buffer-style device memory. Analogous to Buffer; however, once this object is deleted, its associated device memory is returned to the pool.
Is a pyopencl.MemoryObject.
- class pyopencl.tools.AllocatorBase¶
An interface implemented by various memory allocation functions in pyopencl.
- __call__(self, size: int) pyopencl._cl.Buffer ¶
Allocate and return a pyopencl.Buffer of the given size.
- class pyopencl.tools.DeferredAllocator(*args, **kwargs)¶
mem_flags takes its values from pyopencl.mem_flags and corresponds to the flags argument of pyopencl.Buffer. DeferredAllocator has the same semantics as regular OpenCL buffer allocation, i.e. it may promise memory that (in some later call to a buffer-using CL function) turns out not to be available. (Allocations in CL are bound to contexts, not devices, and memory availability depends on which device the buffer is used with.)
Implements AllocatorBase.
Changed in version 2013.1: CLAllocator was deprecated and replaced by DeferredAllocator.
- __init__(context, mem_flags=pyopencl.mem_flags.READ_WRITE)¶
- __call__(self, size: int) pyopencl._cl.Buffer ¶
Allocate a pyopencl.Buffer of the given size.
Changed in version 2020.2: The allocator will succeed even for allocations of size zero, returning None.
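As a usage sketch (assuming any available OpenCL device), a DeferredAllocator can be passed as the allocator argument of pyopencl.array functions; with deferred semantics, an out-of-memory condition may only surface once the buffer is first used:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
import pyopencl.tools as cl_tools

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Deferred semantics: the buffer is promised now, but allocation failure
# may only be reported when the buffer is actually used by a kernel.
alloc = cl_tools.DeferredAllocator(ctx)
a_dev = cl_array.zeros(queue, 1000, dtype=np.float32, allocator=alloc)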
- class pyopencl.tools.ImmediateAllocator(*args, **kwargs)¶
mem_flags takes its values from pyopencl.mem_flags and corresponds to the flags argument of pyopencl.Buffer. ImmediateAllocator will attempt to ensure at allocation time that allocated memory is actually available. If no memory is available, an out-of-memory error is reported at allocation time.
Implements AllocatorBase.
Added in version 2013.1.
- __init__(queue, mem_flags=pyopencl.mem_flags.READ_WRITE)¶
- __call__(self, size: int) pyopencl._cl.Buffer ¶
Allocate a pyopencl.Buffer of the given size.
Changed in version 2020.2: The allocator will succeed even for allocations of size zero, returning None.
- class pyopencl.tools.MemoryPool(*args, **kwargs)¶
A memory pool for OpenCL device memory in pyopencl.Buffer form. allocator must be an instance of one of the above classes, and should be an ImmediateAllocator. The memory pool assumes that allocation failures are reported by the allocator immediately, and not in the OpenCL-typical deferred manner.
Implements AllocatorBase.
Changed in version 2019.1: Current bin allocation behavior documented, leading_bits_in_bin_id added.
- __init__(self, allocator: pyopencl._cl.AllocatorBase, leading_bits_in_bin_id: int = 4) None ¶
- allocate(self, size: int) pyopencl._cl.PooledBuffer ¶
Return a PooledBuffer of the given size.
- __call__(self, size: int) pyopencl._cl.PooledBuffer ¶
Synonym for allocate() to match AllocatorBase.
Added in version 2011.2.
Note
The current implementation of the memory pool will retain allocated memory after it is returned by the application and keep it in a bin identified by the leading leading_bits_in_bin_id bits of the allocation size. To ensure that allocations within each bin are interchangeable, allocation sizes are rounded up to the largest size that shares the leading bits of the requested allocation size.
The current default value of leading_bits_in_bin_id is four, but this may change in future versions and is not guaranteed.
leading_bits_in_bin_id must be passed by keyword, and its role is purely advisory. It is not guaranteed that future versions of the pool will use the same allocation scheme and/or honor leading_bits_in_bin_id.
- held_blocks¶
The number of unused blocks being held by this pool.
- active_blocks¶
The number of blocks in active use that have been allocated through this pool.
- managed_bytes¶
“Managed” memory is “active” and “held” memory.
Added in version 2021.1.2.
- active_bytes¶
“Active” bytes are bytes under the control of the application. This may be smaller than the actual allocated size reflected in managed_bytes.
Added in version 2021.1.2.
- free_held()¶
Free all unused memory that the pool is currently holding.
- stop_holding()¶
Instruct the memory pool to start immediately freeing memory returned to it, instead of holding it for future allocations. Implicitly calls free_held(). This is useful as a cleanup action when a memory pool falls out of use.
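As an illustrative sketch of the statistics and cleanup methods above (the printed values are device- and run-dependent):
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
import pyopencl.tools as cl_tools

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

pool = cl_tools.MemoryPool(cl_tools.ImmediateAllocator(queue))

# Deleting the array returns its block to the pool rather than to the system.
a_dev = cl_array.zeros(queue, 10000, dtype=np.float32, allocator=pool)
del a_dev

print(pool.active_blocks, pool.held_blocks, pool.managed_bytes)

pool.free_held()     # release unused held blocks back to the system
pool.stop_holding()  # cleanup action once the pool falls out of use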
SVM-Based Allocators and Memory Pools¶
SVM functionality requires OpenCL 2.0.
- class pyopencl.tools.PooledSVM¶
An object representing an SVMPool-based allocation of Shared Virtual Memory (SVM). Analogous to SVMAllocation; however, once this object is deleted, its associated device memory is returned to the pool from which it came.
Added in version 2022.2.
Note
If the SVMAllocator for the SVMPool that allocated an object of this type is associated with an (in-order) CommandQueue, sufficient synchronization is provided to ensure operations enqueued before deallocation complete before operations from a different use (possibly in a different queue) are permitted to start. This applies when release is called and also when the object is freed automatically by the garbage collector.
Is a pyopencl.SVMPointer.
Supports structural equality and hashing.
- release(self) None ¶
Return the held memory to the pool. See the note about synchronization behavior during deallocation above.
- enqueue_release(self) None ¶
Synonymous to release(), for consistency with SVMAllocation. Note that, unlike pyopencl.SVMAllocation.enqueue_release(), specifying a queue or events to be waited for is not supported.
- bind_to_queue(self, arg: pyopencl._cl.CommandQueue, /) None ¶
Analogous to pyopencl.SVMAllocation.bind_to_queue().
- unbind_from_queue(self) None ¶
Analogous to pyopencl.SVMAllocation.unbind_from_queue().
- class pyopencl.tools.SVMAllocator(*args, **kwargs)¶
Added in version 2022.2.
- __init__(self, context: pyopencl._cl.Context, alignment: int = 0, flags: int = 1, queue: pyopencl._cl.CommandQueue | None = None) None ¶
- Parameters:
flags – See svm_mem_flags.
queue –
If not specified, allocations will be freed eagerly, irrespective of whether pending/enqueued operations are still using the memory.
If specified, deallocation of memory will be enqueued with the given queue, and will only be performed after previously-enqueued operations in the queue have completed.
It is an error to specify an out-of-order queue.
Warning
Not specifying a queue will typically lead to undesired behavior, including crashes and memory corruption. See the warning in Shared Virtual Memory (SVM).
- __call__(self, size: int) pyopencl._cl.SVMAllocation ¶
Return an SVMAllocation of the given size.
- class pyopencl.tools.SVMPool(*args, **kwargs)¶
A memory pool for OpenCL device memory in SVM form. allocator must be an instance of SVMAllocator.
Added in version 2022.2.
- __init__(self, allocator: pyopencl._cl.SVMAllocator, leading_bits_in_bin_id: int = 4) None ¶
- __call__(self, size: int) pyopencl._cl.PooledSVM ¶
Return a PooledSVM of the given size.
Note
The current implementation of the memory pool will retain allocated memory after it is returned by the application and keep it in a bin identified by the leading leading_bits_in_bin_id bits of the allocation size. To ensure that allocations within each bin are interchangeable, allocation sizes are rounded up to the largest size that shares the leading bits of the requested allocation size.
The current default value of leading_bits_in_bin_id is four, but this may change in future versions and is not guaranteed.
leading_bits_in_bin_id must be passed by keyword, and its role is purely advisory. It is not guaranteed that future versions of the pool will use the same allocation scheme and/or honor leading_bits_in_bin_id.
- held_blocks¶
The number of unused blocks being held by this pool.
- active_blocks¶
The number of blocks in active use that have been allocated through this pool.
- managed_bytes¶
“Managed” memory is “active” and “held” memory.
Added in version 2021.1.2.
- active_bytes¶
“Active” bytes are bytes under the control of the application. This may be smaller than the actual allocated size reflected in managed_bytes.
Added in version 2021.1.2.
- free_held()¶
Free all unused memory that the pool is currently holding.
- stop_holding()¶
Instruct the memory pool to start immediately freeing memory returned to it, instead of holding it for future allocations. Implicitly calls free_held(). This is useful as a cleanup action when a memory pool falls out of use.
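Analogously to MemoryPool, a brief sketch of wrapping an SVMAllocator in an SVMPool (assuming an OpenCL 2.0 device with SVM support):
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
import pyopencl.tools as cl_tools

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

svm_pool = cl_tools.SVMPool(cl_tools.SVMAllocator(ctx, queue=queue))

# Memory freed by the array is retained by the pool for later reuse.
a_dev = cl_array.zeros(queue, 10000, dtype=np.float32, allocator=svm_pool)
del a_dev
print(svm_pool.held_blocks, svm_pool.managed_bytes)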
CL-Object-dependent Caching¶
- pyopencl.tools.clear_first_arg_caches()[source]¶
Empties all first-argument-dependent memoization caches. Also releases all held reference contexts. If it is important to you that the program detaches from its context, you might need to call this function to free all remaining references to your context.
Added in version 2011.2.
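As a usage sketch, the call is typically made during teardown, before dropping the last reference to a context:
import gc
import pyopencl as cl
import pyopencl.tools as cl_tools

ctx = cl.create_some_context()
# ... use the context ...

cl_tools.clear_first_arg_caches()  # drop memoization caches referencing ctx
del ctx
gc.collect()                       # allow the context to actually be released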
Testing¶
- pyopencl.tools.pytest_generate_tests_for_pyopencl(metafunc)[source]¶
Using the line:
from pyopencl.tools import pytest_generate_tests_for_pyopencl as pytest_generate_tests
in your pytest test scripts allows you to use the arguments ctx_factory, device, or platform in your test functions, and they will automatically be run for each OpenCL device/platform in the system, as appropriate.
The PYOPENCL_TEST environment variable is also supported to control device/platform choice, e.g.:
PYOPENCL_TEST=0:0,1;intel=i5,i7
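For illustration, a test using this hook might look like the following sketch (the test body is hypothetical):
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from pyopencl.tools import pytest_generate_tests_for_pyopencl as pytest_generate_tests

def test_doubling(ctx_factory):
    # ctx_factory is supplied once per available device/platform.
    ctx = ctx_factory()
    queue = cl.CommandQueue(ctx)

    a_dev = cl_array.arange(queue, 100, dtype=np.float32)
    ref = 2 * np.arange(100, dtype=np.float32)
    assert np.allclose((2 * a_dev).get(), ref)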
Argument Types¶
- class pyopencl.tools.VectorArg(dtype: Any, name: str, with_offset: bool = False)[source]¶
Inherits from DtypedArgument.
- class pyopencl.tools.ScalarArg(dtype: Any, name: str)[source]¶
Inherits from DtypedArgument.
- pyopencl.tools.parse_arg_list(arguments: str | List[str] | List[DtypedArgument], with_offset: bool = False) List[DtypedArgument] [source]¶
Parse a list of kernel arguments. arguments may be a comma-separated list of C declarators in a string, a list of strings representing C declarators, or Argument objects.
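As a brief sketch, pointer declarators become VectorArg instances and scalar declarators become ScalarArg instances:
from pyopencl.tools import ScalarArg, VectorArg, parse_arg_list

args = parse_arg_list("float *y, float a, float *x, int n")
assert isinstance(args[0], VectorArg)
assert isinstance(args[1], ScalarArg)
print([arg.name for arg in args])  # ['y', 'a', 'x', 'n']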
Device Characterization¶
- pyopencl.characterize.get_fast_inaccurate_build_options(dev)[source]¶
Return a list of flags valid on device dev that enable fast, but potentially inaccurate floating point math.
- pyopencl.characterize.get_simd_group_size(dev, type_size)[source]¶
Return an estimate of how many work items will be executed across SIMD lanes. This returns the size of what Nvidia calls a warp and what AMD calls a wavefront.
Only refers to implicit SIMD.
- Parameters:
type_size – number of bytes in vector entry type.
- pyopencl.characterize.has_amd_double_support(dev)[source]¶
Return whether dev has AMD double precision support. This accounts for the incomplete double precision support found on some low-end AMD boards.
- pyopencl.characterize.has_src_build_cache(dev: Device) bool | None [source]¶
Return True if dev has internal support for caching builds from source, False if it doesn’t, and None if unknown.
- pyopencl.characterize.has_struct_arg_count_bug(dev, ctx=None)[source]¶
Checks whether the device is expected to have the argument counting bug.
- pyopencl.characterize.local_memory_access_granularity(dev)[source]¶
Return the number of bytes per bank in local memory.
- pyopencl.characterize.local_memory_bank_count(dev)[source]¶
Return the number of banks present in local memory.
- pyopencl.characterize.nv_compute_capability(dev)[source]¶
If dev is an Nvidia GPU pyopencl.Device, return a tuple (major, minor) indicating the device’s compute capability.
- pyopencl.characterize.simultaneous_work_items_on_local_access(dev)[source]¶
Return the number of work items that access local memory simultaneously and thereby may conflict with each other.
- pyopencl.characterize.usable_local_mem_size(dev, nargs=None)[source]¶
Return an estimate of the usable local memory size.
- Parameters:
nargs – Number of 32-bit arguments passed.
- pyopencl.characterize.why_not_local_access_conflict_free(dev, itemsize, array_shape, array_stored_shape=None)[source]¶
- Parameters:
itemsize – size of accessed data in bytes
array_shape – array dimensions, fastest-moving last (C order)
- Returns:
a tuple (multiplicity, explanation), where multiplicity is the number of work items that will conflict on a bank when accessing local memory. explanation is a string detailing the found conflict.
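For instance, a sketch combining some of these queries (returned values are device-dependent, and nv_compute_capability() is meaningful only on Nvidia devices):
import pyopencl as cl
from pyopencl import characterize

ctx = cl.create_some_context()
dev = ctx.devices[0]

print("fast-math flags: ", characterize.get_fast_inaccurate_build_options(dev))
print("SIMD group size: ", characterize.get_simd_group_size(dev, type_size=4))
print("local mem banks: ", characterize.local_memory_bank_count(dev))
print("bank granularity:", characterize.local_memory_access_granularity(dev))
print("NV compute cap.: ", characterize.nv_compute_capability(dev))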
Type aliases¶
- class pyopencl._cl.AllocatorBase¶