Built-in Utilities

Memory Pools

The constructor pyopencl.Buffer() can consume a fairly large amount of processing time if it is invoked very frequently. For example, code based on pyopencl.array.Array can easily run into this issue because a fresh memory area is allocated for each intermediate result. Memory pools are a remedy for this problem based on the observation that often many of the block allocations are of the same sizes as previously used ones.

Then, instead of fully returning the memory to the system and incurring the associated reallocation overhead, the pool holds on to the memory and uses it to satisfy future allocations of similarly-sized blocks. The pool reacts appropriately to out-of-memory conditions as long as all memory allocations are made through it. Allocations performed from outside of the pool may run into spurious out-of-memory conditions due to the pool owning much or all of the available memory.
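The size-binning idea behind the pool can be illustrated with a small pure-Python sketch. This is an illustration of the concept only, not PyOpenCL's actual implementation; the class and method names here are made up:

```python
# Illustrative sketch of size-binned pooling -- not PyOpenCL's implementation.
class ToyPool:
    def __init__(self, alloc_fn):
        self.alloc_fn = alloc_fn      # underlying allocator, e.g. device malloc
        self.free_blocks = {}         # size -> list of unused blocks

    def allocate(self, size):
        blocks = self.free_blocks.get(size)
        if blocks:
            return blocks.pop()       # reuse a held block: no allocator call
        return self.alloc_fn(size)    # miss: fall through to the allocator

    def release(self, size, block):
        # Instead of freeing, hold the block for future same-size requests.
        self.free_blocks.setdefault(size, []).append(block)

calls = []
pool = ToyPool(lambda size: (calls.append(size), bytearray(size))[1])
a = pool.allocate(2000)
pool.release(2000, a)
b = pool.allocate(2000)   # served from the pool; no second allocator call
assert len(calls) == 1
```

Note that the second 2000-byte request never reaches the underlying allocator, which is exactly the reallocation overhead the pool avoids.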

Using pyopencl.array.Array instances with a MemoryPool is not complicated:

import numpy as np
import pyopencl
import pyopencl.array as cl_array
import pyopencl.tools

# 'queue' is an existing pyopencl.CommandQueue
mem_pool = pyopencl.tools.MemoryPool(pyopencl.tools.ImmediateAllocator(queue))
a_dev = cl_array.arange(queue, 2000, dtype=np.float32, allocator=mem_pool)
class pyopencl.tools.PooledBuffer

An object representing a MemoryPool-based allocation of device memory. Once this object is deleted, its associated device memory is returned to the pool. This supports the same interface as pyopencl.Buffer.

class pyopencl.tools.DeferredAllocator(context, mem_flags=pyopencl.mem_flags.READ_WRITE)

mem_flags takes its values from pyopencl.mem_flags and corresponds to the flags argument of pyopencl.Buffer. DeferredAllocator has the same semantics as regular OpenCL buffer allocation, i.e. it may promise memory to be available that may (in any call to a buffer-using CL function) turn out to not exist later on. (Allocations in CL are bound to contexts, not devices, and memory availability depends on which device the buffer is used with.)

Changed in version 2013.1: CLAllocator was deprecated and replaced by DeferredAllocator.


__call__(size)

Allocate a pyopencl.Buffer of the given size.

class pyopencl.tools.ImmediateAllocator(queue, mem_flags=pyopencl.mem_flags.READ_WRITE)

mem_flags takes its values from pyopencl.mem_flags and corresponds to the flags argument of pyopencl.Buffer. Unlike DeferredAllocator, ImmediateAllocator attempts to ensure at allocation time that the allocated memory is actually available. If no memory is available, an out-of-memory error is reported at allocation time, rather than deferred to a later buffer-using CL call.

New in version 2013.1.


__call__(size)

Allocate a pyopencl.Buffer of the given size.

class pyopencl.tools.MemoryPool(allocator)

A memory pool for OpenCL device memory. allocator must be an instance of one of the above classes, and should be an ImmediateAllocator. The memory pool assumes that allocation failures are reported by the allocator immediately, and not in the OpenCL-typical deferred manner.


held_blocks

The number of unused blocks being held by this pool.


active_blocks

The number of blocks in active use that have been allocated through this pool.


allocate(size)

Return a PooledBuffer of the given size.


__call__(size)

Synonym for allocate() to match the CLAllocator interface.


free_held()

Free all unused memory that the pool is currently holding.


stop_holding()

Instruct the memory pool to start immediately freeing memory returned to it, instead of holding it for future allocations. Implicitly calls free_held(). This is useful as a cleanup action when a memory pool falls out of use.

CL-Object-dependent Caching

pyopencl.tools.first_arg_dependent_memoize(func, cl_object, *args)

Provides memoization for a function. Typically used to cache things that get created inside a pyopencl.Context, e.g. programs and kernels. Assumes that the first argument of the decorated function is an OpenCL object that might go away, such as a pyopencl.Context or a pyopencl.CommandQueue, and based on which we might want to clear the cache.

New in version 2011.2.
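The mechanism can be sketched in plain Python. This is a simplified model, not PyOpenCL's implementation; the `toy_*` names below are made up. Results are cached per identity of the first argument, so all entries tied to one context can be discarded together:

```python
# Simplified model of first-argument-dependent memoization.
# Caches are keyed on the identity of the first argument, so they can be
# dropped wholesale when that object (e.g. a context) goes away.
_first_arg_caches = {}

def toy_first_arg_dependent_memoize(func):
    def wrapper(first_arg, *args):
        cache = _first_arg_caches.setdefault(id(first_arg), {})
        key = (func, args)
        if key not in cache:
            cache[key] = func(first_arg, *args)
        return cache[key]
    return wrapper

def toy_clear_first_arg_caches():
    _first_arg_caches.clear()

calls = []

@toy_first_arg_dependent_memoize
def build(ctx, src):
    calls.append(src)          # track how often the body actually runs
    return f"program({src})"

ctx = object()                 # stand-in for a pyopencl.Context
build(ctx, "kernel_a")
build(ctx, "kernel_a")         # cached: the body runs only once
assert calls == ["kernel_a"]
toy_clear_first_arg_caches()
build(ctx, "kernel_a")         # caches emptied: the body runs again
assert calls == ["kernel_a", "kernel_a"]
```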


pyopencl.tools.clear_first_arg_caches()

Empties all first-argument-dependent memoization caches. Also releases all held reference contexts. If it is important to you that the program detaches from its context, you might need to call this function to free all remaining references to your context.

New in version 2011.2.



Testing

Using the line:

from pyopencl.tools import pytest_generate_tests_for_pyopencl \
        as pytest_generate_tests

in your pytest test scripts allows you to use the arguments ctx_factory, device, or platform in your test functions, and they will automatically be run for each OpenCL device/platform in the system, as appropriate.

The following two environment variables are also supported to control device/platform choice:


Device Characterization


pyopencl.characterize.get_fast_inaccurate_build_options(dev)

Return a list of flags valid on device dev that enable fast, but potentially inaccurate floating point math.

pyopencl.characterize.get_simd_group_size(dev, type_size)

Return an estimate of how many work items will be executed across SIMD lanes. This returns the size of what Nvidia calls a warp and what AMD calls a wavefront.

Only refers to implicit SIMD.

Parameters: type_size – number of bytes in vector entry type.
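A typical use is rounding a work size up to a multiple of the SIMD width so no lanes in the last group sit idle. The helper below is a hypothetical sketch; `get_simd_group_size` itself needs a real device, so a plausible return value of 32 (an Nvidia warp) is hard-coded here:

```python
def round_up_to_simd(n, simd_group_size):
    # Round n up to the next multiple of the SIMD width, so the last
    # SIMD group is not left partially filled.
    return -(-n // simd_group_size) * simd_group_size

# With e.g. simd_group_size = 32, as get_simd_group_size might report
# for an Nvidia warp:
assert round_up_to_simd(1000, 32) == 1024
assert round_up_to_simd(1024, 32) == 1024
```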

pyopencl.characterize.has_amd_double_support(dev)

Fix to allow for the incomplete double precision support of low-end AMD boards.

pyopencl.characterize.has_struct_arg_count_bug(dev, ctx=None)

Checks whether the device is expected to have the argument counting bug.


pyopencl.characterize.local_memory_access_granularity(dev)

Return the number of bytes per bank in local memory.


pyopencl.characterize.local_memory_bank_count(dev)

Return the number of banks present in local memory.


pyopencl.characterize.nv_compute_capability(dev)

If dev is an Nvidia GPU pyopencl.Device, return a tuple (major, minor) indicating the device’s compute capability.


pyopencl.characterize.simultaneous_work_items_on_local_access(dev)

Return the number of work items that access local memory simultaneously and thereby may conflict with each other.

pyopencl.characterize.usable_local_mem_size(dev, nargs=None)

Return an estimate of the usable local memory size.

Parameters: nargs – number of 32-bit arguments passed.

pyopencl.characterize.why_not_local_access_conflict_free(dev, itemsize, array_shape, array_stored_shape=None)
Parameters:
  • itemsize – size of accessed data in bytes
  • array_shape – array dimensions, fastest-moving last (C order)

Returns: a tuple (multiplicity, explanation), where multiplicity is the number of work items that will conflict on a bank when accessing local memory. explanation is a string detailing the found conflict.
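The bank-conflict model behind such a check can be sketched in plain Python. This is a simplified illustration, not the library's algorithm: each accessed byte address is mapped to a bank, and the multiplicity is the largest number of work items landing on the same bank.

```python
def conflict_multiplicity(addresses, bank_count, bank_granularity):
    # Map each accessed byte address to a bank; the multiplicity is the
    # largest number of work items hitting the same bank simultaneously.
    banks = [(addr // bank_granularity) % bank_count for addr in addresses]
    return max(banks.count(b) for b in set(banks))

# 32 work items reading consecutive 4-byte floats across 32 4-byte banks:
# every item hits a distinct bank -> conflict-free (multiplicity 1).
assert conflict_multiplicity([4 * i for i in range(32)], 32, 4) == 1

# Stride-2 access: work items 0 and 16 share bank 0, 1 and 17 share
# bank 2, and so on -> a 2-way conflict (multiplicity 2).
assert conflict_multiplicity([8 * i for i in range(32)], 32, 4) == 2
```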