Simply add the AMDGPU.jl package to your Julia environment:

```julia
using Pkg
Pkg.add("AMDGPU")
```
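After installation, a quick sanity check can confirm that a usable GPU and runtime were found. This is a minimal sketch using `AMDGPU.functional()` and `AMDGPU.devices()`; the exact output depends on your system:

```julia
using AMDGPU

# Reports whether a usable AMD GPU and runtime were detected.
if AMDGPU.functional()
    @info "AMDGPU.jl is functional" AMDGPU.devices()
else
    @warn "No usable AMD GPU / ROCm installation was detected"
end
```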
Requirements:

- Julia 1.10+ (MI300X requires Julia 1.12+).
- 64-bit Linux or Windows.
- ROCm 6.0+.
| Linux | Windows |
|---|---|
| ROCm | ROCm |
| - | AMD Software: Adrenalin Edition |
**Note:** On Windows, AMD Software: Adrenalin Edition contains the HIP library itself, while ROCm provides support for other functionality.
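To see which runtime libraries and versions were actually picked up (useful for verifying a mixed Adrenalin/ROCm setup on Windows), you can print the detected configuration with `AMDGPU.versioninfo()`:

```julia
using AMDGPU

# Prints the detected HIP / ROCm libraries and their versions.
AMDGPU.versioninfo()
```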
To ensure that everything works, you can run the test suite:

```julia
using AMDGPU
using Pkg
Pkg.test("AMDGPU")
```
Element-wise addition using both the high-level array interface and a low-level kernel:

```julia
using AMDGPU

# Low-level kernel: each work-item adds one element.
function vadd!(c, a, b)
    i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    if i ≤ length(a)
        c[i] = a[i] + b[i]
    end
    return
end

# Allocate arrays on the GPU.
a = AMDGPU.ones(Int, 1024)
b = AMDGPU.ones(Int, 1024)
c = AMDGPU.zeros(Int, 1024)

# Launch enough workgroups to cover every element.
groupsize = 256
gridsize = cld(length(c), groupsize)
@roc groupsize=groupsize gridsize=gridsize vadd!(c, a, b)

# Compare against the high-level broadcast interface.
@assert (a .+ b) ≈ c
```
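For comparison, the same addition can be done entirely through the high-level array interface, with the result copied back to the host for inspection. A minimal sketch:

```julia
using AMDGPU

a = AMDGPU.ones(Int, 1024)
b = AMDGPU.ones(Int, 1024)

# Broadcasting runs on the GPU and returns another GPU array.
c = a .+ b

# Copy the result back to host memory.
c_host = Array(c)
@assert all(c_host .== 2)
```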
Usage questions can be posted on the Julia Discourse forum under the GPU domain and/or in the #gpu channel of the Julia Slack.
Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems.
AMDGPU.jl would not have been possible without the work by Tim Besard and contributors to CUDA.jl and LLVM.jl.
AMDGPU.jl is licensed under the MIT License.