Performance study of filtered back-projection algorithms implemented on GPUs
In recent years the use of graphics processing units (GPUs) across diverse fields of science has increased dramatically. This growth is driven not only by the GPU's tremendous computational power, but also by its relatively low cost compared to clusters. In this work we explore the use of the GPU to reduce the computational time of the filtered back-projection algorithm in computed tomography. To assess the computational gain of the GPU implementation, two CPU implementations were developed: single-core and multi-core. Both CPU implementations are benchmarked against the GPU implementation. In both cases the GPU implementation proved to be the faster alternative: the GPU was 273 times faster than the single-core CPU implementation and 34 times faster than the multi-core implementation.
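For readers unfamiliar with the algorithm being accelerated, the following is a minimal CPU-side sketch of parallel-beam filtered back-projection in NumPy. It is not the authors' implementation: the function name, the Ram-Lak (ramp) filter choice, and nearest-neighbour interpolation are illustrative assumptions; the GPU version described in the paper would parallelize the per-pixel back-projection loop.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Illustrative parallel-beam FBP (not the paper's implementation):
    ramp-filter each projection in the Fourier domain, then smear
    (back-project) it across the image grid."""
    n_det = sinogram.shape[1]

    # Ram-Lak (ramp) filter applied in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Reconstruction grid centered on the detector midpoint
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))

    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate sampled by each pixel for this view
        # (nearest-neighbour interpolation for brevity)
        t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
        valid = (t >= 0) & (t < n_det)
        recon[valid] += proj[t[valid]]

    # Normalize by the number of views
    return recon * np.pi / (2 * len(angles_deg))
```

In a GPU implementation, the inner accumulation is typically mapped one thread per output pixel, which is what makes the algorithm such a good fit for massively parallel hardware.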