The Spatial Rich Model (SRM) generates powerful steganalysis features but has high computational complexity, since it requires computing tens of thousands of convolutions to obtain image noise residuals. Practical applications that must handle the massive number of images transferred over the Internet would suffer long computing times if only CPUs were used. To accelerate steganalysis, we present a parallel SRM feature-extraction algorithm based on the GPU architecture. We exploit the parallelism available in the algorithm, restructure the original SRM extraction procedure, and employ several strategies to work around its inherently sequential steps. Several OpenCL optimization techniques are also applied to accelerate the extraction process, such as convolution unrolling, coalesced memory access, and a split-merge strategy for co-occurrence matrix calculation. Experimental results show that, for images of different sizes, the proposed parallel extraction algorithm runs 25 to 55 times faster than the original single-threaded algorithm. In addition, on an AMD Radeon HD 6850 GPU, our algorithm runs 2 to 4.2 times faster than on an Intel quad-core CPU, indicating that it makes good use of the GPU cores.
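As background for the residual-computation step the abstract refers to, the following is a minimal CPU sketch (not the authors' code) of one first-order SRM noise residual, r = x[i][j+1] - x[i][j], followed by the quantization and truncation that SRM applies before co-occurrence accumulation. The truncation threshold T = 2 is the value commonly used in SRM; the quantization step Q = 1 is an assumption for illustration. Each residual is independent of the others, which is the per-pixel parallelism a GPU work-item can exploit.

```c
/* Minimal sketch of a first-order horizontal SRM residual with
 * quantization and truncation. Illustrative only; T and Q are
 * assumed values, and the real SRM uses many more filters. */
#include <stdio.h>

#define T 2   /* truncation threshold (standard SRM choice) */
#define Q 1   /* quantization step (assumed for this sketch) */

static int quantize_truncate(int r) {
    int q = r / Q;            /* quantize the raw residual     */
    if (q >  T) q =  T;       /* truncate to the range [-T, T] */
    if (q < -T) q = -T;
    return q;
}

int main(void) {
    /* Toy 4x4 8-bit image; a real image is processed the same way,
     * with one residual per pixel pair computed independently.    */
    unsigned char img[4][4] = {
        { 10,  12,  12,  11 },
        { 10, 200,  12,  11 },
        { 10,  12,  12,  11 },
        { 10,  12,  13,  11 }
    };
    int res[4][3];

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 3; j++)
            res[i][j] = quantize_truncate((int)img[i][j + 1] - (int)img[i][j]);

    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 3; j++)
            printf("%3d ", res[i][j]);
        printf("\n");
    }
    return 0;
}
```

On a GPU, each (i, j) position would map to one work-item, so the residual and quantization stage parallelizes trivially; it is the subsequent co-occurrence accumulation that requires the split-merge strategy mentioned above.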