TY - JOUR
T1 - An Effective GPGPU Visual Secret Sharing by Contrast-Adaptive ConvNet Super-Resolution
AU - Holla, M. Raviraja
AU - Pais, Alwyn R.
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2022/4
Y1 - 2022/4
N2 - In this paper, we propose an effective secret image sharing model with super-resolution utilizing a Contrast-adaptive Convolutional Neural Network (CCNN or CConvNet). The model has two stages: share generation and secret image reconstruction. The share generation stage produces information-embedded shadows (shares) equal in number to the participants. It involves creating a halftone image, creating shadows, and transforming the image to the wavelet domain using the Discrete Wavelet Transform (DWT) to embed information into the shadows. The reconstruction stage is the inverse of share generation, supplemented with the CCNN to improve the reconstructed image’s quality. This work is significant in that it exploits the computational power of the General-Purpose Graphics Processing Unit (GPGPU) to perform these operations. The extensive memory optimization using GPGPU constant memory in all the activities brings uniqueness and efficiency to the proposed model. The contrast-adaptive normalization between the CCNN layers, which improves quality during super-resolution, imparts novelty to our investigation. The objective quality assessment showed that the proposed model produces a high-quality reconstructed image, with an SSIM of 89–99.8% for the noise-like shares and 71.6–90% for the meaningful shares. The proposed technique achieved a speedup of 800× over the sequential model.
AB - In this paper, we propose an effective secret image sharing model with super-resolution utilizing a Contrast-adaptive Convolutional Neural Network (CCNN or CConvNet). The model has two stages: share generation and secret image reconstruction. The share generation stage produces information-embedded shadows (shares) equal in number to the participants. It involves creating a halftone image, creating shadows, and transforming the image to the wavelet domain using the Discrete Wavelet Transform (DWT) to embed information into the shadows. The reconstruction stage is the inverse of share generation, supplemented with the CCNN to improve the reconstructed image’s quality. This work is significant in that it exploits the computational power of the General-Purpose Graphics Processing Unit (GPGPU) to perform these operations. The extensive memory optimization using GPGPU constant memory in all the activities brings uniqueness and efficiency to the proposed model. The contrast-adaptive normalization between the CCNN layers, which improves quality during super-resolution, imparts novelty to our investigation. The objective quality assessment showed that the proposed model produces a high-quality reconstructed image, with an SSIM of 89–99.8% for the noise-like shares and 71.6–90% for the meaningful shares. The proposed technique achieved a speedup of 800× over the sequential model.
UR - https://www.scopus.com/pages/publications/85118312547
UR - https://www.scopus.com/inward/citedby.url?scp=85118312547&partnerID=8YFLogxK
U2 - 10.1007/s11277-021-09245-x
DO - 10.1007/s11277-021-09245-x
M3 - Article
AN - SCOPUS:85118312547
SN - 0929-6212
VL - 123
SP - 2367
EP - 2391
JO - Wireless Personal Communications
JF - Wireless Personal Communications
IS - 3
ER -