International Journal of Interactive Mobile Technologies (iJIM) – eISSN: 1865-7923 – Vol. 16, No. 10 (2022)

A Blind Video Copyright Protection Technique in Maximum and Minimum Energy Frames Based on the Fast Walsh Hadamard Transform (FWHT), Discrete Wavelet Transform (DWT) and Arnold Map

https://doi.org/10.3991/ijim.v16i10.30039

Mohammed S. Resen (1)(*), Muna M. Laftah (2)
(1) College of Science, University of Baghdad, Baghdad, Iraq
(2) College of Education for Women, University of Baghdad, Baghdad, Iraq
swaelmohammed7@gmail.com

Abstract—Video copyright protection is the most widely acknowledged method of preventing data piracy. This paper proposes a blind video copyright protection technique based on the Fast Walsh Hadamard Transform (FWHT), the Discrete Wavelet Transform (DWT), and the Arnold map. The proposed method selects only the frames with maximum and minimum energy features to host the watermark, and it exploits the advantages of both the FWHT and the DWT for watermark embedding. The Arnold map encrypts the watermark before embedding and decrypts it after extraction. The results show that the proposed method achieves a fast embedding time, good transparency, and robustness against various attacks.

Keywords—Fast Walsh Hadamard Transform (FWHT), Discrete Wavelet Transform (DWT), Arnold map

1 Introduction

In this period of rapid development of computer and Internet technologies, a vast amount of digital data is available on the Web, and anyone can access and process it with readily available tools. As a result, data authentication is a significant challenge for legitimate owners. Watermarking, steganography, and authentication are the three main methods for protecting video information from unauthorized access [1]. The imperceptibility, robustness, security, and capacity of a watermarking technique are all essential factors to consider [2,3]. Watermarking techniques operate either in the spatial domain or in the frequency domain [4]; frequency-domain (transform-domain) techniques are more robust than spatial-domain ones [5]. In some existing video watermarking techniques, the watermark image is embedded in every video frame. This makes it easy for attackers to detect the frames that hide the watermark and also increases the storage overhead of the video.

The purpose of the work presented in this paper is to provide a robust video copyright protection system with high efficiency and low complexity. Specifically, we propose a method that uses one of the textural features for frame selection, a chaotic map for encrypting the watermark, and a combined FWHT-DWT transform for embedding. Texture features include contrast, homogeneity, energy (angular second moment), and correlation [6]. Frames are chosen according to their energy feature: the frames with the minimum and maximum energy are selected. The watermark is then encrypted by the Arnold transform, which is mostly used to scramble images and provide security in information-hiding applications. It is straightforward to apply and is periodic.
Because of this periodicity, the original image can be recovered from the scrambled one by applying the inverse map the same number of times. The transform is employed in a variety of image processing applications [7,8]. The final step of watermarking is dividing each selected frame into individual blocks. Each block then undergoes a double FWHT-DWT transformation, and one bit of the binary watermark is inserted into each block through a new embedding and extraction process that allows for blind watermarking. The FWHT improves the robustness of the watermark because of three properties: (1) it has low computational complexity [9]; (2) it is more robust to compression [10,11]; and (3) the Walsh transform is more resilient to image modification than other common transforms, whose multi-valued kernels cause the pixel values in each block to change by different amounts [10]. The DWT is used to decompose each block of a frame into four sub-bands, LL, LH, HL, and HH, and the mid-frequency sub-band (HL) is used to embed the watermark.

This paper is structured as follows: Section 2 discusses related work. Section 3 reviews the necessary background, namely the gray-level co-occurrence matrix, the FWHT, the DWT, and the Arnold map. Section 4 describes the proposed method. Section 5 presents the results and discussion, and Section 6 concludes the work.

2 Related work

In some studies, the watermark is embedded in every frame of the video, which makes embedding time-consuming and affects the perceptual quality of the video. In [12], each frame's Y component is transformed by a 1-level 2D-DWT, and the middle sub-bands (LH and HL) are selected. A 2D-DCT is then applied to these sub-bands in alternate frames, a zigzag scan is used to rearrange the coefficients, and the watermark is placed in the middle-frequency coefficients. According to the reported results, the PSNR values of the watermarked video were approximately 37 dB, and the approach was shown to be resistant to HEVC stream compression. Another study [13] proposed a robust scheme using the DWT and SVD transforms. The method embeds the watermark additively into the middle sub-bands, and a blind detection technique is used in the extraction procedure. The watermark is embedded in all video frames to make the scheme more resistant to frame-dropping attacks. The findings show that the technique is robust to various attacks and achieves a high level of imperceptibility.

Another scheme, proposed in [14], is a non-blind method based on the discrete cosine transform (DCT) and singular value decomposition (SVD). In this work, a zigzag scan is used to reorder the DCT coefficients of each host video frame, and the reordered coefficients are decomposed into four frequency bands (LL, LH, HL, HH). SVD is then applied to each block separately, and the singular values of every block are adjusted by the singular values of the DCT of the watermark image to obtain the watermarked video. Experimental results show that the strategy is transparent and resistant to attacks.

On the other hand, some video watermarking systems embed the watermark only in specific frames selected by some criterion, resulting in a shorter embedding time and a smaller change in video quality.
The scheme in [15] is a blind technique that embeds in fast-motion frames using the SVD and multiresolution SVD. The algorithm chooses fast-motion frames as the cover for the watermark because alterations in fast-moving areas are imperceptible to the human eye. QIM (a blind watermarking technique) embeds the watermark information after it has been encrypted by a logistic map. The experimental results show that the watermarking system has high imperceptibility, with a PSNR of more than 40 dB, and that it withstands not only image processing attacks but also synchronization and compression attacks. In another technique [16], the authors proposed an algorithm based on the hyper-chaotic Lorentz system. The watermark image was encrypted with the hyper-chaotic Lorentz system to increase secrecy. Shot boundary detection is then used to remove the non-motion frames from the video, and a chaotic sequence identifies specific frames among the remaining ones. Finally, the DWT is applied to the selected frames to embed the encrypted watermark into selected sub-bands. Experiments demonstrated that the approach was highly imperceptible and robust against a wide range of attacks. In [17], the researchers suggested a frame-selection-based watermarking technique in which the number of scene changes in the video determines which frames are used. The watermark is embedded using a combination of the Graph-Based Transform (GBT), SVD, and hyper-chaotic encryption. The findings reveal that the strategy works against a wide variety of attacks; the issue of quality loss was also discussed.

3 Background work

3.1 Gray Level Co-occurrence Matrix (GLCM)

The GLCM is a texture analysis approach that uses the texture of an object to quantify and analyze image attributes objectively [18]. It compares the gray-level differences between any two neighbouring pixels of an image in a specific direction and displacement [19]. For an image I of size N × N, the GLCM is defined by Equation 1 [6]:

p(i, j) = \sum_{x=1}^{N} \sum_{y=1}^{N} \begin{cases} 1, & \text{if } I(x, y) = i \text{ and } I(x + \Delta x, y + \Delta y) = j \\ 0, & \text{otherwise} \end{cases}   (1)

Here, p(i, j) is the co-occurrence probability of the intensity pair (i, j) over the whole image, (x, y) denotes a pixel position, and the offsets Δx and Δy specify the angle θ (0, 45, 90, or 135 degrees) and the distance d between the pixels. The GLCM yields many features (e.g., correlation, contrast, energy, and homogeneity). In the GLCM, energy is the sum of squared elements; its value lies in [0, 1] and equals 1 for a constant image. It is formalized as follows [5]:

Energy = \sum_{i=1}^{N} \sum_{j=1}^{N} p(i, j)^2   (2)
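To make the frame-selection criterion concrete, the following minimal Python sketch (the paper's own implementation is in MATLAB) computes the GLCM energy of a single gray-scale frame for one offset. The function name, the offset choice (d = 1, θ = 0°), and the normalization of counts to probabilities are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def glcm_energy(frame, dx=0, dy=1, levels=256):
    """Energy (angular second moment) of a gray-level co-occurrence matrix.

    frame  : 2-D uint8 array (e.g., the Y component of a video frame)
    dx, dy : pixel offset defining the distance d and angle theta (here d=1, theta=0 deg)
    """
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = frame.shape
    # Count co-occurring gray-level pairs (Eq. 1).
    for x in range(h - dx):
        for y in range(w - dy):
            glcm[frame[x, y], frame[x + dx, y + dy]] += 1
    glcm /= glcm.sum()               # normalize counts to co-occurrence probabilities
    return float(np.sum(glcm ** 2))  # Eq. 2: sum of squared elements

# Frame selection idea: keep the frames with minimum and maximum energy, e.g.
# energies = [glcm_energy(y_frame) for y_frame in y_frames]
# selected = [int(np.argmin(energies)), int(np.argmax(energies))]
```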
3.2 Fast Walsh Hadamard Transform (FWHT)

The Walsh Hadamard transform [20] is a non-sinusoidal orthogonal transform that decomposes an image into a set of basis functions. The Walsh functions are orthogonal and take only the values +1 and −1. The general form of the Walsh transform is constructed from the Hadamard matrix as follows:

H_{2^k} = \begin{bmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{bmatrix}, \quad k = 1, 2, 3, \ldots   (3)

where k determines the order of the Hadamard matrix. When k = 1, the Hadamard matrix has order 2:

H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}   (4)

When k = 2, the Hadamard matrix has order 4:

H_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}   (5)

The Fast Walsh Hadamard Transform is the fast version of the WHT. The 2-D FWHT can be obtained by the following formula [11]:

F(w, l) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} b(x, y) \, (-1)^{\sum_{i=0}^{n-1} \left[ b_i(x)\, b_{n-1-i}(w) + b_i(y)\, b_{n-1-i}(l) \right]}   (6)

Here, F(w, l) is the Hadamard coefficient of the block b, N is the order of the Hadamard matrix, n = log2 N, b_i(·) denotes the i-th bit of the binary representation of its argument, and x, y, w, l = 0, 1, ..., N−1. The inverse 2-D FWHT is given by [11]:

b(x, y) = \frac{1}{N} \sum_{w=0}^{N-1} \sum_{l=0}^{N-1} F(w, l) \, (-1)^{\sum_{i=0}^{n-1} \left[ b_i(x)\, b_{n-1-i}(w) + b_i(y)\, b_{n-1-i}(l) \right]}   (7)

3.3 Discrete Wavelet Transform (DWT)

The DWT decomposes an image into sub-bands: the low-frequency approximation sub-band (LL), the horizontal (HL) and vertical (LH) detail sub-bands, and the high-frequency diagonal sub-band (HH) [21,22]. Using the Haar filter, the coefficients of each sub-band can be computed as follows [23]:

LL(i, j) = \frac{I(i, j) + I(i, j+1) + I(i+1, j) + I(i+1, j+1)}{2}   (8)

LH(i, j) = \frac{I(i, j) - I(i, j+1) + I(i+1, j) - I(i+1, j+1)}{2}   (9)

HL(i, j) = \frac{I(i, j) + I(i, j+1) - I(i+1, j) - I(i+1, j+1)}{2}   (10)

HH(i, j) = \frac{I(i, j) - I(i, j+1) - I(i+1, j) + I(i+1, j+1)}{2}   (11)

Here, I(i, j) denotes the original image. To perform multi-level decompositions, the process is repeated on the LL sub-band.

3.4 Arnold transform

The Arnold transform [8] is an invertible two-dimensional transform defined as:

\begin{bmatrix} m' \\ n' \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} m \\ n \end{bmatrix} \pmod{N}, \quad m, n \in \{0, 1, \ldots, N-1\}   (12)

Here, (m, n) is the position of a pixel in the original image, (m', n') is the corresponding position in the scrambled image, and N is the image size. The transform can be rewritten as:

m' = (m + n) \bmod N, \qquad n' = (m + 2n) \bmod N   (13)

The inverse Arnold map is given by [8]:

\begin{bmatrix} m \\ n \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} m' \\ n' \end{bmatrix} \pmod{N}   (14)

m = (2m' - n') \bmod N, \qquad n = (-m' + n') \bmod N   (15)
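The scrambling and its inverse are simple to implement directly from Equations 13 and 15. The following Python sketch (not the authors' MATLAB code; the function names and the iteration count are illustrative) scrambles a square image and then restores it.

```python
import numpy as np

def arnold(img, iterations=1):
    """Scramble a square N x N image with the Arnold map of Eq. (13)."""
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for m in range(N):
            for n in range(N):
                scrambled[(m + n) % N, (m + 2 * n) % N] = out[m, n]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling with the inverse map of Eq. (15)."""
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        for m2 in range(N):      # (m2, n2) are the scrambled coordinates (m', n')
            for n2 in range(N):
                restored[(2 * m2 - n2) % N, (-m2 + n2) % N] = out[m2, n2]
        out = restored
    return out

# Example with a 32x32 binary logo, the watermark size used in the experiments:
# logo = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
# assert np.array_equal(arnold_inverse(arnold(logo, 10), 10), logo)
```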
4 Proposed work

The suggested system is divided into three stages: frame selection, embedding, and extraction. Figures 1 and 2 depict block diagrams of these processes, which are detailed individually in the sub-sections that follow.

Fig. 1. Embedding process

Fig. 2. Extraction process

Figure 1 shows the watermark embedding steps and Figure 2 the watermark extraction steps; both are explained in the following sections.

4.1 Embedding process

As presented in Figure 1, the embedding process consists of the following steps:

1. Read the video.
2. Convert RGB to YUV.
3. Find the appropriate frames by calculating the energy feature of each video frame (Eqs. 1 and 2) and choosing the frames with the minimum and maximum energy.
4. Divide the Y component into 4 × 4 non-overlapping blocks.
5. Encrypt the original watermark with the Arnold transform using Eq. 13.
6. Apply the FWHT of order 4 to each block using Eq. 6.
7. Transform the Hadamard coefficients of each block to the wavelet domain using a one-level DWT.
8. Modify the HL coefficients according to Eq. 16:

if w = 1 and HL(1,1) > HL(2,1):
    HL(1,1) = abs(HL(1,1)) + T
if w = 1 and HL(1,1) <= HL(2,1):
    HL(1,1) = abs(HL(1,1)) + T
    HL(2,1) = abs(HL(2,1)) * (-1)
if w = 0 and HL(2,1) > HL(1,1):                                   (16)
    HL(2,1) = abs(HL(2,1)) + T
if w = 0 and HL(2,1) <= HL(1,1):
    HL(2,1) = abs(HL(2,1)) + T
    HL(1,1) = abs(HL(1,1)) * (-1)

Equation 16 guarantees that HL(1,1) is greater than HL(2,1) when the watermark bit w is 1, and that HL(2,1) is greater than HL(1,1) when w is 0. T is a threshold; its value in the proposed work is 10.

9. Apply the inverse DWT.
10. Apply the inverse FWHT.
11. Convert YUV back to RGB.

4.2 Extraction process

The extraction process reverses the embedding process, as seen in Figure 2. Detection is blind and is made up of the following steps (a sketch of the per-block embedding and extraction core follows this list):

1. Read the watermarked video.
2. Convert RGB to YUV.
3. Find the appropriate frames by calculating the energy feature of each video frame (Eqs. 1 and 2) and choosing the frames with the minimum and maximum energy.
4. Divide the Y component into 4 × 4 non-overlapping blocks.
5. Apply the FWHT to each block using Eq. 6.
6. Apply a one-level DWT to the resulting Hadamard coefficients.
7. Extract each watermark bit using Eq. 17:

w = \begin{cases} 1, & HL(1,1) > HL(2,1) \\ 0, & HL(2,1) > HL(1,1) \end{cases}   (17)

8. Apply the inverse Arnold transform (Eq. 15) to decrypt the watermark.
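To illustrate how steps 6-10 of the embedding process and steps 5-7 of the extraction process act on a single 4 × 4 luminance block, the following Python sketch uses the matrix form of the order-4 Walsh-Hadamard transform (equivalent, up to coefficient ordering, to Eqs. 6-7) and the Haar equations (Eqs. 8-11). It is a minimal sketch under these assumptions, not the authors' MATLAB implementation; the function names and the round-trip check are illustrative. Note that HL(1,1) and HL(2,1) in the paper's 1-based notation correspond to HL[0,0] and HL[1,0] below.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]], dtype=float)
H4 = np.kron(H2, H2)  # Sylvester construction, Eq. (5)

def fwht2(block):
    """2-D Walsh-Hadamard transform of a 4x4 block (matrix form; cf. Eq. 6)."""
    return H4 @ block @ H4 / 4.0

def ifwht2(coeffs):
    """Inverse 2-D Walsh-Hadamard transform (cf. Eq. 7)."""
    return H4 @ coeffs @ H4 / 4.0

def haar2(x):
    """One-level Haar DWT (Eqs. 8-11): 4x4 array -> four 2x2 sub-bands LL, LH, HL, HH."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2, (a - b + c - d) / 2, \
           (a + b - c - d) / 2, (a - b - c + d) / 2

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2 (exact reconstruction of the 4x4 array)."""
    x = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2
    x[0::2, 1::2] = (LL - LH + HL - HH) / 2
    x[1::2, 0::2] = (LL + LH - HL - HH) / 2
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return x

def embed_bit(block, w, T=10.0):
    """Hide one watermark bit in a 4x4 block following the rule of Eq. (16)."""
    LL, LH, HL, HH = haar2(fwht2(block))
    if w == 1:
        if HL[0, 0] <= HL[1, 0]:
            HL[1, 0] = -abs(HL[1, 0])
        HL[0, 0] = abs(HL[0, 0]) + T      # force HL(1,1) > HL(2,1)
    else:
        if HL[1, 0] <= HL[0, 0]:
            HL[0, 0] = -abs(HL[0, 0])
        HL[1, 0] = abs(HL[1, 0]) + T      # force HL(2,1) > HL(1,1)
    return ifwht2(ihaar2(LL, LH, HL, HH))

def extract_bit(block):
    """Blind extraction of one bit (Eq. 17)."""
    _, _, HL, _ = haar2(fwht2(block))
    return 1 if HL[0, 0] > HL[1, 0] else 0

# Round-trip check on a random block:
# blk = np.random.randint(0, 256, (4, 4)).astype(float)
# assert extract_bit(embed_bit(blk, 1)) == 1 and extract_bit(embed_bit(blk, 0)) == 0
```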
5 Experimental results and discussion

The watermarking scheme was tested on several dynamic and static standard videos, listed in Table 1. A binary logo of size 32 × 32 is used as the watermark. The proposed system was implemented in MATLAB 2019b, and the experiments were run on an Intel Core i5 processor with Windows 10 as the operating system.

Table 1. Standard videos

Id   Name      #frames   Resolution
1    Akiyo     300       352 × 288
2    News      300       352 × 288
3    Foreman   300       352 × 288

5.1 Imperceptibility tests

Imperceptibility is commonly used as a performance indicator for watermarking systems. Here, the Peak Signal-to-Noise Ratio (PSNR) is employed as the imperceptibility measure; it quantifies the quality of the watermarked video as perceived by a human observer. PSNR is calculated as follows:

PSNR = 10 \log_{10} \frac{255^2}{\frac{1}{w \times h} \sum_{x=1}^{w} \sum_{y=1}^{h} \left( I(x, y) - I'(x, y) \right)^2}   (18)

Here, I is the original video frame, I' is the watermarked video frame, (x, y) denotes a pixel position in the original and watermarked frames, and w × h is the width and height of a frame in the original video. The PSNR of a watermarked video is the average PSNR over all watermarked frames:

AV\_PSNR = \frac{\sum PSNR}{n}   (19)

Table 2 shows the PSNR values of the videos after watermark embedding. In terms of imperceptibility, a higher PSNR indicates better performance.

Table 2. PSNR values after the embedding process

Watermarked Video   Akiyo    News     Foreman
AV_PSNR             44.325   43.821   40.120

5.2 Robustness tests

Robustness is an important criterion in all watermarking schemes [24]. To validate it, the watermarked video is examined under several attacks, and the similarity between the extracted and original watermarks is measured using the normalized correlation coefficient (Nc) and the bit error rate (BR). Nc and BR are defined as [25,26]:

Nc = \frac{\sum_{i=1}^{W} \sum_{j=1}^{L} M(i, j)\, M'(i, j)}{\sqrt{\sum_{i=1}^{W} \sum_{j=1}^{L} M(i, j)^2}\, \sqrt{\sum_{i=1}^{W} \sum_{j=1}^{L} M'(i, j)^2}}   (20)

Here, M and M' are the original and extracted watermarks, and W and L are the numbers of rows and columns of the watermark.

BR = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} w(i, j) \oplus w'(i, j)}{N \times M} \times 100\%   (21)
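Both measures are straightforward to compute for binary watermark arrays; a compact Python sketch is given below. The function names are illustrative assumptions (the paper's evaluation was done in MATLAB), and the inputs are assumed to be 2-D arrays of 0/1 values.

```python
import numpy as np

def nc(original, extracted):
    """Normalized correlation coefficient between two watermarks (Eq. 20)."""
    M = original.astype(float)
    Mp = extracted.astype(float)
    denom = np.sqrt((M ** 2).sum()) * np.sqrt((Mp ** 2).sum())
    return float((M * Mp).sum() / denom)

def ber(original, extracted):
    """Bit error rate as a percentage of disagreeing bits (Eq. 21)."""
    diff = np.logical_xor(original.astype(bool), extracted.astype(bool))
    return float(diff.sum() / original.size * 100.0)

# For a 32x32 binary logo compared with itself:
# logo = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
# print(nc(logo, logo), ber(logo, logo))   # -> 1.0  0.0
```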
The results in Table 3 demonstrate the robustness of our algorithm after applying various attacks to the watermarked videos, including image processing, frame synchronization, and compression attacks.

Table 3. Nc and BER for the extracted watermark under different attacks

Id   Attack                       Akiyo                  News                   Foreman
1    Gaussian noise (0.01)        Nc: 0.536  BR: 20.11   Nc: 0.575  BR: 17.77   Nc: 0.720  BR: 9.960
2    Salt & pepper noise (0.01)   Nc: 0.955  BR: 1.269   Nc: 0.962  BR: 1.074   Nc: 0.970  BR: 1.171
3    Poisson noise                Nc: 0.978  BR: 0.585   Nc: 0.9957 BR: 0.293   Nc: 0.987  BR: 0.293
4    Median filter (3×3)          Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
5    Histogram equalization       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
6    Gamma correction             Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
7    Sharpening                   Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
8    JPEG (QF = 90)               Nc: 1.000  BR: 0       Nc: 0.984  BR: 4.589   Nc: 0.995  BR: 1.367
9    JPEG2000                     Nc: 1.000  BR: 0       Nc: 1.000  BR: 1.757   Nc: 1.000  BR: 0
10   Frame dropping (20%)         Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
11   Frame swapping               Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0
12   Frame averaging              Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 0.787  BR: 15.429
13   Frame insertion              Nc: 1.000  BR: 0       Nc: 1.000  BR: 0       Nc: 1.000  BR: 0

5.3 Comparison

This section compares the imperceptibility, robustness, and time complexity of our scheme with other video watermarking techniques that work in the frequency domain, to demonstrate the algorithm's performance and quality. In the first comparison, several methods are selected for an imperceptibility comparison; Table 4 shows the results.

Table 4. Comparison of PSNR values with other schemes

                Akiyo   News     Foreman
Our proposed    44.32   43.82    40.12
[25]            42.47   37.56    38.43
[17]            –       36.682   36.652
[27]            41.4    42.1     40.6

Table 5 presents the second comparison, between our scheme and other proposed schemes, regarding robustness against attacks.

Table 5. Comparison of Nc values with other methods

                             Akiyo               News                Foreman
Attack                       Proposed   [27]     Proposed   [27]     Proposed   [27]
Gaussian noise (0.01)        0.536      0.720    0.575      0.680    0.720      0.647
Salt & pepper noise (0.01)   0.955      0.717    0.962      0.650    0.970      0.687
Poisson noise                0.978      0.728    0.962      0.719    0.987      0.710
Median filter (3×3)          1.000      0.839    1.000      0.820    1.000      0.801
Histogram equalization       1.000      0.694    1.000      0.739    1.000      0.735
Frame dropping               1.000      1.000    1.000      1.000    1.000      0.989
Frame swapping               1.000      1.000    1.000      0.987    1.000      1.000
Frame averaging              1.000      0.618    1.000      0.935    0.787      0.873

Finally, comparing our proposed system with several other techniques such as [16,22,23], we found that our system is distinguished by the fact that, even under a high-rate frame-swapping attack or when the video is played in reverse, it can still recover the watermark with NC = 1.

6 Conclusion

Embedding a watermark in every frame increases the temporal complexity of the embedding process and the amount of modified video data. A frame selection method is therefore presented to pick a suitable set of frames, based on one of the textural features (the energy feature) of the video frames. Our scheme uses both the FWHT and the DWT for embedding, which improves robustness and transparency, and the Arnold map provides security. The results show that the method performs well, with PSNR values between 40 and 44.5 dB and NC values between 0.9 and 1 in most cases.

7 References

[1] C. Sharma and A. Bhaskar, "A review on video watermarking techniques for compressed domain with optimization algorithms," Mater. Today Proc., Dec. 2020. https://doi.org/10.1016/j.matpr.2020.11.319
[2] N. Agarwal, A. K. Singh, and P. K. Singh, "Survey of robust and imperceptible watermarking," Multimed. Tools Appl., vol. 78, no. 7, pp. 8603–8633, 2019. https://doi.org/10.1007/s11042-018-7128-5
[3] H. Alrikabi and H. Tauma, "Enhanced data security of communication system using combined encryption and steganography," International Journal of Interactive Mobile Technologies, vol. 15, no. 16, pp. 144–157, 2021. https://doi.org/10.3991/ijim.v15i16.24557
[4] A. M. Kothari, V. Dwivedi, and R. M. Thanki, Watermarking Techniques for Copyright Protection of Videos. 2019. https://doi.org/10.1007/978-3-319-92837-1
[5] R. Shoitan, M. M. Moussa, and S. M. Elshoura, "A robust video watermarking scheme based on Laplacian pyramid, SVD, and DWT with improved robustness towards geometric attacks via SURF," Multimed. Tools Appl., 2020. https://doi.org/10.1007/s11042-020-09258-x
[6] Z. Abbas, M. U. Rehman, S. Najam, and S. M. Danish Rizvi, "An efficient Gray-Level Co-Occurrence Matrix (GLCM) based approach towards classification of skin lesion," Proc. 2019 Amity Int. Conf. Artif. Intell. (AICAI 2019), pp. 317–320, 2019. https://doi.org/10.1109/AICAI.2019.8701374
[7] K. Jiao, G. Ye, Y. Dong, X. Huang, and J. He, "Image encryption scheme based on a generalized Arnold map and RSA algorithm," Secur. Commun. Networks, vol. 2020, 2020. https://doi.org/10.1155/2020/9721675
[8] X. Liu, D. Xiao, W. Huang, and C. Liu, "Quantum block image encryption based on Arnold transform and sine chaotification model," IEEE Access, vol. 7, pp. 57188–57199, 2019. https://doi.org/10.1109/ACCESS.2019.2914184
[9] S. Helal and N. Salem, "A hybrid watermarking scheme using Walsh Hadamard transform and SVD," Procedia Comput. Sci., vol. 194, pp. 246–254, 2021. https://doi.org/10.1016/j.procs.2021.10.080
[10] K. Meenakshi, C. S. Rao, and K. Satya Prasad, "A robust watermarking scheme based Walsh-Hadamard transform and SVD using ZIG ZAG scanning," Proc. 2014 13th Int. Conf. Inf. Technol. (ICIT 2014), pp. 167–172, 2014. https://doi.org/10.1109/ICIT.2014.53
[11] H. S. Devi and K. M. Singh, "Red-cyan anaglyph image watermarking using DWT, Hadamard transform and singular value decomposition for copyright protection," J. Inf. Secur. Appl., vol. 50, p. 102424, 2020. https://doi.org/10.1016/j.jisa.2019.102424
[12] J. Panyavaraporn, "DWT/DCT-based invisible digital watermarking scheme for video stream," 2018 10th Int. Conf. Knowl. Smart Technol., pp. 154–157. https://doi.org/10.1109/KST.2018.8426150
[13] A. Hammami, A. Ben Hamida, and C. Ben Amar, "A robust blind video watermarking scheme based on discrete wavelet transform and singular value decomposition," VISIGRAPP 2019 – Proc. 14th Int. Jt. Conf. Comput. Vision, Imaging Comput. Graph. Theory Appl., vol. 4, pp. 597–604, 2019. https://doi.org/10.5220/0007685305970604
[14] K. Meenakshi, K. Swaraja, and P. Kora, "A robust DCT-SVD based video watermarking using zigzag scanning," vol. 900, Springer Singapore, 2019. https://doi.org/10.1007/978-981-13-3600-3_45
[15] I. Nouioua, N. Amardjia, and S. Belilita, "A novel blind and robust video watermarking technique in fast motion frames based on SVD and MR-SVD," Secur. Commun. Networks, vol. 2018, 2018. https://doi.org/10.1155/2018/6712065
[16] Z. Cao and L. Wang, "A secure video watermarking technique based on hyperchaotic Lorentz system," Multimed. Tools Appl., vol. 78, no. 18, pp. 26089–26109, 2019. https://doi.org/10.1007/s11042-019-07809-5
[17] C. Sharma, B. Amandeep, R. Sobti, T. K. Lohani, and M. Shabaz, "A secured frame selection based video watermarking technique to address quality loss of data: Combining graph based transform, singular valued decomposition, and hyperchaotic encryption," Secur. Commun. Networks, vol. 2021, 2021. https://doi.org/10.1155/2021/5536170
[18] R. Kanai et al., "Discriminant analysis and interpretation of nuclear chromatin distribution and coarseness using gray-level co-occurrence matrix features for lobular endocervical glandular hyperplasia," Diagn. Cytopathol., vol. 48, no. 8, pp. 724–735, 2020. https://doi.org/10.1002/dc.24466
[19] M. U. Rehman, S. H. Khan, S. M. Danish Rizvi, Z. Abbas, and A. Zafar, "Classification of skin lesion by interference of segmentation and convolotion neural network," 2018 2nd Int. Conf. Eng. Innov. (ICEI 2018), pp. 81–85, 2018. https://doi.org/10.1109/ICEI18.2018.8448814
[20] P. G. Halakatti, "Digital image watermarking using DWT and FWHT," Int. J. Image Graph. Signal Process. (IJIGSP), no. 6, pp. 50–67, 2018. https://doi.org/10.5815/ijigsp.2018.06.06
[21] T. K. Araghi and A. A. Manaf, "An enhanced hybrid image watermarking scheme for security of medical and non-medical images based on DWT and 2-D SVD," Futur. Gener. Comput. Syst., vol. 101, pp. 1223–1246, 2019. https://doi.org/10.1016/j.future.2019.07.064
[22] I. A. Aljazaery, H. T. S. Alrikabi, and M. R. Aziz, "Combination of hiding and encryption for data security," International Journal of Interactive Mobile Technologies, vol. 14, no. 9, pp. 34–47, 2020. https://doi.org/10.3991/ijim.v14i09.14173
[23] K. Fares, A. Khaldi, K. Redouane, and E. Salah, "DCT & DWT based watermarking scheme for medical information security," Biomed. Signal Process. Control, vol. 66, p. 102403, 2021. https://doi.org/10.1016/j.bspc.2020.102403
[24] H. T. Salim and N. A. Jasim, "Design and implementation of smart city applications based on the internet of things," International Journal of Interactive Mobile Technologies (iJIM), vol. 15, no. 13, pp. 4–15, 2021. https://doi.org/10.3991/ijim.v15i13.22331
[25] P. Ayubi, M. Jafari Barani, M. Yousefi Valandar, B. Yosefnezhad Irani, and R. Sedagheh Maskan Sadigh, "A new chaotic complex map for robust video watermarking," vol. 54, no. 2, Springer Netherlands, 2021. https://doi.org/10.1007/s10462-020-09877-8
[26] H. T. Salim and I. A. Aljazaery, "Encryption of color image based on DNA strand and exponential factor," International Journal of Online and Biomedical Engineering (iJOE), vol. 18, no. 3, 2022. https://doi.org/10.3991/ijoe.v18i03.28021
[27] M. R. Keyvanpour, N. Khanbani, and M. Boreiry, "A secure method in digital video watermarking with transform domain algorithms," Multimed. Tools Appl., vol. 80, no. 13, pp. 20449–20476, 2021. https://doi.org/10.1007/s11042-021-10730-5

8 Authors

Mohammed Swael Resen is presently an M.Sc. student in the Department of Computer Science, College of Science, University of Baghdad. He received his B.Sc. degree in computer science in 2012 from the University of Baghdad, Iraq. (Email: swaelmohammed7@gmail.com)
Muna Majeed Laftah is presently an Assistant Professor in the Department of Computer Science, College of Education for Women, University of Baghdad. She received her B.Sc. degree in computer science in 1995 from the University of Technology, Baghdad, Iraq; her M.Sc. degree in computer science, with a focus on multimedia security, from the Iraqi Commission for Computers and Informatics, Iraq, in 2003; and her Ph.D. in computer science from the University of Technology, Baghdad, Iraq, in 2017. Her current research interests include 3D security and encryption of multimedia. (Email: muna.majeed@coeduw.uobaghdad.edu.iq)

Article submitted 2022-02-11. Resubmitted 2022-03-11. Final acceptance 2022-03-11. Final version published as submitted by the authors.