IIUM Engineering Journal, Vol. 21, No. 2, 2020 Kumar and Gupta https://doi.org/10.31436/iiumej.v21i2.1322
BORDER SURVEILLANCE USING FACE RECOGNITION, MOBILE OTP AND EMAIL
NARESH KUMAR1 AND DEEPALI GUPTA2
1Department of Computer Science and Engineering, MSIT, Janakpuri, New Delhi, India.
2Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India.
*Corresponding author: narsumsaini@gmail.com
(Received: 24th December 2019; Accepted: 13th March 2020; Published on-line: 4th July 2020)
ABSTRACT: Growing tensions along India's borders, illegal crossings, and past attacks on the nation make it clear that in a large portion of cases the security forces are unaware of the movement of these intruders. A framework is therefore needed to manage the border problem, one capable of working in hilly terrain where there is no power supply. This paper deals with the detection and positioning of intruders crossing the border using PIR sensors and cameras. In the event of any unwanted crossing in the area, the sensor detects it immediately and the camera streams pictures to the base station (BS). Depending on the instruction coming from the BS, the sensor either activates the camera for further streaming or turns it off. The objective of this paper is to provide a framework that helps the Border Security Force (BSF) control all sorts of illegal activities near the border in a better and more precise manner.
KEYWORDS: Eigenface; face detection; face recognition; training set
1. INTRODUCTION
A border security system is an innovative scheme to secure frontiers intelligently without human intervention. It protects the nation and at the same time reduces manpower and resource usage. The system recognizes any intruder and also monitors all of the activities occurring close to the territory. Because the border area is an enormous and perilous region, it is difficult to secure a constant power supply for the system. Nonetheless, an effective national shield requires constant monitoring of the border areas, with the associated labour and equipment costs. For this reason, an automated framework for surveillance is fundamental. The automated system must address the specialized issues and use proper algorithms so that any intrusion detected at the border can easily be reported to trigger the necessary responses from the authorities. Appropriate use of the framework may help the Border Security Force (BSF) control those activities in a better and more exact manner. A US report [1] showed aggravated assault and robbery cases (1.7% of offenses) increasing by 1% over a 2014 total of 311,936 cases.
Considerable research has been carried out on face recognition for border security systems [2-7], and similar home security systems are described in [8-11]. Face detection is a computer technology that determines the location and size of a human face in an arbitrary (digital) image. The facial features are identified, while other objects such as trees, buildings, and bodies are ignored. It can be seen as a classification task in which the location, size, and class of each face region are found. Face detection is the basic element of face localization: in face localization, the task is to find the locations and sizes of a known number of faces (usually one). There are essentially two sorts of approaches to detecting the facial part of a given image: feature-based and image-based. The feature-based approach attempts to extract features from the image and match them against known facial features. The image-based approach looks for the best match between the training images and the image to be tested. Faces are, after all, remarkably easy to remember; people generally recall and recognize an individual by the face. From the point of view of computer vision, however, a face is a complex set of variations. Human faces carry features and qualities distinctive to each individual, which makes face recognition well suited to many areas, including entertainment, smart cards, and information security. The essential idea of computer vision is to obtain useful information from a single image or a sequence of images, usually through automatic extraction, analysis, and learning. One of the best techniques for performing face recognition is the Principal Component Analysis (PCA) algorithm.
PCA is probably the most widespread multivariate statistical technique, used in practically all scientific disciplines, and it is also likely the oldest multivariate method. Multivariate analysis can broadly be interpreted as methods dealing with a large number of variables in one or more analyses; such methods can extract the essential characteristic information from the data at hand. PCA analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. The Eigenface-based approach builds on PCA. In this paper, the proposed approach applies the Eigenface approach to human faces to provide security at the border.
2. RELATED WORKS
Face recognition has been studied by many researchers using various algorithms and approaches. Biometrics offers higher security than conventional systems that use a password or an identity card for verification. Face recognition generally has two main steps [12], namely face detection and face recognition. Face detection is performed with the Cascade Classifier technique. In [13], Feed-Forward Neural Networks (FFNN) are used to accomplish face recognition. Face recognition was performed in [12] and [13] by Principal Component Analysis (PCA). Based on the literature, especially [12,13], the PCA algorithm is chosen here to perform face recognition. This work is also supported by other studies such as [14].
3. CHALLENGES OF BORDER SECURITY
Some of the challenges encountered in border security are as follows:
a) Infiltration and exfiltration of armed militants
b) Narcotics and arms smugglers
c) Illegal migration
d) Export of fundamentalist terrorism
4.
OBJECTIVES OF BORDER SECURITY
Based on the literature review and the challenges identified in Sections 2 and 3, the major objectives of this paper are:
a) To identify intruders with cameras and sensors such as PIR and IR sensors.
b) To eliminate the intruder effectively using a gun-firing mechanism.
c) To eliminate the need to be physically present at any location for border security.
5. PROPOSED METHOD
The architecture of the proposed method is shown in Fig. 1, which elaborates the complete process of how the proposed approach works. It uses Red-Green-Blue (RGB) images captured by a webcam as the primary input and can use auxiliary data, such as XML classifier files, for face detection and recognition. The input variables are the initial face image and the trained face image; the output variables are the degree of similarity and the recognition of a suspect's face [15]. The module-wise explanation of the architecture follows.
5.1 Object Detection
The first step in face detection is to access the camera that will be used for detection and recognition. The detection process is shown in Fig. 2. This stage also checks whether the webcam is on; if the camera is turned off, the procedure cannot continue. Detection starts by locating objects with CASCADE_FIND_BIGGEST_OBJECT, a cascade-classifier flag that searches only for the single largest object. The initial RGB input image is also converted to grayscale. The next step is to shrink the camera image to a reasonable size, because the speed of face detection depends on the image size: it is slow for large images and fast for small ones, and detection remains fairly reliable even at low resolution.
Fig. 1: Architecture of the face detection system.
Fig. 2: Face detection process.
To improve contrast and sharpness, histogram equalization is applied. After that, the face can finally be detected in the small, enhanced grayscale image. The next significant step is to enlarge the result if the image was temporarily shrunk in the previous step. The last step of detection is to return the detected face and store it in "objects". Pre-processing is then performed to limit failures in the recognition process, as represented graphically in Fig. 3. It starts by detecting the eye region in the initial input image; the captured eye region is retained. A geometric transformation is then applied by rotating, scaling, and translating the image so that the eyes are aligned, followed by removal of the background from the face image. For better alignment, the detected eyes are used to adjust the face so that the two eye locations line up exactly in the desired positions. The rotation stage turns the face so that the two eyes are horizontal, the scaling stage makes the distance between the eyes constant, and the translation stage shifts the face so that the eyes are centred horizontally at the desired height.
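The rotate-scale-translate alignment described above can be sketched as a single affine transform computed from the two detected eye centres. This is an illustrative NumPy sketch, not the paper's code; the target eye positions (30% and 70% of the crop width, at 35% of its height) and the 100-pixel crop size are assumed defaults:

```python
import numpy as np

def eye_alignment_matrix(left_eye, right_eye, size=100):
    """Build a 2x3 affine matrix that rotates, scales, and translates a
    face image so the eyes land on fixed positions in a size x size crop.
    Target positions: (0.3*size, 0.35*size) and (0.7*size, 0.35*size)."""
    lx, ly = left_eye
    rx, ry = right_eye
    angle = np.arctan2(ry - ly, rx - lx)               # tilt of eye line
    scale = (0.4 * size) / np.hypot(rx - lx, ry - ly)  # fix eye distance
    c = scale * np.cos(angle)
    s = scale * np.sin(angle)
    mx, my = (lx + rx) / 2.0, (ly + ry) / 2.0          # eye midpoint
    # Rotate by -angle and scale, then move the midpoint to its target.
    tx = 0.5 * size - (c * mx + s * my)
    ty = 0.35 * size - (-s * mx + c * my)
    return np.array([[c, s, tx], [-s, c, ty]])
```

Applying this matrix to homogeneous pixel coordinates (for example with an image-warping routine) makes the eyes horizontal, at a fixed spacing, and centred, which are exactly the three stages described above.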
Many of the cases in which face recognition failed were due to lighting: an absence of light, light coming from only one side, or excessive light. It is therefore important to perform histogram equalization separately on the left and right halves of the face, to obtain standardized sharpness and contrast on each side. The two equalizations are blended gradually from the left or right edge toward the middle together with a whole-face histogram equalization, so that the centre uses a smooth mix of the side value and the whole-face equalized value.
Fig. 3: Face preprocessing.
The last stage of this procedure is to apply a bilateral filter, which smooths most of the image while keeping edges sharp.
5.2 Collect and Train the Faces
Face data is collected and prepared for the model. Each face is captured at an interval of one second and must exceed the similarity threshold of 0.3. This threshold checks whether the newly captured face differs from the previous ones, since the more varied the data, the better the result of face recognition. To provide further variation in the facial training set, a mirrored version of each face is also added. The data is then saved in the "Data" folder in .pgm format, and the training process converts the data into a model stored in XML format.
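The training step above, which turns the collected face images into a model, is the PCA/eigenface computation that the paper details in Fig. 4. Below is a minimal NumPy sketch under stated assumptions: it uses the standard trick of eigendecomposing the small M x M matrix D D^T rather than the full covariance, and random vectors stand in for flattened .pgm faces:

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (M, N) array of M flattened face images.
    Returns the mean face, the top-k eigenfaces, and per-image weights."""
    mean = images.mean(axis=0)                 # Step 2: average face
    D = images - mean                          # Step 3: subtract mean
    # Steps 4-5: eigenvectors of the small M x M matrix D D^T, mapped
    # through D, give the leading eigenvectors of the N x N covariance.
    vals, vecs = np.linalg.eigh(D @ D.T)
    order = np.argsort(vals)[::-1][:k]         # Step 7: largest first
    eigenfaces = D.T @ vecs[:, order]          # Step 6: eigen images
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    weights = D @ eigenfaces                   # Step 8: weight matrix
    return mean, eigenfaces, weights
```

In a real pipeline the rows of `images` would be the preprocessed grayscale faces, and `mean`, `eigenfaces`, and `weights` would be serialized as the XML model mentioned above.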
Finally, the average face, eigenvalues, and eigenvectors are calculated using the PCA algorithm, whose steps are given in Fig. 4. An Eigenface is one of a set of eigenvectors used to recognize human faces in computer vision. These eigenvectors are obtained from the covariance matrix of the face data, whose high-variance directions span the vector space used to judge the likelihood of a face [12].
Step 1: Start.
Step 2: Compute the average face Ψ = (1/M) Σ Γ_i from the M training images Γ_i.
Step 3: Subtract the mean from each image, Φ_i = Γ_i - Ψ, and collect the Φ_i as the columns of a matrix D.
Step 4: Find the covariance matrix C = (1/M) D Dᵀ (in practice the much smaller matrix Dᵀ D is diagonalized instead).
Step 5: Find the eigenvalues and eigenvectors of C; the dimension P of the retained vector space is also chosen here.
Step 6: Determine the eigen images as EI = D × (eigenvectors of Dᵀ D).
Step 7: Keep the eigenvectors with the largest eigenvalues.
Step 8: Calculate the weight matrix WM = (largest EI)ᵀ × D, i.e. the projection of each face onto the retained eigen images.
Step 9: Stop.
Fig. 4: PCA algorithm.
5.3 Recognition
To perform face recognition, the first step is to load the .xml file of the trained facial model from the previous procedure, together with the .xml file of the cascade classifier used to detect faces and eyes. If the .xml files load successfully, the system checks whether the webcam is available; if it is not, the process stops and exits, otherwise the detection process continues. The results of face pre-processing are compared with the training model, which is turned back into a reconstructed face by back-projecting the eigenvectors and eigenvalues [16]. The threshold value used for the comparison is 0.5.
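The comparison against the 0.5 threshold can be sketched as follows. Since the paper does not define its similarity measure exactly, the choice of an L2 distance in weight space, normalized by the average stored-weight norm, is an assumption of this sketch:

```python
import numpy as np

def recognize(probe, mean, eigenfaces, weights, labels, threshold=0.5):
    """Project the probe face into eigenface space and compare its
    weight vector to each stored one. The normalized L2 distance plays
    the role of the paper's similarity value: below the threshold the
    face is recognized, otherwise it is reported as "Unknown"."""
    w = (probe - mean) @ eigenfaces          # back-projection weights
    dists = np.linalg.norm(weights - w, axis=1)
    dists = dists / max(np.linalg.norm(weights, axis=1).mean(), 1e-9)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] < threshold else "Unknown"
```

Here `probe` is a preprocessed flattened face, and `mean`, `eigenfaces`, and `weights` are the trained model from the previous step.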
If the similarity value of the comparison result is less than the threshold, the user's face is recognized by the system; otherwise the user is treated as "Unknown".
6. EXPERIMENTAL SETUP AND RESULTS DISCUSSION
The proposed approach is implemented in Python. The other hardware and techniques used include:
- Arduino Uno, a microcontroller board based on the ATmega328P, with 14 digital I/O pins, 6 analog inputs, a 16 MHz clock, a USB connection, a power jack, an ICSP header, and a reset button.
- HOG (Histogram of Oriented Gradients) features, widely used for object detection. HOG decomposes an image into small square cells, computes a histogram of oriented gradients in each cell, normalizes the result using a block-wise pattern, and returns a descriptor for each cell. Stacking the cells over a square image region yields an image-window descriptor for object detection, for instance by means of an SVM.
- A servo motor, an electric device that positions an object with precision.
- A Cascade Classifier, trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, scaled to a common size (say, 20x20), together with negative examples, arbitrary images of the same size.
A photograph of the complete setup is shown in Fig. 5.
Fig. 5: Hardware setup of the proposed architecture.
The proposed approach is tested on a data set of 100 different human faces and the results are very good. The whole process is also illustrated with snapshots. The first step is to detect the face, as shown in Fig. 6. Fig. 7 shows the finest Eigenface images available in the database. The eigenvalues can be used to reconstruct the initial input image.
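The reconstruction just mentioned is the back-projection of a face's weight vector through the eigenfaces. A toy sketch, in which a random orthonormal basis stands in for trained eigenfaces:

```python
import numpy as np

def reconstruct(mean, eigenfaces, weights):
    """Back-project a weight vector through the eigenfaces (columns of
    `eigenfaces`) to rebuild an approximation of the original face."""
    return mean + eigenfaces @ weights

# Demo: project a synthetic "face" onto an orthonormal basis and rebuild
# it. With all of its components kept, reconstruction of a face lying in
# the eigenface subspace is exact; with fewer components it is a
# low-rank approximation, which is what a figure like Fig. 8 displays.
rng = np.random.default_rng(2)
mean = rng.normal(size=6)
basis, _ = np.linalg.qr(rng.normal(size=(6, 3)))  # orthonormal columns
face = mean + basis @ np.array([2.0, -1.0, 0.5])
w = basis.T @ (face - mean)                       # weights for this face
restored = reconstruct(mean, basis, w)
```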
The reconstructed image is shown in Fig. 8. Face recognition on a live image is shown in Fig. 9. If the system recognizes the person, the barrier opens and allows the person through. If the person is not recognized, they are treated as an unauthorized person, as shown in Fig. 10; the system then restricts the person and displays a message, as shown in Fig. 11.
Fig. 6: Average face detection.
Fig. 7: Finest Eigenface images.
Fig. 8: Reconstructed image.
Fig. 9: Face recognition at barrier.
Fig. 10: Unauthorized person.
Fig. 11: Dialogue box showing unauthorized access.
In parallel, the system sends a message to the higher authorities to warn of unauthorized access. If the higher authorities know the person, they send an OTP to the user's mobile; the user enters this OTP on the provided interface (shown in Fig. 12) within a specified time limit, the motor rotates to 90 degrees (shown in Fig. 13), and the person is allowed to proceed. If the OTP is incorrect or not entered, the system sends an email to the higher authorities, as shown in Fig. 14.
Fig. 12: OTP authentication.
Fig. 13: Servo motor rotated to 90 degrees.
Fig. 14: Email for unauthorized access.
7. CONCLUSION AND FUTURE SCOPE
The conclusions of the complete approach are as follows. Face detection using the cascade classifier method is a fast and very capable way to locate human faces.
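The OTP step above can be sketched with the Python standard library. The six-digit length, the 120-second validity window, and the function names are illustrative assumptions, as the paper does not specify them:

```python
import hmac
import secrets
import time

OTP_VALID_SECONDS = 120  # assumed time limit; the paper states none

def issue_otp():
    """Generate a six-digit OTP and record when it was issued."""
    return f"{secrets.randbelow(10**6):06d}", time.time()

def verify_otp(entered, otp, issued_at, now=None):
    """Accept the code only if it matches (constant-time compare) and
    the validity window has not expired; on failure the caller would
    e-mail the higher authorities and keep the barrier closed."""
    now = time.time() if now is None else now
    if now - issued_at > OTP_VALID_SECONDS:
        return False
    return hmac.compare_digest(entered, otp)
```

On success the controller would command the servo to 90 degrees to raise the barrier; on failure it would trigger the e-mail alert described above.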
Lighting and edge-feature handling strongly affect the recognition procedure, so face pre-processing must be completed in full: converting the RGB colour image to grayscale, applying a geometric transformation consisting of rotating, scaling, and translating the image, and performing histogram equalization separately on the two sides of the face to balance its contrast and brightness. Variation in the face training data improves the results of face recognition and limits the chance of an unknown face being recognized as one of the known faces in the system. The recognition procedure performs admirably when the captured face is clearly visible and not occluded. Face recognition using the Eigenface approach runs well and fast; the use of eigenvalues and eigenvectors produces excellent facial images that can be compared against previously trained ones. In the current scenario, governments all over the world are looking for ways to tighten the security of their borders, inventing and experimenting with different approaches and technologies to lower the occurrence of terrorism. A versatile human-identification interface may become very important in the future, and face recognition can provide quick and accurate identification of any unauthorized access. We are currently extending this idea by incorporating further biometric identification, including retina scanning, to make it more successful.
REFERENCES
[1] Federal Bureau of Investigation, United States Department of Justice. Crime in the United States, 2015.
[2] Phillip IW, John F. (2006) Facial feature detection using Haar classifiers. Journal of Computing Sciences in Colleges, 21(4):127-133.
[3] Richard JQ, Ibrahim MS, Kristine EM.
(1998) A robust real-time face tracking algorithm. In Proceedings of the 1998 International Conference on Image Processing, vol. 1, pp. 131-135.
[4] Ha MD, Craig M, Meiqin L, Weihua S. (2014) Human-robot collaboration in a mobile visual sensor network. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2203-2208.
[5] Weihua S, Yongsheng O, Duy T, Eyosiyas T, Meiqin L, Gangfeng Y. (2013) An integrated manual and autonomous driving framework based on driver drowsiness detection. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4376-4381.
[6] Duy T, Eyosiyas T, Weihua S, Yuge S, Meiqin L, Senlin Z. (2016) A driver assistance framework based on driver drowsiness detection. In 2016 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 173-178.
[7] Davis EK. (2009) Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755-1758.
[8] ADT. Security systems, home automation, alarms and surveillance. [visited: 08-12-2019].
[9] Vivint Smart Home Security Systems. Smart home security solutions. 2016. [visited: 08-12-2019].
[10] Protect America. Affordable home security systems for everyone. 2016. [visited: 08-12-2019].
[11] Jeffrey SC, Darryl I. (1999) Facial recognition system for security access and identification. US Patent 5,991,429.
[12] Singh A, Kumar S. (2012) Face recognition using PCA and Eigenface approach. Thesis, National Institute of Technology Rourkela.
[13] Kumar D, Rajni. (2014) An efficient method of PCA based face recognition using Simulink. International Journal of Advanced Research in Computer Science and Technology, 3:364-368.
[14] Abdi H, Williams LJ. (2010) Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2:433-459.
[15] Urifan I, Hidayat R, Soesanti I. (2010) Pengenalan wajah dengan metode Eigenface [Face recognition with the Eigenface method]. Jurnal Penelitian Teknik Elektro, 3:320-323.
[16] Slavkovic M, Jevtic D.
(2012) Face recognition using the Eigenface approach. Serbian Journal of Electrical Engineering, 9:121-130.