Bulletin of Electrical Engineering and Informatics, Vol. 10, No. 2, April 2021, pp. 1105-1113, ISSN: 2302-9285, DOI: 10.11591/eei.v10i2.2859

Real-time mask detection and face recognition using eigenfaces and local binary pattern histogram for attendance system

Mohd Suhairi Md Suhaimin¹, Mohd Hanafi Ahmad Hijazi², Chung Seng Kheau², Chin Kim On³
¹Kuching Community College, Ministry of Higher Education, Sarawak, Malaysia
²Faculty of Computing and Informatics, Universiti Malaysia Sabah, Sabah, Malaysia
³Faculty of Engineering, Universiti Malaysia Sabah, Sabah, Malaysia

Article Info

Article history:
Received Nov 18, 2020
Revised Jan 27, 2021
Accepted Feb 20, 2021

Keywords:
Attendance system
Face recognition
Mask detection
Real-time system

ABSTRACT

Face recognition is gaining popularity as one of the biometrics methods for an attendance system in an organization. Due to the pandemic, the common face recognition system needs to be modified to meet the current needs, whereby facemask detection is necessary. The main objective of this paper is to investigate and develop a real-time face recognition system for the attendance system based on the current scenarios. The proposed framework consists of face detection, mask detection, face recognition, and attendance report generation modules. The face and facemask detection is performed using the Haar cascade classifier. Two techniques for face recognition were investigated, the eigenfaces and local binary pattern histogram. The initial experimental results and implementation at Kuching Community College show the effectiveness of the system. For future work, an approach that is able to perform masked face recognition will be investigated.

This is an open access article under the CC BY-SA license.

Corresponding Author:

Mohd Hanafi Ahmad Hijazi

Faculty of Computing and Informatics
Universiti Malaysia Sabah (UMS)
88400 Kota Kinabalu, Sabah, Malaysia
Email: hanafi@ums.edu.my

1. INTRODUCTION

Face recognition is one of the most promising biometrics and has become increasingly popular for identification, access control and registration in organizations. Manual and conventional methods have been replaced by biometrics because of their security, cost savings, speed and efficiency. Recently, many attendance systems have been proposed that take advantage of face recognition. They have reduced the burden of taking attendance manually, prevented fraud and kept pace with updated technology [1]. Among biometrics techniques, face recognition provides a contactless method that complies with current requirements, in contrast to fingerprint analysis, palm prints and iris recognition. To prevent the spread of COVID-19, contactless operation and even the wearing of facemasks have been made compulsory. Mediums for disease spread, such as sharing a paper and pen, should also be avoided. In addition, wearing a facemask to cover the nose and mouth has been identified as one of the prevention steps. Thus, conventional attendance systems should be replaced with digital attendance systems to provide a safe environment for attendees. However, face recognition is hindered by the occlusion caused by facemasks worn by attendees; facemask detection is therefore required before face recognition is conducted by the system. The attendance record also needs to be tailored to an organization's requirements, such as recording the entry and exit of attendees, so that the time engaged in each event can be calculated.


The main objective of this paper is to investigate, implement and compare real-time face recognition for an attendance system using the eigenfaces and local binary pattern histogram (LBPH) techniques. Mask detection is embedded in the system to comply with the current pandemic situation, whereby wearing a facemask in public is compulsory in certain countries, including Malaysia. The contribution of this paper is a framework that embeds mask detection in a real-time face recognition system for attendance taking. The rest of the paper is organized as follows. Section 2 summarizes the related work. Section 3 describes the proposed method, section 4 presents and discusses the results, and section 5 concludes this paper and outlines future work.

2. RELATED WORK

Several face recognition systems have been proposed for attendance taking using holistic, geometric, local and deep learning approaches [2]. Recently, a local-based approach using the local binary pattern (LBP) was adopted for attendance registration [3] and produced better image features for recognition [4]. The exploited LBP features involved contrast adjustment, bilateral filtering, histogram equalization and image blending to improve the accuracy of face recognition. The approach was embedded in an attendance system and able to record the attendance of individuals in a database. The system flows from input images to preprocessing, face detection, the exploitation algorithm, feature comparison and recognition, and finally attendance recording. The approach, tested on their linear blending dataset, produced 99% accuracy. Previously, [5] experimented with LBP and principal component analysis (PCA) for an automatic attendance system in a classroom environment. The algorithms, coupled with Haar-like features [6] for early face detection, were able to detect and recognize attendees. Use-case test images produced reliability results of 75% to 95%. Using the holistic approach, [7] applied the eigenfaces algorithm in a smart attendance system for their classroom. The OpenCV library was utilized with a Microsoft Access database for processing, recording and managing the attendance of students and lecturers. The system attained satisfactory accuracy in detecting and recognizing attendees. Its structure consists of camera input, face detection and recognition, attendance marking and, finally, attendance report generation. In other work, [8] proposed a real-time smart attendance system for the classroom using eigenfaces and PCA. Two cameras were installed, one outside the classroom for scanning and one inside the classroom for visibility; both were used to analyze the images for attendance registration. The system starts with camera input, followed by face detection and face recognition, whereby the latter loops while no face is present. Students were allowed to enter the classroom once their faces were recognized.

Deep learning approaches have been used for face recognition with the increase in processing power, including graphical processing units, and the availability of large datasets. The convolutional neural network (CNN) has become a popular choice for face recognition attendance systems. Recently, [1] experimented with a class attendance system using data augmentation to overcome the insufficient-sample issue of CNNs. The experiment fine-tuned the VGG-16 network to accomplish face recognition with an accuracy of 86.3%. The experiments were trained and run on a high-specification machine with a 12 GB GPU and took 8 hours each to complete. The attendance structure consists of automatically acquiring attendees' faces during data collection, preprocessing that involves data augmentation with geometric transformations, image brightness changes and different filter operations, and finally face recognition using the CNN. Similarly, [9] adopted a CNN modification, i.e., the multitask cascaded convolutional network (MTCNN) by [10], for a real-time face attendance marking system in their environment. The camera was placed at the building entrance. The system consists of video capturing, face detection, face tracking and face recognition. The MTCNN was coupled with Inception-ResNetV1 [11] for feature extraction and face recognition. The system runs on a well-equipped machine with a 3.60 GHz CPU, a GPU and 128 GB of RAM. The proposed system was able to take attendance in a non-cooperative environment.

Current face recognition-based attendance systems have seen many modifications and improvements with the rise of modern deep learning approaches [2]. However, these approaches incur a high cost with respect to money and computational power. From the literature, it is found that efficient techniques that require low computation for face recognition can also produce a good real-time attendance system. In the work presented in this paper, a custom Haar cascade classifier for mask detection is combined with LBPH and eigenfaces for face recognition. The facemask detection embedded in the attendance system differentiates it from currently available face recognition systems and addresses the requirements of COVID-19 pandemic prevention.

3. METHOD

3.1. The proposed system framework

The overall real-time mask detection and face recognition framework consists of image capture, face detection, mask detection, face recognition and attendance recording. Figure 1 shows the framework of the system. The system starts with an image captured by the video camera at the entrance. Two cameras were used, one at the entrance door for entry records and another at the exit door for exit records. A face image of the attendee is captured and fed to the system for face detection. Once the face is detected, the system performs mask detection. If the attendee is not wearing a mask, the system performs face recognition; else, the system prints the 'Using Mask' text on the screen to remind the attendee to remove his/her mask. The face recognition module compares the captured face to the images saved in the face database. Finally, the attendance information of recognized attendees and the time captured during detection are recorded and saved to the database. The same process is repeated at the exit door, whereby two different times are recorded for the final calculation of the engaged time for each attendee.

Figure 1. The framework of the real-time mask detection and face recognition attendance system

3.2. Face and mask detection

For the face detection task, we adopted the work of [12]. First, the image was represented using Haar-like features. Second, AdaBoost was used to boost the classification performance. For mask detection, we adopted a custom Haar cascade [13]. The cascade function was trained using positive and negative images: positive images are face images in which a facemask is present, while face images without a facemask constitute the negative set. For each image in which a face is successfully detected, preprocessing that involves contrast adjustment, intensity normalization and size normalization is performed. The preprocessed images are then sent to the face recognition module.
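As an illustration, the following sketch wires the detection and preprocessing steps above together using OpenCV's cascade classifier API; the file name mask_cascade.xml for the custom facemask cascade and the parameter values are assumptions, since the paper does not list them.

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV, plus a custom-trained
# facemask cascade (the file name mask_cascade.xml is assumed for illustration).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask_cascade = cv2.CascadeClassifier("mask_cascade.xml")

def detect_and_preprocess(frame, size=(200, 200)):
    """Detect faces, flag masked faces, and return preprocessed greyscale crops."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # Mask detection restricted to the detected face region.
        masked = len(mask_cascade.detectMultiScale(roi, 1.1, 5)) > 0
        roi = cv2.equalizeHist(roi)      # contrast/intensity normalization
        roi = cv2.resize(roi, size)      # size normalization
        results.append(((x, y, w, h), masked, roi))
    return results
```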

3.3. Face recognition

Eigenfaces and local binary pattern histogram (LBPH) were selected to perform face recognition. These algorithms were chosen as they require less processing time for implementation. Furthermore, both have been shown to be feasible for implementation in a real-time environment and produced good recognition results [14-16].

3.3.1. Face recognition using eigenfaces

Eigenfaces is a subspace, or holistic, approach that uses PCA to find a set of features characterizing the variation among images. It has been used for face detection and recognition due to its superiority in extracting relevant facial information and efficient face image representation [17]. For each image in the training set, the eigenfaces were calculated and only the images with the highest eigenvalues were selected. The weights for each of the selected images were then calculated and projected into the eigenspace. To recognize an unknown face, its eigenvectors need to be calculated following the same steps. The projection of a new face image into the eigenspace is given by:

Ω = Uᵀ(Γ − Ψ)    (1)

where Ω is the weight vector representation of the new face, U is the set of significant eigenvectors, Γ is the new face image vector and Ψ is the average of the training face vectors. The class to which the face Γ belongs is determined by minimizing the Euclidean distance ε_k:

ε_k = min_k ‖Ω − Ω_k‖    (2)

where Ω is the weight vector of the test image and Ω_k is the weight vector representing the k-th face class in the training set. The face corresponds to class k if the minimum ε_k is smaller than a predefined threshold θ_ε; if ε_k is greater than θ_ε, the new face is unknown. The threshold θ_ε can be chosen manually by defining the maximum allowable distance from any face class or the maximum distance from the face space [17].
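The projection and matching described by (1) and (2) can be sketched in a few lines of NumPy, as shown below. This is a minimal illustration, not the authors' implementation; the number of components and the threshold value are placeholders.

```python
import numpy as np

def train_eigenfaces(X, num_components=20):
    """X: (n_samples, n_pixels) matrix of flattened training faces."""
    mean = X.mean(axis=0)                      # average face, Psi
    A = X - mean                               # centered training faces
    # Eigenvectors of the covariance matrix via SVD; rows of Vt span the face space.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    U = Vt[:num_components].T                  # significant eigenvectors, U
    weights = A @ U                            # training weight vectors, Omega_k
    return mean, U, weights

def recognize(face, mean, U, weights, labels, threshold=4000.0):
    """Project a new face, eq. (1), and pick the closest class by eq. (2)."""
    omega = (face - mean) @ U                  # Omega = U^T (Gamma - Psi)
    dists = np.linalg.norm(weights - omega, axis=1)
    k = int(np.argmin(dists))
    return labels[k] if dists[k] < threshold else "unknown"
```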

3.3.2. Face recognition using LBPH

LBP has been widely used in many applications due to its computational simplicity in real time and its robustness to monotonic greyscale changes such as illumination [18]. LBP is a local appearance-based approach that uses texture information as features, extracted from the frontal view of an image. It labels the pixels of an image by thresholding the neighborhood of each pixel. The LBP for a 3x3 neighborhood is given by:

LBP_{P,R} = Σ_{p=0}^{P−1} s(i_p − i_c)·2^p,  with s(x) = { 1, x ≥ 0; 0, x < 0 }    (3)

where i_c and i_p are the intensity values of the center pixel and a neighborhood pixel, respectively. The notation P, R indicates the use of P sample points in a neighborhood of radius R, and s is the thresholding function. Each pixel of the 3x3 matrix is thresholded against the value of the center pixel i_c to produce a binary code: if a neighbor pixel's value is lower than the center pixel's value, it is assigned a zero; otherwise, it is assigned a one. This process is applied to each region of the image. A histogram is then extracted for each region, and all histograms of an image are concatenated to produce a single histogram per image. The new face image and the training images are then compared using the chi-square distance applied to the images' histograms [19].
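A minimal NumPy sketch of the 3x3 LBP operator in (3), the per-region histograms and the chi-square comparison is given below; the grid size and bin count are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                   # center pixels i_c
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]         # 8 neighbors i_p
    codes = np.zeros_like(c)
    for p, (dy, dx) in enumerate(shifts):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += ((neighbor - c) >= 0).astype(np.int32) << p   # s(i_p - i_c) * 2^p
    return codes

def lbph_descriptor(gray, grid=(8, 8)):
    """Concatenate 256-bin histograms of LBP codes over a grid of regions."""
    codes = lbp_image(gray)
    gh, gw = grid
    h, w = codes.shape
    hists = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))     # normalized region histogram
    return np.concatenate(hists)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance used to compare test and training histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```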

4. RESULTS AND DISCUSSION

This section presents the experiments and implementation of the proposed system. The experiments were conducted to measure the performance of the described face recognition techniques, and the implementation was conducted to test the workability of the proposed system. The system was tested and implemented at the Kuching Community College (KCC), Malaysia. The system runs on a lightweight machine with a 1.5 GHz Intel i5 CPU and 4 GB of RAM. Built with Python and OpenCV, the system uses an internal database for storage. A full HD camera with a 3.6 mm lens was used, supported by a surrounding LED ring light, to ensure a better face image is captured [3] under a controlled environment [20, 21]. Section 4.1 presents the results of the mask detection and face recognition performance, and section 4.2 describes the implementation of the proposed face recognition system.

4.1. The mask detection and face recognition performance

In this section, the performance of mask detection and face recognition is reported. A group of 15 students from a class was chosen for face detection and mask detection. Face detection using the Haar cascade classifier with AdaBoost was able to capture the faces. Using the facemask Haar cascade classifier for mask detection, the classifier effectively detected all the faces with a facemask. Figure 2 shows the mask detection and face detection of the subjects. A warning sign 'Using Mask' is shown on the screen if a facemask is detected.

For face recognition, two sets of experiments were conducted using eigenfaces and LBPH. The first was to identify the confidence value, C, for each algorithm that best represents the data used. The value C refers to the minimum distance between a new test image and the corresponding training image in the recorded database. For eigenfaces, C refers to the minimum Euclidean distance between the test and training images. Meanwhile, for LBPH, C refers to the chi-square distance between the histograms of the test and training images [21]. The identified C was then used to set a threshold (θ) representing the set of data for recognition. The second experiment was to identify the best-performing algorithm for face recognition using the identified C.
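For reference, confidence values of this kind can be obtained from OpenCV's face module (available in the opencv-contrib-python package), whose recognizers return a (label, confidence) pair from predict(). The sketch below is an assumption about how the thresholding experiment could be wired, not the authors' exact code.

```python
import cv2
import numpy as np

# Both recognizers come from the cv2.face module (opencv-contrib-python).
# predict() returns (label, confidence), where the confidence is a distance:
# Euclidean for eigenfaces, chi-square-based for LBPH, so lower is better.
eigen = cv2.face.EigenFaceRecognizer_create()
lbph = cv2.face.LBPHFaceRecognizer_create()

def evaluate(recognizer, train_faces, train_labels, test_faces, theta):
    """Train on enrolled greyscale faces and accept a match only below theta."""
    recognizer.train(train_faces, np.array(train_labels, dtype=np.int32))
    predictions = []
    for face in test_faces:
        label, confidence = recognizer.predict(face)
        predictions.append(label if confidence < theta else -1)  # -1 = unknown
    return predictions
```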

Figure 2. Mask detection and face detection

The first experiment was performed on a subset of the data used in this paper. The data consist of a group of 30 students in a class. Under the controlled environment, the face images of each student were used to train and test both algorithms. The confidence value C for each algorithm was recorded, and the highest C value, which represents the dataset used for the experiment, was chosen for each algorithm. Table 1 shows the identified C value for both algorithms. These values represent the maximum C for the dataset used and are important parameters for the second experiment.

The identified C values from the first experiment were used as the threshold θ in the second experiment. Both algorithms used their own θ as the boundary for recognition. The same group of students was chosen for the second experiment. Table 2 shows the accuracy of face recognition using eigenfaces and LBPH. From Table 2, LBPH outperformed eigenfaces for face recognition: recognition accuracy for eigenfaces was recorded at 73.3% compared to 100% for LBPH.

Table 1. Confidence C value for eigenfaces and LBPH
Algorithm     C
Eigenfaces    4260
LBPH          76

Table 2. Face recognition accuracy of eigenfaces and LBPH
Algorithm     Accuracy (%)
Eigenfaces    73.3
LBPH          100.0

Concerning face recognition, LBPH outperformed eigenfaces using the proposed framework with the given hardware and controlled environment setting. The results show that LBPH is robust to greyscale and illumination variations in real time [15, 18, 22]. It is also observed that eigenfaces has difficulties recognizing faces when the position of the face is similar across images of different subjects. Figure 3 shows examples of errors in recognizing the face of an attendee using eigenfaces compared to LBPH. Eigenfaces with PCA, as a holistic approach, is known to be good at data representation but not necessarily at class discrimination in face recognition [23, 24]. The texture-based features seem to be more effective for face recognition in this context, as shown by LBPH. Under the controlled environment, the person who was wrongly classified in Figure 3(a) was correctly recognized by LBPH in Figures 3(b) and 3(c).

Figure 3. Screen captures of wrong recognition by eigenfaces (a), compared to correct recognition by LBPH in (b) and (c)


4.2. The implementation of the proposed system

The custom Haar cascade classifier and the best-performing face recognition algorithm from the experiments presented in section 4.1, LBPH, were used for the implementation. The implementation of the system consists of three main modules: data face recording, mask detection and face recognition, and attendance output generation.

4.2.1. Data face recording

The data face recording module involves collecting images of subjects and saving them to the database. The system first detects the face of a member and captures it under the name or ID provided. Only a unique name or ID is saved to the database under the corresponding folder. Each face image was resized, converted to greyscale and normalized before being stored in the database. Figure 4 shows a sample of the images captured, preprocessed and stored in the database.

Figure 4. (a) Original image captured, (b) detected and resized face image, (c) greyscale image saved in the database
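A possible sketch of this enrollment step is shown below; the camera index, the one-folder-per-ID layout under face_db, the 200x200 pixel image size and the sample count are illustrative assumptions.

```python
import os
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def record_faces(person_id, num_samples=30, out_dir="face_db", camera_index=0):
    """Capture, crop, greyscale, resize and save face images for one subject."""
    os.makedirs(os.path.join(out_dir, person_id), exist_ok=True)
    camera = cv2.VideoCapture(camera_index)
    count = 0
    while count < num_samples:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (200, 200))
            cv2.imwrite(os.path.join(out_dir, person_id, f"{count}.png"), face)
            count += 1
    camera.release()
```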

4.2.2. Mask detection and face recognition

The mask detection and face recognition module involves two different steps, as shown in Figure 1. First, face detection of the subject is performed, followed by mask detection. If the subject is wearing a facemask, the system displays the 'Using Mask' warning; if no facemask is detected, recognition is performed using LBPH. For each recognized face, the name of the subject is printed on the screen and the record of the subject is stored in the database. The system records information that includes the name of the subject and the date and time of recognition, i.e., attendance-entry and attendance-exit. Figure 5 shows examples of mask detection and face recognition of subjects. The subject in Figure 5(a) was unknown due to the occlusion caused by the facemask. It is advisable that subjects focus on the camera for a still image [20, 25], as any drastic change can cause a poorly captured image [26]. When a facemask is detected, the 'Using Mask' warning is shown, which indicates that the subject has to remove her mask to allow face recognition to take place. The face was recognized once the subject removed the facemask, as shown in Figure 5(b).
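To make the flow of Figure 1 and this module concrete, a simplified sketch of the detect-warn-recognize loop is given below. It assumes the cascades sketched in section 3.2, an LBPH recognizer that has already been trained on the recorded faces, and the LBPH threshold of 76 identified in section 4.1; the flat-file attendance log is an illustrative stand-in for the system's database.

```python
import datetime
import cv2

def attendance_loop(camera, face_cascade, mask_cascade, recognizer, names, theta=76):
    """Detect a face, warn if a mask is worn, otherwise recognize and log attendance."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            roi = cv2.resize(gray[y:y + h, x:x + w], (200, 200))
            if len(mask_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                cv2.putText(frame, "Using Mask", (x, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
                continue
            label, confidence = recognizer.predict(roi)
            name = names[label] if confidence < theta else "unknown"
            cv2.putText(frame, name, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            if name != "unknown":
                # Illustrative flat-file log standing in for the attendance database.
                with open("attendance.csv", "a") as log:
                    log.write(f"{name},{datetime.datetime.now().isoformat()}\n")
        cv2.imshow("Attendance", frame)
        if cv2.waitKey(1) == 27:  # ESC to exit, as in the deployed system
            break
```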

However, from the experiments, it was found that a non-frontal view of the face would also cause face recognition to fail. Figure 6(a) shows an unknown result due to a non-frontal image, while correct recognition is achieved using a frontal image in Figure 6(b). The frontal view is one of the mandatory poses for better recognition performance, even in a controlled environment [20, 21, 24].

Figure 5. (a) Mask detection with warning sign and unknown face due to mask occlusion, (b) correct recognition after removing the mask, (c) correct recognition


Figure 6. (a) Unknown face recognition due to a non-frontal image, (b) correct recognition using a frontal image

4.2.3. Attendance output generation

The final module of the implementation is attendance report generation. The report consists of information that includes the entry time, the exit time and the total time spent in a class or event. Figure 7 shows a screenshot of the generated attendance report.
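A sketch of how entry and exit records could be paired into the engaged-time report is shown below; the CSV files and column names are assumptions, as the paper describes only the report contents, not the storage format.

```python
import csv
from datetime import datetime

def generate_report(in_file="attendance_in.csv", out_file="attendance_out.csv"):
    """Pair entry and exit records per name and compute the engaged time in minutes."""
    def load(path):
        with open(path, newline="") as f:
            # Assumed columns: name, time (ISO-8601 timestamp).
            return {row["name"]: datetime.fromisoformat(row["time"])
                    for row in csv.DictReader(f)}

    entries, exits = load(in_file), load(out_file)
    report = []
    for name, time_in in entries.items():
        time_out = exits.get(name)
        minutes = (time_out - time_in).total_seconds() / 60 if time_out else None
        report.append({"name": name, "time_in": time_in,
                       "time_out": time_out, "engaged_min": minutes})
    return report
```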

Figure 7. Attendance report generated

5. CONCLUSION

This paper presents a real-time mask detection and face recognition system for attendance taking. A framework with mask detection embedded in face recognition was proposed so that the system complies with the current pandemic situation. The framework consists of image input, face detection, mask detection, face recognition, and attendance recording. Haar cascades were used for mask and face detection. Two techniques for face recognition were used, eigenfaces and LBPH; in the experiments, LBPH outperformed eigenfaces. The system was implemented at KCC, Malaysia. The initial experiments and implementation show the effectiveness of using face recognition to register attendance. For future work, an approach that is able to perform recognition under partial face occlusion, using a partial face image, will be investigated. Such an approach would permit face recognition without requiring subjects to remove their facemasks, i.e., masked face recognition, since wearing a facemask is compulsory in some countries due to the COVID-19 pandemic.

ACKNOWLEDGEMENTS

The publication of this paper is funded by Universiti Malaysia Sabah, Malaysia. The equipment and technical support are provided by Kuching Community College, Malaysia. We thank Luke Kenny Doring and Norshafiza Zakaria for student arrangement and technical support.


REFERENCES

[1] Z. Pei, H. Xu, Y. Zhang, M. Guo, and Y.-H. Yang, “Face Recognition via Deep Learning Using Data Augmentation Based on Orthogonal Experiments,” Electronics, vol. 8, pp. 1-16, 2019, doi:10.3390/electronics8101088.

[2] I. Adjabi, A. Ouahabi, A. Benzaoui, and A. Taleb-Ahmed, "Past, Present, and Future of Face Recognition: A Review," Electronics, vol. 9, no. 8, 2020, doi: 10.3390/electronics9081188.

[3] S.J. Elias, S. M. Hatim, N. A. Hassan, L. M. Abd Latif, R. B. Ahmad, M. Y. Darus, et al., “Face recognition attendance system using Local Binary Pattern (LBP),” Bulletin of Electrical Engineering and Informatics, vol. 8, no. 1, pp. 239-245, 2019, doi:10.11591/eei.v8i1.1439.

[4] S. M. Bah and F. Ming, “An improved face recognition algorithm and its application in attendance management system,” Array, vol. 5, 2020, doi:10.1016/j.array.2019.100014.

[5] O. Sanli and B. Ilgen, “Face detection and recognition for automatic attendance system,” in Proceedings of SAI Intelligent Systems Conference, pp. 237-245, 2018, doi: 10.1007/978-3-030-01054-6_17.

[6] L. N. Soni, A. Datar, and S. Datar, “Implementation of Viola-Jones Algorithm based approach for human face detection,” Int. J. Curr. Eng. Technol., vol. 7, no. 5, pp. 1819-1823, 2017.

[7] F. F. Alkhali and B. K. Oleiwi, “Smart E-Attendance System Utilizing Eigenfaces Algorithm,” Iraqi Journal of Computers, Communication and Control & Systems Engineering, vol. 18, no. 1, pp. 56-63, 2018.

[8] S. Sawhney, K. Kacker, S. Jain, S. N. Singh, and R. Garg, "Real-Time Smart Attendance System using Face Recognition Techniques," in 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence), pp. 522-525, 2019, doi: 10.1109/CONFLUENCE.2019.8776934.

[9] K. Jin, X. Xie, F. Wang, X. Gao, and G. Shi, “Real-Time Face Attendance Marking System in Non-cooperative Environments,” in Proceedings of the 2018 the 2nd International Conference on Video and Image Processing, pp. 29-34, 2018, doi:10.1145/3301506.3301546.

[10] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499-1503, 2016, doi: 10.1109/LSP.2016.2603342.

[11] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” arXiv preprint arXiv:1602.07261, 2016.

[12] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001, pp. I-I, 2001, doi: 10.1109/CVPR.2001.990517.

[13] A. F. Villán, “Mastering OpenCV 4 with Python: a practical guide covering topics from image processing, augmented reality to deep learning with OpenCV 4 and Python 3.7,” Packt Publishing Ltd, 2019.

[14] M. Xi, L. Chen, D. Polajnar, and W. Tong, "Local binary pattern network: A deep learning approach for face recognition," in 2016 IEEE International Conference on Image Processing (ICIP), pp. 3224-3228, 2016, doi: 10.1109/ICIP.2016.7532955.

[15] T. Napoléon and A. Alfalou, “Local binary patterns preprocessing for face identification/verification using the VanderLugt correlator,” in Optical Pattern Recognition XXV, 2014, doi:10.1117/12.2051267.

[16] P. Khoi, L. H. Thien, and V. H. Viet, “Face Retrieval Based On Local Binary Pattern and Its Variants: A Comprehensive Study,” Int. J. Adv. Comput. Sci. Appl, vol. 7, no. 6, pp. 249-258, 2016.

[17] M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of cognitive neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

[18] A. Hadid, M. Pietikainen, and T. Ahonen, “Face description with local binary patterns: Application to face recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 28, no. 12, pp. 2037-2041, 2006, doi: 10.1109/TPAMI.2006.244.

[19] Z. Yang and H. Ai, "Demographic classification with local binary patterns," in International Conference on Biometrics, pp. 464-473, 2007, doi: 10.1007/978-3-540-74549-5_49.

[20] H. S. Simaremare, "Comparison of Face Recognition Accuracy Using LBPH and Eigenface Methods to Recognize Three Faces at Once in Real-Time" (in Indonesian: "Perbandingan Akurasi Pengenalan Wajah Menggunakan Metode LBPH dan Eigenface dalam Mengenali Tiga Wajah Sekaligus secara Real-Time"), Jurnal Sains dan Teknologi Industri, vol. 14, pp. 66-71, 2016.

[21] B. Surekha, K. J. Nazare, S. V. Raju, and N. Dey, "Attendance recording system using partial face recognition algorithm," in Intelligent Techniques in Signal Processing for Multimedia Security, Springer, pp. 293-319, 2017, doi: 10.1007/978-3-319-44790-2_14.

[22] F. Deeba, A. Ahmed, H. Memon, F. A. Dharejo, and A. Ghaffar, “LBPH-based enhanced real-time face recognition,” Int J Adv Comput Sci Appl, vol. 10, no. 5, pp. 274-280, 2019, doi:10.14569/IJACSA.2019.0100535.

[23] M. Savvides, J. Heo, and S. W. Park, “Face Recognition,” in Handbook of Biometrics, A. K. Jain, P. Flynn, and A. A. Ross, Eds., ed Boston, MA: Springer US, pp. 43-70, 2008.

[24] I. Taufik, M. Musthopa, A. R. Atmadja, M. A. Ramdhani, Y. A. Gerhana, and N. Ismail, "Comparison of principal component analysis algorithm and local binary pattern for feature extraction on face recognition system," in MATEC Web of Conferences, vol. 197, 2018, doi: 10.1051/matecconf/201819703001.

[25] J. C. E. Tsun, C. W. Jen, and F. C. C. Mei, "Automated Attendance Capture System," 2nd Eureca 2014, 2014.

[26] V. Shehu and A. Dika, “Using real time computer vision algorithms in automatic attendance management systems,” in Proceedings of the ITI 2010, 32nd International Conference on Information Technology Interfaces, pp. 397-402, 2010.



BIOGRAPHIES OF AUTHORS


Mohd Suhairi Md Suhaimin is a lecturer of Information Technology and General Studies at Kuching Community College, Sarawak. He has completed postgraduate study in Computer Science at Universiti Malaysia Sabah (UMS). His research interests span both data mining and machine learning. His previous project was 'Sarcasm Detection and Classification to Support Sentiment Analysis', focusing on bilingual, Malay and English. He has explored the presence and implication of sarcasm in sentiment analysis. Currently, he is focusing on machine learning from a computer vision perspective. He leads a face recognition project, 'Face Recognition and Its Application’; specifically in building a complete application system taking into account all the problems and factors involved.

Mohd Hanafi Ahmad Hijazi is an Associate Professor of Computer Science at the Faculty of Computing and Informatics, Universiti Malaysia Sabah in Malaysia. His research work addresses the challenges in knowledge discovery and data mining to identify patterns for prediction on structured and/ or unstructured data; his particular application domains are medical image analysis and understanding and sentiment analysis on social media data. He has authored/ co-authored more than 40 journals/ book chapters and conference papers, most of which are indexed by Scopus and ISI Web of Science. He also served on the program and organizing committees of numerous national and international conferences. He is the leader of the Data Technologies and Applications research group at the faculty.

Chung Seng Kheau is a senior lecturer at Universiti Malaysia Sabah. He holds Ph.D., master's and bachelor's degrees in computer science. His research areas are IoT, automation, data mining, and online applications. He has application development experience in online shopping, office automation, mobile apps, business accounting, IoT devices and factory workflow systems.

Kim On, Chin is currently teaching at Universiti Malaysia Sabah in the Faculty of Computing and Informatics. His research interests are evolutionary computing, artificial neural networks, image processing, the Internet of Things (IoT), and biometric security systems. He has authored and co-authored 110 articles in the form of journals, book chapters, and conference proceedings. His research publications have reached an h-index of 9 in Scopus and 5 in the ISI Web of Science. He is a Senior Member of IEEE, a certified Computational Thinking Master Trainer and a certified Professional Technologist. His recent consultancy tasks include Digital Maker Hub management for Science, Technology, Engineering, Arts, and Mathematics (STEAM), an e-summon system using image processing and neural networks for campus surveillance, and plastic object classification based on near-infrared hyperspectral images using machine learning-based classifiers for onshore plastic waste detection.
