https://mjlis.um.edu.my/index.php/mjca/issue/feed Malaysian Journal of Cybersecurity and Applications 2025-06-26T00:00:00+08:00 Editor mjca@csam.my Open Journal Systems <p>The Malaysian Journal of Cybersecurity and Applications (MJCA) is a scholarly platform dedicated to advancing research and innovation in cybersecurity, with a strong focus on its practical applications, governance, and policy implications. The journal addresses critical areas such as network security, data protection, cryptography, and the integration of emerging technologies in combating cyber threats.</p> <p>While rooted in addressing cybersecurity challenges in Malaysia and the Southeast Asian region, MJCA also covers global issues, providing a broader perspective on international cybersecurity trends, governance frameworks, and policies. It explores the development and implementation of strategies to build resilient cyber ecosystems, bridging technical advancements with policy insights. MJCA serves as a key resource for academics, practitioners, and policymakers worldwide, contributing to the global discourse on digital safety, risk management, and cybersecurity innovation.</p> <p>The Malaysian Journal of Cybersecurity and Applications (MJCA) is a journal published by the Cyber Security Academia Malaysia (CSAM), which is an association of cyber security academics in Malaysia. The journal aims to serve as a prestigious scholarly platform for disseminating advances and innovation in cybersecurity. While its primary focus is on issues pertinent to Malaysia and the Southeast Asian region, the journal recognizes the interconnected nature of the cyber landscape. Consequently, MJCA also covers global cybersecurity challenges and trends that impact the wider cyberspace.</p> <p>The Malaysian Journal of Cybersecurity and Applications (MJCA) is committed to promoting the dissemination of knowledge through open access. 
We believe that making research freely available to the public supports a greater global exchange of knowledge and contributes to the advancement of science and technology. Key principles of our open access policy are as follows:</p> <ul> <li>All articles published in MJCA are freely accessible to everyone, without subscription or paywall, immediately upon publication.</li> <li>Authors retain the copyright to their work and grant the journal the right of first publication. Authors are encouraged to deposit their published articles in institutional or subject repositories.</li> </ul> <p>We are dedicated to supporting open access and believe it is essential for advancing research and innovation in cybersecurity and related fields. By providing open access, we ensure that research from MJCA can reach a wider audience, including researchers, practitioners, and the general public, worldwide.</p> https://mjlis.um.edu.my/index.php/mjca/article/view/59335 FORENSIC FACIAL IDENTIFICATION BASED FACE HALLUCINATION TECHNIQUE WITH SPARSE REPRESENTATION 2025-06-10T11:49:08+08:00 Siti Norul Huda Sheikh Abdullah snhsabdullah@ukm.edu.my Nazri Ahmad Zamani nazri.az@cybersecurity.my Khairul Akram Zainol Arifin K.akram@ukm.edu.my Md Jan Nordin jan@ukm.edu.my Tutut Herawan tutut@um.edu.my Nazhatul Hafizah Kamarudin nazhatulhafizah@ukm.edu.my <p><em>In video forensics, the low resolution of facial information inside video evidence is found to be the leading cause of the low performance of facial identification systems. Therefore, super-resolution methods are commonly used to restore low-resolution facial information in a photo or a video to a higher resolution. However, current image-resizing methods, super-resolution methods in particular, cannot enhance the resolution of facial information with good quality at high magnification factors. This paper proposes a new forensic facial identification method based on the face hallucination technique with sparse representation. 
The proposed method, Sparse Resolution (SR), is a single-frame method that represents a signal as a linear combination of small elementary signals. These signals are then interpolated to synthesize low-resolution signals into a higher-resolution version. The signals are chosen via sparse coding from an over-complete dictionary of trained images. The Active Appearance Model (AAM) and Support Vector Machine (SVM) were subsequently used to extract features and classify data. In the experiments, the SR face images are tested on two datasets: (1) 14 individuals collected via a CCTV surveillance Digital Video Recorder and (2) the 2.5D partial images produced by a forensic facial identification system. The experiments show that the SR produced promising results. Moreover, the AAM-SVM facial matching results show that the SR images achieve higher matching performance than other state-of-the-art methods.</em></p> 2025-06-26T00:00:00+08:00 Copyright (c) 2025 Malaysian Journal of Cybersecurity and Applications https://mjlis.um.edu.my/index.php/mjca/article/view/59341 ARTIFACTS RETRIEVAL USING NETWORK FORENSIC APPROACH FOR SAAS CLOUD STORAGE ON ANDROID 2025-06-11T12:10:17+08:00 Aqeel M Nezar aqeeltamimi@hotmail.com Khairul Akram Zainol Ariffin k.akram@ukm.edu.my <p><em>The widespread implementation of cloud storage solutions has fundamentally transformed data governance; however, it has concurrently introduced intricate security dilemmas, particularly within entities that adopt Bring Your Own Device (BYOD) policies. While cloud storage facilitates scalability and economic efficiency, it concurrently offers pathways for cyber intrusions and data compromises, thereby necessitating the establishment of rigorous digital forensic (DF) methodologies. 
This investigation addresses the imperative requirement for DF professionals to proficiently recover and scrutinize data remnants from Android cloud storage applications, particularly in light of the continuously evolving security milieu of the Android ecosystem. The objective is to propose a digital forensic protocol for the recovery of data remnants from five distinct Android cloud storage applications—BigMind, Degoo, FEX NET, File.fm, and Koofr—utilizing network packet analysis as the primary methodology. By simulating a variety of user interactions, including login, uploading, downloading, and deletion, the study contrasts the data remnants obtained from both Android applications and mobile web browsers to elucidate significant forensic variances. The results indicate the feasibility of extracting sensitive information such as user credentials, file metadata, and access tokens, thereby equipping DF professionals with vital intelligence for cyberattack inquiries and security oversight. Moreover, the study emphasizes the difficulties posed by sophisticated security protocols in certain applications, which hinder the processes of network packet acquisition and decryption. Ultimately, the findings contribute to the formulation of enhanced BYOD security frameworks, empowering organizations to more effectively manage cloud utilization, identify unauthorized data access, and alleviate security vulnerabilities associated with the extensive adoption of cloud storage within the Android domain. 
This study enriches the expanding corpus of knowledge that is essential for securing cloud services and strengthening digital forensic methodologies in response to the dynamic landscape of cyber threats.</em></p> 2025-06-26T00:00:00+08:00 Copyright (c) 2025 Malaysian Journal of Cybersecurity and Applications https://mjlis.um.edu.my/index.php/mjca/article/view/60059 A COMPARATIVE STUDY OF PRE-TRAINED CNN ARCHITECTURES FOR DETECTING AI-GENERATED VERSUS HUMAN-CREATED IMAGES 2025-05-21T12:05:48+08:00 Ayat Abd-Muti Alrawahneh P125852@siswa.ukm.edu.my Siti Norul Huda Sheikh Abdullah snhsabdullah@ukm.edu.my Amelia Natasya Abdul Wahab anaw@ukm.edu.my Sarah Khadijah Taylor sarah@cybersecurity.my Nik Rafizal Nik Ab. Rahim nik_rafizal@hla-group.com <p><em>The widespread use of AI-generated imagery, enabled by advanced generative models, poses increasing challenges to digital content verification and authenticity. This study evaluates the performance of four widely adopted convolutional neural network (CNN) architectures—ResNet50, EfficientNetV2B0, InceptionV3, and VGG16—for classifying images as AI-generated or human-created. A balanced dataset of approximately 80,000 labeled images was used, and all models were trained using a consistent transfer learning pipeline with ImageNet pre-trained weights. Images were resized according to model-specific input dimensions and preprocessed using architecture-appropriate normalization methods. The dataset was split using an 80/10/10 ratio for training, validation, and testing, and each model was trained for eight epochs without data augmentation to focus on baseline performance.</em></p> <p><em>The evaluation was conducted using training and validation accuracy and loss. ResNet50 achieved the highest validation accuracy (97.13%) and the lowest validation loss (0.0861), indicating strong generalization capability. EfficientNetV2B0 followed closely, while InceptionV3 and VGG16 performed slightly lower in both metrics. 
Visualization of training dynamics, including accuracy and loss curves, showed that all models converged effectively, with ResNet50 demonstrating the most stable and efficient learning trajectory. A final performance comparison chart further highlighted the superior performance of ResNet50 and EfficientNetV2B0. These findings underscore the effectiveness of pre-trained CNN architectures in distinguishing between synthetic and real visual content. The study also establishes a performance baseline for future work in AI-generated image detection, contributing to the broader field of multimedia forensics and trustworthy AI.</em></p> 2025-06-26T00:00:00+08:00 Copyright (c) 2025 Malaysian Journal of Cybersecurity and Applications
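The CNN-comparison abstract above fixes an 80/10/10 train/validation/test split over roughly 80,000 labeled images. A minimal sketch of that split step, assuming index-level shuffling with a fixed seed (the function name, seed value, and rounding behavior are illustrative assumptions, not details taken from the paper):

```python
import random

def split_80_10_10(items, seed=42):
    """Shuffle items deterministically, then split 80% / 10% / 10%.

    Note: sizes are computed with int() truncation, so any rounding
    remainder falls into the test partition (an assumption here).
    """
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_80_10_10(range(80000))
print(len(train), len(val), len(test))  # 64000 8000 8000
```

For the dataset size reported in the abstract the split is exact (64,000 / 8,000 / 8,000); a fixed seed keeps the partitions reproducible across the four model runs, which matters when comparing architectures on identical data.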