Vision Transformer untuk Identifikasi 15 Variasi Citra Ikan Koi (Vision Transformer for the Identification of 15 Koi Fish Image Variations)



Abstract

This research aims to classify various types of koi fish using a Vision Transformer (ViT). Previous research [1] used a Support Vector Machine (SVM) as a classifier to identify 15 types of koi fish, with training and testing datasets of 1200 and 300 images, respectively. That work was continued by research [2], which implemented a Convolutional Neural Network (CNN) to identify the same 15 types of koi fish using a dataset of the same size, achieving a classification accuracy of 84%. Although the accuracy obtained with the CNN is fairly high, there is still room for improvement. To overcome the accuracy limitations of the previous studies and to explore newer algorithms and techniques, this study proposes a ViT architecture to improve accuracy in koi fish classification. ViT is a deep learning architecture adapted from the Transformer, which relies on a self-attention mechanism. Because its data-representation power exceeds that of other deep learning approaches, including CNNs, researchers have applied the Transformer to computer vision tasks; ViT is one result of this line of work. This study retains the classes and dataset sizes of the two previous studies, while the koi fish image dataset itself was collected from the internet and has been validated. Using ViT as the classifier, this research achieved an average accuracy of 89% across all classes of the test data.
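The self-attention mechanism that the abstract describes as the core of ViT can be sketched as follows. This is a minimal single-head NumPy illustration, not the authors' implementation: the projection matrices `Wq`, `Wk`, `Wv` and the toy patch dimensions are assumptions for demonstration, whereas a real ViT learns these projections and operates on embeddings of image patches (e.g. a 224x224 koi image split into 16x16 patches).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over patch embeddings.

    patches: (n_patches, d) array of patch embeddings.
    Wq, Wk, Wv: (d, d) projection matrices (learned in a real ViT;
    randomly initialized here for illustration only).
    """
    Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
    d = Q.shape[-1]
    # Each patch attends to every patch, including itself.
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

# Toy example: 4 patch embeddings of dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, Wq, Wk, Wv)
print(out.shape)         # (4, 8): one updated embedding per patch
print(attn.sum(axis=1))  # each row of attention weights sums to 1
```

In a full ViT classifier for the 15 koi classes, several such attention layers (with multiple heads, residual connections, and MLP blocks) would be stacked, and a final linear head would map a class token to 15 logits.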

Item Type: Article
Depositing User: Prof. Dr. Yuhandri S.Kom., M.Kom
Date Deposited: 13 Aug 2024 07:47
Last Modified: 13 Aug 2024 07:47
URI: http://repository.upiyptk.ac.id/id/eprint/11691
