Sasikumar, P. and Kalaivani, K. (2023) An Enhanced Qualitative Evaluation of User-Generated Online Gaming Videos. In: 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), Chennai, India.
PDF: An Enhanced Qualitative Evaluation of User-Generated Online Gaming Videos _ IEEE Conference Publication _ IEEE Xplore.pdf (458 kB)
Abstract
Recent advances in virtual reality (VR) and augmented reality (AR) have enabled the production of immersive content, raising expectations among both consumers and production organizations for high-quality material that delivers a deeply engaging experience. At the same time, deep learning's record-breaking success across several areas of artificial intelligence has drawn researchers to contribute to many branches of computer vision. Quality assessment of User-Generated Content (UGC) videos is crucial to guaranteeing end-users' viewing experience. The growing popularity of UGC gaming videos, which has benefited from the rapid expansion of the digital game industry, has accelerated the development of perceptual video quality assessment (VQA) models tailored specifically to gaming videos. To address this problem, the proposed advanced spatial VQA model employs a spatial feature extraction network trained to learn quality-aware spatial feature representations directly from the raw pixel data of the video frames. Because spatial features alone cannot predict temporal-related distortions, the model additionally extracts motion features. To keep the computational cost low, spatial features are computed from extremely sparse frames, while motion features are extracted from dense frames at a very low spatial resolution. Experimental results on the well-known LIVE-VQA datasets demonstrate the effectiveness of the proposed model.
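The two-branch sampling strategy described in the abstract (sparse full-resolution frames for spatial features, dense low-resolution frames for motion features) can be illustrated with a minimal sketch. All function names, sampling parameters, and the simple pooled statistics below are hypothetical stand-ins; the paper's actual learned networks and regression head are not reproduced here.

```python
import numpy as np

def sample_frames(video, spatial_stride=30, motion_scale=4):
    """Split a video array (T, H, W) into the two inputs the abstract
    describes: very sparse full-resolution frames for the spatial branch,
    and every frame spatially downsampled for the motion branch.
    (Hypothetical sampling parameters, not the paper's exact scheme.)"""
    sparse = video[::spatial_stride]                          # sparse, full resolution
    dense_lowres = video[:, ::motion_scale, ::motion_scale]   # dense, low resolution
    return sparse, dense_lowres

def spatial_features(sparse_frames):
    # Stand-in for a learned quality-aware CNN: per-frame mean/std,
    # pooled over the sampled frames.
    return np.stack([[f.mean(), f.std()] for f in sparse_frames]).mean(axis=0)

def motion_features(dense_frames):
    # Temporal distortions approximated by frame-difference statistics
    # computed on the cheap low-resolution frames.
    diffs = np.abs(np.diff(dense_frames.astype(np.float64), axis=0))
    return np.array([diffs.mean(), diffs.std()])

def predict_quality(video, w=None):
    sparse, dense = sample_frames(video)
    feats = np.concatenate([spatial_features(sparse), motion_features(dense)])
    if w is None:
        # Hypothetical regression weights; in the paper these would be learned.
        w = np.ones_like(feats) / feats.size
    return float(feats @ w)

video = np.random.rand(120, 64, 64)   # 120 synthetic frames of 64x64 pixels
score = predict_quality(video)
```

The design point the sketch illustrates is the cost trade-off: the expensive full-resolution branch sees only a handful of frames, while the per-frame motion branch operates on frames reduced by the downsampling factor in each spatial dimension.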
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Subjects: | Computer Science > Computer Networks |
| Divisions: | Computer Science |
| Depositing User: | Mr IR Admin |
| Date Deposited: | 20 Sep 2024 07:13 |
| Last Modified: | 20 Sep 2024 07:13 |
| URI: | https://ir.vistas.ac.in/id/eprint/6673 |