Evaluating and Integrating Deep Learning and Audio Processing Capabilities for Heartbeat Sound Classification

Authors

  • Azim
  • V. Prathyusha

DOI:

https://doi.org/10.70914/

Keywords:

VGG-like architecture

Abstract

The use of machine learning in healthcare has been on the rise. Given the alarming number of fatalities worldwide caused by cardiovascular disease, addressing heart-related diagnostic problems is of utmost importance. This work explores the effect of feature engineering on classification accuracy. A support vector machine was supplied with three distinct sets of features: first, features extracted through audio signal processing; second, features extracted from a VGG-like architecture pre-trained on Google's AudioSet; and third, the concatenation of features extracted from the VGG16 and VGG19 architectures pre-trained on ImageNet. Finally, we merged all methods using either feature concatenation or majority voting. We compared our approaches to those in the literature and ran experiments on two datasets from the PASCAL Classifying Heart Sounds Challenge. The experimental findings suggest that spectrograms processed by deep learning and by audio signal processing may capture the same pertinent information for this application, independent of the pre-training dataset. Further experiments are encouraged to confirm this.
Index terms: cardiac sound classification, PASCAL, feature engineering, deep learning, transfer learning.
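The two fusion strategies named in the abstract (feature concatenation and majority voting over per-feature-set classifiers) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic arrays stand in for the three real feature extractors (audio-processing features, AudioSet-pretrained VGG-like embeddings, and ImageNet-pretrained VGG16/VGG19 embeddings), and all variable names are hypothetical.

```python
# Sketch of early fusion (feature concatenation) vs. late fusion
# (majority vote over per-feature-set SVMs), as described in the abstract.
# Synthetic features stand in for the three real extraction methods.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
labels = rng.integers(0, 2, size=n)  # e.g. normal vs. abnormal heartbeat

# Three hypothetical feature sets, one per extraction method,
# shifted by the label so the toy problem is separable.
audio_feats = rng.normal(size=(n, 20)) + labels[:, None]       # signal-processing features
vggish_feats = rng.normal(size=(n, 128)) + labels[:, None]     # AudioSet VGG-like embeddings
vgg16_19_feats = rng.normal(size=(n, 64)) + labels[:, None]    # ImageNet VGG16+VGG19 embeddings
feature_sets = [audio_feats, vggish_feats, vgg16_19_feats]

# Strategy 1: early fusion -- concatenate all features, train one SVM.
concat = np.hstack(feature_sets)
pred_concat = SVC(kernel="rbf").fit(concat, labels).predict(concat)

# Strategy 2: late fusion -- one SVM per feature set, then majority vote.
votes = np.stack([SVC(kernel="rbf").fit(f, labels).predict(f)
                  for f in feature_sets])
pred_vote = (votes.sum(axis=0) >= 2).astype(int)  # majority of 3 binary votes

print(pred_concat.shape, pred_vote.shape)
```

In practice the real embeddings would come from forward passes of the pre-trained networks over spectrograms, but the fusion logic is the same.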

Published

2025-06-28

How to Cite

Evaluating and Integrating Deep Learning and Audio Processing Capabilities for Heartbeat Sound Classification. (2025). INTERNATIONAL JOURNAL OF ADVANCED RESEARCH AND REVIEW (IJARR), 10(6), 82-88. https://doi.org/10.70914/
