Google's HeAR AI model could improve health diagnostics through bioacoustic analysis
It has been trained on 300 million audio samples
Google has introduced an innovative AI model called Health Acoustic Representations (HeAR) designed to detect diseases through audio analysis.
Here are a few highlights:
This model has been trained on 300 million audio samples, including sounds like coughs, sneezes, and breaths.
By identifying patterns in these health-related sounds, HeAR can support early disease detection, making it a valuable tool for healthcare.
The model has demonstrated strong performance even when trained with less data, which broadens its applicability across healthcare settings.
An India-based company, Salcit Technologies, is leveraging HeAR to improve its product, Swaasa, which focuses on early detection of tuberculosis (TB) through cough analysis.