The PhilaVerse

Google's HeAR AI model could improve health diagnostics through bioacoustic analysis

It has been trained on 300 million audio samples

Phil Siarri
Sep 02, 2024

Image: a humanoid head atop a smartphone, and a cat face. Credit: Image Creator in Microsoft Bing/DALL·E and Canva.

Google has introduced Health Acoustic Representations (HeAR), an AI model designed to help detect diseases through audio analysis.

Here are a few highlights:

  • This model has been trained on 300 million audio samples, including sounds like coughs, sneezes, and breaths. 

  • By identifying patterns in these health-related sounds, HeAR can assist in early disease detection, making it a valuable tool for healthcare. 

  • The model has demonstrated high performance even when relatively little training data is available for a task, which broadens its applicability across healthcare settings (see the illustrative sketch after this list). 

  • An India-based company, Salcit Technologies, is leveraging HeAR to improve its product, Swaasa, which focuses on early detection of tuberculosis (TB) through cough analysis.
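
To picture how the "less training data" point typically plays out, here is a minimal, hypothetical sketch in Python. It does not use HeAR's actual API: the encoder (embed_clip) is a placeholder random projection, and the audio clips and labels are synthetic, generated only so the example runs end to end. The pattern it illustrates is the general one the highlights describe: a pretrained bioacoustic encoder turns each clip into a fixed-length embedding, and only the small classifier on top needs labeled data.

    # Illustrative sketch only; this is not HeAR's real API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    CLIP_SAMPLES = 16000   # one second of 16 kHz audio
    EMBED_DIM = 512

    # Hypothetical stand-in for a pretrained bioacoustic encoder such as HeAR:
    # a fixed random projection, used purely so this example is runnable.
    PROJECTION = rng.standard_normal((CLIP_SAMPLES, EMBED_DIM)) / np.sqrt(CLIP_SAMPLES)

    def embed_clip(waveform: np.ndarray) -> np.ndarray:
        """Map one audio clip (shape: CLIP_SAMPLES,) to a fixed-length embedding."""
        return waveform @ PROJECTION

    # Synthetic "cough recordings" with binary labels (e.g., positive or not),
    # standing in for the small labeled dataset a clinic might have.
    n_clips = 200
    clips = rng.standard_normal((n_clips, CLIP_SAMPLES))
    labels = rng.integers(0, 2, size=n_clips)

    embeddings = np.stack([embed_clip(c) for c in clips])
    x_train, x_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.25, random_state=0
    )

    # A lightweight classifier on top of the embeddings; with a strong encoder,
    # this is the only part that needs labeled examples.
    clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
    print(f"held-out accuracy on synthetic data: {clf.score(x_test, y_test):.2f}")

On real recordings the encoder would be a released model checkpoint rather than a random projection, but the division of labor is the same: the heavy lifting sits in the pretrained embedding, and a lightweight task-specific model is trained on top with comparatively few labeled clips.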
