Hi. I assume this kind of animation is a wave animation in which the amplitude of each wave depends on the peer's voice activity or the user's own mic activity.
What ChatGPT says:
"The key aspects of such animations include:
> Amplitude Dependency: The amplitude (height) of each wave segment reflects the volume or intensity of the sound input. Louder sounds produce taller waves, while quieter sounds result in smaller waves.
> Frequency Distribution: The animation may use different frequencies to represent various components of the sound spectrum, creating a more visually dynamic representation of audio input.
> Real-Time Updates: The animation is synchronized with the audio input in real-time, ensuring the visuals accurately represent the ongoing voice activity.
> Interactivity or Reactivity: For applications like calls, individual waves may correspond to specific participants, making it visually clear who is speaking or generating noise."
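Based on that description, here is a minimal sketch of how the amplitude could be driven from the microphone in a browser, using the Web Audio API's `AnalyserNode` and a `<canvas>`. The canvas id, colors, and scaling constants are my own assumptions for illustration, not taken from the video:

```ts
// Minimal sketch: drive a sine wave's amplitude from mic input.
// Assumes a <canvas id="wave"> exists on the page; constants are arbitrary.
async function startWaveVisualizer(): Promise<void> {
  const canvas = document.getElementById("wave") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;

  // Ask for mic access and route the stream into an analyser node.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.fftSize);
  let phase = 0;

  function draw(): void {
    // Time-domain samples are centered around 128; their deviation ~ loudness.
    analyser.getByteTimeDomainData(samples);
    let sum = 0;
    for (let i = 0; i < samples.length; i++) {
      const v = (samples[i] - 128) / 128; // normalize to [-1, 1]
      sum += v * v;
    }
    const rms = Math.sqrt(sum / samples.length); // rough volume estimate

    // Draw a sine wave whose height scales with the RMS level.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    const amplitude = rms * canvas.height * 0.45;
    for (let x = 0; x < canvas.width; x++) {
      const y = canvas.height / 2 + Math.sin(x * 0.05 + phase) * amplitude;
      x === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
    }
    ctx.strokeStyle = "#4caf50";
    ctx.lineWidth = 2;
    ctx.stroke();

    phase += 0.15; // keep the wave moving even at constant volume
    requestAnimationFrame(draw);
  }

  draw();
}
```

For a call UI where each wave corresponds to a participant, I imagine the same idea applies, just with one `AnalyserNode` per remote `MediaStream` instead of the mic stream.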
Here is the attached video link: