Macavity

Acoustics and Interdisciplinary Science?


Acoustics & Interdisciplinary Science? Seen, but never heard (vice versa?)! 😇

Many fields of science (quantum mechanics, spectroscopy, stellar physics?)
are underpinned by the classical mechanics of vibration... and acoustics? It is
a good thing to be wary of *analogies*, lest we end up back in the "Age of the
Four Elements"? But I liked the idea of an "Acoustic Uncertainty Principle": 🥳

(And anyway, I have always wondered about):
What's the Shortest Note with a Recognisable Pitch? **
N.B. (internet) Guitarists forever compete re. who can play (shred) fastest! 😛

We then venture into other fields... Biology...  The Difference Limen?!?!
https://en.wikipedia.org/wiki/Just-noticeable_difference
Some of this made me think Visual Astronomy? 🤔

** Perhaps some of this is also experimentally accessible to the amateur?
But then, to me, that is the "Joy of Science"... The many overlaps! 😺


Similar fundamental principles apply in terms of acoustic uncertainty. The longer the time window one uses, the better one can measure the frequency of a component (the finer the frequency resolution), but the worse one can localise that component in time. And vice versa.
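This tradeoff is easy to see numerically. A sketch (NumPy assumed; the two test tones and the window lengths are just illustrative choices): with a 10 ms window the FFT bins are ~100 Hz apart, so two tones 40 Hz apart blur into one peak, while a 200 ms window (5 Hz bins) separates them cleanly, at the cost of each frame covering 200 ms of the signal's history.

```python
import numpy as np

fs = 8000                       # sample rate (Hz)
f1, f2 = 1000, 1040             # two tones only 40 Hz apart

def peak_count(duration_s):
    """Count spectral peaks (local maxima above half the global max)
    in a Hann-windowed FFT of the two-tone signal, for an analysis
    window of the given length in seconds."""
    t = np.arange(int(duration_s * fs)) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return sum(1 for i in range(1, len(mag) - 1)
               if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]
               and mag[i] > mag.max() / 2)

# 10 ms window: bin spacing ~100 Hz, the two tones merge into one peak.
# 200 ms window: bin spacing 5 Hz, two clearly separate peaks.
```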

The spectrogram as used in sound analysis embodies one particular choice of this frequency/time resolution tradeoff, and what's more, its frequency resolution is the same at all frequencies.

The ear (specifically, the cochlea) is different. It has very good frequency resolution (and hence poor time resolution) at low frequencies (sufficiently good to easily resolve the harmonics of the human voice, for instance), but poor frequency resolution (and good time resolution) at high frequencies (say 2 kHz and above).
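One common way to put numbers on this is the Glasberg & Moore ERB formula, which approximates the bandwidth of the auditory filter at a given centre frequency (a sketch; the two example frequencies are just illustrative):

```python
def erb(f_hz):
    """Equivalent Rectangular Bandwidth (Hz) of the auditory filter
    centred at f_hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# At 100 Hz the filter is ~35 Hz wide: narrower than the 100 Hz spacing
# of a low voice's harmonics, so those harmonics are resolved.
# At 5000 Hz it is ~565 Hz wide: harmonics fuse, but a broad filter
# responds quickly, giving the good time resolution mentioned above.
```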

This is all accessible to the amateur. A good starting point for spectrograms is the excellent freeware Praat (http://praat.org). I have a lot of Python code for going further, e.g. producing cochleagrams.
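For anyone wanting to stay in Python, a bare-bones spectrogram needs nothing beyond NumPy. A sketch (the function name and parameters are my own; `nperseg` is precisely the time/frequency tradeoff knob discussed above):

```python
import numpy as np

def stft_mag(x, fs, nperseg, hop=None):
    """Magnitude short-time Fourier transform: a crude spectrogram.
    Returns (freqs_hz, mag) with mag shaped (n_freqs, n_frames)."""
    hop = hop or nperseg // 2
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win
              for i in range(0, len(x) - nperseg + 1, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1)).T
    return np.fft.rfftfreq(nperseg, 1 / fs), mag

fs = 16000
t = np.arange(fs) / fs                         # 1 s of signal
x = np.sin(2 * np.pi * 440 * t)                # an A4 sine tone
freqs, mag = stft_mag(x, fs, nperseg=1024)
peak_hz = freqs[mag.mean(axis=1).argmax()]     # ~440 Hz
```

Halving `nperseg` halves the number of frequency bins but doubles the frame rate: the tradeoff made concrete.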

Martin

  • Like 1

Share this post


Link to post
Share on other sites

Good stuff Martin! I looked through the Praat manual and it made me think of another (random) idea I had:
creating musical CHORDS from the formulae of the individual notes. How many "cycles" would need to
be stored (and repeated)? But that was back in the '80s, when 8K x 8 static memories were state of the art! 😛
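For what it's worth, the "how many cycles" question has a tidy answer when the note frequencies are in small-integer ratios: the summed waveform repeats at the greatest common divisor of the frequencies, so only one period of the combined wave needs storing. A sketch (the 4:5:6 just-intonation triad is just an example):

```python
from math import gcd
from functools import reduce

def chord_cycle(freqs_hz):
    """For a chord whose frequencies are integers in Hz, return
    (repeat_freq, cycles_of_each_note): the combined waveform
    repeats at the GCD of the note frequencies."""
    f0 = reduce(gcd, freqs_hz)
    return f0, [f // f0 for f in freqs_hz]

# A just-intonation major triad at 400, 500, 600 Hz (ratio 4:5:6):
# the summed wave repeats 100 times a second, so one stored cycle
# holds 4 cycles of the root, 5 of the third and 6 of the fifth.
repeat_hz, cycles = chord_cycle([400, 500, 600])
```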

(Via work) We got our hands on one of the first readily available speech-synth chips. It was great if you
knew beforehand (had programmed) what it was supposed to be saying! (Not QUITE that bad, but? lol)  🥳

Trying not to drift too far on this... lest it cease to be "science"! But this one also caught my eye:
https://en.wikipedia.org/wiki/Two_hundred_fifty-sixth_note
I reckon that even the "best" (electric) guitarists might struggle to outpace Mozart... Bach etc. 😏


Chris, have you checked out Sonic Pi (https://sonic-pi.net)? It's not just for the RPi (I have it on my Mac and use it to try to get my son and his friends into programming - it seems to work!). It's great for exploring chords etc.

Speech synthesis has moved on a long way. Some forms of synthetic speech can be more intelligible than natural human speech when mixed in noise. But I know what you mean: once you know what is being said, it just pops out!

Speech recognition too. In 2017 speech recognition reached parity with humans on some reasonably challenging types of speech. 

 


It's all down to Uncle Fred Fourier and his transform. Both QED and acoustics are field/wave theories, and the same extensive mathematics applies.

Regards Andrew 


The Fourier transform is certainly useful, but these days mainly as an implementation device. The ear doesn't perform a Fourier transform. It might seem a small distinction, but it is actually really important (music appreciation and speech perception would most likely be very different if the ear did do a Fourier transform!).

BTW Did you know Fourier is credited with discovering the greenhouse effect?

Martin

1 hour ago, Martin Meredith said:

The Fourier transform is certainly useful, but these days mainly as an implementation device. The ear doesn't perform a Fourier transform. It might seem a small distinction, but it is actually really important (music appreciation and speech perception would most likely be very different if the ear did do a Fourier transform!).

BTW Did you know Fourier is credited with discovering the greenhouse effect?

Martin

I have no idea what the ear-brain system actually does. It's not an area of research I have kept up with. What is the current view?

The point I was making was that the uncertainty relation between time and frequency on the one hand, and position and momentum on the other, is linked via the Fourier transform.
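That link can be checked numerically: the Gaussian is the minimum-uncertainty waveform, and the product of its RMS time-spread and RMS frequency-spread lands on the bound 1/(4π). A sketch (NumPy assumed; the pulse width and sample rate are arbitrary choices):

```python
import numpy as np

sigma_t = 0.01                                  # Gaussian pulse width (s)
fs = 100_000
t = np.arange(-0.5, 0.5, 1 / fs)
x = np.exp(-t**2 / (2 * sigma_t**2))

def rms_width(axis, power):
    """RMS spread of the distribution given by `power` over `axis`."""
    p = power / power.sum()
    mean = (axis * p).sum()
    return np.sqrt((((axis - mean) ** 2) * p).sum())

X2 = np.abs(np.fft.fft(x)) ** 2                 # power spectrum
f = np.fft.fftfreq(len(t), 1 / fs)
product = rms_width(t, x ** 2) * rms_width(f, X2)
# product ≈ 1 / (4 * pi) ≈ 0.0796: the time-frequency uncertainty bound
```

Any non-Gaussian pulse gives a strictly larger product; only the bound itself is waveform-independent.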

Regards Andrew 

PS I was not aware of Fourier's contribution to the greenhouse effect. Thanks for that.


There's a lot going on in the ear-brain system, but just dealing with the ear part of it... if one were to unroll the cochlea so that frequencies are laid out in a line from low to high, we'd have about 35 mm (of basilar membrane) running from 20 Hz at one end to 20 kHz at the other. Frequencies are mapped onto those 35 mm approximately logarithmically, so 1000 Hz is about halfway along.

The fact that the map is quasi-logarithmic rather than linear in frequency is important because, amongst other things, it makes it easier for the brain to normalise differences in fundamental frequency: they become just a linear shift in the pattern. The reason is that when the fundamental frequency f0 of a harmonic series f0, 2*f0, 3*f0, ... changes, all of the harmonics shift by a multiplicative amount in linear frequency (e.g. 100, 200, 300 -> 150, 300, 450), whereas they shift by a constant amount in log frequency.

This not only applies to harmonics when, say, the voice pitch changes; it also applies to the frequencies that correspond to resonances of the vocal tract, the things that define, for example, which vowel we are hearing. Differences in vocal tract length between talkers (especially between genders, and between adults and children) can amount to 20% or so, which gives rise to different resonant frequencies for identical vowels. Hence the brain has to solve a vowel normalisation problem, and this amounts to a simple linear shift in the pattern when the spectrum is represented on a log scale.
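The "constant shift on a log axis" point is easy to verify numerically (a sketch; the pitch values are just illustrative):

```python
import numpy as np

f0_a, f0_b = 100.0, 150.0                  # voice pitch rises 100 -> 150 Hz
harmonics_a = f0_a * np.arange(1, 6)       # 100, 200, 300, 400, 500
harmonics_b = f0_b * np.arange(1, 6)       # 150, 300, 450, 600, 750

# On a linear axis each harmonic moves by a different amount...
linear_shift = harmonics_b - harmonics_a   # 50, 100, 150, 200, 250

# ...but on a log axis the whole pattern slides by one constant
# offset, log(1.5), which is what makes normalisation a simple
# translation for the brain.
log_shift = np.log(harmonics_b) - np.log(harmonics_a)
```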

Not that the ear was designed for speech. But harmonic sounds (i.e. sounds driven by a repetitive waveform) are common in nature.

MPEG compression exploits this (and other) properties of hearing.

