Location: Germany (UTC +1/+2), EU citizen
Remote: preferred
Willing to relocate: no
Technologies: C# (and previously C++ and Java); I prefer a functional style and privately dabble in F# and Haskell. I also know SQL, Azure, Docker, and high-performance computing (Monte Carlo simulations); see also CV
CV: https://stash.ldr.name/wwtbh/rcv-202604-77bf.pdf
Email: whoshiring-77bf /-\T ldr D()T name
I work in mathematical finance, so I have a lot of domain knowledge in that area (derivatives, pricing, probability theory).
For incoming mail, your client should check regardless of the server provider. On Thunderbird I have this extension: https://github.com/mcortt/EagleEye . It checks for any SPF, DKIM, or DMARC failure and shows a banner. SPF/DKIM/DMARC is the bare minimum, though, and pretty useless against spam. All the phishing e-mails in my Gmail account have impeccable SPF/DKIM records.
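The banner idea can be approximated in a few lines. Here is a minimal sketch (not the extension's actual logic) that scans the value of an Authentication-Results header (RFC 8601) for SPF/DKIM/DMARC results other than "pass"; the header string below is a made-up example:

```python
import re

def auth_failures(auth_results: str) -> list[str]:
    """Return the mechanisms (spf/dkim/dmarc) whose result is not 'pass'.

    Expects the value of an Authentication-Results header (RFC 8601), e.g.
    'mx.example.com; spf=pass ...; dkim=fail ...; dmarc=pass ...'.
    A missing mechanism is also reported as a failure.
    """
    failures = []
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", auth_results)
        if m is None or m.group(1).lower() != "pass":
            failures.append(mech)
    return failures

header = ("mx.example.com; spf=pass smtp.mailfrom=example.org; "
          "dkim=fail header.d=example.org; dmarc=pass")
print(auth_failures(header))  # → ['dkim']
```

A real checker would have to handle multiple Authentication-Results headers and multiple dkim= results per header, but the banner logic itself is this simple.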
IPv6 still allows proper NAT (prefix translation), but even then finding your global address wouldn’t need TURN, just STUN, actually not even that, just a service like “What’s My IP.”
It does allow it in the sense that it's possible, and even useful in some scenarios, but then you're on a weird experimental network and not a normal one.
That's how it works in IPv6. If your network doesn't give you an address, it's broken. We do not assume an unfiltered network, since we are talking about hole punching.
I thought TURN was for symmetric PAT, not for proper NAT (which just needs STUN for address determination) or full/restricted-cone PATs (which need STUN for address and port determination and then, in the restricted-cone case, a hole punch).
Standard-conforming IPv6 at most allows prefix translation (i.e., proper NAT, not PAT), which wouldn’t need it.
You can decode a PAL signal without any line memory; the memory is only needed to correct for phase errors. In SECAM, though, it's a hard requirement, because the two color-difference components, Db and Dr, are transmitted on alternating lines, and you need both on every line.
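The SECAM constraint can be illustrated with a toy model, assuming an idealized stream where lines alternately carry a Db or a Dr value: each output line pairs the component just received with the one held over from the previous line, which is exactly the job of the one-line delay memory.

```python
def decode_secam_chroma(lines):
    """Toy SECAM chroma decode.

    lines: sequence of (component, value) pairs, where component
    alternates between 'Db' and 'Dr' on successive lines.
    Returns one (Db, Dr) pair per line, filling in the missing
    component from the previous line (the one-line delay memory).
    """
    held = {"Db": None, "Dr": None}  # simulates the delay-line memory
    out = []
    for comp, value in lines:
        held[comp] = value
        out.append((held["Db"], held["Dr"]))
    return out

stream = [("Db", 0.1), ("Dr", 0.7), ("Db", 0.2), ("Dr", 0.8)]
print(decode_secam_chroma(stream))
# → [(0.1, None), (0.1, 0.7), (0.2, 0.7), (0.2, 0.8)]
```

Without `held`, no line could ever show both components at once; the very first line still lacks one component because there is nothing to hold over yet.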
Someone else wrote that it was chosen to best match PAL and NTSC. IIRC there is also a Technology Connections video about those early PCM adaptor devices that would record to VHS tape.
48kHz and 44.1kHz devices appeared at roughly the same time. Sony's first 44.1kHz device shipped in 1979. Philips wanted to use 44.0kHz.
If you can do 44.1kHz on an NTSC recording device, you can do 44.0kHz too. Neither NTSC digital format uses the full available space in the horizontal blanking intervals on an NTSC VHS device, so using less really isn't a problem.
Why is 44.0kHz better? There's a very easy way to do excellent sample-rate conversion from 44.0kHz to 48kHz: you upsample the audio by 12 (by inserting 11 zeros between each sample), apply a 22kHz low-pass filter, and then decimate by 11 (by keeping only every 11th sample). To go in the other direction, upsample by 11, filter, and decimate by 12. Plausibly implementable on 1979 tech, and trivially implementable on modern tech.
To perform the same conversion from 44.1kHz to 48kHz, you would have to upsample by 160, filter at a sample rate of 160x44.1kHz, and then decimate by 147. Or upsample by 147, filter, and decimate by 160. Impossible with ancient tech, and challenging even on modern tech. (I would imagine modern solutions use polyphase filters instead, with table sizes that would be impractical on 1979 VLSI.) Polyphase filter tables for 44.0kHz/48.0kHz conversion are massively smaller too.
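The upsample/filter/decimate recipe can be sketched in a few dozen lines. This is a toy rational resampler (windowed-sinc FIR, no edge handling or efficiency tricks), shown converting a 1kHz tone from 44.0kHz to 48.0kHz with the 12/11 ratio described above:

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff is a fraction of the sample rate (0..0.5)."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    return taps

def resample(signal, up, down, num_taps=101):
    """Rational resample by up/down: zero-stuff, low-pass filter, decimate."""
    stuffed = []
    for s in signal:
        stuffed.append(s * up)            # scale to compensate for zero-stuffing
        stuffed.extend([0.0] * (up - 1))  # insert up-1 zeros between samples
    cutoff = 0.5 / max(up, down)          # anti-imaging / anti-aliasing cutoff
    taps = lowpass_fir(num_taps, cutoff)
    filtered = [sum(taps[k] * stuffed[i - k]
                    for k in range(num_taps) if 0 <= i - k < len(stuffed))
                for i in range(len(stuffed))]
    return filtered[::down]               # keep every down-th sample

# 44.0kHz -> 48.0kHz is up=12, down=11; 44.1kHz -> 48.0kHz would need up=160, down=147.
tone = [math.sin(2 * math.pi * 1000 * n / 44000) for n in range(440)]  # 10 ms of 1 kHz
out = resample(tone, 12, 11)
print(len(tone), len(out))  # → 440 480
```

The 160/147 case runs through exactly the same code, but the intermediate rate is 160 × 44.1kHz ≈ 7.06MHz, which is why the small 12/11 ratio was so much friendlier to 1979 hardware.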
As for the prime factors... the factors of 7 (twice) in 44100 really aren't useful for anything. More useful would be factors of two (five times), which would increase the greatest common divisor with 48000 from 300 to 4,000!
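The arithmetic here is easy to check with the stdlib: 44100 = 2^2 x 3^2 x 5^2 x 7^2, while 44000 = 2^5 x 5^3 x 11, and the GCD with 48000 determines the up/down ratio of the rational resampler (48000/gcd and rate/gcd).

```python
import math

print(44100 == 2**2 * 3**2 * 5**2 * 7**2)  # → True
print(44000 == 2**5 * 5**3 * 11)           # → True
print(math.gcd(44100, 48000))              # → 300   (hence 160/147 conversion)
print(math.gcd(44000, 48000))              # → 4000  (hence 12/11 conversion)
```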
> Now you could play it back wrong by emitting a sharp pulse f_s times per second with the indicated level. This will have a lot of frequency content above 20kHz and, in fact, above f_s/2. It will sound all kinds of nasty.
Wouldn’t the additional frequencies be inaudible with the original frequencies still present? Why would that sound nasty?
Because the rest of the system is not necessarily designed to tolerate high-frequency content gracefully. Any nonlinearity can easily cause that high-frequency junk to fold back into audible junk.
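A toy numeric check of that folding-back effect (my own illustration, not from the thread): feed two inaudible ultrasonic tones through a mild quadratic nonlinearity and an audible difference tone appears. With tones at 25kHz and 26kHz, the squared term produces a component at 1kHz.

```python
import math, cmath

fs = 96000
n = 9600  # 0.1 s of signal; all tones land on exact DFT bins
# Two tones above the audible range:
x = [math.sin(2 * math.pi * 25000 * t / fs) + math.sin(2 * math.pi * 26000 * t / fs)
     for t in range(n)]
# Mild 2nd-order nonlinearity, e.g. from an overdriven amplifier stage:
y = [v + 0.1 * v * v for v in x]

def dft_mag(sig, freq):
    """Amplitude of the DFT bin at `freq` Hz (freq * n / fs must be an integer bin)."""
    k = round(freq * len(sig) / fs)
    return abs(sum(s * cmath.exp(-2j * math.pi * k * t / len(sig))
                   for t, s in enumerate(sig))) / len(sig)

print(dft_mag(x, 1000))  # essentially 0: no 1 kHz content in the clean signal
print(dft_mag(y, 1000))  # ~0.05: audible 1 kHz tone created by the nonlinearity
```

Algebraically: 0.1·(sin a + sin b)^2 contains 0.1·cos(a−b), i.e. a 1kHz cosine of amplitude 0.1, whose single-sided DFT bin magnitude is 0.05.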
This is like the issues xiphmont talks about with trying to reproduce sound above 20kHz, but worse, as this would be (trying to) play back high-energy signals that weren't even present in the original recording.
That would mean that higher sampling rates (which add more inaudible frequencies) could cause similar problems. OK xiphmont actually mentions that, sorry, I had only watched the video when I replied.
If I were designing a live audio workflow from scratch, my intuition would be to sample at a somewhat high rate (at least 48kHz, maybe 96kHz), do the math to figure out the actual latency/data-rate tradeoff, but also filter the data as needed to minimize high-frequency content (again, being careful with latency and fidelity tradeoffs).
But I have never done this and don't have any plans to do so, so I'll let other people worry about it. But maybe some day I'll carry out my evil plot to write an alternative to brutefir that gets good asymptotic complexity without adding latency. :)
This is a nice video. But I'm wondering: do we even need to recover the original signal from the samples? The zero-order-hold output contains the same audible frequencies, doesn't it? If we only want to listen to it, the stepped wave would be enough then.
ahh, it might have been spellcheck then. I turned off all that stuff. In the heat of the moment, maybe I was a bit too angry to do proper root cause analysis :P
I am looking for work in other domains as well.
Happy to provide you with a full CV personally.