A Potentially Better Way To Balance Streaming Audio Levels
Old 04-11-2017, 09:44 PM   #1
TheStumps

So I've started fiddling around with my own take on regulating "loudness" levels in a stream.

Current systems employ one of three approaches:
A) LUFS
B) DNR
C) LUFS + DNR

Quite a few are moving to (A) because it's fairly easy to program: you just check the PLR and PSR (the peak-to-loudness and peak-to-short-term-loudness ratios) and regulate as needed to match the stream.

This is similar to what's done in broadcast TV.
As it happens, broadcast TV is somewhat my back yard (glances over at my copy of SCTE handbook and several volumes of material on broadcast methodology and MPEG compression).

This is where I became aware of Orban (detailed at length in my thread on that subject).

What I noticed, however, is that this method of stream loudness control doesn't take into account how the human ear responds to bandwidth.

What I mean is: take a song full of mid-to-low frequency content, and another with low, mid, and high frequency content, and set them both according to any of the three methods above. The second song will feel louder simply because it has a broader range of frequencies over which to distribute that dB figure (whatever you're measuring your dBs with).

Now, I'm not going to worry about the acoustics of listening distance, etc., as we can assume the vast majority of stream listeners are on headphones or relatively close to their speakers (and those speakers are probably not robust enough to make room acoustics worth worrying about).

We're going for "bang-for-the-buck" here.

So I set about figuring out a way to factor all three aspects in:
DNR (a combined factor of peak and body averages), body average (in LUFS/LKFS), and frequency bandwidth.

So, before I describe what I've gotten worked out so far, here's a test file that I've put together.
It's one of my songs [a], Smells Like Teen Spirit (Nirvana) [b], and Less Than (NIN) [c].

I think this is an interesting mixture because [a] is purely digital with perfected note control and announcement, [b] is pure analog where everything shows up in a blur on a spectrogram, and [c] where there's a bit of both worlds crammed into the mix.

Further, [a] has a VERY wide spectrogram (it fills the entire thing, actually), [b] and [c] are about half that because they use cut-offs (I don't - I won't get into that at this time), and [c] has a far lower DNR than [a] or [b].

Here's the stats:
Fugue (my song)
avg DNR: 10.31176678
avg peak: -8.33385159
avg LKFS: -18.64561837
---threshold for the following is frequency greater than -100 dB---
frequency bandwidth: 21963.86719
average dB in frequency bandwidth: -51.00393376
average dB in frequency bandwidth stdv: 9.779809252
average frequency dB bandwidth: -19.5596185

Nirvana
avg DNR: 10.7070073
avg peak: -8.702007299
avg LKFS: -19.4090146
---threshold for the following is frequency greater than -100 dB---
frequency bandwidth: 16106.83594
average dB in frequency bandwidth: -44.30591909
average dB in frequency bandwidth stdv: 13.94248188
average frequency dB bandwidth: -27.88496376

NIN
avg DNR: 6.249693396
avg peak: -8.057948113
avg LKFS: -14.30764151
---threshold for the following is frequency greater than -100 dB---
frequency bandwidth: 15719.23828
average dB in frequency bandwidth: -42.29652462
average dB in frequency bandwidth stdv: 16.55803949
average frequency dB bandwidth: -33.11607898

Oh, right...the audio file.
Here you go - play it or download it direct from my Dropbox (this is to avoid influence from streaming services like Soundcloud, etc.):


So, the issue is that we're looking at that average frequency dB bandwidth.
That is: how powerful is the range of variation per frequency range?
The tighter that value, the "louder" something will seem; the wider, the less so.

Well...as a FACTOR, anyway - keep in mind we're also going to figure in the Integrated Loudness (in LUFS/LKFS) and the DNR (crest factor converted to dB). So the "loudness" that might be partly contributed by a narrow average frequency dB bandwidth...hmm...let's call that measure AFB for short.

Anyway, anything the AFB contributes to loudness may be offset or mitigated by the Integrated Loudness or DNR considerations.

But first....I guess I should start with WHY these values?
Well, we already went over the AFB.
The Integrated Loudness is basically a K-weighted average of self-relative level (how loud is it with respect to itself in comparison to full scale level).
This is basically to say "how loud is it, essentially, at its core".

The DNR is as to ask "how tall is the amplitude spread".

So we have Girth (LUFS), Height (DNR), and total Width (AFB).

You can also think of this as a person's Shoulder Width, Height, and Arm Span - metaphorically.

We don't want to know how to line up the trunks of armless people next to each other.
We want to know how to line up a whole bunch of people spinning in circles with their arms out.


So...this is a bit complicated, but ....

We set a ridiculous baseline that no one should reasonably hit so that we can compare against.
We only need them for LUFS/LKFS and DNR.
I used -23 for LUFS and 18 for DNR
(fun fact: -23 LUFS is the standard for broadcast TV, but no one's mixing music that low....yet).

We then compare how far off each of the samples is from these values.
For the LUFS, we take a note from the establishment of LUFS/LKFS in the first place: the studies behind it found that individuals were typically sensitive to level changes of about +/- 4 dB between reference frames.

For us, that means that for every 4 dB the sample song differs from the baseline LUFS in one direction, we move 1 dB in the opposite direction.

For the DNR, things are a tad different, because this isn't a straight measure of dB...exactly, per se.
It is and it isn't.
It's more of a factor, and it's more a value of how tightly packed things are than not.
So we take the difference of the sample's DNR from the baseline DNR as a factor of 4 and multiply that by the LUFS factor from before.

This provides a reduction (or, theoretically, an increase) value when both are combined.
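Sketched in Python (a literal reading of the two rules above - the function names are mine, and since my posted numbers include some manual rounding, don't expect them to reproduce my intermediate values to the last decimal):

```python
BASELINE_LUFS = -23.0  # broadcast TV loudness standard
BASELINE_DNR = 18.0    # a deliberately generous dynamic range ceiling

def lufs_factor(lufs):
    # For every 4 dB the song sits away from the baseline,
    # move 1 dB in the opposite direction.
    return (lufs - BASELINE_LUFS) / 4.0

def dnr_factor(dnr, lufs):
    # The DNR shortfall from baseline, as a factor of 4,
    # multiplied by the LUFS factor from above.
    return (BASELINE_DNR - dnr) / 4.0 * lufs_factor(lufs)

# Fugue's stats: avg LKFS -18.64561837, avg DNR 10.31176678
# lufs_factor(...) comes out to roughly +1.09, i.e. pull down about 1.09 dB
```

How exactly the two factors combine into the intermediate values listed further down isn't fully spelled out here, so treat this as the shape of the calculation rather than the spreadsheet itself.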


We're almost there.
Now for the AFB.

The AFB is a work in progress because I'm building this part from scratch.

Everything prior was just my take on an already extant practice.
-----
For those reading along with a bit of math savviness, you might be wondering why a PSD or FT/FFT wasn't employed below.
Basically, the reason comes down to a lack of power information (volts, etc.).
We're not on a real mixing board in this idea - we're just receiving files, ingesting them, and regulating them.
I do plan, at some point, to figure out a way to employ PSD and FT/FFT, as that's further down the ideal path for what we're after in the AFB. Until then, the below suffices.
-----

So, for the AFB we plot the spectrum to start with.
For this, I used Audacity as it has a spectrum plot logging export feature which also includes a dB reading for the plotted frequency.

For a mental image, this data looks like this:
Code:
Frequency (Hz)	Level (dB)
43.066406	-16.037096
86.132813	-15.827577
129.199219	-19.403009
172.265625	-22.222815
215.332031	-25.161371
258.398438	-25.621271
301.464844	-26.882242
344.53125	-27.288042
etc....
We set the threshold to ignore any frequency plot entry below -100 dB.

Then we take the lowest listed frequency and the highest listed frequency and create the bandwidth between them (the latter minus the former).

Next, we take the average dB of the spectrum plot entries above -100.
Following the average, we find our standard deviation.
We then apply the standard deviation as a + and - to that average and take the difference between the max and the min (in other words, twice the standard deviation).
This becomes our frequency dB bandwidth.
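Here's the whole AFB measurement as a Python sketch (assumptions of mine: a population standard deviation, and input shaped like the Audacity export above; I return the frequency dB bandwidth as a magnitude, since its sign gets flipped later anyway):

```python
import statistics

def afb_stats(spectrum, threshold_db=-100.0):
    # spectrum: (frequency_hz, level_db) pairs, as exported from a spectrum plot
    kept = [(f, db) for f, db in spectrum if db > threshold_db]
    freqs = [f for f, _ in kept]
    levels = [db for _, db in kept]
    bandwidth_hz = max(freqs) - min(freqs)   # highest minus lowest frequency
    mean_db = statistics.fmean(levels)       # average level above threshold
    stdv = statistics.pstdev(levels)         # spread of those levels
    # (mean + stdv) - (mean - stdv) collapses to twice the standard deviation
    freq_db_bandwidth = 2.0 * stdv
    return bandwidth_hz, mean_db, stdv, freq_db_bandwidth
```

You can check this against the stats at the top: Fugue's stdv of 9.779809252 doubles to the 19.5596185 listed as its average frequency dB bandwidth.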
----------------------
KEEP IN MIND
What we're after here is a total number of dB to turn something down by, based on how wide and verbose the frequency spectrum is. That means we have to figure out a way to convert a frequency bandwidth into some dB value of regulation.
That's what's going on here, especially in what follows.
----------------------
Now we take the bandwidth of frequency over -100 that we found earlier (which will typically be in the kHz range) and divide it by the average frequency dB bandwidth.


This gives us a representative ratio of frequency span to dB spread.

Now, we flip this value to a positive value for computational purposes, and then multiply it by the DNR.
Why? Because we want the range of amplitude (how tight/loose) to have some level of influence on the quantification of the frequency bandwidth analysis.

The output of this will run into the thousands to tens of thousands, where the previous value tended to sit in the hundreds to low thousands, because most DNRs land at or near at least 8 through 10.

We don't want an artificially widened bandwidth, so we need to scale the value back down now that we've let the DNR influence it.
While we're at it, though, we're not only going to down-scale it - we're also going to derive the value we'll turn the song down by, and we'll do that through the age-old practice of grabbing for Pi. (nomnomnomnom)

So, we take the frequency spectrum value and divide it by (1000*Pi).
The result will be a value that will be used to turn a song down by.

For example, the previously listed stats of the [a], [b], and [c] songs above result in 3.685798921, 1.968604061, 0.944281381 respectively.

Now we pick back up where we left off previously where we deducted the differences of the Integrated Loudness and DNR.
That previously left us with: [a] -20.39561837, [b] -20.7215146, [c] -19.80764151

So...yep...we take those frequency bandwidth values and subtract them from the adjusted levels above.
Thus, arriving at: [a] -24.0814173, [b] -22.69011866, [c] -20.75192289
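For the curious, the tail end of the math checks out in code - this reproduces the three reduction values and the three final targets from the stats at the top of the post (assuming, as I do, that the AFB is taken as a magnitude before dividing):

```python
import math

def afb_reduction(bandwidth_hz, freq_db_bandwidth, dnr):
    # bandwidth per dB of spread, weighted by DNR, scaled back down by 1000*Pi
    return bandwidth_hz / abs(freq_db_bandwidth) * dnr / (1000.0 * math.pi)

def final_target(adjusted_lufs, reduction):
    # subtract the AFB reduction from the LUFS/DNR-adjusted level
    return adjusted_lufs - reduction

# Fugue: afb_reduction(21963.86719, -19.5596185, 10.31176678) -> about 3.6858
#        final_target(-20.39561837, 3.6858...) -> about -24.08
```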


For this test phase, I rounded a lot of these deductions, because I'm a human near 40 moving dials with a mouse - I can't get it as perfect as an SQL server set up to quantify all of this information automagically.
In reality, you wouldn't round off - so there's a BIT of a difference between the sound file at the top and what a server would produce....but not much.

So these values became [a] -24, [b] -23, [c] -21.


All that was left (finally) was to whip up Audacity and reset each sample clip to each of the above, and then combine the three clips into one file and see if they lined up.

Because, YES, I COULD have just plopped them all into Audacity and visually made them fit, but that's not good because it wouldn't work for the idea of stream control.
I needed to see the numbers accomplish what my eyes would do naturally for the idea to be successful as a test system.

Also, after everything got THIS squashed and patched back together, I re-amplified the entire combined track by around 10 dB to bump it back up to a reasonable range of about -14 to -16 LKFS/LUFS.
This is because, on a stream, the final output would be set back to some typical "normal" level (probably around a +10 dB amplifier, since everything was reduced down into the -20s).

The result looks like this - I outlined each song's section in the left channel and left the right channel as is, so you can look at the raw waveform:
Anywho....
Something I'm working on.


Also, I find this actually interesting to use for my own purposes.
So, if I set a song to - let's say - hit -14 LKFS, then I can take the frequency plotting information into consideration and adjust it down or up so that the right LKFS value is set to accomplish the intended IMPRESSION of the LKFS level.

------------------
You, or anyone else, are free to do anything with the music that I make. I consider all of my productions to be public domain. If someone asks, I will give them the source files and any related sample files if they are needed. Music is a dialogue, not a speech. Any listener must be free to become the speaker at any time for the life of the dialogue to be retained. Let us, then, discuss in tone.
Old 05-11-2017, 02:26 AM   #2
TheStumps

Given that the above post is rather complicated (and not, probably, well articulated), and that I strongly doubt anyone around here runs a streaming service...

I'm going to build an Excel workbook that does all the calculating for you (you'll have to feed it an Orban and Audacity data dump).
It will then spit out what to set your song to for LUFS.

Why?
Well, I'd be pretty damn interested to see opinions and results for "albums" or "sets" (basically, multiple songs of yours intended to be played in order or as a collection).
The idea is that it would help unify the levels of your collection of songs by taking out the fudge/guess work.

I'll post the tool in a new thread when I complete it.

Old 05-11-2017, 05:56 PM   #3
Daggit

Interesting topic. I skipped over the math part and the rest of the post after thinking about the main ideas already stated. I'll go back and read the math and the rest later.

Personally, I find a bit of merit to this topic, even though I'm not entirely sure about some of it. As far as I had read, some of the loudness meters ARE modelled after human hearing in terms of frequency content. But I don't know much about the topic. After reading some technical articles and definitions, I thought about it, and then decided to give up on it, because I don't feel that modern-day standards and trends are better than the best of what came during the 1980s, 1990s, and early 2000s from productions that were NOT victims of the Loudness War.

I don't do streaming, so yeah, I'm staying out of that thing. I'm primarily focussed upon composing and mixing and mastering for RedBook Audio (44.1 kHz stereo 16-bit linear PCM) or better (48 kHz linear PCM at 16-bit stored as stereo FLAC, which is lossless and includes metadata) or originals (48 kHz linear PCM at 32-bit float / IEEE in a stereo WAV file).

If I get a portable media player again this year, I'm going to Rockbox it so that it plays WavPack, so I can listen to 48 kHz 32-bit float stereo WavPack files. Most of the portable media players that can already play FLAC can also do WavPack after being Rockboxed.

I figure that if the final mixdown sounds good as CD / WAV / FLAC / WV, then it will probably be just fine streamed as is. I don't believe in "mixing for the streaming format". I don't feel that enough time has passed to prove that this is a worthwhile activity.

Several extremely successful commercial releases from the 1990s, and even a few from the late 1980s, sound great as is, especially in their original CDDA (compact disc digital audio) format or archived as FLAC (optionally with metadata and album artwork).

I do, however, notice that some MP3 versions of such tunes, and even the M4A versions, sometimes don't sound as full and deep and bassy and wide as the CD originals. I think this wouldn't be as noticeable if the MP3 files were transcoded to 320 kbps CBR in stereo mode instead of joint stereo. Those I usually find indistinguishable from the originals, even if I might still wonder whether I'm missing out on anything. When I compared an M4A of one of my bass-heavy original tunes to the original WAV, I noticed that the bass seemed not as strong and wide as in the WAV. At first I thought something was wrong with my playback system or headphones, but then I noticed I was playing the wrong file (the M4A). When I switched back to the WAV, it sounded like the original.

Naturally, I can't hear the difference between final RedBook audio and final 48 kHz 32-bit float; but I like that I don't have to transcode if I keep it as a high-quality float. I can rest assured it's exactly what I heard during mixdown and pretty much IS the same file as the final draft, because usually it ACTUALLY IS the same WAV file.

Also, sometimes the "remastered" versions sound worse than the originals in terms of collapsed stereo width or different elements sounding too loud or too quiet. A few remastered tunes sound exactly the same, just louder, but those seem rare to me. Most remastered tunes that I encounter sound worse than the originals despite the advantages of more modern studio tools.

Also, I tend to be annoyed with some MP3 sale sites that sell versions of hit tunes that aren't the original hit tunes, but some kind of studio or artist release of the same tune retroactively substituted into the collection. Maybe it happens because the artists who upload the tunes don't own the copyright to the masters (the record label or studio does), and so they have to upload a different version. Or maybe it happens because they think that the different version is somehow better than the version that became a hit. But for me, my musical memory is strong enough for the tunes that I liked when they were hits, so I can actually notice when they change something.

For example, Madonna's tune Live to Tell on True Blue only sounds good to me on the original CD. The rerelease sounds altered in a way that is subtle but noticeable. I'm not exactly sure of which technical thing is different; but I notice stuff like the synth flanges in the background are different.

Other times a rerelease has been tweaked so that the lead vocals are more "up front". Yet, I tend to enjoy music for everything else EXCEPT for the lyrics. So when the vocals are up front they usually sound too loud to me.

I think Imogen Heap was really smart to release a lot of her music as both with-vocals and without (instrumentals). Her instrumental versions reveal massive amounts of musical complexity which got lost underneath the vocals even though the vocal versions are impressive and fun too.

Anyways, I'm not trying to derail your thread or anything. I'm still interested in your topic. Like I said, I'll read more of what you wrote later and other people's ideas too... and I'll check out the tool too.

Thanks for posting.

------------------
{novel, fresh, different, recent, modern, unexampled}
Old 05-11-2017, 10:53 PM   #4
TheStumps

Quote:
Originally Posted by Daggit
Interesting topic....
You are quite right that LUFS/LKFS takes into account frequency differences in assessment.
What the method doesn't take into account is frequency bandwidth, which is a different thing altogether.
LUFS/LKFS considers the amplitude impression of a tonal frequency discretely, not collectively as a range.
This is the difference between saying that a violin sounds louder than a cello at the same amplitude, and saying that an orchestra sounds louder than a rock band at the same amplitude and dynamic range because the orchestra has a wider range of frequencies packed into the same amplitude and dynamic range bandwidth.

It's like this:
In one tube we pile in objects of relative volume (volume as in, how much space they consume).
We account for the difference of shape in measuring the volume of the objects rather than their dimensions alone.
Between their volume and dimensions we then say that we can pack the same tube with two different piles of objects as long as the volume and dimension information is within the same boundaries.

However, what we didn't account for was tension strength.
One pile of stuff compresses under stress and the other doesn't, so the piles aren't actually similar due to a difference of their range of compression capability.

Ignoring this information causes a difference in the actual delivery of the same volume and dimension within the tube.

That's not a 1:1 analogy, and it doesn't map each part onto a like concept in the world of audio, but it conveys the type of issue at hand when frequency bandwidth is left out of consideration.

If I take a wider frequency bandwidth at a given volume and dynamic range (which is a measure of average-to-peak amplitude, not frequency bandwidth), and a narrower frequency bandwidth at the same volume and dynamic range, the first will sound - to most ears - louder than the second, because we've packed more information for the ear into the profile.

And it's not a matter of momentary frequency consideration, as is done in LUFS/LKFS.
It's in normalized expectation of the ear.
If you have a wide range over the course of the song, then the ear - after hearing that wide range and then moving to a narrower one - is still going to feel like things are louder, because it has been set to the impression of the wider range.

That's what I'm addressing.

Now, ideally, I would one day replace my ad-hoc system with a PSD/FFT system, as that's what that category of math method is actually for (bandwidth vs. amplitude).



As to streaming as a platform...
It's what there is.
We can, as audiophiles, prefer all sorts of other methods, but the mainstream distribution system is going to remain streaming until another technological medium comes along to replace it.
The next switch won't be away from streaming, but in compression.
MPEG compression has reached close to its maximum capacity and needs a breakthrough.

Last edited by TheStumps; 06-11-2017 at 12:10 AM..

Old 06-11-2017, 05:17 AM   #5
TheStumps

The bones of the tool are done.
I just need to clean up the presentation, finish the help/instructions information, and cite the needed program downloads and test file.

On the home stretch.

Old 07-11-2017, 09:10 AM   #6
TheStumps

Here's a quick screenshot of the interface for the tool.
The aesthetics and trimmings are all done.
All that's left is to write all of the instructions and help information.

Old 14-11-2017, 02:06 AM   #7
TheStumps

Just a quick update.
The bulk of the raw material for the tutorial is finished. I'm just working on putting it all together in a tidy package.

I'm hoping to have this deployed into a thread sometime this week.

Old 14-11-2017, 08:46 AM   #8
Sound--

Interesting thread. I think I need to read it at least 10 times before I understand it.
Old 14-11-2017, 12:54 PM   #9
Focalized
Old 17-11-2017, 04:01 AM   #10
TheStumps

Work continues on the tutorial (getting so close...editing is always painful!)

In the meantime, I took my first swing at formalizing this method.

Old 23-11-2017, 08:32 PM   #11
TheStumps

Alright,

Aside from the calculator tool being released in its own thread, I am also working on a second round of tests.

After clearing it with the mod/admin team, I selected the first 5 songs at the top of the listening booth and wrote to their authors to ask permission to use their songs in a test.

This test will compare against a known standard.
I will be using Spotify's streaming regulator levels on the songs and my method and comparing the results.
Spotify uses -14 LUFS with a peak hard limiter (not a clipper) at -1 dB true peak
(meaning your maximum DNR on Spotify is going to be roughly in the 13 dB range).
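That 13 dB figure is just the limiter ceiling minus the loudness target - a trivial sketch (the function name is mine):

```python
def max_crest_factor(target_lufs, ceiling_db):
    # room between the normalized loudness target and the peak limiter ceiling
    return ceiling_db - target_lufs

# Spotify: ceiling -1, target -14 -> 13 dB between body level and peak
```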

So I'll run all five songs through both methods and post a compare/contrast results here.

I've started reaching out to the authors of the sample songs already and three have gotten back with permission granted. Two more to go.

Cheers!
TheStumps

Old 29-11-2017, 09:51 PM   #12
TheStumps

Update: I've waited for feedback from two separate final candidates, however I'm going to just get started with the four samples that I have and I'll see if I can pick up a fifth sample candidate along the way.

It will probably take a couple of weeks to process the testing and write up the results properly.

Thank you to the four authors who supported this project by lending your work as samples.

Electronic Music Forums

Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2017, vBulletin Solutions, Inc.