Exposing a playbackPosition property on AudioBufferSourceNode. #2397

Open
notthetup opened this issue Feb 28, 2014 · 74 comments

@notthetup
Contributor

AudioBufferSourceNode lacks a playbackPosition property that exposes the sample index currently being played, or the last sample index played if the AudioBufferSourceNode is not playing. The property can be read-only, and to keep in line with the rest of the API it can be a time value rather than an index.

If the playbackRate doesn't change, the current playback position can easily be calculated using AudioContext's currentTime property. But with parameter automation on playbackRate, the calculations can get pretty gnarly and inaccurate.
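
A minimal sketch of that constant-rate case (variable names such as audioCtx and myDecodedBuffer are illustrative, not part of the API):

const source = audioCtx.createBufferSource();
source.buffer = myDecodedBuffer;             // assumed: an already-decoded AudioBuffer
const startOffset = 0;                       // offset (seconds) passed to start()
const startTime = audioCtx.currentTime;
source.connect(audioCtx.destination);
source.start(startTime, startOffset);

function estimatedPosition() {
  // Only valid while playbackRate stays constant; automation breaks this.
  const elapsed = audioCtx.currentTime - startTime;
  return startOffset + elapsed * source.playbackRate.value;   // seconds into the buffer
}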

This property can come in handy when trying to implement a few types of scenarios:

  1. Pause/resume functionality that holds and releases the playback of a specific source while other sources keep playing.
  2. Synchronizing multiple audio buffers based on user interaction. For example, starting a buffer at the exact sample position where another one stopped.

Currently, it seems, based on a conversation on the public-audio mailing list, the only way to derive this value is to capture all changes to the playbackRate parameter and calculate the playbackPosition from them. This is complicated and redundant, since the playback position is already tracked internally by the Web Audio API, and recalculating it in JavaScript is a waste of effort.

@colinsullivan

Also, the onended callback of the AudioBufferSourceNode could be helpful in the second use case you mentioned.

It seems to me that for the web audio platform it is unreasonable to expect sample-accurate events based on user interaction.

@cwilso
Contributor

cwilso commented Mar 26, 2014

Confused: you mention a suggestion and the very reason it's not tenable. :) The onended callback of AudioBufferSourceNode can't be used to synchronize, because it's NOT sample-accurate; no JS event can be, because the window for a single sample is on the order of hundredths of a millisecond, while JS event delivery is on the order of a few milliseconds, IF garbage collection or other main-thread work (layout, JS execution, etc.) doesn't get in the way. You need to schedule ahead.

playbackPosition will have to expose the sample position of the buffer for the next scheduling - that is, the next bit that WILL be scheduled. That will let you do the appropriate math.

@karlt
Contributor

karlt commented Mar 26, 2014

If we need a playbackPosition, then the onended event would be a good place to put it.
That is sufficient for starting playback of a buffer from a point where it was previously stopped, if you are happy to have a gap between stop and restart.

I'm not so keen to keep playbackPosition always up to date on the AudioBufferSourceNode.
Could scenario 2 in comment 28485063 be addressed by playing all the buffers in sync and using GainNodes to switch in response to user events?
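
A rough sketch of that idea, assuming all sources have equal length and are started together (sources, audioCtx and switchTo are illustrative names):

// All sources run continuously; user interaction only changes gain values,
// so the sources can never drift apart.
const gains = sources.map((src) => {
  const g = audioCtx.createGain();
  g.gain.value = 0;                          // silent by default
  src.connect(g);
  g.connect(audioCtx.destination);
  return g;
});

const t0 = audioCtx.currentTime + 0.1;       // schedule slightly ahead
sources.forEach((src) => src.start(t0));

function switchTo(index) {
  const now = audioCtx.currentTime;
  gains.forEach((g, i) => g.gain.setTargetAtTime(i === index ? 1 : 0, now, 0.01));
}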

@notthetup
Contributor Author

Thanks everyone for your feedback.

@karlt Yup. That would work as long as all the AudioBufferSourceNodes are kept running and a GainNode is used to switch between the individual sources. But if looping is enabled, this only works if all source files are of exactly the same length, which might not always be the case.

The pause (scenario 1), or some variant of it where one AudioBufferSourceNode is stopped and another is started, is definitely where knowing the last playbackPosition becomes critical.

As for the sample-accuracy argument, sometimes it's enough to get a "sample-inaccurate" playbackPosition value, a rough estimate of what's being played. Currently, if parameter automation is being used, the only way to guess even an inaccurate position is basically to capture and process all the automation events in JS.

Lastly, exposing the playbackPosition in onended may work. But for one of the use cases I'm looking at, I need to count the number of loops an AudioBufferSourceNode has completed. Sampling playbackPosition would allow me to check how many loops the AudioBufferSourceNode had completed, even with a changing playbackRate.

@cwilso
Contributor

cwilso commented Mar 27, 2014

I see a (relatively narrow) use case for the onended return of playbackPosition - but for the scenario where I badly needed it, that wouldn't work at all. I have to track the current playback time in wubwubwub (http://webaudiodemos.appspot.com/wubwubwub/ - press the power button, wait a few seconds, press it again, lather, rinse and repeat); I can't wait until it's ended, because the playback has to change smoothly, and the playback position controls the deck visuals. (Easier to see if you have a DJ controller attached and you scrub.)

@karlt
Contributor

karlt commented Mar 27, 2014

I see, thanks Chris. Your demos/examples are very helpful.

Implementations could actually support AudioBufferSourceNode.playbackPosition reasonably efficiently in the common cases, if necessary, by sending updates from the processing thread to the main (AudioContext) thread only when the playbackRate computedValue has changed. The main thread can then use the last sync time and rate computedValue to extrapolate the position from currentTime until the next update.
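
On the control thread that extrapolation might look roughly like this (a sketch only; the sync-point message is hypothetical, not part of any implementation):

// Hypothetical sync point, assumed to be posted by the audio thread only when
// the computed playbackRate changes: { time, position, rate }, all in seconds.
let lastSync = { time: 0, position: 0, rate: 1 };

function onSyncUpdate(sync) {
  lastSync = sync;
}

function estimatedPlaybackPosition() {
  const dt = audioCtx.currentTime - lastSync.time;
  return lastSync.position + dt * lastSync.rate;   // linear extrapolation until the next update
}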

@cwilso
Contributor

cwilso commented Mar 27, 2014

Yes. The more difficult part is when the playbackRate AudioParam has complex scheduling going on (e.g. setTargetAtTime / exponentialRampToValueAtTime). My wubwubwub demo does the calculations between updates, basically the way you suggest, and the math is only moderately complex for the linear ramps I use on power up/down; the exponential ones get harder. (Area under a curve, rather than area under a line, basically.)
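
For reference, the buffer time advanced over an automation segment is the integral of the rate over that segment; a sketch of the two ramp cases (illustrative helper functions, not part of any API):

// Buffer time advanced from t0 to t1 while playbackRate ramps from r0 to r1.
function advanceLinearRamp(r0, r1, t0, t1) {
  return ((r0 + r1) / 2) * (t1 - t0);              // area under a line (trapezoid)
}

function advanceExponentialRamp(r0, r1, t0, t1) {
  // Area under r0 * (r1/r0)^((t - t0)/(t1 - t0)); assumes r0, r1 > 0 and r0 !== r1.
  return ((r1 - r0) * (t1 - t0)) / Math.log(r1 / r0);
}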

@notthetup
Contributor Author

@cwilso Yup. I have a working version of the same in my code now. I also have to keep a list of parameter-automation events (had to look up the Chromium source here) so that I can figure out things like when to stop calculating for setTargetAtTime, etc. It works, but it's not terribly accurate. If I can convince some people, I will open-source it at some point.

@carlosgmartin

Is there any indication as to when this feature could be implemented in the Web Audio API?

@cwilso
Contributor

cwilso commented Oct 29, 2014

No, there is not.

To be clear: this feature would be:
"Expose a floating-point playbackPosition on BufferSourceNode. This will represent where in the buffer the next playback block is coming from, in terms of seconds. It should be cautioned that it is dangerous to expect this will be useful for sample-accurate scheduling, as rounding errors and thread interactions may cause disruption."

@sebpiq

sebpiq commented Jan 29, 2015

I would really love to see this as well, but I'd love it to be an AudioParam. The use case is reading sound in a very complex way: going back, forth, jumping around and so on. In fact, if such a feature existed, it would cover all the other features already existing on the node (playbackRate, loop) and provide additional functionality that is not available at the moment (jumps).

@karlt
Contributor

karlt commented Feb 3, 2015

That sounds very similar to WaveShaperNode.

@sebpiq

sebpiq commented Feb 3, 2015

Hmmm... there is probably a misunderstanding, because I really don't see the similarity. Could you explain?

@karlt
Contributor

karlt commented Feb 4, 2015 via email

In each case, an input describes which part of a buffer is produced on the output of the node. With playbackPosition on AudioBufferSourceNode, the input would be an AudioParam. With WaveShaperNode, the input is from the output of another AudioNode. A GainNode with constant input from another source can be used to convert an AudioParam to AudioNode output.

@cwilso
Contributor

cwilso commented Feb 4, 2015

The point of this issue was to expose a read-only position, not to enable scrubbing through a buffer with an AudioParam.

@sebpiq

sebpiq commented Feb 4, 2015

Oh yeah... apologies Karl, you're right. Looks like it could be used for this, though it is obviously not what WaveShaperNode is intended for.

Chris, should I just open an issue for that? ;)


@NHQ

NHQ commented May 7, 2015

YES PLEASE NO QUESTION

@AshleyScirra

Users of the Construct 2 game engine need this. It is very difficult to implement a simple pause and resume without a playback position provided by the Web Audio API (bikeshedding: I'd prefer the name playbackTime). When playback is paused we need an at least reasonably accurate (not necessarily sample-accurate, but close enough) playback time to pass as the offset to the next start() when resuming. JS timers are not synchronised to the audio clock, so I'd expect them to drift apart even if we tried to track this ourselves, especially with looping playback, which was actually the use case that sent me looking for this.

I think the fact there is an onended event is admission enough that it is not adequate to track the playback time with JS. We could just fire the ended event ourselves at the time (currentTime + duration), but obviously this does not work if the playbackRate changes. So I'm a little surprised this is not already in place. Pausing and resuming is a pretty basic use case, and it should be easy to implement this.
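
For context, this is roughly the bookkeeping pause/resume forces on authors today (a sketch assuming a constant playbackRate of 1; audioCtx and myDecodedBuffer are illustrative names):

let offset = 0;        // seconds into the buffer where the next start() should resume
let startedAt = 0;     // context time at which the current source node started
let source = null;

function play() {
  source = audioCtx.createBufferSource();   // source nodes are one-shot, so recreate
  source.buffer = myDecodedBuffer;
  source.connect(audioCtx.destination);
  startedAt = audioCtx.currentTime;
  source.start(0, offset);
}

function pause() {
  source.stop();
  // Drifts if playbackRate ever changes, and would need a modulo for looping playback.
  offset += audioCtx.currentTime - startedAt;
}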

FWIW, changing the playbackRate is an interesting feature for games - it's good for accelerating engine sounds, time scaling (i.e. slo-mo) effects, varying the pitch of environmental sounds like footsteps to make them sound less repetitive, and more.

@notthetup
Contributor Author

@AshleyScirra If you need a temporary workaround for this, I use a really ugly hack which kinda works.

Basically, one has to create a second BufferSource whose samples are just counts [1, 2, 3, 4, ...], which is then connected to a ScriptProcessor that stores the last value from every input buffer. The play and pause calls, and also any change of playbackRate, have to be forwarded to this counter buffer as well. (Maybe these slides explain it better: http://chinpen.net/talks/wac-paper/#/37 )

The playPosition is only accurate to the block size (128 samples), but it's better than nothing. It also means using the ScriptProcessor quite a bit, which has its own downsides.
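
For completeness, a rough sketch of that hack (ScriptProcessorNode is deprecated, and the result is only block-accurate; songBuffer and audioCtx are illustrative names):

// A second buffer holds the frame index at each sample; the last input value of
// each processing block then tells you (to within one block) where playback is.
const positionBuffer = audioCtx.createBuffer(1, songBuffer.length, audioCtx.sampleRate);
const positionData = positionBuffer.getChannelData(0);
for (let i = 0; i < positionData.length; i++) positionData[i] = i;  // Float32 is exact up to ~2^24 frames

const positionSource = audioCtx.createBufferSource();
positionSource.buffer = positionBuffer;

let lastFrameIndex = 0;
const processor = audioCtx.createScriptProcessor(256, 1, 1);
processor.onaudioprocess = (e) => {
  const input = e.inputBuffer.getChannelData(0);
  lastFrameIndex = input[input.length - 1];
};

positionSource.connect(processor);
processor.connect(audioCtx.destination);   // the processor only runs when connected
// start()/stop() and every playbackRate change must be mirrored on positionSource.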

@AshleyScirra

@notthetup - hopefully the ugliness of that hack is motivation for the spec to officially include this :P

@cwilso
Contributor

cwilso commented Sep 10, 2015

Note the milestones - Joe moved this to v.next.

@dorontal

dorontal commented Apr 14, 2016

That's great news that it's in the milestones. I'd like to join the chorus and reiterate that pause/resume cannot currently be implemented accurately without the position being exposed. So a simple audio player (similar to, for example, the <audio> tag's built-in player) cannot currently be implemented with Web Audio!

That said, I currently use a workaround different from the one above, but it is not extremely accurate and its accuracy gets worse the more pauses you make. The basic idea is to (a) refer to AudioContext's currentTime property for time (do not use JavaScript time for audio); (b) as soon as pause (disconnect) is called, use currentTime to record lastPauseTime; then, when you resume, add audioContext.currentTime - lastPauseTime to a running totalPauseTime. Your playback time is then audioContext.currentTime - totalPauseTime. This is not 100% accurate and barely tolerable, because there's a slight delay between the disconnect() call to pause and the measurement of time via currentTime, and if the VM decides to garbage-collect between those two statements there would be a huge delay...

Found a decent example (not mine) of this method at this Codepen

@notthetup
Contributor Author

@dorontal Your technique is great, but it assumes that the playbackRate on the AudioBufferSourceNode is unchanged from the default of 1. If that parameter is changed, or worse, automated, then the calculations get a lot hairier.

@dorontal

I agree. Thanks for pointing that out!

@cherston

cherston commented Jun 22, 2016

Joining the chorus of people who would very much welcome this feature, both for the pause/resume scenario as well as the synchronization of multiple buffers scenario.

I think the workaround that I will use to accommodate @notthetup's response to @dorontal will be to keep track of the playbackRate changes as well.

@notthetup
Contributor Author

notthetup commented Jun 23, 2016

@cherston Yes. And that soon gets very, very tedious when you have to factor in parameter automation. Very soon I felt like I was reimplementing AudioParam in JS :(

But with the recent change in the API to support playbackRate over [-Inf, Inf], one of the most common use cases I had for this request (reverse playback) is already supported on AudioBufferSourceNode.

@jakearchibald

Another use-case: Say I have a looping track, and the user loads in an additional looping track (of the same length), once that track loads I want to start playing it in addition to the current track, in time with the current track.

Is the currentTime of the context accurate for this? Eg if I do:

const loopStartTime = context.currentTime;
loop1Source.connect(context.destination);
loop1Source.loop = true;
loop1Source.start(0);

// then seconds later…
loop2Source.connect(context.destination);
loop2Source.loop = true;
loop2Source.start(0, (context.currentTime - loopStartTime) % loop1Source.buffer.duration);

…will the loops be playing exactly in time with one-another?

@cwilso
Contributor

cwilso commented Oct 29, 2016

"Sort of", yes. The problem is that we don't have a way to ENSURE that the audio thread has processed a block in the time between when you get the context.currentTime and the time the start() executes. This means it's always possible you could miss the next actual processing block if start(0) is called.

The solution is that you should almost never call start(0) for anything you want to synchronize. It's best to schedule ahead by the size of one processing "batch", in case the audio thread processes while your code is running in the main thread - you shouldn't presume that currentTime is precisely when you can start. That batch is at least a sample block (128 samples) - more on slow systems. The size of one processing chunk is exposed in the spec as context.baseLatency, but not implemented yet (in Chrome, at least).

This should work:

var batchTime = context.baseLatency || (128 / context.sampleRate);
const loopStartTime = context.currentTime;
loop1Source.connect(context.destination);
loop1Source.loop = true;
loop1Source.start(loopStartTime + batchTime);

// then seconds later…
loop2Source.connect(context.destination);
loop2Source.loop = true;
var now = context.currentTime;
loop2Source.start(now + batchTime, (now + batchTime - loopStartTime) % loop1Source.buffer.duration);

Note that usually, you would want to align on beats anyway, so you wouldn't immediately start - or you'd start both samples playing and just control their volumes through gain nodes.

@cwilso
Contributor

cwilso commented Oct 29, 2016

Forgot to say - exposing currentPlaybackTime would not change this scenario in the least - the problem is not in doing that math, it's in "when can I actually get something to start playing".

@jakearchibald

@cwilso cheers! Given that synchronisation is more important than immediacy in this case, would using a second as the base latency produce a more reliable result?

@yuanworks

yuanworks commented Mar 6, 2021

This is great news! 💯💯💯

Just curious as to when it is realistic to see this or the API v2 implemented in browsers. I imagine it will take a long time, but a ballpark time frame would be good to know (apologies in advance, as I am not very knowledgeable about the W3C / TPAC process).

@rtoy
Member

rtoy commented Mar 9, 2021

I would imagine this is currently gated on creating a design for this and then incorporating it into the spec. Once that's done, then browsers are free to implement. Perhaps a browser might implement it based on the design, but I think that's less likely these days.

@rtoy
Member

rtoy commented May 12, 2021

F2F Meeting: The proposed API in https://github.com/WebAudio/web-audio-api-v2/issues/26#issuecomment-709478405 still stands with no further changes needed. Just need to write up the spec text to match.

@mdjp transferred this issue from WebAudio/web-audio-api-v2 on Sep 23, 2021
@mdjp added the Priority: Urgent and Needs Edits labels on Sep 23, 2021
@snikch

snikch commented Jan 16, 2022

Hi guys. I just needed to draw a playback cursor over a waveform on a canvas. I'm building a player with A/B looping and playbackRate controls for rehearsing on drums.

Here's a solution that worked for me:

  1. When I createBufferSource for loading my song, I also create a second bufferSource (not used for playback) which has the same number of samples as the song and whose samples I fill linearly from -1 to 1.
  2. I then connect this second positionBuffer (as I call it) to a scriptProcessor with an onaudioprocess handler that computes the samplePosition by simple interpolation of the first sample's value in the processBuffer from -1..1 to 0..nbSamples. (Apparently you do have to connect the processor to the destination, otherwise it won't be run.)
  3. All play actions, loop changes, resumes, and playbackRate changes are then always done on both bufferSources, the song's and the position's.

That's it. So far so good.

Do you / does anyone know if this would work with an AudioWorkletProcessor? It appears that, since the process runs completely separately, there's little to no opportunity to pass information (e.g. the position) to the application code that requires it. From my tests, you can't update an AudioParam and expect it to be visible outside of the call to process, and the output values are the only mutable properties, but these, again, never become visible to the calling application.

@p-himik

p-himik commented Jun 15, 2022

@snikch With AudioWorkletProcessor, you should be able to use this.port.postMessage(data) in its process function. And you'd use audioWorkletNode.port.onmessage = (e) => do_something(e.data) with it.

But I have no idea how robust it is - probably not very. I have two such processors set up for two sources that I start playing at the same time, using the same audio context, and the processors report different times - apparently, one of them just gets a few frames ahead for some reason. The higher the playback rate, the larger the difference. Update: that particular issue was on my end. After fixing it, the reporting is synchronized. But again, still no clue how truly robust it is.

@cleverchuk

@snikch Would love to see your code if you don't mind sharing.

@p-himik
Copy link

p-himik commented Jul 24, 2022

@cleverchuk Here's the code that works for me:

// main.js
// (Assumes the processor module has already been loaded, e.g. with
// `await audioCtx.audioWorklet.addModule('positionReportingProcessor.js')`.)
const counterBuffer = audioCtx.createBuffer(1, buffer.length, audioCtx.sampleRate);
const counterSource = audioCtx.createBufferSource();
counterSource.buffer = counterBuffer;
const length = counterBuffer.length;
const counterBufferCD = counterBuffer.getChannelData(0);
for (let i = 0; i < length; ++i) {
  // Clamp to [0; 1).
  // Could clamp to [-1; 1) for higher precision, but it makes handling 0 troublesome.
  counterBufferCD[i] = i / length;
}

const prp = new AudioWorkletNode(audioCtx, 'position-reporting-processor');
prp.port.onmessage = (e) => {
  console.log("Current position:", e.data * length);
};
counterSource.connect(prp);
prp.connect(audioCtx.destination); // Otherwise, `prp` won't be run at all.
// Remember to start `counterSource` alongside the audible source and mirror any
// playbackRate changes on it.


// positionReportingProcessor.js
class PositionReportingProcessor extends AudioWorkletProcessor {
  process(inputs, _outputs, _parameters) {
    if (inputs.length > 0) {
      const input = inputs[0];
      if (input.length > 0) {
        const channel = input[0];
        this.port.postMessage(channel[channel.length - 1]);
        return true;
      }
    }
    return false;
  }
}

registerProcessor('position-reporting-processor', PositionReportingProcessor);

@patrick-sawyer-img

Genius p-himik - absolute genius. Thanks for that. My DJ website would have been pretty lame without it.

@henrikmathisen

@p-himik What does "position" refer to on this line?

console.log("Current position:", Math.max(this.position, e.data * length));

I can't see you're defining it above and I can't find any "position" property "this" might refer to.

@p-himik

p-himik commented Sep 29, 2022

@henrikmathisen A rogue statement from ad-hoc code, I've edited the post.

@henrikmathisen

@p-himik I suspected as much, because I've been playing around with it a bit and the data from this.port.postMessage(channel[channel.length - 1]); doesn't seem to actually output the current position but rather the waveform. For instance, hooking it up to a song with quiet parts will have console.log("Current position:", e.data * length); output 0 during those quiet parts. Otherwise it will randomly output numbers from -152 to 152.

Here's a small excerpt from my console:


Current position: -0.07052224167675103 
Current position: 0.14212089825227103 
Current position: 0.5124870090765796 
Current position: -3.167182112052899 
Current position: 0  
Current position: 0.056969472146174205 
Current position: 0  
Current position: -0.188834515924718 
Current position: -135.9335928979399

@p-himik

p-himik commented Sep 29, 2022

You have connected the wrong source to the processor. If you connect counterSource and fill its buffer with linearly rising numbers, as is done in the example, the processor will get that data as its input.

@henrikmathisen

I see, thanks :)

@kurtsmurf

One more implementation of a playback-position-reporting AudioBufferSourceNode, based on this comment. Thanks @selimachour!

@p-himik

p-himik commented Sep 29, 2022

Seems like it's simpler than my solution, thanks!

@hoch removed the Priority: Urgent label on Nov 2, 2022
@westarne

westarne commented Jan 11, 2024

one more implementation of a playback-position-reporting AudioBufferSourceNode based on this comment. Thanks @selimachour !

I know this is an old post, but I was implementing your solution and was not really happy with how long it took to create an instance of that class (in my application it froze for a second there). I found that the main issue making it slow is this section:

    // fill up the position channel with numbers from 0 to 1
    for (let index = 0; index < audioBuffer.length; index++) {
      this._bufferSource.buffer.getChannelData(audioBuffer.numberOfChannels)[
        index
      ] = index / audioBuffer.length;
    }

So I've rewritten it to

    // fill up the position channel with numbers from 0 to 1
    // most performant implementation to create the big array is via "for"
    // https://stackoverflow.com/a/53029824
    const length = audioBuffer.length;
    const timeDataArray = new Float32Array(length);
    for (let i = 0; i < length; i++) {
      timeDataArray[i] = i / length;
    }
    this._bufferSource.buffer.copyToChannel(
      timeDataArray,
      audioBuffer.numberOfChannels,
    );

Loading a song of ~4 minutes, this reduced the instance creation time from roughly 1s down to 20-30ms (using Chrome)

Maybe it would've been possible to just store the result of this._bufferSource.buffer.getChannelData(audioBuffer.numberOfChannels) in a variable as well, but I thought it was a bit cleaner to have an explicit "override" action.

And another hint for Firefox:
It seems to implement the standard more strictly, meaning you can only assign the buffer to the bufferSource AFTER changing the data. So the this._bufferSource.buffer = ... assignment has to be done after copying the buffer data: store the buffer in a variable first, fill it, and only then assign it before connecting the nodes.
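
In other words, an ordering along these lines (a sketch; buildPositionChannel is a made-up helper standing in for the copyToChannel code above):

const combinedBuffer = audioCtx.createBuffer(
  audioBuffer.numberOfChannels + 1,          // extra channel for the 0..1 position ramp
  audioBuffer.length,
  audioBuffer.sampleRate,
);
// 1. Copy the audio channels and the position ramp into combinedBuffer first...
buildPositionChannel(combinedBuffer, audioBuffer);
// 2. ...and only then assign it, so Firefox sees the finished data.
bufferSource.buffer = combinedBuffer;
bufferSource.connect(workletNode);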

@mjwilson-google added the category: enhancement, size: S, and size: M labels and removed the size: S label on Sep 25, 2024