Exposing a playbackPosition property on AudioBufferSourceNode. #2397
Also, the 'onended' callback of AudioBufferSourceNode could perhaps be used to synchronize.

> It seems to me that for the web audio platform it is unreasonable to expect sample-accurate events based on user interaction. |
Confused: you mention a suggestion and the very reason it's not tenable. :) The 'onended' callback of AudioBufferSourceNode can't be used to synchronize, because it's NOT sample-accurate; no JS event can be, because the window for a single sample is on the order of hundredths of a millisecond, while JS event delivery is on the order of a few milliseconds, IF garbage collection or other main-thread work (layout, JS execution, etc.) doesn't get in the way. You need to schedule ahead. playbackPosition will have to expose the sample position of the buffer for the next scheduling - that is, the next bit that WILL be scheduled. That will let you do the appropriate math. |
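For example, a minimal sketch of scheduling ahead (the 0.1 s margin is arbitrary and illustrative):

const t = context.currentTime + 0.1; // schedule ahead of the audio clock
sourceA.start(t);
sourceB.start(t + sourceA.buffer.duration); // back-to-back, sample-accurate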
If we need a playbackPosition, then the onended event would be a good place to put it. I'm not so keen to keep playbackPosition always up to date on the AudioBufferSourceNode. |
Thanks everyone for your feedback. @karlt Yup. That would work as long as all the AudioBufferSourceNodes are kept running. The pause (scenario 1), or some variant of that where one AudioBufferSourceNode is stopped and another is started, is definitely the case where knowing the last playbackPosition would help. As for the sample-accuracy argument, sometimes it's enough to get a "sample-inaccurate" playbackPosition value - a rough estimate to figure out what's being played. Currently, if parameter automation is being used, the only way to guess even an inaccurate position is basically to capture and process all the automation events in JS. Lastly, exposing the playbackPosition in the onended event would also cover some of these cases. |
I see a (relatively narrow) use case for the onended return of playbackPosition - but for the scenario where I badly needed it, that wouldn't work at all. I have to track the current playback time in wubwubwub (http://webaudiodemos.appspot.com/wubwubwub/ - press the power button, wait a few seconds, press it again, lather, rinse and repeat); I can't wait until it's ended, because it has to modify smoothly, and the playback position controls the deck visuals. (Easier to see if you have a DJ controller attached and you scrub.) |
I see, thanks Chris. Your demos/examples are very helpful. Implementations could actually do AudioBufferSourceNode.playbackPosition reasonably efficiently in the common cases, if necessary, by sending updates from the processing thread to the AudioContext thread only when the playbackRate computedValue has changed. The AudioContext thread can then use the last sync time and rate computedValue to calculate from currentTime until the next update. |
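A minimal sketch of the bookkeeping being described (names are illustrative, not from any real implementation): the processing thread posts a sync record whenever the computed rate changes, and the main thread extrapolates between updates.

// lastSync = { time, position, rate }, posted from the processing thread
// whenever the playbackRate computedValue changes.
function estimatedPosition(context, lastSync) {
  return lastSync.position + (context.currentTime - lastSync.time) * lastSync.rate;
}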
Yes. The more difficult part is when the playbackRate AudioParam has complex (e.g. setTargetAtTime/exponentialRampToValueAtTime) scheduling going on. My wubwubwub demo does the calculations between updates, basically the way you suggest, and the math is only moderately complex for the linear ramps I use on power up/down; the exponential ones get harder. (Area under a curve, rather than area under a line, basically.) |
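For reference, a sketch of that area-under-the-curve math (standard ramp integrals, not code from wubwubwub): the position advanced over a segment is the integral of playbackRate over time.

// Buffer position advanced from t0 to t while playbackRate ramps
// from v0 (at t0) to v1 (at t1), with t0 <= t <= t1.
function linearRampAdvance(t0, v0, t1, v1, t) {
  const v = v0 + (v1 - v0) * (t - t0) / (t1 - t0); // rate at time t
  return ((v0 + v) / 2) * (t - t0);                // area under a line (trapezoid)
}

function exponentialRampAdvance(t0, v0, t1, v1, t) {
  // v0 and v1 must be non-zero and of the same sign; if v0 === v1,
  // the advance is simply v0 * (t - t0).
  const k = v1 / v0;
  const T = t1 - t0;
  return (v0 * T / Math.log(k)) * (Math.pow(k, (t - t0) / T) - 1); // area under the curve
}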
@cwilso Yup. I have a working version of the same in my code now. I also have to keep a list of parameter automation events (had to look up the Chromium source here) so that I can figure out things like when to stop calculating for setTargetAtTime etc. It works, but it's not highly accurate. If I can convince some people, I will open source it at some point. |
Is there any indication as to when this feature could be implemented in the Web Audio API? |
No, there is not. To be clear: this feature would be: |
I would really love to see this as well, but I'd love it to be an AudioParam. The use case is reading sound in a very complex way: going back and forth, jumping around, and so on. In fact, if such a feature existed, it would cover all the other features already existing on the node (playbackRate, loop), and provide additional functionality that is not available at the moment (jumps). |
On Thu, 29 Jan 2015 01:03:04 -0800, Sebastien Piquemal wrote:
That sounds very similar to WaveShaperNode. |
Hmmm ... There is probably a misunderstanding, because I really don't see the similarity .. Could you explain? |
> Hmmm ... There is probably a misunderstanding, because I really don't see
> the similarity .. Could you explain?
In each case, an input describes which part of a buffer is
produced on the output of the node. With playbackPosition on
AudioBufferSourceNode, the input would be an AudioParam. With
WaveShaperNode, the input is from the output of another AudioNode.
A GainNode with constant input from another source can be used to
convert an AudioParam to AudioNode output.
|
The point of this issue was to expose a read-only position, not to enable setting it.
|
Ow yeah ... apologies Karl, you're right. Looks like it could be used for that. Chris, should I just open an issue for that ;) ? |
YES PLEASE NO QUESTION |
Users of the Construct 2 game engine need this. It is very difficult to implement a simple pause and resume without a Web Audio API-provided playback position (bikeshedding: I'd prefer the name playbackTime). When playback is paused we need at least a reasonably accurate (not necessarily sample-accurate, but close enough) playback time to pass as the offset to the next start() when resuming.

JS timers are not synchronised to the audio clock, so I'd expect them to drift apart even if we tried to track this ourselves, especially with looping playback, which was actually the use case that sent me looking for this. I think the fact there is an onended event is admission enough that it is not adequate to track the playback time with JS. We could just fire the ended event ourselves at the time (currentTime + duration), but obviously this does not work if the playbackRate changes.

So I'm a little surprised this is not already in place. Pausing and resuming is a pretty basic use case, and it should be easy to implement this. FWIW, changing the playbackRate is an interesting feature for games - it's good for accelerating engine sounds, time-scaling (i.e. slo-mo) effects, varying the pitch of environmental sounds like footsteps to make them sound less repetitive, and more. |
@AshleyScirra If you need a temporary workaround for this, I use a really ugly hack which kinda works. Basically one has to create a second BufferSource whose buffer's samples are just counts, play it in lockstep with the real source, and read the values back out. The playPosition is only accurate to block size (128 samples), but it's better than nothing. Also it means having to use the ScriptProcessor quite a bit, which has its own downsides. |
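Roughly, the reading side of that hack might look like this (illustrative sketch; counterSource is assumed to be a second AudioBufferSourceNode whose buffer holds the sample index at each position, started in lockstep with the audible source):

let playPosition = 0;
const reader = audioCtx.createScriptProcessor(1024, 1, 1);
reader.onaudioprocess = (e) => {
  const samples = e.inputBuffer.getChannelData(0);
  playPosition = samples[samples.length - 1]; // last sample of the block ~= current position
};
counterSource.connect(reader);
// Output stays silent (we never write the outputBuffer), but the
// connection keeps the node processing.
reader.connect(audioCtx.destination);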
@notthetup - hopefully the ugliness of that hack is motivation for the spec to officially include this :P |
Note the milestones - Joe moved this to v.next. |
That's great news that it's in the milestones. I'd like to join the chorus and reiterate that pause/resume cannot currently be implemented accurately without the position being exposed. So a simple audio player (similar to, for example, the standard <audio> element) is surprisingly hard to build. That said, I currently use a workaround different from the one above, but it is not extremely accurate, and its accuracy gets worse the more pauses you make. The basic idea is to (a) record context.currentTime when playback starts or resumes, and (b) on pause, accumulate the elapsed time to use as the offset for the next start(). Found a decent example (not mine) of this method at this Codepen |
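A minimal sketch of that accumulation, assuming a constant playbackRate of 1 (which is exactly the limitation pointed out in the next comment):

let source = null;
let startedAt = 0; // context.currentTime when playback last (re)started
let pausedAt = 0;  // accumulated seconds already played

function play(context, buffer) {
  source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0, pausedAt % buffer.duration); // resume from the stored offset
  startedAt = context.currentTime;
}

function pause(context) {
  pausedAt += context.currentTime - startedAt; // accumulate elapsed play time
  source.stop();
}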
@dorontal Your technique is great, but it assumes that playbackRate stays constant the whole time. |
I agree. Thanks for pointing that out! |
Joining the chorus of people who would very much welcome this feature, both for the pause/resume scenario as well as the synchronization of multiple buffers scenario. I think the workaround that I will use to accommodate @notthetup's response to @dorontal will be to keep track of the playbackRate changes myself and factor them into the calculation. |
@cherston Yes. And that soon gets very, very tedious when you have to factor in parameter automation. Very soon I felt like I was reimplementing AudioParam in JS :( But with the recent change in the API to support ... |
Another use-case: Say I have a looping track, and the user loads in an additional looping track (of the same length); once that track loads I want to start playing it in addition to the current track, in time with the current track. Is the following enough?

const loopStartTime = context.currentTime;
loop1Source.connect(context.destination);
loop1Source.loop = true;
loop1Source.start(0);
// then seconds later…
loop2Source.connect(context.destination);
loop2Source.loop = true;
loop2Source.start(0, (context.currentTime - loopStartTime) % loop1Source.buffer.duration);

…will the loops be playing exactly in time with one another? |
"Sort of", yes. The problem is that we don't have a way to ENSURE that the audio thread has processed a block in the time between when you get the context.currentTime and the time the start() executes. This means it's always possible you could miss the next actual processing block if start(0) is called. The solution is that you shouldn't hardly ever call start(0) for anything you want to synchronize. It's best to schedule ahead by the size of one processing "batch", in case the audio thread processes while your code is running in the main thread - you shouldn't presume that "currentTime" is precisely when you can start. That batch is at least a sample block (128 samples) - more on slow systems. The size of one processing chunk is exposed in the spec as "context.baseLatency", but not implemented yet (in Chrome at least). This should work:
Note that usually, you would want to align on beats anyway, so you wouldn't immediately start - or you'd start both samples playing and just control their volumes through gain nodes. |
Forgot to say - exposing currentPlaybackTime would not change this scenario in the least - the problem is not in doing that math, it's in "when can I actually get something to start playing". |
@cwilso cheers! Given that synchronisation is more important than immediacy in this case, would using a second as the base latency produce a more reliable result? |
This is great news! 💯💯💯 Just curious as to when is it realistic to see this or the API V2 implemented in browsers? I imagine it will take a long time but a ballpark time frame would be good to know (apologies in advance as I am not very knowledgeable with the W3C / TPAC process). |
I would imagine this is currently gated on creating a design for this and then incorporating it into the spec. Once that's done, then browsers are free to implement. Perhaps a browser might implement it based on the design, but I think that's less likely these days. |
F2F Meeting: The proposed API in https://github.com/WebAudio/web-audio-api-v2/issues/26#issuecomment-709478405 still stands with no further changes needed. Just need to write up the spec text to match. |
Do you / anyone know if this would work with an AudioWorkletNode? |
@snikch With an AudioWorklet, yes. But I have no idea how robust it is. |
@snikch Would love to see your code if you don't mind sharing. |
@cleverchuk Here's the code that works for me:

// main.js
const counterBuffer = audioCtx.createBuffer(1, buffer.length, audioCtx.sampleRate);
const counterSource = audioCtx.createBufferSource();
counterSource.buffer = counterBuffer;
const length = counterBuffer.length;
const counterBufferCD = counterBuffer.getChannelData(0);
for (let i = 0; i < length; ++i) {
// Clamp to [0; 1).
// Could clamp to [-1; 1) for higher precision, but it makes handling 0 troublesome.
counterBufferCD[i] = i / length;
}
// The processor module has to be loaded before the node can be constructed
// (e.g. in an async setup function, or a module with top-level await).
await audioCtx.audioWorklet.addModule('positionReportingProcessor.js');
const prp = new AudioWorkletNode(audioCtx, 'position-reporting-processor');
prp.port.onmessage = (e) => {
console.log("Current position:", e.data * length);
}
counterSource.connect(prp);
prp.connect(audioCtx.destination); // Otherwise, `prp` won't be run at all.
counterSource.start(); // Start the counter together with the audible source.
// positionReportingProcessor.js
class PositionReportingProcessor extends AudioWorkletProcessor {
process(inputs, _outputs, _parameters) {
if (inputs.length > 0) {
const input = inputs[0];
if (input.length > 0) {
const channel = input[0];
this.port.postMessage(channel[channel.length - 1]);
return true;
}
}
return false;
}
}
registerProcessor('position-reporting-processor', PositionReportingProcessor); |
Genius p-himik - absolute genius. Thanks for that. My DJ website would have been pretty lame without it. |
@p-himik What does "position" refer to on this line?
I can't see where you're defining it above, and I can't find any "position" property that "this" might refer to. |
@henrikmathisen A rogue statement from ad-hoc code, I've edited the post. |
@p-himik I suspected as much. Because I've been playing around with it a bit, and the data from the processor doesn't look right. Here's a small excerpt from my console:
|
You have connected the wrong source to the processor. If you connect the counter source (the one whose buffer holds the 0-to-1 ramp) instead of the audible source, you'll get the correct values. |
I see, thanks :) |
One more implementation of a playback-position-reporting AudioBufferSourceNode, based on this comment. Thanks @selimachour ! |
Seems like it's simpler than my solution, thanks! |
I know this is an old post, but I was implementing your solution and was not really happy with how long it took to create an instance of that class (in my application it froze for a second there). I found out that the main thing making it slow is this section:

// fill up the position channel with numbers from 0 to 1
for (let index = 0; index < audioBuffer.length; index++) {
this._bufferSource.buffer.getChannelData(audioBuffer.numberOfChannels)[
index
] = index / audioBuffer.length;
}

So I've rewritten it to:

// fill up the position channel with numbers from 0 to 1
// most performant implementation to create the big array is via "for"
// https://stackoverflow.com/a/53029824
const length = audioBuffer.length;
const timeDataArray = new Float32Array(length);
for (let i = 0; i < length; i++) {
timeDataArray[i] = i / length;
}
this._bufferSource.buffer.copyToChannel(
timeDataArray,
audioBuffer.numberOfChannels,
);

Loading a song of ~4 minutes, this reduced the instance creation time from roughly 1 s down to 20-30 ms (using Chrome). Maybe it would've been possible to just store the result of getChannelData() in a local variable outside the loop instead. And another hint for Firefox: |
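For what it's worth, the hoisting variant hinted at above would presumably look like this (untested sketch, reusing the variable names from the snippets above):

const channel = this._bufferSource.buffer.getChannelData(audioBuffer.numberOfChannels);
for (let i = 0; i < length; i++) {
  channel[i] = i / length; // same ramp; getChannelData is called only once
}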
AudioBufferSourceNode lacks a playbackPosition property exposing the sample index currently being played, or the last sample index played if the AudioBufferSourceNode is not playing. The property can be readonly, and to keep in line with the rest of the API it can be a time value rather than an index.

If the playbackRate doesn't change, the current playback position can easily be calculated using the AudioContext's currentTime property. But with parameter automation on playbackRate, the calculations can get pretty gnarly and inaccurate.

This property can come in handy when trying to implement a few types of scenarios.

Currently, it seems, based on a conversation on the public-audio mailing list, the only way to obtain this value is to capture all changes to the playbackRate parameter and calculate the playbackPosition from them. This is complicated and redundant, since the playbackPosition is already being tracked internally by the Web Audio implementation, and recalculating it in JavaScript is a waste of effort.