New way of capturing devices/tracks in janus.js #3003
Conversation
Eventually I managed to fix the limitation of the separate getUserMedia calls for audio and video as well. To be brief, I implemented a simple concept of grouping: the idea is that, when you add track objects, you can also provide a group, so that tracks in the same group are captured together. Since I wanted to keep things simple, I also made grouping implicit if you don't do anything. This means that, if you pass an audio and a video track without any group, those will be grouped automatically as well. As such, a tracks object like the one we use in echotest.js will result in a single getUserMedia dialog automatically, since there's an audio and a video track and they don't have an explicit group set, which means we default to the same implicit group.

Considering this was the last major thing I wanted to fix, I'll remove the draft label from the PR. This means that you should REALLY start testing soon, otherwise I'll just merge whether this works for you or not 😁 |
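The grouping described above might be sketched like this (a hedged illustration: the `group` property name and the device ID are assumptions based on the description in this thread, not copied from the demos):

```javascript
// Implicit grouping: one audio and one video track with no explicit group
// are captured together, i.e. a single getUserMedia permission dialog.
const tracks = [
  { type: 'audio', capture: true, recv: true },
  { type: 'video', capture: true, recv: true }
];

// Explicit grouping: the 'group' property (an assumption, see lead-in)
// controls which tracks are captured together. Here a second webcam
// (hypothetical deviceId) would get its own getUserMedia call.
const groupedTracks = [
  { type: 'audio', capture: true, group: 'main' },
  { type: 'video', capture: true, group: 'main' },
  { type: 'video', capture: { deviceId: { exact: 'second-cam' } }, group: 'extra' }
];

// echotest.createOffer({ tracks, success: (jsep) => { /* send to Janus */ } });
```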
I have updated our janus.js with this PR, and the fallback media handler worked fine as the default after dropping in the update. Do you foresee any functionality that would enable tracks to specify their own bitrate and codecs? That would make this much more versatile for things like quality options, but I'm not sure of the implications/limitations this would bring. |
Thanks for the feedback!
Not in this effort, but maybe in the future this could be done. For bitrates, we'll definitely need to fix how we do it for simulcast, since at the moment we're still leveraging the global |
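For context, capping an individual sender's bitrate is already possible today with the standard WebRTC `RTCRtpSender.setParameters()` API; a hedged sketch follows (this is plain WebRTC, not a janus.js feature, and the helper name is mine):

```javascript
// Cap the outgoing bitrate of one m-line using the standard WebRTC
// RTCRtpSender.setParameters() API (not janus.js-specific).
async function capSenderBitrate(pc, mid, maxBitrate) {
  // Find the transceiver for the given mid and tweak its encodings
  const transceiver = pc.getTransceivers().find((t) => t.mid === mid);
  const params = transceiver.sender.getParameters();
  if (!params.encodings || params.encodings.length === 0)
    params.encodings = [{}];
  // With simulcast there is one encoding per layer, hence the loop
  for (const encoding of params.encodings)
    encoding.maxBitrate = maxBitrate;
  await transceiver.sender.setParameters(params);
  return params.encodings.length; // number of capped layers
}
```

A per-track `bitrate` property in the `tracks` array could conceivably be built on top of this, but that is speculation, not something this PR implements.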
Just pushed a commit that fixes this part. Now a |
Nevermind that, I think I found a couple of APIs that could be used for the purpose. I'll have a look to see if it may make sense to add them in this PR as well, or if that can wait. |
@MvEerd added an attempt to do both, please let me know if you think this would work as they are. |
Thanks for your insight and efforts @lminiero! Good call on the
I've updated the janus.js in our staging environment to test this functionality with our VideoRoom and noticed the following issues:
(Where h264 is the initial video codec and I added h264, VP8 and VP9 tracks, only the h264 stream is published.) Edit: this was caused by us constraining the videocodec ourselves in the configure request. When using the bitrate option I get the following in the JS console when publishing. The |
Similarly, the same happens when a framerate is specified. If there is any additional information I can provide to clarify, please let me know. |
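As an aside, framerate and resolution problems are often down to how the constraints are expressed; a hedged sketch of standard getUserMedia-style constraints (the values are purely illustrative):

```javascript
// Standard MediaTrackConstraints: 'ideal' lets the browser pick the closest
// supported mode, while 'exact' rejects with an OverconstrainedError if the
// device has no matching mode.
const laxVideo = {
  width: { ideal: 1280 },
  height: { ideal: 720 },
  frameRate: { ideal: 30 }
};
// Such a constraints object can be passed as a tracks item's capture value:
const tracks = [
  { type: 'video', capture: laxVideo, recv: true }
];
```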
Brilliant! I have been away for the last two weeks and just saw your comments and this PR. We will try this and get back to you with feedback and issues, if we face any. Thanks for your hard work! |
Hi, I've hit a minor hurdle! While the live session was going on, I renegotiated with tracks: [ The audio track got removed locally, and I got the following in the console from the MediaState callback: "Janus stopped receiving our audio (mid=1)". On the other participant's end, he got the following: and also the following: As you can see, the participant gets an "onremotetrack" callback for the video track that got replaced on my end, but there is no callback for the removed audio track. Is there anything I am missing or misunderstanding? |
The audio track has not been removed, it's been replaced. We're currently only calling |
Sorry, this doesn't make any sense to me! Can you elaborate on what exactly you are saying? Just in case, I will explain my confusion. If the publisher wants to remove the existing audio track and replace the video track, and thus sends the following as an offer: tracks: [ shouldn't this trigger the "onremotetrack" callback twice on the subscriber's end, once for the audio track with on = false, and once for the video track with on = true? |
Just checked another variation of the above-mentioned scenario, where the publisher removes the video & replaces the audio,
OK! I understand now that replaceTrack doesn't trigger onremotetrack. And as "remove: true" uses replaceTrack(null), it doesn't trigger "onremotetrack" either. So my question is: how can the subscriber be notified about this? I will keep testing with various other scenarios! :) |
onremotetrack has nothing to do with local tracks, so I'm not sure why you're expecting it to be fired. The only thing you may sometimes see are new calls to onlocaltrack for your own preview, since these are your own tracks you're replacing, stopping or adding. |
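The distinction above can be sketched like this (a hedged illustration; the callback signatures follow the multistream janus.js as described in this thread, so treat the exact parameters as an assumption):

```javascript
// onlocaltrack fires for tracks we captured ourselves (local preview);
// onremotetrack fires only for tracks received from Janus on this
// PeerConnection. 'log' is injected so the sketch stays testable.
function makeCallbacks(log) {
  return {
    onlocaltrack: function (track, added) {
      log('local ' + track.kind + (added ? ' added' : ' removed'));
    },
    onremotetrack: function (track, mid, added) {
      log('remote ' + track.kind + ' (mid=' + mid + ')' +
        (added ? ' added' : ' removed'));
    }
  };
}
```

Note that `replaceTrack()` swaps media inside the same sender without renegotiating, which is why subscribers keep receiving on the same mid and see no new onremotetrack.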
I am not expecting "onremotetrack" to be fired on my end when I (the publisher) remove the audio track; I am expecting it to be fired on the subscriber's end, to notify them that the audio track has been removed. Scenario -
To inform the subscribers that the audio track has been removed, on David's end "onremotetrack" should be fired, right? |
When adding a new screen type device, another video instance is captured |
The screensharing demo works just fine for me, and it does exactly that. |
Can you share the code for how you add only the screen when already in the room, and what version of the server you are using? |
One more question please: for 3 days already I haven't figured out what to do if the user does not have a camera. How can I display their video with a preview? Please help; I've asked everywhere I could. |
Those questions belong to the group, not here. If anyone can or wants to help, that's where you'll find them. |
I'll merge this in a couple of days, so please make sure you test this before that happens if you don't want to be caught unprepared. |
Everything seems to work in normal mode, although there is a slight inconvenience: removed tracks arrive twice, once in the muted: true state and once in the muted: false state, even though it's the same track. Could you make only one event arrive? Otherwise it's confusing. |
Merging. |
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [meetecho/janus-gateway](https://github.com/meetecho/janus-gateway) | minor | `v1.0.4` -> `v1.1.0` |

---

### Release Notes

<details>
<summary>meetecho/janus-gateway</summary>

### [`v1.1.0`](https://github.com/meetecho/janus-gateway/blob/HEAD/CHANGELOG.md#v110---2022-10-03)

[Compare Source](meetecho/janus-gateway@v1.0.4...v1.1.0)

- Added versioning to .so files \[[PR-3075](meetecho/janus-gateway#3075)]
- Allow plugins to specify msid in SDPs \[[PR-2998](meetecho/janus-gateway#2998)]
- Fixed broken RTCP timestamp on 32bit architectures \[[Issue-3045](meetecho/janus-gateway#3045)]
- Fixed problems compiling against recent versions of libwebsockets \[[Issue-3039](meetecho/janus-gateway#3039)]
- Updated deprecated DTLS functions to OpenSSL v3.0 \[[PR-3048](meetecho/janus-gateway#3048)]
- Switched to SHA256 for signing self signed DTLS certificates (thanks [@tgabi333](https://github.com/tgabi333)!) \[[PR-3069](meetecho/janus-gateway#3069)]
- Started using strnlen to optimize performance of some strlen calls (thanks [@tmatth](https://github.com/tmatth)!) \[[PR-3059](meetecho/janus-gateway#3059)]
- Added checks to avoid RTX payload type collisions \[[PR-3080](meetecho/janus-gateway#3080)]
- Added new APIs for cascading VideoRoom publishers \[[PR-3014](meetecho/janus-gateway#3014)]
- Fixed deadlock when using legacy switch in VideoRoom \[[Issue-3066](meetecho/janus-gateway#3066)]
- Fixed disabled property not being advertised to subscribers when VideoRoom publishers removed tracks
- Fixed occasional deadlock when using G.711 in the AudioBridge \[[Issue-3062](meetecho/janus-gateway#3062)]
- Added new way of capturing devices/tracks in janus.js \[[PR-3003](meetecho/janus-gateway#3003)]
- Removed call to .stop() for remote tracks in demos \[[PR-3056](meetecho/janus-gateway#3056)]
- Fixed missing message/info/transfer buttons in SIP demo page
- Fixed postprocessing compilation issue on older FFmpeg versions \[[PR-3064](meetecho/janus-gateway#3064)]
- Other smaller fixes and improvements (thanks to all who contributed pull requests and reported issues!)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] If you want to rebase/retry this PR, click this checkbox.

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Reviewed-on: https://git.walbeck.it/walbeck-it/docker-janus-gateway/pulls/91
Co-authored-by: renovate-bot <bot@walbeck.it>
Co-committed-by: renovate-bot <bot@walbeck.it>
I want to share my desktop stream during a video conference, simply replacing my video stream while keeping the audio stream. What should I do? |
The new multistream support added in #2211 made it possible to send/receive multiple streams to/from Janus in the same PeerConnection. That said, so far it hasn't been easy to really take advantage of that for the send part when using `janus.js`: in fact, while that basic SDK does have a way to capture devices to send, it does so by using a `media` object where you specify if you want audio and/or video (and maybe data), if they should be sent/received, and so on. As it is, the `media` object is very rigid and not flexible at all, since it doesn't allow you to add more than one audio or video stream for instance, and even just updating sessions is quite a nightmare. A few users did struggle with this, which led to proposals like #2966 for instance.

Since I wanted to change this and allow for more flexibility since even before we merged multistream support, I eventually decided to bite the bullet and start working on this. As you can see, it's currently a draft PR because it's not complete (more info below), but I wanted to share the current state of the effort nevertheless, mainly to figure out if this new approach makes sense, in particular from an API perspective.
Basically, the main idea was to drop this flat `media` object when you call `createOffer` or `createAnswer`, and use an array-based approach instead, where the user provides an array called `tracks` with info on everything they're interested in. This is an example from the updated EchoTest demo code. As you can see, the `tracks` array contains three objects: in the first one, for instance, we ask `janus.js` to capture the default audio device (`capture: true`) and we clarify we want to receive media as well (`recv: true`).

This simple example should already show how much more flexible this approach is: in fact, it shows how we can add more than one audio or video track if we wanted to, and for video possibly do simulcast for one but not all tracks, etc. In this example we're only showing how to capture generic audio, video or data, but screen sharing is supported as well, by setting `type: "screen"`.

The `capture` property is particularly important, of course, as it specifies whether we want to actually capture or send anything. Setting it to `true` is the equivalent of passing `audio: true` or `video: true` to `getUserMedia` (or `getDisplayMedia` for screensharing). For video, this property supports the same string values we had for `media` (e.g., `hires` for 1280x720), as well as explicit constraints (e.g., for ideal/exact resolutions). Passing a `MediaStreamTrack` to `capture`, instead, will have `janus.js` skip the capture part and pass the track directly to the PeerConnection: this is the equivalent of what passing an external stream did before, meaning you can still do the `getUserMedia` yourself and then pass the result to `janus.js` directly. You can see examples of that in the canvas-based demos (e.g., canvas and virtual backgrounds), where we originate a stream externally rather than in `janus.js`. Insertable Streams should be supported as well (the e2etest demo does work) by setting the transform functions in the related `tracks` item (which is an improvement on the more generic audio/video transforms we passed globally before): anyway, I haven't tested in an SFU scenario, so in case you test that, it would be nice to know if it does indeed work as it is.

It's important to point out you only need to provide a `tracks` array if you want to add/remove/update something: if you issue a renegotiation without providing anything, then nothing is changed, which is helpful when you don't really want to change anything but only, e.g., issue an ICE restart. This is supposed to be an improvement on how `media` handled that, where you had to pass weird properties like `keepAudio` or `keepVideo` to avoid unexpected changes. A `tracks` array is also optional when doing a `createAnswer` for which you don't want to capture anything, e.g., a subscription; in the updated Streaming demo, for instance, we are providing a `tracks` array only to add an (optional) data channel, just in case data channels were offered. We're not specifying any audio or video track, which means `janus.js` will automatically accept anything that is offered as `recvonly`. If you want to reject something, add a track item with something to identify it (like the `mid` of the track) and pass `recv: false` to reject it.

Updating sessions should also be fairly easy now. First of all, if you just want to replace a mic or webcam for instance, there's a new handle method called `replaceTracks` that uses the same `tracks` array and allows you to make changes without triggering a renegotiation: for example, we might ask `janus.js` to replace the video we're sending on `mid` 1 with a specific camera. You can see this API in action in the updated Device Test demo. Of course you can use a `tracks` array also to update an existing session, e.g., to replace an audio track and/or stop sending video in a session: to replace a track in a new `createOffer`/`createAnswer` you need to add a `replace: true`, while to remove one you pass `remove: true` instead, e.g., to remove a specific video stream while keeping everything else unaltered. The `add: true` property can be used to dynamically add a new track instead (e.g., to start screensharing in a session where you're sending audio and video already).

To make it easier to figure out which local and remote tracks are available, and which mids they're associated with, you can use the new handle methods `getLocalTracks()` and `getRemoteTracks()`: these are particularly useful when you want to change something in particular, but are not sure how to uniquely address it in a tracks item. The result also includes some additional info when available, like the track label: this is mostly useful on local devices, as it will tell you which device was captured for instance, but you may have a use for remote labels too.

Do you find all this exciting? I hope so, because I do need some feedback on whether this really works in practice! It does seem to do the job in the demos I've tested, but of course real usage by other developers is fundamental to understand if the API as it is needs tweaking or fixing.
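Since the original demo snippets aren't reproduced above, here's a hedged sketch of the track lifecycle just described (`pluginHandle` and the device ID are hypothetical; the property names follow the description in this PR):

```javascript
// Initial offer: capture the default mic plus a 720p webcam, receive media too
const offerTracks = [
  { type: 'audio', capture: true, recv: true },
  { type: 'video', capture: 'hires', recv: true }
];
// pluginHandle.createOffer({ tracks: offerTracks, success: (jsep) => { /* ... */ } });

// Swap the webcam without a renegotiation, addressing it by mid
const swapTracks = [
  { type: 'video', mid: '1', capture: { deviceId: { exact: 'other-cam' } } }
];
// pluginHandle.replaceTracks({ tracks: swapTracks });

// Renegotiation: drop the webcam, start screensharing instead
const updateTracks = [
  { type: 'video', mid: '1', remove: true },
  { type: 'screen', capture: true, add: true }
];
// pluginHandle.createOffer({ tracks: updateTracks, success: (jsep) => { /* ... */ } });
```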
That said, as I mentioned this is a draft because of a few existing issues and limitations that I have to work around. But before getting to that, a question you may be asking yourself...
Is the old media object still supported?
Well, in theory yes, in practice it may not always work. I did create a helper function that translates a `media` object to a `tracks` array (which is called automatically if you only pass a `media`), but it doesn't support each and every property we had, and there are rarely used properties that you may need. Besides, I haven't tested what happens when you renegotiate stuff: as I explained, you don't need to provide a `tracks` array again if you don't change anything, so in case you pass a `media` object in a renegotiation you may end up with new stuff being captured. I haven't really tested it at this stage, and while I may try and fix things like this, the idea is to migrate to `tracks` as soon as possible in a new version of Janus (1.1), without worrying too much about the burden of the legacy `media` object. Feedback is welcome of course, especially if you do test what happens with your applications that still rely on `media`.

Now, about things that still don't work or need fixing...
What's the catch?
There are a few limitations in the way this new approach is implemented right now. The plan is to sort them out before making this a mergeable PR, and of course feedback and suggestions would be more than welcome here, as for a couple of things I'm struggling to understand the best way to address them (if they should be addressed at all, that is).
~~One of the first things you'll notice is that, when you ask for both audio and video, you get different permission dialogs, and not a single one for both. This is because, since we pass an array, we traverse tracks in order, and capture them in order accordingly. I still haven't decided whether this should be changed or not, mostly because I don't have a clear idea of how to take care of this in practice. Besides the technical aspect of how to process two tracks at the same time in the existing code, there's the problem of how to do that semantically: if a `tracks` array contains two audio tracks and a video one, which ones do you ask together? Even if we assume a way is added for the developer to specify what to ask together, how do you address them within an array that doesn't have unique IDs?~~ Fixed with the (possibly implicit) concept of grouping (see comments below).

~~Another thing you'll notice is that error management is awful at the moment, or sometimes entirely missing. This is because, to process tracks in sequence, I'm making use of some `async` functions internally and `await` directives on a few WebRTC APIs, and so it's problematic to intercept errors and report them the way we did before. Cleanup is also more problematic in this context. Anyway, this will need to be done properly, so it will hopefully be addressed soon: I was more interested in getting something working first, so I didn't care too much about that for the time being.~~ This should be fixed now.

~~While we support providing tracks externally (as external streams worked before), there still isn't code to ensure external tracks are not closed when we get rid of them, e.g., via a `remove: true`. The idea is not to force `janus.js` to keep track at all, but envisage an additional property you can specify (e.g., `dontCloseTrack: true`) for when you don't want the track to be closed for whatever reason. Considering the developer knows when they provided a track they captured themselves, it should be trivial to just add the property in that case. That said, this isn't in the PR yet.~~ This should be addressed now: applications can add a `dontStop: true` property to a track when providing a `capture`, and if so `janus.js` will not stop the track itself when the time comes to get rid of it. If you're providing external tracks to `janus.js`, it would be helpful to see if that works as expected.

This should be all, feedback welcome!
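As a closing illustration, a hedged sketch of the `dontStop` option just mentioned (the helper name is mine and `pluginHandle` is hypothetical; only the `dontStop`/`capture` properties come from the description above):

```javascript
// The application captures a track itself and hands it to janus.js, asking
// it not to stop the track when the m-line is removed or the handle closes.
function buildExternalTracks(track) {
  return [
    // dontStop: true signals that the application owns this track's lifecycle
    { type: 'video', capture: track, recv: false, dontStop: true }
  ];
}

// Typical usage in a browser (sketch):
//   const stream = await navigator.mediaDevices.getUserMedia({ video: true });
//   const track = stream.getVideoTracks()[0];
//   pluginHandle.createOffer({ tracks: buildExternalTracks(track), ... });
//   ...later, when done: track.stop();  // janus.js won't do it for us
```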