Sorry guys, bad news. I had a peek at the source code (it has been some time now) and saw that it does indeed already do some smart selection on languages, e.g. ignoring "narrative" tracks. BUT that is only done for file transcoding (e.g. of recordings), where streamproxy has random access to the stream and can peek into it. For live transcoding, only this information is available, as received from OWIF:
+1:0:pat,7d0:pmt,7d1:video,7db:audio,7dc:audio,7dd:audio,835:subtitle,836:subtitle,837:subtitle,7d1:pcr,835:text
As you can see, there is no language tag present.
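To make that concrete, here's a minimal sketch of what streamproxy can get out of that line. The field layout is inferred from the example above (a leading flag, then comma-separated `hexpid:type` entries); the function name is mine, not streamproxy's:

```python
def parse_owif_info(line):
    """Parse the OWIF live-stream info line into (PID, type) pairs.
    Assumed layout: '+<flag>:' followed by comma-separated '<hex pid>:<type>'."""
    body = line.lstrip("+")
    flag, _, entries = body.partition(":")
    tracks = []
    for entry in entries.split(","):
        pid_hex, _, kind = entry.partition(":")
        tracks.append((int(pid_hex, 16), kind))
    return tracks

tracks = parse_owif_info(
    "+1:0:pat,7d0:pmt,7d1:video,7db:audio,7dc:audio,7dd:audio,"
    "835:subtitle,836:subtitle,837:subtitle,7d1:pcr,835:text"
)
audio_pids = [pid for pid, kind in tracks if kind == "audio"]
# Every entry is just (PID, type) -- there is simply nothing
# language-related to select an audio track on.
```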
This can only be fixed by:
- (as said) reimplementing streamproxy inside enigma2 itself
- OWIF adding language tags here (or duplicating the page and adding the extra tags there, to not break compatibility)
Another approach, slightly ugly, would be to stop using this mechanism altogether and fetch the stream using a "normal" streaming request. Instead of using OWIF to ask enigma to tune the tuner and assign a demuxer, and then fetching the stream from the demuxer ourselves, we would fetch the stream entirely from OWIF. I think that would also resolve your authentication issue. The problem here is that streamproxy would still need to parse part of the stream (the PMT) to find the language tags. AFAIK there is no streaming mode where only the audio track selected by the user (either explicitly or implicitly via auto language detection) is included.
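For reference, the PMT parsing involved is not huge. Here's a sketch of pulling language tags out of a PMT section, following the ISO/IEC 13818-1 layout (ISO 639 language descriptor, tag 0x0A); the function name and the hand-built sample section are mine, for illustration only:

```python
def parse_pmt_languages(section: bytes) -> dict:
    """Return {elementary PID: ISO 639 language code} from one PMT section.
    `section` starts at table_id; the CRC32 is not verified here."""
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    program_info_length = ((section[10] & 0x0F) << 8) | section[11]
    pos = 12 + program_info_length        # start of the ES loop
    end = 3 + section_length - 4          # stop before the CRC32
    languages = {}
    while pos < end:
        pid = ((section[pos + 1] & 0x1F) << 8) | section[pos + 2]
        es_info_length = ((section[pos + 3] & 0x0F) << 8) | section[pos + 4]
        dpos, dend = pos + 5, pos + 5 + es_info_length
        while dpos < dend:                # walk this ES's descriptors
            tag, dlen = section[dpos], section[dpos + 1]
            if tag == 0x0A:               # ISO 639 language descriptor
                languages[pid] = section[dpos + 2:dpos + 5].decode("ascii")
            dpos += 2 + dlen
        pos = dend
    return languages

# Minimal hand-built PMT with one audio ES (PID 0x7db) tagged "eng":
pmt = bytes([
    0x02, 0xB0, 0x18,                    # table_id, section_length = 24
    0x00, 0x01, 0xC1, 0x00, 0x00,        # program 1, version 0
    0xE7, 0xD1,                          # PCR PID 0x7d1
    0xF0, 0x00,                          # program_info_length = 0
    0x03, 0xE7, 0xDB, 0xF0, 0x06,        # MPEG audio, PID 0x7db, ES info = 6
    0x0A, 0x04, 0x65, 0x6E, 0x67, 0x00,  # descriptor 0x0A: "eng", audio_type 0
    0x00, 0x00, 0x00, 0x00,              # CRC32 placeholder
])
assert parse_pmt_languages(pmt) == {0x7db: "eng"}
```

So the information is recoverable, but only by sniffing the PMT out of the transport stream, which is exactly the kind of work this approach was supposed to avoid.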