
merge requests for PLi's git



Re: merge requests for PLi's git #341 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 12 November 2013 - 11:01

So what are the next steps? Obviously, in the end, the code should adhere to the core members' opinion :)

Would it work this way? I'll apply the patches one by one and issue a merge request for each, so we can review each patch individually. I won't apply the next patch until the current one has been reviewed.

 

@betacentauri, thanks again for doing the individual patches.



Re: merge requests for PLi's git #342 betacentauri

  • PLi® Core member
  • 7,185 posts

+323
Excellent

Posted 12 November 2013 - 13:16

Please wait another 1-2 days. I'll rework the patches (add error messages, squash patches, ...). Then you can create a merge request, or the PLi guys can commit the patches directly.
Xtrend ET-9200, ET-8000, ET-10000, OpenPliPC on Ubuntu 12.04

Re: merge requests for PLi's git #343 betacentauri

  • PLi® Core member
  • 7,185 posts

+323
Excellent

Posted 12 November 2013 - 19:59

Hi,

 

Here are the patches (note that they have new numbers!).

Patches 1 and 2 are unchanged (I have only replaced my email address with zakalibit's in all the patches).

I have moved some of the cosmetic changes from patch 4 to patch 3. I have also added the new error messages to patch 3.

 

I hope I haven't forgotten anything. As said before, I have only compiled the code; I did no testing.

And there are still some changes from zakalibit's merge request in wrappers.cpp which are not included here.

 

Attached Files


Xtrend ET-9200, ET-8000, ET-10000, OpenPliPC on Ubuntu 12.04

Re: merge requests for PLi's git #344 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 13 November 2013 - 13:40

Regarding the TS packet filtering, isn't that just a waste of CPU? (The hardware is pretty good at TS filtering and at discarding garbage.)

This chunk can be removed from 0003, probably a typo?
@@ -232,7 +369,7 @@ int eHttpStream::valid()
 
 off_t eHttpStream::length()
 {
-	return (off_t)-1;
+	return (off_t) - 1;
 }


Re: merge requests for PLi's git #345 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 14:47

Regarding the TS packet filtering, isn't that just a waste of CPU? (The hardware is pretty good at TS filtering and at discarding garbage.)

No, the problem is not that the hardware cannot filter. The problem is losing 188-byte packets, which can happen at the end of each read buffer; when that happens you see mosaic on the screen, because you lose video/audio information.



Re: merge requests for PLi's git #346 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 14:52

Here is a sample video of how it looks when you lose stream packets: https://dl.dropboxus...1B-33782794.mp4

I do not get any of this with the above fixes.



Re: merge requests for PLi's git #347 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 13 November 2013 - 15:46

And how do you avoid losing packets, just by discarding data?
Data which you did not read is lost anyway.
And garbage data is ignored/dropped by the hardware TS filters.

Re: merge requests for PLi's git #348 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 16:28

I avoid losing packets by feeding filepush a buffer whose length is a multiple of 188, i.e. of the packet size. If you look into the filepush code, you'll see that it always throws away the last packet if it was not read in full.

It has an offset variable that gets passed into the read method; that works fine with files, but not with live streams.


Edited by zakalibit, 13 November 2013 - 16:30.


Re: merge requests for PLi's git #349 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 16:56

Also, filepush has no clue whether the buffer starts at a packet boundary. Say you read 2000 bytes into the provided buffer, with the first chunk of data starting at a packet boundary: filepush will write only 1880 bytes, i.e. the 120 bytes of the last, incomplete packet are lost. The next read, say another 2000 bytes, will not start at the beginning of a packet; filepush will again throw away the last 120 bytes, and the hardware will discard the first 68 bytes too, since there is now an incomplete packet at both the start and the end of the buffer. So you start losing two packets per read.

In my implementation I make sure the buffer starts at the beginning of a packet, and the content of the last, incomplete packet is kept for the next read. That way the filtering feeds filepush chunks containing a whole number of full packets, so nothing gets chopped off by filepush and nothing is filtered away by the hardware (see the sketch below).


Edited by zakalibit, 13 November 2013 - 16:57.
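
Below is a minimal sketch of the alignment logic described above (hypothetical names; the actual merge-request code may differ): sync on the 0x47 TS sync byte, pass downstream only whole 188-byte packets, and carry the incomplete tail over to the next read.

#include <cstddef>  /* size_t */
#include <cstring>  /* memcpy, memmove */

static const size_t TS_PACKET_SIZE = 188;
static const unsigned char TS_SYNC_BYTE = 0x47;

class TsPacketAligner
{
	unsigned char tail[TS_PACKET_SIZE]; /* incomplete packet kept between reads */
	size_t tailLen;

public:
	TsPacketAligner() : tailLen(0) {}

	/*
	 * Combine the leftover tail with freshly read data and emit only whole
	 * packets. 'out' needs room for inLen + TS_PACKET_SIZE bytes. Returns
	 * the number of bytes placed in 'out', always a multiple of 188.
	 */
	size_t align(const unsigned char *in, size_t inLen, unsigned char *out)
	{
		memcpy(out, tail, tailLen);
		memcpy(out + tailLen, in, inLen);
		size_t total = tailLen + inLen;

		/* Resync if needed (e.g. the server started mid-packet). A real
		 * implementation would also verify that subsequent packets start
		 * with 0x47, since the sync byte can occur in payload data. */
		size_t start = 0;
		while (start < total && out[start] != TS_SYNC_BYTE)
			start++;

		size_t whole = ((total - start) / TS_PACKET_SIZE) * TS_PACKET_SIZE;

		/* Stash the trailing partial packet for the next call. */
		tailLen = total - start - whole;
		memcpy(tail, out + start + whole, tailLen);

		/* Move the whole packets to the front of 'out'. */
		memmove(out, out + start, whole);
		return whole;
	}
};

With something like this sitting in front of filepush, every chunk filepush sees is a multiple of 188, so its end-of-buffer truncation never drops data.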


Re: merge requests for PLi's git #350 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 13 November 2013 - 19:29

The hardware will not (or should not) throw away incomplete packets at the end of the buffer.

If, as you say, filepush discards incomplete packets at the end of the buffer, I think we should fix filepush instead of implementing a CPU-intensive software filter in eHttpStream?

Re: merge requests for PLi's git #351 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 21:36

The hardware will not (or should not) throw away incomplete packets at the end of the buffer.

If, as you say, filepush discards incomplete packets at the end of the buffer, I think we should fix filepush instead of implementing a CPU-intensive software filter in eHttpStream?

 

So what would a different, less CPU-intensive filepush do? It would have to do similar processing, which is not needed if the source is not an HTTP stream: say you're reading from a file, then you simply seek to the offset value you get in timedRead(). This processing in httpstream is source specific, so I would not push that logic into filepush. I also don't think it is that CPU intensive: I'm streaming at the moment on an amico alient 1, enigma2's CPU usage is between 2-8%, and the box is 84-90% idle according to top, on a single-core CPU.

 

Even with this change, httpstream's CPU usage is a fraction of, say, gstreamer's, yet nobody worries about micro-optimizing gstreamer. I'd worry about CPU cycles when there are no spare cycles left for other threads/processes.



Re: merge requests for PLi's git #352 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 13 November 2013 - 21:48

eHttpStream::read is always called with a multiple of 188.
What if we just make sure it always returns a multiple of 188 (and store any leftovers in an intermediate buffer of 187 bytes)?
That would give the same result, but avoids having to sync on 0x47 all the time.

Actually, looking at the patch again, that's more or less what you are doing. And assuming we stay in sync, the cpu overhead is almost zero.

One more remark though: assuming that the stream delivers nice multiples of 188, you always end up doing two timedReads in the eHttpStream::read call, instead of just one.
That's suboptimal for the fill level of the hardware buffer (especially for live streams, where the watermark might be rather low), because you end up reading in larger chunks (at larger intervals) than you would with just a single timedRead.
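
A rough sketch of that leftover-buffer suggestion (hypothetical helper, not the actual eHttpStream code; it assumes the source delivers packet-aligned data, so no 0x47 scan is needed):

#include <cstring>      /* memcpy */
#include <sys/types.h>  /* ssize_t */

static const size_t PACKET_SIZE = 188;

static unsigned char leftover[PACKET_SIZE - 1]; /* at most 187 leftover bytes */
static size_t leftoverLen = 0;

/* 'rawRead' stands in for the underlying timedRead; 'count' is assumed to
 * be a multiple of 188, as eHttpStream::read is always called with one.
 * EOF handling (draining a non-empty leftover buffer) is elided here. */
ssize_t alignedRead(int fd, unsigned char *buf, size_t count,
                    ssize_t (*rawRead)(int, void *, size_t))
{
	memcpy(buf, leftover, leftoverLen);
	ssize_t n = rawRead(fd, buf + leftoverLen, count - leftoverLen);
	if (n < 0)
		return n;

	size_t total = leftoverLen + (size_t)n;
	leftoverLen = total % PACKET_SIZE;
	memcpy(leftover, buf + total - leftoverLen, leftoverLen);
	return (ssize_t)(total - leftoverLen); /* always a multiple of 188 */
}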

Re: merge requests for PLi's git #353 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 13 November 2013 - 22:17

eHttpStream::read is always called with a multiple of 188.
What if we just make sure it always returns a multiple of 188 (and store any leftovers in an intermediate buffer of 187 bytes)?
That would give the same result, but avoids having to sync on 0x47 all the time.

Actually, looking at the patch again, that's more or less what you are doing. And assuming we stay in sync, the cpu overhead is almost zero.

One more remark though: assuming that the stream delivers nice multiples of 188, you always end up doing two timedReads in the eHttpStream::read call, instead of just one.
That's suboptimal for the fill level of the hardware buffer (especially for live streams, where the watermark might be rather low), because you end up reading in larger chunks (at larger intervals) than you would with just a single timedRead.

 

To be honest, I would not have any concerns about the two timedReads; they have not caused any problems for me. In this case, algorithm correctness and playback reliability are the main priorities. I have tested this code with about 4-5 IPTV providers over the last year, and there is a real issue behind every one of these changes; I can't even recall them all.

So, to reiterate: I see no visible benefit to the end user in optimizing those reads, while we would risk image quality in "extreme" situations. I find all this hard to argue, as some of the code paths cover edge cases that are hard to prove or demonstrate.

 

Actually, we could do the filtering only on the first read, making sure the stream is in sync. I think I had this in one of the versions and cannot recall any issues with it; then we could probably remove the double read. But we definitely need to filter at least on the first read: I had intermittent issues with one IPTV provider whose streams did not start at the beginning of a packet. (See the sketch below.)


Edited by zakalibit, 13 November 2013 - 22:18.
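
A sketch of that compromise (hypothetical names): scan for the 0x47 sync byte only until the stream is in sync once, then trust it to stay packet-aligned as long as reads remain multiples of 188.

#include <cstddef>  /* size_t */
#include <cstring>  /* memmove */

static bool streamSynced = false;

/* Discard leading garbage up to the first 0x47 and return the new length.
 * After the first successful sync this becomes a no-op. */
size_t syncFirstRead(unsigned char *buf, size_t len)
{
	if (streamSynced)
		return len;

	size_t skip = 0;
	while (skip < len && buf[skip] != 0x47)
		skip++;
	if (skip < len)
		streamSynced = true;

	memmove(buf, buf + skip, len - skip);
	return len - skip;
}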


Re: merge requests for PLi's git #354 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 13 November 2013 - 22:50

OK, sounds like a good compromise. (I am seriously concerned about anything which could have a negative effect on the hardware buffer levels; some receivers already have/had issues with lost audio, because they do not keep their DVR buffer full enough when playing a live stream at live speed.)

Re: merge requests for PLi's git #355 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 14 November 2013 - 10:40

OK, sounds like a good compromise. (I am seriously concerned about anything which could have a negative effect on the hardware buffer levels; some receivers already have/had issues with lost audio, because they do not keep their DVR buffer full enough when playing a live stream at live speed.)

The same problem would exist with chunked transfer; we have a few reads there. I'm trying to imagine under which circumstances the buffers wouldn't be kept full enough, and the only one I can think of is a struggling stream; otherwise the code, even with two reads, is fast enough.

 

If you look at the chunked-read branch, the second read happens with a very small timeout if we already have something in the buffer; we could do the same in the non-chunked branch.

Another thing that would be good to have there is a read timeout that is a function of the stream bitrate and network performance: at the moment, when the stream is struggling, the picture does not play smoothly, i.e. it should do some sort of buffering. But that is hard to figure out in httpstream, because even when we read little, the hardware buffers could still be quite full from previous reads. We could keep a total-bytes-read counter and the elapsed time: if total bytes read * 8 / elapsed time does not give us a number >= the stream bitrate, it means we have to keep reading until we have filled the buffer with a certain amount of data. This is off the top of my head; it would require some prototyping and testing. (A rough sketch follows below.)


Edited by zakalibit, 14 November 2013 - 10:40.
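
A back-of-the-envelope sketch of that bitrate heuristic (purely hypothetical, not part of eHttpStream):

#include <cstddef>  /* size_t */
#include <ctime>    /* time, difftime */

class RateTracker
{
	unsigned long long totalBytes;
	time_t start;

public:
	RateTracker() : totalBytes(0), start(time(NULL)) {}

	void account(size_t bytesRead) { totalBytes += bytesRead; }

	/* Average delivered bitrate so far, in bits per second. */
	double deliveredBitrate() const
	{
		double elapsed = difftime(time(NULL), start);
		return elapsed > 0 ? (totalBytes * 8.0) / elapsed : 0.0;
	}

	/* True when delivery lags the nominal stream bitrate, i.e. we should
	 * keep filling the buffer before pushing data downstream. */
	bool shouldBuffer(double streamBitrate) const
	{
		return deliveredBitrate() < streamBitrate;
	}
};

As noted, a real implementation would likely want a sliding window rather than a lifetime average; that is exactly the prototyping and testing mentioned above.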


Re: merge requests for PLi's git #356 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 15 November 2013 - 13:57

Pieter, why this:

 

request.append("User-Agent: ").append("Enigma2").append("\r\n");

 

and not 

 

request.append("User-Agent: Enigma2"\r\n");

 

Why waste CPU here? :)


Edited by zakalibit, 15 November 2013 - 13:59.


Re: merge requests for PLi's git #357 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 15 November 2013 - 16:06

Because I expect the user-agent value to be configurable some day, it seemed to make sense to split header + value (just like we do with the other headers, by the way).
I intended to use the PACKAGE define, but that turned out not to be readily available, hence the fixed string.
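
For illustration, the split keeps that door open; a hypothetical configurable variant (the userAgent parameter is not in the actual code):

#include <string>

/* Build the header from a value that could one day come from settings. */
std::string userAgentHeader(const std::string &userAgent = "Enigma2")
{
	std::string header("User-Agent: ");
	header.append(userAgent).append("\r\n");
	return header;
}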

Re: merge requests for PLi's git #358 Persian Prince

  • Senior Member
  • 1,982 posts

+247
Excellent

Posted 17 November 2013 - 21:06

I think you should take a look at http://sourceforge.n...faceAssignment/

 

It seems that it's broken and not working correctly.


Open Vision sources: https://github.com/OpenVisionE2


Re: merge requests for PLi's git #359 zakalibit

  • Senior Member
  • 51 posts

+5
Neutral

Posted 20 November 2013 - 17:05

I have started from scratch :)

Here are the chunked-transfer changes: https://sourceforge....ge-requests/27/



Re: merge requests for PLi's git #360 pieterg

  • PLi® Core member
  • 32,766 posts

+245
Excellent

Posted 20 November 2013 - 17:32

Thanks, merged.

