
Direct streaming with Edision OS Nino Pro


14 replies to this topic

#1 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 3 April 2023 - 10:05

Hello all,

 

I need some help with direct streaming and this particular STB, Edision OS Nino Pro.

 

I'm using direct streaming with the MidnightStreamer panel. Direct streams don't use FFmpeg; it's raw data streaming.

 

Well, all STBs and players work OK except this particular Edision model.

 

Streams open, but there are occasional glitches at random. If I increase the buffer timeout in the panel from 200ms to 400ms, the glitches go away.

 

You could think: OK, problem solved, 200ms more or less doesn't make any difference. But that's not the case. I use direct streaming on demand through multiple hops, and I have a lot of Edision OS Nino Pro boxes attached to each hop. 200ms for 7 hops means a delay of more than 1 second for STBs opening streams from the last hop.

 

I have tried all versions of OpenPLi up to the latest 8.3, but it makes no difference. Occasional glitches and artifacts are always there if the timeout is set to 200ms.

 

I wonder what's special about this model that causes the glitches. Less network buffer memory, a problem with the network drivers, a problem with the video decoder? Why does increasing the timeout to 400ms solve the problem?

 

MidnightStreamer support told me to contact OpenPLi or the device manufacturer, as this is happening only on a single device, and I can't say they're wrong: I have tried many STBs (Edision Mini, Nino, Mio, all VU+ STBs, all MAGs) and players (VLC, ExoPlayer), but it only happens on the Edision OS Nino Pro.

 

Does anyone know what is different about this particular model?

 

Thank you

 



Re: Direct streaming with Edision OS Nino Pro #2 neo

  • PLi® Contributor
  • 715 posts

+48
Good

Posted 3 April 2023 - 15:59

The ethernet PHY is provided by the SoC, a Broadcom BCM73625, which was quite popular among low-budget STBs. For comparison, you will also find it in the VU+ Zero.

 

The implementation is in general very poor, even on high-end STBs, compared to ethernet devices for PCs.

 

My guess would be either the hardware design or the drivers; for both you would need to contact Edision.



Re: Direct streaming with Edision OS Nino Pro #3 neo

  • PLi® Contributor
  • 715 posts

+48
Good

Posted 3 April 2023 - 16:00

P.S. This smells very much like an illegal IPTV setup, and we don't condone illegal activities here!



Re: Direct streaming with Edision OS Nino Pro #4 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 4 April 2023 - 07:33

Thank you for providing this info.

 

The VU+ Zero works excellently even with a 100ms timeout. However, I don't like this STB because its ethernet port burns out quite quickly, even from small voltage surges.

 

Does OpenPLi have a network buffer parameter that can be adjusted?

 

I am not doing any illegal activity. My company has signed an agreement with local Bulgarian and Macedonian TV stations to bring their signal to Bulgarian and Macedonian speakers in Germany and Canada.



Re: Direct streaming with Edision OS Nino Pro #5 neo

  • PLi® Contributor
  • 715 posts

+48
Good

Posted 4 April 2023 - 13:24

Does OpenPLi have a network buffer parameter that can be adjusted?

 

It has a 188 KiB buffer (1024 MPEG-TS packets of 188 bytes each), but it is filled and emptied asynchronously; there is no parameter to delay it to make sure the buffer doesn't run empty.
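For reference, the buffer size described above works out as follows. This is just a sketch of the arithmetic; the 2 Mbit/s stream bitrate is an assumed example, not a figure from the thread:

```shell
# OpenPLi's streaming buffer: 1024 MPEG-TS packets, 188 bytes each.
PACKETS=1024
TS_PACKET_BYTES=188
BUFFER_BYTES=$((PACKETS * TS_PACKET_BYTES))
echo "buffer size: ${BUFFER_BYTES} bytes"   # 192512 bytes = 188 KiB

# How long that buffer lasts at an assumed 2 Mbit/s stream bitrate:
BITRATE_BPS=2000000
MS=$((BUFFER_BYTES * 8 * 1000 / BITRATE_BPS))
echo "drains in ~${MS} ms at 2 Mbit/s"      # ~770 ms
```

At typical SD/HD bitrates the buffer holds well under a second of video, which is why it must be refilled in near real time.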

 

But I doubt your problem originates there, as this is a near-realtime process, and it's the same on all boxes.

 

Have you tested the box locally, directly connected, to rule out latency and jitter issues elsewhere in the connection? The problems you describe are identical to what you get when streaming over wifi.

 

Given that it is a business setup, wouldn't it be a lot cheaper to just replace the box with one that works well, instead of spending all this time on it?



Re: Direct streaming with Edision OS Nino Pro #6 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 5 April 2023 - 07:07

I was hoping there was a buffer size parameter to increase. 30 kB of data causes glitches, while 60 kB does not. I don't know how to interpret that. The TCP/IP stack? The video decoder starting to decode before there is sufficient data in the video buffer?
 
Testing the box locally was the first thing I did, and the result is the same.
 
Replacing all the Edision Nino Pros is the next step if I don't find a solution by asking in the forums.
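For what it's worth, the 30 kB per 200 ms figure above implies a stream bitrate of roughly 1.2 Mbit/s. A quick sanity check of that arithmetic (assuming kB means 1000 bytes here):

```shell
# 30 kB buffered per 200 ms timeout implies this stream bitrate:
BYTES=30000
WINDOW_MS=200
BPS=$((BYTES * 8 * 1000 / WINDOW_MS))
echo "${BPS} bit/s"   # 1200000 bit/s, i.e. ~1.2 Mbit/s
```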


Re: Direct streaming with Edision OS Nino Pro #7 Dimitrij

  • PLi® Core member
  • 10,327 posts

+350
Excellent

Posted 5 April 2023 - 08:55

 

You could think: OK, problem solved, 200ms more or less doesn't make any difference. But that's not the case. I use direct streaming on demand through multiple hops, and I have a lot of Edision OS Nino Pro boxes attached to each hop. 200ms for 7 hops means a delay of more than 1 second for STBs opening streams from the last hop.

Is one second really a lot?

On my receivers, the stream opens in 2-3 seconds and this is not a problem.


GigaBlue UHD Quad 4K /Lunix3-4K/Duo 4K


Re: Direct streaming with Edision OS Nino Pro #8 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 5 April 2023 - 09:24

This is an extra 1.4 seconds added on top of the "base" open time:

 

base: 200ms * 7 hops = 1400ms

extra at 400ms: another 200ms * 7 hops = 1400ms

 

If I increase the timeout to 400ms, the extra 1400ms is added to the base 1400ms: 2800ms in total. And that is a minimum; in practice the delay can be bigger, especially for customers in Canada.

 

With a 200ms timeout, the open time is max 3 seconds even in Canada.

 

3 or more seconds is not acceptable for our customers.
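The delay arithmetic above can be sketched as a quick shell check, using the hop count and timeouts from the thread:

```shell
# Per-hop buffer timeout multiplied over the 7-hop chain.
HOPS=7
for TIMEOUT_MS in 200 400; do
    echo "${TIMEOUT_MS} ms x ${HOPS} hops = $((TIMEOUT_MS * HOPS)) ms"
done
# 200 ms gives 1400 ms; 400 ms gives 2800 ms, hence the extra ~1.4 s.
```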



Re: Direct streaming with Edision OS Nino Pro #9 Dimitrij

  • PLi® Core member
  • 10,327 posts

+350
Excellent

Posted 5 April 2023 - 10:19

Many years ago on this forum there was an Xtrend ET9000 topic where a script for network stability was posted. It helped, in particular for streaming.

There were many options.

There were many options.

 

ice-network-tuner

#!/bin/sh
#DESCRIPTION=This script will set ETH0 to CPU2 and add Networktweaks
echo 000002 > /proc/irq/16/smp_affinity # eth0
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.core.rmem_default=65536
sysctl -w net.core.wmem_default=65536
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
sysctl -w net.ipv4.route.flush=1
echo ""
echo "Now eth0 on CPU2 and tweaks set....."
echo ""
sleep 2

 


Edited by Dimitrij, 5 April 2023 - 10:22.

GigaBlue UHD Quad 4K /Lunix3-4K/Duo 4K


Re: Direct streaming with Edision OS Nino Pro #10 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 5 April 2023 - 11:57

I ran this script, and it's been more than an hour without a single glitch. I'll do more tests to make sure it's not a placebo.

 

Thank you very much for the script! I'll keep you posted.



Re: Direct streaming with Edision OS Nino Pro #11 neo

  • PLi® Contributor
  • 715 posts

+48
Good

Posted 5 April 2023 - 14:32

The script increases the buffers used in the TCP stack; for your use-case you probably only need the wmem values.

 

To make these permanent, add them (without the "sysctl -w") to /etc/sysctl.conf.
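The suggestion above could look like the following sketch. The values are the wmem settings from the first script posted in this thread; adjust them to your own needs:

```shell
# Persist the TCP write-buffer tuning across reboots by appending the
# settings (without "sysctl -w") to /etc/sysctl.conf.
cat >> /etc/sysctl.conf <<'EOF'
net.core.wmem_max = 8388608
net.core.wmem_default = 65536
net.ipv4.tcp_wmem = 4096 65536 8388608
EOF

# Apply immediately without rebooting:
sysctl -p /etc/sysctl.conf
```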

 

You can then also test whether the CPU affinity hack is needed. If so, it's an indication the box is too busy with something; streaming in itself isn't CPU-bound.



Re: Direct streaming with Edision OS Nino Pro #12 Dimitrij

  • PLi® Core member
  • 10,327 posts

+350
Excellent

Posted 5 April 2023 - 16:05

Only for info:

/etc/rc3.d/S01network_tuner.sh

#!/bin/sh
#DESCRIPTION=This script will set ETH0 to CPU2 and add Networktweaks
#echo 000002 > /proc/irq/8/smp_affinity # GFX
#echo 000002 > /proc/irq/10/smp_affinity # RPTD
#echo 000002 > /proc/irq/13/smp_affinity # BVNF0
#[ -f /proc/irq/16/smp_affinity ] && echo 000002 > /proc/irq/16/smp_affinity # eth0
#echo 000002 > /proc/irq/31/smp_affinity # AVD0
#[ -f /proc/irq/39/smp_affinity ] && echo 000002 > /proc/irq/39/smp_affinity # eth0
#[ -f /proc/irq/42/smp_affinity ] && echo 000002 > /proc/irq/42/smp_affinity # sata_brcmstb
#[ -f /proc/irq/44/smp_affinity ] && echo 000002 > /proc/irq/44/smp_affinity # sata_brcmstb
#echo 000002 > /proc/irq/57/smp_affinity # ehci_hcd:usb2
#echo 000002 > /proc/irq/59/smp_affinity # PCR
#echo 000002 > /proc/irq/62/smp_affinity # ehci_hcd:usb1
#echo 000002 > /proc/irq/63/smp_affinity # ohci_hcd:usb3
#echo 000002 > /proc/irq/64/smp_affinity # ohci_hcd:usb4
ulimit -n 4096
ulimit -s 16384
ifconfig eth0 txqueuelen 50000
ifconfig eth0 promisc
ethtool -K eth0 gro off
ethtool -K eth0 gso off
#echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
#echo 500 512000 64 2048 > /proc/sys/kernel/sem
#echo 268435456 > /proc/sys/kernel/shmmax
#echo 2048 > /proc/sys/kernel/msgmni
#echo 64000 > /proc/sys/kernel/msgmax
sysctl -w fs.file-max=209708
sysctl -w vm.swappiness=10
sysctl -w vm.dirty_ratio=60
sysctl -w vm.dirty_background_ratio=2 
sysctl -w vm.vfs_cache_pressure=50
sysctl -w vm.mmap_min_addr=4096
sysctl -w vm.overcommit_ratio=0
sysctl -w vm.overcommit_memory=0
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.rmem_default=131072
sysctl -w net.core.wmem_default=131072
sysctl -w net.core.somaxconn=32768
sysctl -w net.core.optmem_max=65536
sysctl -w net.core.hot_list_length=1024
sysctl -w net.ipv4.tcp_rmem='8192 87380 16777216'
sysctl -w net.ipv4.tcp_wmem='8192 65536 16777216'
sysctl -w net.ipv4.tcp_mem='65536 131072 262144'
sysctl -w net.ipv4.udp_mem='65536 131072 262144' 
sysctl -w net.ipv4.tcp_max_orphans=16384
sysctl -w net.ipv4.tcp_orphan_retries=0
sysctl -w net.ipv4.ipfrag_high_thresh=512000
sysctl -w net.ipv4.ipfrag_low_thresh=446464
sysctl -w net.ipv4.tcp_rfc1337=1
sysctl -w net.ipv4.ip_no_pmtu_disc=0
sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_dsack=1
sysctl -w net.ipv4.tcp_fack=1
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_ecn=0
sysctl -w net.ipv4.tcp_congestion_control=cubic
sysctl -w net.ipv4.ip_forward=0
sysctl -w net.ipv4.tcp_no_metrics_save=1
sysctl -w net.ipv4.conf.all.forwarding=0
sysctl -w net.ipv4.conf.default.forwarding=0
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.default.send_redirects=0
sysctl -w net.ipv4.conf.all.accept_source_route=0
sysctl -w net.ipv4.conf.default.accept_source_route=0
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.conf.default.accept_redirects=0
sysctl -w sunrpc.tcp_slot_table_entries=32
sysctl -w sunrpc.udp_slot_table_entries=32
sysctl -w net.unix.max_dgram_qlen=50
#sysctl -w net.ipv4.tcp_frto=2
#sysctl -w net.ipv4.tcp_frto_response=2
#sysctl -w net.core.netdev_max_backlog=250000
#sysctl -w net.ipv4.tcp_moderate_rcvbuf=0
#sysctl -w net.ipv4.tcp_low_latency=0
sysctl -w net.ipv4.route.flush=1
echo 1 > /proc/sys/net/ipv4/ip_forward
#echo htcp > /proc/sys/net/ipv4/tcp_congestion_control   # default cubic
#sysctl -w net.ipv4.tcp_congestion_control=htcp
#echo fq > /proc/sys/net/core/default_qdisc   # default pfifo_fast
 
## echo 0 > /proc/sys/net/ipv4/tcp_tw_reuse                     # already set
## echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle                   # already set
## echo 1 > /proc/sys/net/ipv4/tcp_syncookies                   # already set
## echo 1 > /proc/sys/net/ipv4/tcp_window_scaling               # already set
## echo 1 > /proc/sys/net/ipv4/tcp_timestamps                   # already set
## echo 1 > /proc/sys/net/ipv4/tcp_sack                         # already set
echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle           # for http recommended?
echo 5000 > /proc/sys/net/core/netdev_max_backlog

# increase Linux autotuning TCP buffer limit to 32MB
echo 4096 87380 33554432 > /proc/sys/net/ipv4/tcp_rmem
echo 4096 87380 33554432 > /proc/sys/net/ipv4/tcp_wmem

# allow testing with buffers up to 64MB
echo 67108864 > /proc/sys/net/core/wmem_max
echo 67108864 > /proc/sys/net/core/rmem_max


echo ""
echo "*******************************************************************"
echo "* Ice-Network-Tuner v1.4                                          *"
echo "* eth0/sata on CPU2 and Tweaks are activated.....now ;)           *"
echo "*                                                                 *"
echo "*******************************************************************"
echo ""
sleep 2
exit 0

 


GigaBlue UHD Quad 4K /Lunix3-4K/Duo 4K


Re: Direct streaming with Edision OS Nino Pro #13 microboi37

  • Senior Member
  • 74 posts

0
Neutral

Posted 6 April 2023 - 09:04

There have not been any glitches since yesterday. At this point I doubt it's by chance. I don't know which command from the first script helped; I'd have to try them one by one to find out, but I'm fine with that as long as it works.

 

You saved me from a lot of headaches!

 

Thank you very much for your help!



Re: Direct streaming with Edision OS Nino Pro #14 neo

  • PLi® Contributor
  • 715 posts

+48
Good

Posted 6 April 2023 - 12:23

My money is on "net.ipv4.tcp_wmem", which enlarges the write buffer.
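A quick way to check this guess on a box is to read the current limits from procfs. The 8 MiB threshold below is simply the max value from the first tuning script in this thread, used as an illustrative comparison point:

```shell
# tcp_wmem holds three values: min, default, max (bytes per socket).
read MIN DEF MAX < /proc/sys/net/ipv4/tcp_wmem
echo "tcp_wmem: min=${MIN} default=${DEF} max=${MAX}"

# Compare against the 8 MiB max that the tuning script sets.
if [ "$MAX" -lt 8388608 ]; then
    echo "write-buffer max is below 8 MiB; the firmware default may be the culprit"
fi
```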



Re: Direct streaming with Edision OS Nino Pro #15 Erik Slagter

  • PLi® Core member
  • 46,969 posts

+542
Excellent

Posted 9 April 2023 - 10:12

Mine too. And also some money on this parameter normally being set "right", but overridden by the manufacturer due to low memory in the receiver. Although I'm not quite a VU+ fanboy, I must say they at least fit their receivers with a decent amount of RAM and gigabit ethernet interfaces. I don't think it's very smart to use such a low-end receiver (containing a BCM73625 SoC) for business purposes.


* Wavefrontier T90 with 28E/23E/19E/13E via SCR switches 2 x 2 x 6 user bands
I don't read PM -> if you have something to ask or to report, do it in the forum so others can benefit. I don't take freelance jobs.


