I would like to know the processing involved for different combinations of input/output resolution and scanning method, along the lines of http://forums.openpl...de-interlacing/
From the other thread, I know that 576i50 to 1080i50 conversion involves 576i50 -> 576p50 -> 1080p50 -> 1080i50, but if the final output is 1080p50, does it skip the last conversion or add another 1080i50 -> 1080p50 conversion?
If both the STB and the TV support 1080p50 and 1080i50, which is the better output format for 576i50 input, 1080p50 or 1080i50? Is the format that involves fewer processing steps the better choice (assuming the TV and STB have the same SoC)?
Is it possible to use the following examples to explain the process? And how can I tell what 1080p frame rates my STB can support? My TV claims to support 1080p50.
576i50 -> 1080p50
576p50 -> 1080p50
576p50 -> 1080i50
Can the following generic conversions be summarized as sequences of processing steps?
[res1]p[fps1] -> [res2]p[fps2]
[res1]i[fps1] -> [res2]p[fps2]
[res1]p[fps1] -> [res2]i[fps2]
[res1]i[fps1] -> [res2]i[fps2]
res = resolution, fps = frames per second
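To make the question concrete, here is a minimal sketch of what I *guess* the decomposition rule might be, based only on the 576i50 -> 576p50 -> 1080p50 -> 1080i50 chain from the other thread: deinterlace first if the input is interlaced, then scale if the resolutions differ, then convert the frame rate, and finally re-interlace if the output is interlaced. The function name and the rule itself are my assumptions, not anything confirmed for a real SoC:

```python
import re

def conversion_steps(src, dst):
    """Guess the processing chain from e.g. '576i50' to '1080p50'.

    Assumed order (my guess from the deinterlacing thread):
    deinterlace -> scale -> frame-rate convert -> re-interlace.
    """
    def parse(fmt):
        res, scan, rate = re.match(r"(\d+)([ip])(\d+)", fmt).groups()
        return int(res), scan, int(rate)

    res1, scan1, rate1 = parse(src)
    res2, scan2, rate2 = parse(dst)

    steps = [src]
    if scan1 == "i":                      # deinterlace to progressive
        steps.append(f"{res1}p{rate1}")
    if res1 != res2:                      # scale to target resolution
        steps.append(f"{res2}p{rate1}")
    if rate1 != rate2:                    # frame-rate conversion
        steps.append(f"{res2}p{rate2}")
    if scan2 == "i":                      # re-interlace for the output
        steps.append(dst)
    return " -> ".join(steps)
```

Under this guess, 576i50 -> 1080i50 gives the full chain from the other thread, 576i50 -> 1080p50 simply skips the final re-interlacing step, and 576p50 -> 1080i50 becomes 576p50 -> 1080p50 -> 1080i50. Whether real hardware actually works this way is exactly what I'm asking.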