What Do I Know?! (Čo ja viem) - vol 1.
Real-time system improvements
I recently did an upgrade of the real-time graphics system for the local TV show What Do I Know?! (original format by Talpa). It was not actually just an upgrade but rather a completely new system, since I built it from scratch. The main goal was to make it as optimized and clean as possible. I also decided to create a "real-time version" of all graphical elements in order to have complete control over their look and animation through a set of parameters.
Real-time graphical elements
To put this in context, I should explain that during previous shows I usually used pre-rendered stills and videos of graphical elements. Even though this may have a smaller performance impact than creating elements in real-time, it can be quite complicated to switch between many image sequences based on the required type of element, its style and its current state. Since I had enough time, I decided to make everything on my own, all in real-time, and also nicer than the original design I was supposed to use for this show. (Truth is that I couldn't produce something vastly different from the original design, so I had to stay within certain boundaries.) In the end I managed to get smooth 60 fps (even though I am using 50 fps for the broadcast standard) without frame drops, even during the "heaviest" parts of the show.
The best part of this setup was the fact that I could change the look of all elements very easily, without pre-rendering and exporting anything. Should this box have sharper corners and a faster animation? No problem. Let me just adjust this value and see instant changes on all cloned elements.
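As a rough illustration of this idea, here is a minimal Python sketch. It is not the actual show code, and the names and values are made up for this example; it only shows how a single shared style object can drive every cloned element:

```python
# A minimal sketch (not the production setup): one shared style object
# drives every cloned element, so a single tweak updates all of them.
from dataclasses import dataclass

@dataclass
class ElementStyle:
    corner_radius: float = 12.0   # px; lower = sharper corners
    anim_duration: float = 0.4    # seconds per transition

STYLE = ElementStyle()            # single source of truth ("master")

class Element:
    def __init__(self, text: str):
        self.text = text

    def draw(self, t: float) -> None:
        # Every clone reads the shared style at draw time, so changing
        # STYLE.corner_radius or STYLE.anim_duration is instantly visible.
        progress = min(t / STYLE.anim_duration, 1.0)
        print(f"{self.text}: radius={STYLE.corner_radius}, progress={progress:.2f}")

# Sharper corners and a faster animation for all clones with one change:
STYLE.corner_radius = 4.0
STYLE.anim_duration = 0.25
Element("Answer box").draw(t=0.1)
```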


Reordering elements
One particular feature I wanted to implement was the ability to visually reorder elements. What Do I Know?! is a knowledge game show where three celebrities, 200 students and TV viewers with a smartphone application answer all sorts of questions. In some of them, elements are displayed in the wrong order (leaving players with the task of finding the right one), and the elements should eventually reorder to form the correct answer. This was originally done only by exchanging text between elements, which was not particularly pleasing to watch. I always enjoyed Eurovision's voting graphics, composed of elements that reorder based on point count with a smooth transition (take a look at Eurovision voting graphics).
Therefore I made a setup where elements can reorder with a nice animation. It is done in 2D with a bit of "perspective trickery" to make it look like 3D. The whole setup is driven by an array of values that represent the target position of the element at each index. In the following gif I am changing the default order of elements to represent the correct answer.
(E.g. element no. 4, "Catch Me If You Can", gets the value 6, which means it will move from position 4 to position 6.)
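A minimal Python sketch of this mapping could look like the following. The names are hypothetical, and the real setup animates the transition rather than swapping instantly; this only shows how the array of target positions is interpreted:

```python
# Sketch of the array-driven reordering described above.
# order[i] is the target position of the element currently at position i
# (1-based, matching the example: the element at position 4 gets value 6).
def reorder(elements: list[str], order: list[int]) -> list[str]:
    result: list[str] = [""] * len(elements)
    for i, target in enumerate(order):
        result[target - 1] = elements[i]
    return result

elements = ["A", "B", "C", "Catch Me If You Can", "E", "F"]
order = [1, 2, 3, 6, 5, 4]   # element no. 4 moves to position 6 (and 6 to 4)
print(reorder(elements, order))
# ['A', 'B', 'C', 'F', 'E', 'Catch Me If You Can']
```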


Elements moving up are scaled up during the animation, while elements moving down are scaled down. This essentially looks like the elements are moving in 3D space, even though they are all just GLSL textures. Combining them together presents another challenge, as the ones moving down should be behind the ones moving up. I managed to get the desired effect fairly simply: first by defining the order in which these textures should be composited (using the over operation) and then by transforming them. The whole scaling functionality was done inside the graphical element I created. That means I could tweak its behavior (not just for reordering, but basically any behavior I defined for it) just by changing the master element, propagating changes automatically to all clones.
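For illustration, here is a rough Python sketch of this idea. The easing function and the scale extremes (1.15 and 0.85) are my assumptions for this example, not the production values, and the real version lives inside the GLSL element:

```python
# Sketch: elements moving up scale up slightly, elements moving down
# scale down, and the draw order puts downward movers behind.
def ease(t: float) -> float:
    return t * t * (3.0 - 2.0 * t)        # smoothstep easing, 0..1

def animate(start: int, target: int, t: float, row_height: float = 1.0):
    p = ease(t)
    y = (start + (target - start) * p) * row_height
    moving_up = target < start            # smaller position index = higher on screen
    peak = 1.15 if moving_up else 0.85    # assumed scale extremes
    # scale ramps to its peak mid-animation and back to 1.0 at the end
    scale = 1.0 + (peak - 1.0) * (1.0 - abs(2.0 * p - 1.0))
    return y, scale, moving_up

# Composite back-to-front with the over operation:
# downward movers first (behind), upward movers last (in front).
moves = [(4, 6), (6, 4)]                            # (start, target) pairs
draw_order = sorted(moves, key=lambda m: m[1] < m[0])  # down movers first
```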
This reordering setup is designed to work on an arbitrary number of elements. It is perfectly fine to reorder either 3 or 50 elements in multiple columns; the animation works as expected in each case.
Image scaling and positioning
Some questions also include pictures. I wanted to make the best use of the available space, so I came up with a simple setup to perform image scaling and positioning. Question-makers always provide me with pictures scaled to their largest preferred size, in a format suitable directly for broadcast output (first image below). Since I didn't want to make their life harder by requesting cropped images, I perform automatic border detection (using the alpha channel). The image is then auto-cropped based on these borders (second image below).
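A sketch of this auto-cropping step might look like the following, assuming the image arrives as an RGBA numpy array. This mirrors the described approach rather than the exact production code:

```python
# Sketch of alpha-based auto-cropping: find the bounding box of pixels
# whose alpha exceeds a threshold, then crop to it.
import numpy as np

def autocrop(rgba: np.ndarray, threshold: int = 8) -> np.ndarray:
    # Pixels with alpha <= threshold count as empty border; the threshold
    # tolerates slightly "messy" alpha channels (see below).
    opaque = rgba[..., 3] > threshold
    if not opaque.any():
        return rgba  # fully transparent image; leave it untouched
    rows = np.any(opaque, axis=1)
    cols = np.any(opaque, axis=0)
    top, bottom = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    left, right = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    return rgba[top:bottom, left:right]
```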


Once I have the image in its "native resolution", I can perform the scaling and positioning logic. I wanted to make the image as big as possible (for the best possible readability); however, its size should never exceed the original resolution, as upscaling is not desirable. The scaling and positioning setup is driven by simple Python code that returns values depending on the current type and state of the question. E.g. when the question is hidden, the image occupies the whole space, but once it becomes visible, the image is scaled and positioned to occupy only the top of the screen. Question elements can have a variable height (based on the number of possible answers), so the image has to react to this as well. It is a quite simple setup that produces predictable results as long as the alpha channel doesn't contain errors. (I also added a threshold to be able to handle special cases where an image might have some mess in its alpha channel.)
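A simplified Python sketch of this fit logic could look like this; the screen size and the reserved heights per answer are assumptions for illustration, not the show's actual values:

```python
# Fit the image into the available region as large as possible,
# but never above scale 1.0 (no upscaling of the native resolution).
def fit_scale(img_w: int, img_h: int, box_w: int, box_h: int) -> float:
    return min(1.0, box_w / img_w, box_h / img_h)

def layout(img_w: int, img_h: int, question_visible: bool,
           answer_count: int, screen=(1920, 1080)):
    w, h = screen
    if not question_visible:
        box = (w, h)                        # image can use the whole screen
    else:
        # question + answers occupy the bottom; height grows per answer
        reserved = 200 + 90 * answer_count  # assumed pixel heights
        box = (w, h - reserved)
    s = fit_scale(img_w, img_h, *box)
    return s, (box[0] - img_w * s) / 2, 0   # scale, centered x, top y
```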
That’s it for this swift look at some highlights of the new real-time system for What Do I Know?!. I hope you will enjoy watching the show with its new smooth animations. Thanks for reading :)