What Do I Know?! (Čo ja viem) - vol 2.


Interactive application start

I finished working on the “technical part” of this show about two weeks before the actual shooting, so I decided to also do something more creative. (Local graphics design for broadcast is often done as fast as possible - without taking much care about its visual quality. Luckily there are exceptions to this rule and some really beautiful broadcast graphics are being created locally - especially by RTVS.) In my spare time I created a new graphics design for the so-called “application start” of this game show.

What Do I Know?! (original format by Talpa) starts by introducing the guest celebrities, and afterwards the host encourages TV viewers to test their knowledge by playing along using an application on their smartphones. This is the point where the host “starts the What Do I Know?! application” on his huge monitor. Previously this was done by clicking on the actual application inside the Windows start menu. The icon would then scale up with a bit of a stutter and that was it. Nothing fancy. I thought it would be great if this was more special. It also seemed like a good exercise. I wanted to concentrate on visualizing the data that is being pushed to the smartphone applications. With the idea of some sort of pulsating “electronic” pattern (similar to electric circuits) in mind, I started looking for a way to create it.

Pattern exploration

I stumbled across Toadstorm’s circuit pattern, which looks nice but is kind of hard to control. Eetu’s solution named Electric Snowflake seems much simpler and cleaner while providing beautiful results (with non-intersecting curves). Brilliant idea, Eetu. I thought I might push this a little further. I started combining various input geometry with different cost attributes and was astonished by the diversity of the output patterns. I was able to quickly iterate not only on electronic shapes but also on rather organic patterns. I wasn’t really going for an organic pattern, but I couldn’t help trying a few of them.
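To give an idea of what feeding a cost attribute can look like, here is a minimal Python SOP sketch (the attribute name “cost” and the noise-based weighting are purely illustrative, not Eetu’s exact setup) - higher cost simply makes the shortest-path solve avoid an area:

import hou
import math

node = hou.pwd()
geo = node.geometry()

# Create the point attribute that the shortest-path solve will read as cost.
if geo.findPointAttrib("cost") is None:
    geo.addAttrib(hou.attribType.Point, "cost", 1.0)

for point in geo.points():
    pos = point.position()
    # Cheap pseudo-noise from the point position; any field (painted mask,
    # distance to geometry, clustering, ...) can be plugged in the same way.
    noise = (math.sin(pos[0] * 3.7) * math.cos(pos[2] * 2.3)) * 0.5 + 0.5
    point.setAttribValue("cost", 1.0 + 4.0 * noise)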

While looking at shapes that one might find inside computers, I realized it would be nice to make the curves sometimes follow uniform grid patterns - as if there were some computer components influencing the area around them. Voronoi with clustering is a good friend when it comes to such patterns, so I used the simple and efficient technique described by Matt Estela on cgwiki and plugged its output into the cost attribute. I really liked the results, so I stopped right there and used PDG to generate all variations of my current setup by changing inputs in switches (a rough sketch of this wedging follows below). Voila - click here to see the full output from PDG :)
(Below is just a small portion of the generated patterns.)

Some of the patterns. Click here for more.
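For reference, this is roughly how such a wedged setup can be wired in TOPs (node and attribute names here are illustrative, not necessarily the exact ones from my scene):

Wedge TOP         -> integer attribute "pattern_variant" with values 0..N-1
Switch SOP        -> Select Input expression: @pattern_variant
ROP Geometry TOP  -> Output File: $HIP/geo/pattern_`@pattern_variant`.bgeo.sc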

Anyway, I am sharing the hip file with the patterns and points setup in case anyone finds it useful.
download patterns.hip

Look development

I wanted to place the logo in the middle of the pattern and “emit” lights / points from it. By moving these points along the curves (a technique best described by Matt Estela on cgwiki) it started to look like some sort of data transfer. I created two distinct looks to represent the “inactive” and “active” application states, which are switched once the logo is pressed. By using light instancing (linked to the points moving along the curves) I was able to get nice highlights around the points, both on the curves and on the grid surface below them. Even though this could be done in compositing just by using blur, proper light attenuation looks much better. Some points also nicely block this light in certain areas. The “active” state also features points with variable opacity that greatly influence the overall light transport.
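A minimal Python SOP sketch of the point-movement part (attribute names are illustrative and this is not Matt’s exact setup - each point carries the curve number and a parametric “u” that gets advanced every frame):

import hou

node = hou.pwd()
geo = node.geometry()                    # first input: points with "curvenum" and "u"
curves = node.inputs()[1].geometry()     # second input: the pattern curves

speed = 0.01                             # parametric distance travelled per frame
frame = hou.frame()

for point in geo.points():
    curve = curves.prim(point.attribValue("curvenum"))
    u = (point.attribValue("u") + speed * frame) % 1.0
    pts = [v.point().position() for v in curve.vertices()]
    # Linear interpolation along the polyline at parametric u.
    f = u * (len(pts) - 1)
    i = min(int(f), len(pts) - 2)
    point.setPosition(pts[i] + (pts[i + 1] - pts[i]) * (f - i))

These moving points are then what the instanced lights are linked to.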

Mantra page shader.

The original logo of the game show (a book) couldn’t be changed, so there was not much room for creativity. Nevertheless I decided to at least add the ability to open it and leaf through its pages. As always, this was done inside Houdini with a bit of VEX and also the new Bend SOP (H18). Textures with the “loading progress” were generated in COPs and their paths were added to the prims of each page as string attributes, which are used in the mantra shader.
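The attribute part boils down to something like this (a Python SOP sketch with illustrative names - the real paths point to my COPs “loading progress” networks):

import hou

node = hou.pwd()
geo = node.geometry()

# String primitive attribute holding the texture path for each page.
if geo.findPrimAttrib("progress_tex") is None:
    geo.addAttrib(hou.attribType.Prim, "progress_tex", "")

for i, prim in enumerate(geo.prims()):
    # "op:" paths let mantra read the texture straight from COPs.
    prim.setAttribValue("progress_tex", "op:/img/progress/page_%03d" % i)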

The Is Front Face VOP was used to differentiate the shading of the two sides of a page. The output of Is Front Face is used as the condition in a Two Way Switch that switches between two BSDFs with the proper textures applied (either clean or with the progress information). I learned this simple trick from my friend Ondrej Polacek. (In this case it would be sufficient to just switch between textures, not BSDFs. However, switching between BSDFs provides many more possibilities in case they are needed.)

Pressing the logo in the final application triggers the transition between the inactive and active states. The transition was done using an eroded mask of the curve UVs. By changing the exposure of this mask I was able to animate it gradually from the center all the way to the ends of all curves (sketched below).

Transition and mask.
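The math behind the transition is basically just this (a rough sketch, not the exact COP network - u is 0 at the logo and 1 at the far end of a curve):

def transition_mask(u, exposure):
    base = max(1.0 - u, 0.0)                    # bright at the center, dark at the ends
    return min(base * (2.0 ** exposure), 1.0)   # raising exposure pushes the mask further out

# exposure animated upwards over the transition
for exposure in (0, 2, 4, 8):
    print([round(transition_mask(u / 4.0, exposure), 2) for u in range(5)])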

It would be quite easy to use PDG to create final renders of each pattern. However, I didn’t need that (and it would also take a long time to render).

Touch data

The host “starts the application” by pressing the logo on his monitor and resets the animation (in case another take is needed) by double-pressing anywhere on the screen. I had previously created a simple application for a tablet that reacted to touch input by changing the animation. However, the tablet had to be installed and removed every time, as it couldn’t stay on set for the whole show. It also required manually switching the input on the monitor, as various information (from the main video server) is displayed on this screen for the rest of the show.

Therefore I decided to use a Raspberry Pi to capture and send the touch input data. It brought multiple improvements to the whole workflow. The Raspberry can easily stay installed on set - without ever worrying about taking it down. It just captures touch data and sends it over TCP/IP to the main video server, which generates the graphics and feeds them back to the monitor on set. This means the monitor input doesn’t have to be switched anymore - the video server takes care of both the graphics and the important information displayed on the host’s screen during the whole show.

I wrote simple Python code for the Raspberry that acts as a TCP/IP server sending “clicks” (along with their X, Y coordinates) to a client. I didn’t need to serve multiple connections, as there is only one video server used for the whole show. Once a client is connected to the Raspberry, the listening socket is closed and a new connection can be made only after the first client disconnects (or an error occurs). The client can also trigger actions such as reboot, shutdown, or just a default communication response to check if the connection is alive. I created a GUI (using TkInter) for the purpose of fast visual debugging. In the beginning I tried to capture clicks using pynput - it worked great with a mouse but failed to work with the touch panel. Luckily TkInter was able to capture the touch input.
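Stripped down to the essentials, the Raspberry side looks something like this (a minimal sketch, not the full production code - the port and the message format are illustrative):

import socket
import tkinter as tk

HOST, PORT = "0.0.0.0", 5005            # illustrative port

# Listen for the single client (the video server), then stop listening.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)
conn, addr = server.accept()
server.close()

# Fullscreen Tkinter window; the touch panel shows up as ordinary Button-1 clicks.
root = tk.Tk()
root.attributes("-fullscreen", True)

def on_touch(event):
    # Send the click with its X,Y coordinates to the video server.
    conn.sendall(("click %d %d\n" % (event.x, event.y)).encode())

root.bind("<Button-1>", on_touch)
root.mainloop()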

One issue that arose during tests was related to multi-touch input. Multi-touch itself is not supported, but I thought it wouldn’t be a problem since I only needed single clicks. However, furiously touching the panel at multiple locations for a longer time caused the Raspberry’s touch / mouse input to hang. One might think - nah, that is good enough - nobody is going to touch this panel a hundred times per second with all their fingers. Unfortunately, the host’s papers that are sometimes placed on top of this panel do exactly that. Interestingly, they produce a high amount of touch events under their whole area and therefore render the Raspberry useless almost instantly.

In order to overcome this issue, I added lock and unlock functionality using xinput, which simply enables or disables a specific input device (in this case the touch panel). This way I am able to disable input from the touch panel when it isn’t needed for the interactive application (during this time the host can have as many papers placed on top of the panel as he wants). Once I need to capture touch data, I just enable the touch input by sending a TCP/IP request to the Raspberry. I used the following functions to lock and unlock the touch panel (the name of the input device is used because its input id could change over time):

import os

def lock():
    # Disable the touch panel (looked up by its device name, as the input id can change).
    os.system('xinput -disable $(xinput list | grep "name_of_input_device" -m1 | cut -f2 | cut -d= -f2)')

def unlock():
    # Re-enable the touch panel.
    os.system('xinput -enable $(xinput list | grep "name_of_input_device" -m1 | cut -f2 | cut -d= -f2)')
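And this is roughly how those functions plug into the command handling on the server side (command names are illustrative, not the exact protocol):

def handle_command(command):
    if command == "lock":
        lock()                           # ignore the panel while papers sit on it
    elif command == "unlock":
        unlock()                         # re-enable touches for the interactive application
    elif command == "reboot":
        os.system("sudo reboot")
    elif command == "shutdown":
        os.system("sudo shutdown now")
    elif command == "ping":
        return "alive"                   # default response to check the connection
    return "ok"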


Finally, in order to automatically run the Python code after boot (and also to make it look a bit more “professional”), I configured ~/.config/lxsession/LXDE-pi/autostart. This way no additional UI is displayed when I exit the application from fullscreen mode (as seen in the gif above).

#@lxpanel --profile LXDE-pi
#@pcmanfm --desktop --profile LXDE-pi
@lxterminal -e "/path/to/shell_script_that_starts_python.sh"


Thanks for reading and Merry Christmas. I hope you will have fun testing your knowledge with What Do I Know?! next year :)