I wouldn't limit the Synfig UI in this area in any way. I would just rely on whatever the window manager offers you at any time.
Why discard touch screens? If the input gives you an action (click here, zoom there, drag from here to there, ...), why make the handling of that action depend on which device is sending it?
I imagine Synfig Studio as a remote device that just receives action requests with their own arguments. There should be an intermediate interface (the UI adapted to each situation) that receives inputs from different devices and interprets them into Synfig's action language.
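Just to illustrate what I mean by an intermediate interface, here is a minimal C++ sketch. The names (ActionRequest, InputAdapter, MouseAdapter, TouchAdapter) are hypothetical, not Synfig's actual API: each device-specific adapter translates its raw input into the same abstract action request, so the core only ever sees actions, never devices.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// An abstract action request with its own arguments (e.g. "click", x, y).
struct ActionRequest {
    std::string name;
    std::vector<double> args;
};

// The common interface every input device adapter implements.
class InputAdapter {
public:
    virtual ~InputAdapter() = default;
    // Translate the device's raw event data into an abstract action.
    virtual ActionRequest translate(double x, double y) const = 0;
};

// A mouse click becomes a "click here" action.
class MouseAdapter : public InputAdapter {
public:
    ActionRequest translate(double x, double y) const override {
        return {"click", {x, y}};
    }
};

// A touch (or a Kinect gesture) can map to the very same action.
class TouchAdapter : public InputAdapter {
public:
    ActionRequest translate(double x, double y) const override {
        return {"click", {x, y}};
    }
};

// The core only sees ActionRequests, never devices.
void perform(const ActionRequest& a) {
    std::cout << a.name;
    for (double v : a.args) std::cout << ' ' << v;
    std::cout << '\n';
}

int main() {
    std::unique_ptr<InputAdapter> devices[] = {
        std::make_unique<MouseAdapter>(),
        std::make_unique<TouchAdapter>()
    };
    // The same action is performed regardless of which device produced it.
    for (const auto& dev : devices)
        perform(dev->translate(10.0, 20.0));
}
```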
I would like to use Synfig Studio with a Kinect one of these days.