The complexity of developing and evaluating user interfaces has increased considerably in recent years, as more and more devices offer capabilities for multimodal interaction. This applies in particular to mobile devices such as smartphones and tablet computers. An existing parameter set describing aspects of various modalities was extended and modified to obtain a formal, seamless, and generic model of multimodal interaction. The new model supports both run-time and offline analysis of multimodal human-computer interaction. As a proof of concept, we developed the Android HCI Extractor, a tool that quantifies multimodal interaction on Android devices and creates instances of the proposed model for further analysis and run-time decision making. An example of the tool running on a real application is also described.