Most approaches to activity recognition in sensor environments either rely on manually constructed recognition rules or lack the ability to incorporate complex temporal dependencies. Furthermore, many make the rather unrealistic assumption that the subject carries out only one activity at a time. In this paper, we describe the use of Markov logic as a declarative framework for recognizing interleaved and concurrent activities, combining input from pervasive, lightweight sensor technology with common-sense background knowledge. In particular, we assess its ability to learn statistical-temporal models from training data and to combine these models with background knowledge to improve overall recognition accuracy. We also show the viability and benefit of exploiting both qualitative and quantitative temporal relationships, such as activity durations and their temporal order. To this end, we propose two Markov logic formulations for inferring the foreground activity as well as each activity's start and end times. We evaluate the approach on an established dataset, on which it outperforms state-of-the-art algorithms for activity recognition.