Conducting experiments is an important practice in both engineering design and scientific inquiry. Engineers iteratively conduct experiments to evaluate ideas, make informed decisions, and optimize their designs. However, both engineering design and scientific experimentation are open‐ended tasks that are difficult to assess. Recent studies have demonstrated how technology‐based assessments can capture and characterize these open‐ended tasks through unobtrusive data logging. This study builds on a model for characterizing students' experimentation strategies in design (ESD). Ten undergraduate students worked on a design challenge using a computer‐aided design (CAD) tool that logged all of their interactions with the software. This “process data” was compared with “think‐aloud data,” in which students explained their rationale for the experiments they conducted. The results suggest that process data and think‐aloud data each have affordances and limitations for assessing students' ESD. While the process data effectively identified relevant sequences of actions, it could not confirm whether students carried out those actions with a specific purpose. The think‐aloud data, in contrast, captured students' rationale for conducting experiments, but it depended on students' ability to verbalize their actions. In addition, implementing and analyzing think‐aloud procedures is time‐consuming and can only be done with one student at a time.