Accessibility is a required quality for websites today, and several tools exist to test for it. These tools are highly useful, but they also have limitations; a particular challenge is the evaluation of Rich Internet Applications (RIAs). In this paper, we conduct an experiment comparing and analyzing different accessibility testing tools as they evaluate 10 educational websites. We assessed the tools on their error detection, guideline coverage, speed, similarity to one another, and relative performance when evaluating RIAs. The findings reveal the strengths and limitations of each tool. The results also show that many guidelines and success criteria are not covered by accessibility testing tools, and that some evaluation tools produce similar results to one another. Lastly, the experiment highlights a discrepancy in the behavior of the tools when evaluating RIAs compared to static websites, more pronounced for some tools than for others. We also present the limitations of this experiment. As future work, we intend to work with an expert to determine the accuracy of the results produced by the experiment, to investigate the limitations of these tools in more depth, and to propose possible solutions.

Keywords: Accessibility · Accessibility evaluation tools · Online education · Web Content Accessibility Guidelines