I studied the benefits and limitations of Augmented Reality (AR) Head-Mounted Displays (AR-HMDs) for collaborative 3D data exploration. Before conducting any projects, I saw benefits in the immersive features of AR-HMDs: they merge the interaction, visualization, and collaboration spaces with the users' physical space. Multiple collaborators can thus see and interact directly with 3D visuals anchored within their physical space. AR-HMDs usually rely on stereoscopic 3D displays, which provide additional depth cues compared to 2D screens and help users understand 3D datasets better. As AR-HMDs allow users to see each other within the workspace, seamless switches between discussion and exploration phases are possible. Interacting within those visualizations allows for fast and intuitive 3D direct interactions, which yield cues about one's intentions to others; e.g., moving an object by grabbing it is a strong cue about what a person intends to do with that object. Those cues are important for everyone to understand what is currently going on. Finally, because AR-HMDs do not occlude the users' physical space, common but important tools such as whiteboards and workstations running simulations remain easily accessible within this environment without taking off the headsets. That being said, and although AR-HMDs have been studied for decades, their computing power before the release of the HoloLens in 2016 was insufficient for an efficient exploration of 3D data such as ocean datasets. Moreover, previous researchers were more interested in how to make AR possible than in how to use AR. Hence, despite all the qualities one may expect prior to working with AR-HMDs, almost no work discussed the exploration of such 3D datasets. Moreover, AR-HMDs are not well suited to 2D input, which is nevertheless common in established exploration tools such as ParaView or CAD software, with which users such as scientists and engineers are already efficient.
I then theorize in what situations AR-HMDs are preferable. They seem preferable when the purpose is to share insights with multiple collaborators and to explore patterns together, and when the exploration tools can be minimal compared to what workstations provide, as most of the preliminary work and simulations can be done beforehand. I thus combine AR-HMDs with multi-touch tablets: I use the AR-HMDs to merge the visualizations, some 3D interactions, and the collaborative space within the users' physical space, and I use the tablets for 2D input and for the usual Graphical User Interfaces that most software provides (e.g., buttons and menus). Using this new hybrid system, I then studied low-level interactions necessary for data exploration, namely the selection of points and regions inside datasets. The techniques my co-authors and I chose possess different levels of directness, which we investigated. As this PhD aims at studying AR-HMDs within collaborative environments, I also studied their capacity to adapt the visuals of a given anchored 3D object to each collaborator. This is similar to the relaxed "What-You-See-Is-What-I-See" paradigm that allows, e.g., multiple remote users to see different parts of a shared document while editing it simultaneously. Finally, I am currently (i.e., at the time of writing this PhD, the work is not finished) studying the use of this new system for the collaborative 3D exploration of ocean datasets that my collaborators at Helmholtz-Zentrum Geesthacht, Germany, are working on. This PhD provides a state of the art of AR used within collaborative environments. It also gives insights about the impact of 3D interaction directness on 3D data exploration. Finally, it gives designers insights about the use of AR for collaborative scientific data exploration, with a focus on oceanography.