Figure 7 - uploaded by Rémi Gilblas
F/O and CAPT static ports. The wrong situation is when the protective cover is still present (Fig. 7b).

Contexts in source publication

Context 1
... We need to verify that the protective cover is removed from the static ports (Fig. 7). The wrong situation is when the protective cover is still present (Fig. ...
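The excerpts above do not include the detection algorithm used in the source publication. Purely as a toy illustration, a simple automated check for a still-attached static-port cover could key on the red colour typical of protective covers and their "REMOVE BEFORE FLIGHT" streamers within a known region of interest; the function, ROI coordinates and thresholds below are hypothetical and not taken from the paper.

```python
import cv2
import numpy as np

# Toy illustration only (not the source publication's method): flag a static-port
# region as "cover still present" when a large fraction of its pixels fall in a
# red hue band. ROI and thresholds are hypothetical placeholders.
def cover_probably_present(image_bgr, roi, red_ratio_threshold=0.15):
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two bands.
    low_band = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    high_band = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    red_ratio = np.count_nonzero(low_band | high_band) / patch[..., 0].size
    return red_ratio > red_ratio_threshold

# Hypothetical usage:
# img = cv2.imread("static_port_view.jpg")
# print(cover_probably_present(img, roi=(420, 310, 120, 90)))
```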

Similar publications

Article
Full-text available
The article describes a project focused on investigating the motion control of a fixed-wing scale aircraft model about three stationary axes, as well as vertical and lateral movements, using the primary control surfaces to develop dynamic models that describe its behaviour. The aim is to create...
Article
Full-text available
This research aimed to develop an alternative and innovative procedure for identifying surface pathological manifestations in the asphalt pavement of a section of the TO-050 highway in Palmas - TO. The tools used were an Unmanned Aerial Vehicle (UAV), software for flight planning and image processing, mosa...
Conference Paper
Full-text available
Piloting an aircraft requires a high level of pilot preparation, which in general translates into an extensive period of theoretical and practical training for trainees. Through the use of flight simulators, trainees make their first contact with the piloting environment while still on the ground, directly on a computer, reducing the...

Citations

... In aviation maintenance, several attempts have been made to either aid the operator or fully automate the inspection and repair process by introducing new technologies such as artificial intelligence in combination with robots [15][16][17][18][19] or drones [20,21] for defect detection of aircraft wings and fuselage structures [22][23][24][25][26][27][28], wing fuel tanks [29], tires [30] and composite parts [22,[31][32][33][34]. The inspection and serviceability of engine parts such as shafts [35,36], fan blades [26,37], compressor blades [38,39] and turbine blades [8,[40][41][42][43][44][45] is of particular interest as they are safety-critical, and thus unsurprisingly the most rejected parts during engine maintenance [46]. From a hardware perspective, the automation of inspection and repair processes is commonly done using robots. ...
Article
Full-text available
Background—Aircraft inspection is crucial for safe flight operations and is predominantly performed by human operators, who are unreliable, inconsistent, subjective, and prone to err. Thus, advanced technologies offer the potential to overcome those limitations and improve inspection quality. Method—This paper compares the performance of human operators with image processing, artificial intelligence software and 3D scanning for different types of inspection. The results were statistically analysed in terms of inspection accuracy, consistency and time. Additionally, other factors relevant to operations were assessed using a SWOT and weighted factor analysis. Results—The results show that operators’ performance in screen-based inspection tasks was superior to inspection software due to their strong cognitive abilities, decision-making capabilities, versatility and adaptability to changing conditions. In part-based inspection however, 3D scanning outperformed the operator while being significantly slower. Overall, the strength of technological systems lies in their consistency, availability and unbiasedness. Conclusions—The performance of inspection software should improve to be reliably used in blade inspection. While 3D scanning showed the best results, it is not always technically feasible (e.g., in a borescope inspection) nor economically viable. This work provides a list of evaluation criteria beyond solely inspection performance that could be considered when comparing different inspection systems.
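As a minimal sketch of the weighted factor analysis mentioned in this abstract, the method reduces to a weighted sum of per-criterion scores for each inspection option; the criteria, weights and scores below are invented placeholders, not the study's values.

```python
# Weighted factor analysis sketch; all numbers are illustrative placeholders.
CRITERIA_WEIGHTS = {"accuracy": 0.4, "consistency": 0.3, "speed": 0.2, "cost": 0.1}

ALTERNATIVES = {
    "human operator":      {"accuracy": 4, "consistency": 2, "speed": 4, "cost": 3},
    "inspection software": {"accuracy": 3, "consistency": 5, "speed": 5, "cost": 4},
    "3D scanning":         {"accuracy": 5, "consistency": 5, "speed": 2, "cost": 2},
}

def weighted_score(scores):
    """Sum of weight * score over all criteria (higher is better)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in ALTERNATIVES.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```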
... In aviation, automated inspection systems have been developed for detecting damages on aircraft wings and fuselage [16][17][18][19][20][21], tyres [22], engines and composite parts [16,[23][24][25][26]. Several airlines have shown increasing interest in automating visual inspection and tested several systems, including inspection drones for detection of lightning strikes, marking checks and paint quality assurance [27,28], robots with vacuum pads for thermographic crack detection [29] and visual inspection robots for fuselage structures [30]. Engine parts such as shafts [31,32], fan blades [20,33], compressor blades [9,34,35] and turbine blades [9,26,[36][37][38][39][40][41][42][43] are also of particular interest as all are safety-critical to flight operations. ...
... Note that there is a lack of consistency when it comes to reporting the performance of inspection systems. Some researchers reported only the TP rate [35,42] or the FP and FN rates [20], while others reported the error rate [48], detection rate [21,43] or accuracy [43], and still others made it dependent on the defect size and used a probability of detection curve [33]. For some neural networks, the performance was measured in pixel accuracy, which describes the quality of the classification rather than the detection [9,47]. ...
Article
Full-text available
Background—In the field of aviation, maintenance and inspections of engines are vitally important in ensuring the safe functionality of fault-free aircraft. There is value in exploring automated defect detection systems that can assist in this process. Existing effort has mostly been directed at artificial intelligence, specifically neural networks. However, that approach is critically dependent on large datasets, which can be problematic to obtain. For more specialised cases where data are sparse, image processing techniques have potential, but this is poorly represented in the literature. Aim—This research sought to develop methods (a) to automatically detect defects on the edges of engine blades (nicks, dents and tears) and (b) to support the decision-making of the inspector when providing a recommended maintenance action based on the engine manual. Findings—For a small sample test size of 60 blades, the combined system was able to detect and locate the defects with an accuracy of 83%. It quantified morphological features of defect size and location. False positive and false negative rates were 46% and 17% respectively based on ground truth. Originality—The work shows that image-processing approaches have potential value as a method for detecting defects in small data sets. The work also identifies which viewing perspectives are more favourable for automated detection, namely, those that are perpendicular to the blade surface.
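Since the studies above report their results with differing metrics (TP rate, FP/FN rates, error rate, detection rate, accuracy), a small helper based on the standard confusion-matrix definitions can make such figures comparable; the counts in the example are made up, and individual papers may define some of these rates differently.

```python
# Standard confusion-matrix metrics for a defect-detection task.
# Note: some studies normalise FP/FN differently; these are the textbook forms.
def inspection_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "tp_rate (detection rate / recall)": tp / (tp + fn) if (tp + fn) else float("nan"),
        "fp_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        "fn_rate (miss rate)": fn / (fn + tp) if (fn + tp) else float("nan"),
        "accuracy": (tp + tn) / total if total else float("nan"),
        "error_rate": (fp + fn) / total if total else float("nan"),
    }

# Illustrative counts only:
# print(inspection_metrics(tp=25, fp=8, fn=5, tn=22))
```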
... Scholz-Reiter et al. [18] exploit texture analysis and statistical image processing methods to localize defects, though their approach is not designed for crack detection. Recently, Jovančević et al. [2] proposed an automated preflight aircraft inspection method using a pan-tilt-zoom camera and image-based low-level feature extraction, such as uniformity of isolated image regions, convexity of segmented shapes, and periodicity of the image intensity signal. This method does not explicitly leverage the repetitive property of geometric shapes and physical context. ...
Article
Full-text available
We present a novel context-driven approach to image-based crack detection for automated inspection of aircraft surface and subsurface defects. In contrast to existing image-based crack detection methods, which rely mostly on low-level image processing and data-driven methods, our method explicitly incorporates multiple forms of high-level context into low-level processing. We present two classes of context: geometric/structural context and physical context. We mathematically formulate a sparse decomposition problem to incorporate the context and apply robust principal component analysis to decompose typical repetitive rivet regions into a normal component and a sparse component. Cracks are detected in the sparse component. By applying the proposed context-driven approach to coated and uncoated test specimens, we achieve a significant reduction in false detections compared to the approach without exploiting context.
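The sparse-decomposition step named in this abstract can be illustrated with a generic principal component pursuit routine for robust PCA; the exact formulation, parameters and the way rivet-region patches are stacked into the data matrix are assumptions here, not the article's own implementation.

```python
import numpy as np

def shrink(X, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding operator."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, max_iter=500, tol=1e-7):
    """Decompose M into low-rank L plus sparse S by principal component pursuit
    (inexact augmented Lagrange multiplier scheme). Columns of M could, for
    instance, be vectorised patches of repetitive rivet regions; cracks would
    then show up in S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                    # standard sparsity weight
    mu = 0.25 * m * n / (np.sum(np.abs(M)) + 1e-12)   # common step-size heuristic
    norm_M = np.linalg.norm(M, "fro")
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                              # Lagrange multipliers
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, "fro") < tol * norm_M:
            break
    return L, S
```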
... Purll [1] first defined a web as any material produced in the form of strips. Textile, paper, glass, wood, metal, food, and industrial parts on a conveyor belt can all be cited under this heading. ...
Article
Full-text available
This paper presents a detailed description of a vision system developed to detect and locate the non-uniformities that appear on the web offset printing machine. Specifically, the system is capable of monitoring the high-speed web offset printing in a real-time environment and alerting the operator of any events (e.g., structural faults, color variations, missing characters, ink splashes, streaks, etc.) that disrupt the uniformities in the web offset printing. Such events are thought to affect crucial printing properties, resulting in non-uniformities in printing and impacting its quality and printability. This paper describes the vision system in terms of its hardware modules, as well as the image processing algorithms that it utilizes to scan the color images and locate areas of defect from the printing web. Basically, the system utilizes high-speed image scanning algorithms to detect edges and boundaries using linear and non-linear filters of dynamic size, threshold and transformation for further analysis. In addition to being tested in a laboratory environment, a prototype of this system was constructed and deployed to a web printing system, where its performance was evaluated under realistic conditions. The system was installed on a Flexo gravure-print-press machine for testing, and it was found that the vision system was able to successfully monitor and detect non-uniformities.
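A minimal sketch of the kind of difference-and-threshold check such a web inspection system might perform is given below, assuming a defect-free reference frame is available and frames are already registered; the deployed system described in the abstract uses its own hardware and filter pipeline, which is not reproduced here.

```python
import cv2

# Sketch only: compare a live frame of the printed web against a clean reference.
def find_print_defects(frame_bgr, reference_bgr, min_area=50):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, ref)
    diff = cv2.GaussianBlur(diff, (5, 5), 0)       # suppress sensor noise
    # Otsu picks a per-frame threshold, a simple form of dynamic thresholding.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes of blobs large enough to be treated as defects.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```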
... Fixed pattern noise is a limit on the minimum detectable signal. It is also dependent on temperature and the pattern can change significantly (Purll, 1985), requiring any calibration to be conducted at the operating temperature of the sensor. ...
... PRNU is more dependent on the wavelength of the incident light than the light intensity and the temperature of the sensor (Purll, 1985). Hence the infra-red filter used on standard video CCD sensors also reduces PRNU. ...
Article
In robotics, "person following" depicts the servoing of the relative situation of a robot w.r.t. a moving person. This property may be hard to achieve, especially when the estimation of the person ego-motion is weak (e.g., due to limited prior knowledge or computational resources). This paper introduces a nonholomic mobile robot controller, which ensures an intuitive and safe behavior through an insightful robot-centered problem statement. Under realistic bounded-error readings of hidden constant person velocities, ultimate boundedness of the state vector norm can be ensured in the neighborhood of its equilibrium.
Article
Full-text available
To reduce the damage rate of easily bruised strawberry fruit and improve the accuracy and efficiency of the picking robot, this work proposes a training method for a night-time picking robot based on a motion capture system using badminton edge-feature detection and an automated capture algorithm. The badminton motion capture system can analyse game video in real time and, through motion capture, obtain the accuracy rate and technical characteristics of excellent badminton players' movements. The purpose of this article is to apply this high-precision motion-capture vision control system to the design of the vision control system of the robot in the night picking process, in order to effectively improve the robot's observation and recognition accuracy during night picking and thus raise the degree of automation of the operation. This paper tests the reliability of the picking robot's vision system. Taking night-time picking as an example, image processing was performed on the edge features of the fruits picked by the robot. The results show that smoothing and enhancement of the images can successfully extract the edge features of fruit images. The accuracy of target recognition and the positioning ability of the vision system of the picking robot were tested using the edge-feature test. The results showed that the target recognition accuracy and the motion-edge positioning ability of the vision system were both well above 91%, satisfying the demand for high-precision automation of the picking robot's operation.
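A short sketch of the smoothing, enhancement and edge-extraction chain that the abstract describes for night-time images is shown below; the parameters are illustrative and not taken from the study.

```python
import cv2

def fruit_edges(image_bgr):
    """Extract edge features from a low-light fruit image (illustrative parameters)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                    # smooth sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)                                # boost low-light contrast
    return cv2.Canny(enhanced, 50, 150)                         # edge map
```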