Abstract
Introduces interactive computer graphics through a description of hardware and a simple graphics package. Geometrical transformations and 3-D viewing are covered, followed by discussion of design architecture and raster operations. Concludes with chapters on shading models and colour applications. -R.Harris
... Computer graphics techniques are promising in addressing this problem. Computer graphics deals with displaying graphics on a computer screen using only explicit points, edges, and faces [23]. Given that IFC models containing implicit geometries can eventually be displayed, it is reasonable to assume that these implicit geometries are converted in some way into explicit geometries by computer graphics techniques. ...
... In general, there are three sub-types of B-Rep depending on how points are organized into faces, including (a) explicit polygons (Type 1), (b) polygons defined by pointers into a point list (Type 2), and (c) explicit edges (Type 3) [23]. Table 3 shows the formats of these sub-types, in which F stands for faces, P stands for points, and E stands for edges. ...
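As a rough illustration of these three sub-types, the sketch below (ours, not taken from the cited table) encodes the same two triangles in each layout; the variable names and toy geometry are purely illustrative.

```python
# Sketch: the three B-Rep sub-types, shown on two triangles sharing an edge.
# Names and geometry are illustrative only.

# Type 1: explicit polygons -- every face lists its vertex coordinates directly,
# so the shared points (1,0,0) and (0,1,0) are stored twice.
faces_type1 = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(1, 0, 0), (1, 1, 0), (0, 1, 0)],
]

# Type 2: polygons defined by pointers into a point list -- P holds each point
# once, and every face in F is a list of indices into P.
P = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
F_type2 = [
    [0, 1, 2],
    [1, 3, 2],
]

# Type 3: explicit edges -- E holds each edge once as a pair of point indices,
# and every face is a list of edge indices.
E = [
    (0, 1),  # e0
    (1, 2),  # e1, the shared edge
    (2, 0),  # e2
    (1, 3),  # e3
    (3, 2),  # e4
]
F_type3 = [
    [0, 1, 2],   # face 0 uses edges e0, e1, e2
    [3, 4, 1],   # face 1 uses edges e3, e4, e1
]

if __name__ == "__main__":
    print(faces_type1[0], F_type2[0], F_type3[0])
```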
The development of a smart city and digital twin requires the integration of Building Information Modeling (BIM) and Geographic Information Systems (GIS), where BIM models are to be integrated into GIS for visualization and/or analysis. However, the intrinsic differences between BIM and GIS have led to enormous problems in BIM-to-GIS data conversion, and the use of City Geography Markup Language (CityGML) has further escalated this issue. This study aims to facilitate the use of BIM models in GIS by proposing using the shapefile format, and a creative approach for converting Industry Foundation Classes (IFC) to shapefile was developed by integrating a computer graphics technique. Thirteen building models were used to validate the proposed method. The result shows that: (1) the IFC-to-shapefile conversion is easier and more flexible to realize than the IFC-to-CityGML conversion, and (2) the computer graphics technique can improve the efficiency and reliability of BIM-to-GIS data conversion. This study can facilitate the use of BIM information in GIS and benefit studies working on digital twins and smart cities where building models are to be processed and integrated in GIS, or any other studies that need to manipulate IFC geometry in depth.
... Since 1950, topographic geodetic measurements have been referred to the Krasovsky ellipsoid, with parameters: major semiaxis OE (Figure 2) of 6,378,245 m, minor semiaxis ON of 6,356,863 m, and flattening f = 1:298.3 [1]. ...
... Lately, the latter has been used as well for geodetic measurements and navigation, and is known as GPS (Global Positioning System). Both systems use reference ellipsoids to determine geodetic coordinates, which differ from the Krasovsky ellipsoid [1]. As far as we know, however, geodetic coordinates received by a GPS are corrected according to the Krasovsky ellipsoid. ...
... For broadcasting purposes, central azimuthal projections are used, because in them the orthodromes (the lines on the globe connecting two points in the shortest possible way) are displayed as straight lines. In geodesy, a geodetic projection is used, which is actually an oblique stereographic azimuthal conformal projection [1]. Our requirements for the projection are as follows: ...
Integration and graphical presentation of information for digital signal processing of weather radar stations (WRS) of a Hail Suppression System is investigated. A general coordinate system is studied and a choice is made for the cartographic projection of the data. A reasonable compromise between accuracy and speed of the real-time graphic display is obtained by introducing movement simultaneous with the radar antenna. The conditions which allow for simplification of calculations and reduction of computing time in the 3D projection of radar images of cloud systems over large areas are reviewed.
... Usability is an important factor for all software quality models. It is the key factor in the development of successful interactive software applications [1]; [2]. [1] defined usability as a property of the syntactic and semantic analysis of a user interface. ...
... It is the key factor in the development of successful interactive software applications [1]; [2]. [1] defined usability as a property of the syntactic and semantic analysis of a user interface. [3] also described usability as a product attribute, which defines the concept by naming product or system characteristics. ...
... 3) Parametric, 4) Spatial Enumerative (Voxel). The boundary-based methods describe only a set of surfaces, in terms of faces, edges and vertices. A variety of representations is available, and their inter-equivalence has been shown for many [NEWMAN81, FOLEY82]. Faces may be polygons, or may for example be constructed from B-splines (in 2D) or Bézier patches (in 3D) designed to maintain smoothness at boundaries in some specified way. ...
... Further details can be found in many textbooks [NEWMAN81,FOLEY82]. ...
This thesis describes the development of a computer graphics facility, referred to as UCL3D, for the planning, simulation and evaluation of Maxillo-Facial surgery on a conventional super-minicomputer, with a colour graphics framestore. The introduction defines the requirements as data acquisition, preprocessing, visualisation, dissection, manipulation, quantification, and registration. The principal innovations are in the visualisation, dissection and manipulation stages. The first part of the thesis is concerned with basic definitions and a review of other work. The data acquisition is assumed to be from a medical imaging device that produces a 3D digital array of density values. The preprocessing stage involves interpolation, artefact removal, subregioning, and segmentation. The computer representation of discrete objects from medical 3D data is discussed, and the need to have a volume based representation is justified. The implementation of UCL3D is in terms of octrees, although the principles could be applied to other representations. Both a binary and grey-value implementation have been developed, and for each case, a new structure variation is described. Visualisation is discussed in terms of these representations and surface shading techniques appropriate to each are introduced. A new algorithm for deriving the quadtree projection of an octree in orthogonal directions is presented, and its advantages explained. The concept of a general volume mask, specified interactively, is introduced for the dissection problem. The space partitioning resulting from this mask is more general than previous methods. Manipulation is considered as the combination of Boolean operations and translations and rotations, acting on combinations of dissected objects. The philosophy is to cut and merge medical objects to simulate the "osteotomies" encountered in surgery. Boolean expressions of objects may be visualised prior to being created. A description of the application of the system to several clinical cases is given and finally several areas for future work are suggested.
... This technique allows any line to be displayed as long as it can be numerically traced on the image plane. Numerical tracing can be done with data transformations and scaling and, occasionally, with computer graphics algorithms [13] (e.g., to indicate image perspective, which can be beneficial in some, rare, circumstances). ...
... A parameter combination near the peak frequency shown in the Banik plot was chosen. Namely: m_m/(2m_c) = 0.5 (Eq. 13) and P/P_cr = 0.101 (Eq. 14). (The latter parameter in the illustration analysis is 1% greater than the "nice" 0.1 value because the parameters were matched by varying the boom tubular cross section geometry and Young's modulus, and an approximate match was deemed satisfactory.) The boom properties ended up being: ...
... HSV and YCbCr are also common color spaces. The Hue, Saturation, and Value (HSV) color space represents points in the RGB color model using a cylindrical coordinate system [24]. It effectively decouples brightness and color from the RGB channels. ...
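As a minimal illustration of that cylindrical-coordinate view of RGB, the sketch below converts an RGB triple to HSV with Python's standard colorsys module; the sample colour values are arbitrary, not taken from the cited work.

```python
import colorsys

# Convert a normalized RGB triple to HSV: hue is the angular coordinate,
# saturation the radial one, and value the axis carrying brightness, which
# is how HSV decouples brightness from chromatic content.
r, g, b = 0.8, 0.3, 0.2          # arbitrary sample colour
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue = {h * 360:.1f} deg, saturation = {s:.2f}, value = {v:.2f}")
```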
This paper presents a novel low-light image enhancement model named PFLLTNet, specifically designed to address the issues of detail loss and global structure distortion in low-light conditions. The model leverages the separated operations of luminance and chrominance in the YUV color space, combined with multi-headed self-attention (MHSA), feature fusion paths, residual connections, and the PixelShuffle upsampling strategy, significantly improving image detail restoration and enhancing the fidelity of global information. Additionally, we optimized the combination and weight configuration of the loss function, with particular emphasis on the introduction of light consistency loss and the refinement of the learning rate scheduling mechanism using cosine annealing with restarts, ensuring the model’s stability and robustness during extended training. Experimental results demonstrate that PFLLTNet achieves state-of-the-art (SOTA) performance in key metrics such as PSNR and SSIM while maintaining relatively low computational complexity. Due to its computational efficiency and low resource demands, PFLLTNet holds significant potential for deployment in scenarios such as mobile devices, real-time video processing, and intelligent surveillance systems, particularly in environments requiring rapid processing and constrained computational resources. The source code and pre-trained models are available for download at https://github.com/Huang408746862/PFLLTNet.
... Thus, the average of the rump surface's red, green, and blue components was computed for each bird. Hue values (°) and brightness were then determined using the algorithm by Foley and Van Dam (1982). Lower hue values indicated redder rumps. ...
In many vertebrates, dietary yellow carotenoids are enzymatically transformed into 4C-ketocarotenoid pigments, leading to conspicuous red colourations. These colourations may evolve as signals of individual quality under sexual selection. To evolve as signals, they must transmit reliable information benefiting both the receiver and the signaler. Some argue that the reliability of 4C-ketocarotenoid-based colourations is ensured by the tight link between individual quality and mitochondrial metabolism, which is supposedly involved in transforming yellow carotenoids. We studied how a range of carotenoids covary in the feathers and blood plasma of a large number (n > 140) of wild male common crossbills (Loxia curvirostra). Plumage redness was mainly due to 3-hydroxy-echinenone (3HOE). Two other, less abundant, red 4C-ketocarotenoids (astaxanthin and canthaxanthin) could have contributed to feather colour as they are redder pigments. This was demonstrated for astaxanthin but not canthaxanthin, whose feather levels were clearly uncorrelated to colouration. Moreover, moulting crossbills carried more 3HOE and astaxanthin in blood than non-moulting ones, whereas canthaxanthin did not differ. Canthaxanthin and 3HOE can be formed from echinenone, a probable product of dietary β-carotene ketolation. Echinenone could thus be ketolated or hydroxylated to produce canthaxanthin or 3HOE, respectively. In moulting birds, 3HOE blood levels positively correlated to astaxanthin, its product, but negatively to canthaxanthin levels. Redder crossbills also had lower plasma canthaxanthin values. A decrease in hydroxylation relative to ketolation could explain canthaxanthin production. We hypothesize that red colouration could indicate birds' ability to avoid inefficient deviations within the complex enzymatic pathways.
... The matrix value of each pixel is generated automatically according to the intensity and saturation of the grayscale, based on the grayscale format shown in Fig. 5 below [15]. ...
Dental caries, also known as tooth decay, is caused by bacterial invasion of the tooth surface. It involves dissolution and destruction of calcified tissues. Early detection is the only way to control this disease. Two types of investigation techniques are available: one is visual inspection and the other is through X-rays. X-ray investigation is more precise and accurate, but the chance of human error remains and accurate diagnosis is still not possible with the naked eye. To make it more precise, image processing techniques are used. In this paper we have analysed dental X-rays through fuzzy logic and derived the final diagnosis. A comparative study of four edge detection techniques, Sobel, Prewitt, Roberts and Canny, has been made. For caries detection, grayscale pixel-value matrices were formed and the results were analysed through fuzzy logic. The fuzzy controller has proved to be an intelligent system for diagnosis and the results were much more precise. The pixels were accurately judged and the outputs were as expected.
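A minimal sketch of the kind of edge-detector comparison described above, using OpenCV and NumPy; the file name and the Canny thresholds are placeholders of ours, not values from the paper.

```python
import cv2
import numpy as np

# Load a dental X-ray as grayscale (the path is a placeholder).
img = cv2.imread("dental_xray.png", cv2.IMREAD_GRAYSCALE)

# Sobel and Prewitt respond to horizontal/vertical gradients, Roberts uses
# 2x2 diagonal kernels, and Canny adds smoothing, non-maximum suppression
# and hysteresis thresholding.
sobel = cv2.magnitude(cv2.Sobel(img, cv2.CV_64F, 1, 0),
                      cv2.Sobel(img, cv2.CV_64F, 0, 1))

prewitt_kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
prewitt = cv2.magnitude(cv2.filter2D(img, cv2.CV_64F, prewitt_kx),
                        cv2.filter2D(img, cv2.CV_64F, prewitt_kx.T))

roberts_kx = np.array([[1, 0], [0, -1]], dtype=np.float64)
roberts_ky = np.array([[0, 1], [-1, 0]], dtype=np.float64)
roberts = cv2.magnitude(cv2.filter2D(img, cv2.CV_64F, roberts_kx),
                        cv2.filter2D(img, cv2.CV_64F, roberts_ky))

canny = cv2.Canny(img, 50, 150)  # placeholder thresholds

# A crude comparison: mean edge response per detector.
for name, edges in [("Sobel", sobel), ("Prewitt", prewitt),
                    ("Roberts", roberts), ("Canny", canny)]:
    print(name, float(np.mean(edges)))
```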
... 19 In general, the better resolution of the current analog-to-digital converter (ADC) cards available on the international market for signal digitization, their high conversion speed per channel (i.e., per independent signal recorded simultaneously), and the ever-increasing processing speed of digital computers, including today's personal computers (PCs), make it possible, with these means and well-designed software, to achieve digital sampling frequencies high enough (to increase the temporal sampling of the signals) and amplitude quantization errors low enough that the accuracy of studying processes through digital signal recording and processing no longer differs from its analog counterpart; in addition, it incorporates all the aforementioned advantages of digital technology, along with the graphical and automation conveniences and the versatility of today's PCs for handling, processing, analyzing, and transmitting information. 20 For all these advantages, digital technology has been accepted worldwide and has spread to many fields, including neurophysiology (at least in the area of external electroencephalographic studies). ...
Introduction: Greater safety and effectiveness in stereotactic and functional neurosurgery requires neurophysiological guidance such as deep brain recording. Materials and methods: Applying current digital technology and software engineering, successive versions of the NDRS program have been developed for recording, displaying, storing, and processing signals with a personal computer, with graphical and automated facilities also incorporated for anatomo-physiological correlation analysis and for planning the final therapeutic action. Results: From 1993 to 2009, NDRS has been used in Cuba in ablative stereotactic and functional neurosurgery for movement disorders, and since 1996 also in Spain to guide the implantation of deep brain stimulation electrodes. In total, the program has so far been used in more than 1000 surgeries for movement disorders, with an average of 4 electrophysiological recording tracks per surgery, less than 15 minutes per track, and a postsurgical clinical effectiveness similar to that reported internationally by other groups. Discussion: The graphical and automated facilities of NDRS for signal processing, anatomo-physiological correlation analysis, and planning of the therapeutic action increase its accuracy, safety, and effectiveness while reducing time consumption. Conclusions: NDRS not only makes it possible to replace, with a personal computer, much of the equipment needed for deep brain recording, but its graphical and automated tools also increase the accuracy, safety, and effectiveness of the analyses and reduce total surgical time.
... HSV color space [9] is a method of representing points in an RGB color model in a cylindrical coordinate system. It mainly includes hue, saturation, and value channels. ...
Reference-free low-light image enhancement methods only employ low-light images during training, thereby significantly alleviating the over-reliance on obtaining paired or unpaired datasets. Existing reference-free low-light image enhancement approaches still struggle to strike a balance between enhancing vivid color and suppressing noise in low-light images. To mitigate such issues, we propose a novel deep learning-based reference-free method that contains two phases, separating the low-light image enhancement into decomposition and refinement problems. In the decomposition phase, we present a value channel prior based on histogram equalization on HSV color space, termed as V-HE prior. Inspired by retinex theory, V-HE prior guides the decomposition network (Dec-Net) to estimate the reflectance component of the value channel. To further refine the pre-enhanced result, we construct a structure-aware loss to guide the refinement network (Ref-Net) in the refinement phase. We conduct extensive experiments to verify the effectiveness of the proposed method, qualitatively and quantitatively. Compared with other reference-free algorithms, our approach effectively addresses the challenges of low-light image enhancement and significantly improves image quality.
... In geographical information systems (GIS), local irregular areas are usually abstracted as vector polygons, and the rasterization of vector data into discrete grid cells is essentially similar to polygon filling algorithms for display in computer graphics (Dunlavey & Michael, 1983; Foley & Van Dam, 1982; Horman & Agathos, 2001). The latter is usually applicable for rectangular pixels regularly arranged in the plane, whereas DGGS is on the curved surface of the Earth and can have three types of cell shapes, namely, triangles, quadrilaterals, and hexagons; therefore, it is necessary to develop a grid-generation algorithm suitable for DGGS. ...
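For reference, a minimal scan-line polygon fill of the kind used to rasterize a vector polygon onto a regular pixel grid (the DGGS case then replaces this planar grid with cells on the sphere); the polygon and raster size below are ours and purely illustrative.

```python
# Minimal scan-line polygon fill on a regular pixel grid, the classic
# rasterization step that DGGS grid generation generalizes to curved cells.
def scanline_fill(polygon, width, height):
    filled = [[False] * width for _ in range(height)]
    n = len(polygon)
    for y in range(height):
        yc = y + 0.5                      # sample at pixel centres
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            # Does the edge cross this scan line? (half-open rule avoids double counts)
            if (y0 <= yc < y1) or (y1 <= yc < y0):
                xs.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for x_in, x_out in zip(xs[0::2], xs[1::2]):   # fill between crossing pairs
            for x in range(int(x_in + 0.5), int(x_out + 0.5)):
                if 0 <= x < width:
                    filled[y][x] = True
    return filled

if __name__ == "__main__":
    poly = [(2, 1), (12, 3), (9, 9), (3, 8)]          # illustrative polygon
    grid = scanline_fill(poly, 15, 12)
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```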
Discrete Global Grid Systems (DGGS) provide a multi-resolution discrete representation of the Earth and are preferable for the organization, integration, and analysis of large and multi-source geospatial datasets. Generating grids for the area of interest is usually the premise and basis for DGGS applications. Owing to incongruent hierarchies that restrict the multi-resolution applications of hexagonal DGGS, current grid generation of hexagonal DGGS for local areas mainly depends on inefficient single-resolution traversal methods by judging the spatial relationship between each cell and the area. This study designs a fast generation algorithm for local parts of hexagonal DGGS based on the hierarchical properties of DGGS. A partition structure at intervals of multiple levels is first designed to ensure the coverage relevance between parent and children cells of different levels. Based on this structure, the algorithm begins with coarser resolution grids and recursively decomposes them into the target resolution, with multiple decomposition patterns used and a unique condition proposed to make the generated grids without gaps or overlaps. Efficient integer coordinate operations are used to generate the vast majority of cells. Experimental results show that the proposed algorithm achieves a significant improvement in efficiency. In the aperture 4 hexagonal DGGS, the efficiency ratio of the proposed and traversal algorithms increases from six times in level 14 to approximately 339 times in level 18. This study provides a solid foundation for subsequent data quantization and multi-resolution applications in hexagonal DGGS and has broad prospects.
... Indeed, this theoretical background cannot even predict or explain the performance of the most widely used hidden-surface technique, the z-buffer algorithm. The running time of the z-buffer algorithm is often claimed to be a linear function of the input size, or even constant [FIRE93,FOLE82,FOLE90,FOLE94,NEWM79,WATT92]. On the other hand, Schmitt [SCHM81] demonstrated how vertical and horizontal rectangles can force any hidden-line or hidden-surface algorithm to take at least quadratic time in the worst case. ...
The response-time problem of computer-aided geometric design systems is investigated. The necessary and sufficient functions to be provided by the interface are identified as transformations, clipping and visibility computations. Realistic rendering can also be achieved by performing only the above-mentioned three functions in real time, assuming that interreflection calculations are prepared in advance for a static scene. It is demonstrated that any transformation can be performed in time proportional to the total number N of the edges of the model, and clipping in time at most proportional to N log N; visibility computations, however, need time at least proportional to N² in the worst case. Visibility computations are identified as a bottleneck, contrary to prevailing beliefs that the visibility problem can be solved in linear or constant time by the z-buffer algorithm. A new analysis method is proposed that takes into account not only N, but also the resolution K of the display device, and challenges the traditional classification of visibility computations as object-space and image-space algorithms by distinguishing exact and approximation algorithms. Three approaches are recommended to speed up visibility computations: (1) reducing the expected running time to O(N log N), (2) using approximation algorithms with O(NK) worst-case time, and (3) applying parallel techniques leading to logarithmic time in the worst case.
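A minimal z-buffer sketch, using axis-aligned rectangles as the "polygons" for brevity (our simplification, not the cited setup): every covered pixel of every primitive is visited, which is why the running time grows with the total projected area (the depth complexity) rather than staying linear or constant in the number of primitives.

```python
import numpy as np

# Toy z-buffer over axis-aligned rectangles (x0, y0, x1, y1, depth, colour_id).
# Every pixel covered by every rectangle is touched once, so the cost is the
# sum of projected areas -- the depth-complexity argument made in the text.
W, H = 64, 48
depth = np.full((H, W), np.inf)
image = np.zeros((H, W), dtype=int)

rects = [
    (5, 5, 40, 30, 0.7, 1),
    (20, 10, 60, 45, 0.4, 2),   # nearer, so it wins where the rectangles overlap
    (0, 0, 64, 48, 0.9, 3),     # far background rectangle covering the screen
]

for x0, y0, x1, y1, z, colour in rects:
    for y in range(max(0, y0), min(H, y1)):
        for x in range(max(0, x0), min(W, x1)):
            if z < depth[y, x]:          # keep the nearest surface at each pixel
                depth[y, x] = z
                image[y, x] = colour

print("pixels per colour:", {c: int((image == c).sum()) for c in (1, 2, 3)})
```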
... A coplanar two-line layout provides two parallel lines in the scene according to the room structure and dimension, and the perspective projection of any set of parallel lines which are not parallel to the image plane will converge to a "vanishing point" [19]. Vanishing points can be determined by line pair intersections from parallel lines in the scene for most of the existing methods [1,2,8,10,21,26,32,35]. ...
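A small sketch of that construction: two image lines, each the projection of one of the parallel scene lines, are intersected in homogeneous coordinates to obtain the vanishing point. The line endpoints below are made up for illustration.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of the points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as a Euclidean (x, y)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]          # assumes the lines are not parallel in the image

# Two image lines that are projections of parallel scene lines
# (endpoints are illustrative). Their intersection is the vanishing point.
l1 = line_through((100, 400), (300, 300))
l2 = line_through((150, 500), (330, 330))
print("vanishing point:", intersection(l1, l2))
```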
... The rump area in the picture of the first capture was similar to that measured on the new rump and highly correlated (Pearson's r = 0.83, p < 0.001). We then determined each area's hue (°), chroma and brightness through the Foley and van Dam algorithms [84]. In a previous study on the same species, the repeatability [85] of these three variables taken twice was very high (all r > 0.90, n = 30) [15]. ...
Background
The animal signaling theory posits that conspicuous colorations exhibited by many animals have evolved as reliable signals of individual quality. Red carotenoid-based ornaments may depend on enzymatic transformations (oxidation) of dietary yellow carotenoids, which could occur in the inner mitochondrial membrane (IMM). Thus, carotenoid ketolation and cell respiration could share the same biochemical pathways. Accordingly, the level of trait expression (redness) would directly reveal the efficiency of individuals’ metabolism and, hence, the bearer quality in an unfalsifiable way. Different avian studies have described that the flying effort may induce oxidative stress. A redox metabolism modified during the flight could thus influence the carotenoid conversion rate and, ultimately, animal coloration. Here, we aimed to infer the link between red carotenoid-based ornament expression and flight metabolism by increasing flying effort in wild male common crossbills Loxia curvirostra (Linnaeus). To this end, 295 adult males were captured with mist nets in an Iberian population during winter. Approximately half of the birds were experimentally handicapped through wing feather clipping to increase their flying effort, the other half being used as a control group. To stimulate plumage regrowth over a small surface within a short time span, we also plucked the rump feathers from all the birds.
Results
A fraction of the birds with fully grown rump feathers (34 individuals) could be recaptured during the subsequent weeks. We did not detect any significant bias in recovery rates and morphological variables in this reduced subsample. However, among recaptured birds, individuals with experimentally impaired flying capacity showed body mass loss, whereas controls showed a trend to increase their weight. Moreover, clipped males showed redder feathers in the newly regrown rump area compared to controls.
Conclusions
The results suggest that wing-clipped individuals could have endured higher energy expenditure as they lost body mass. Despite the small sample size, the difference in plumage redness between the two experimental groups would support the hypothesis that the flying metabolism may influence the redox enzymatic reactions required for converting yellow dietary carotenoids to red ketocarotenoids.
... We could easily write a string generator, produce a string, then run it through a computer program which will draw the shape or design. In computer science, these strings are commonly known as graphical languages (Foley and Van Dam 1982). The interpreters for these languages are ubiquitous. ...
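A minimal sketch of such a string generator plus drawing interpreter: the string is produced by simple rewriting (in the style of an L-system) and a tiny turtle reads F, +, and - to produce line segments. The symbols, rewrite rule, and angle are our choices, not those of the cited text.

```python
import math

# Generate a string by repeated rewriting (an L-system-style generator) ...
def rewrite(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# ... then interpret it with a tiny turtle: F = draw forward, +/- = turn.
def interpret(commands, step=1.0, angle_deg=60.0):
    x, y, heading = 0.0, 0.0, 0.0
    segments = []
    for ch in commands:
        if ch == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":
            heading += angle_deg
        elif ch == "-":
            heading -= angle_deg
    return segments

if __name__ == "__main__":
    koch = rewrite("F", {"F": "F+F--F+F"}, steps=3)   # Koch-curve rewrite rule
    print(len(interpret(koch)), "segments to draw")
```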
... Initiated by the analytical work of [Ricci, 1973] and popularized by [Bloomenthal and Wyvill, 1990], implicit methods [Bloomenthal et al., 1997] have gradually gained considerable importance in computer graphics: they are now commonly used for the visualization and animation of shapes [Foley et al., 1982]. They were subsequently adopted for geological modelling [Cowan et al., 2002; Lajaunie et al., 1997; Frank et al., 2007], in which they are widely used. ...
3D geological modelling methods aim to build coherent numerical models of the subsurface from point data sampled in the field or at depth. Very popular nowadays, so-called implicit methods make it possible to build several scalar fields, arranged relative to one another and constructed from contact data of geological units and their respective orientations. Geological surfaces are then extracted as iso-potentials of these fields. In this framework, the Potential Field Method, proposed more than 20 years ago by the École des Mines and the BRGM, uses geostatistical tools, such as co-kriging interpolation, to reconstruct these scalar fields and surfaces. Although well proven, these modelling methods still show their limits on some complex models. Certain geological structures, such as non-cylindrical folds, mineralized veins, or fluvial networks, exhibit a structuring along a preferential direction (anisotropy) that is clearly identifiable locally but varies spatially. Very often, the number or distribution of the initially available data does not allow this variable anisotropy to be characterized correctly. This thesis work aims to fill this gap by integrating this anisotropy as input data within the modelling. To do so, two approaches were developed: (1) A first approach exploits first-derivative (tangent) or second-derivative data of the potential field, allowing the anisotropy of the scalar field to be constrained locally. This approach is developed in the context of modelling poly-phased folds, an emblematic example of the problem of variable anisotropy and of the need to act on surface curvature. The contribution and use of each data type are compared and discussed in this context. (2) A second, more global approach interprets the potential as the convolution of white noise with a Gaussian kernel. This method allows introducing an explicit expression of the anisotropy, which can be interpolated from sampled anisotropy data or built as a geological prior. Finally, the respective deployment context of each developed approach is discussed with regard to the application case considered.
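As a rough sketch of the second idea described above, understood simply as convolving white noise with a Gaussian kernel and reading a surface off an iso-value of the result, the code below builds a 2D scalar field with an anisotropic (axis-aligned) Gaussian filter. The grid size, anisotropy ratio, and iso-value are illustrative assumptions of ours, not the thesis parameters, and the real method allows the anisotropy to vary spatially.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# White noise convolved with an (anisotropic) Gaussian kernel gives a smooth
# scalar "potential" field; an iso-value of that field plays the role of a
# geological surface. The sigmas impose a preferential direction aligned with
# the grid axes; all values are illustrative only.
rng = np.random.default_rng(0)
noise = rng.standard_normal((200, 200))

potential = gaussian_filter(noise, sigma=(3.0, 12.0))   # anisotropic smoothing

iso = 0.0                                               # illustrative iso-value
inside = potential > iso
print("fraction of domain above the iso-value:", inside.mean())
```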
... Now a digital image can be represented based on three scales. These are: [47]. ...
Internet of Things (IoT) has made smart objects the ultimate building block for the development of cyber-physical smart pervasive frameworks. IoT has a variety of application domains, which also include healthcare. The IoT revolution of modern days has forced us to redesign modern healthcare technology, with promising technological, economic and social prospects. Within this paper the authors present various issues related to Medical IoT, starting with the basic architecture that forms the framework for supporting healthcare IoT applications. Various IoT services and applications are then studied in detail. Finally, the authors present a case study on a Medical IoT application where RFID technology is used for the detection of ambulances at toll roads.
... The title of this attribute is, however, somewhat misleading. In BIM and other areas such as computer graphics and CAD, the 'world coordinate system' is the local coordinate system of the project [40,49], which refers to the coordinate system of the virtual world created by software that may not be linked to the real world. Meanwhile, in GISs, 'world coordinate system' is closer to a coordinate reference system that is related to the real world, which literally means the coordinate system of the real world, such as the term 'world coordinate reference system' used by [22]. ...
Previous geo-referencing approaches for building information modeling (BIM) models can be problematic due to: (a) the different interpretations of the term ‘geo-referencing’, (b) the insufficient consideration of the placement hierarchy of the industry foundation classes (IFCs), and (c) the misunderstanding that a common way to embed spatial reference information for IFC is absent. Therefore, the objective of this study is to (1) clarify the meaning of geo-referencing in the context of BIM/GIS data integration, and (2) develop a common geo-referencing approach for IFC. To achieve the goal, a systematic and thorough investigation into the IFC standard was conducted to assess the geo-referencing capability of IFC. Based on the investigation, a geo-referencing approach was established using IFC entities that are common in different IFC versions, which makes the proposed approach common to IFC. Such a geo-referencing approach supports automatic geo-referencing that would facilitate the use of BIM models in GIS, e.g., for the construction of digital twins.
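A minimal sketch of the kind of transformation such geo-referencing amounts to numerically: project-local coordinates are rotated, scaled, and translated into map coordinates. The rotation angle, scale, and offsets below are placeholders of ours, not values taken from the IFC entities discussed in the paper.

```python
import math

def local_to_map(x, y, easting0, northing0, rotation_deg, scale=1.0):
    """Transform project-local (x, y) into map (easting, northing).

    The parameters play the role of the survey point offset, the angle to
    true north, and the unit scale that a geo-referenced model carries;
    the numbers used below are placeholders.
    """
    a = math.radians(rotation_deg)
    e = easting0 + scale * (x * math.cos(a) - y * math.sin(a))
    n = northing0 + scale * (x * math.sin(a) + y * math.cos(a))
    return e, n

print(local_to_map(10.0, 5.0, easting0=500000.0, northing0=4649776.0,
                   rotation_deg=30.0))
```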
... Book-length expositions of splines in computer-aided geometric design include [101,110,117]. Briefer discussions of splines in computer graphics are given in [118,119], and with more emphasis on computer-aided manufacturing in [120]. ...
Studies have shown that in many practical applications, data interpolation by splines leads to better approximation and higher computational efficiency as compared to data interpolation by a single polynomial. Data interpolation by splines can be significantly improved if knots are allowed to be free rather than at a priori fixed locations such as data points. In practical applications, the smallest possible curvature is often desired. Therefore, optimal splines are determined by minimizing a derivative of continuously differentiable functions comprising the spline of the required order. The problem of obtaining an optimal spline is tantamount to minimizing derivatives of a nonlinear differentiable function over a Banach space on a compact set. While the problem of data interpolation by quadratic splines has been accomplished analytically, interpolation by splines of higher orders or in higher dimensions is challenging. In this paper, to overcome difficulties associated with the complexity of the interpolation problem, the interval over which data points are defined is discretized and continuous derivatives are replaced by their discrete counterparts. It is shown that as the mesh of the discretization approaches zero, a resulting near-optimal spline approaches an optimal spline. Splines with the desired accuracy can be obtained by choosing an appropriate mesh of the discretization. By using cubic splines as an example, numerical results demonstrate that the linear programming (LP) formulation, resulting from the discretization of the interpolation problem, can be solved by linear solvers with high computational efficiency and the resulting splines provide a good approximation to the optimal splines.
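For context, a small example of data interpolation by a cubic spline with knots fixed at the data points, i.e., the non-optimized baseline that free-knot approaches such as the one above improve upon; the sample data are arbitrary and the LP-based free-knot optimization itself is not reproduced here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Baseline: cubic spline interpolation with knots fixed at the data points.
# The paper's contribution is choosing knots freely (via an LP obtained by
# discretization) to minimize a derivative-based curvature measure.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)                              # arbitrary sample data

spline = CubicSpline(x, y)

xs = np.linspace(0.0, 4.0, 9)
print(np.round(spline(xs), 3))             # interpolated values
print(np.round(spline(xs, 2), 3))          # second derivative, a curvature proxy
```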
... Once the visible sections of each polygon have been calculated, they must be checked against other polygons to find the portions that are not obscured by other polygons. For this purpose, each polygon is overlaid by other polygons according to depth sorting (Foley and Van Dam, 1982) to make sure that all of the polygons are behind it (Figure 8). The overlay is applied according to the overlying vector method presented by Leonov (2004). ...
Sensor deployment optimization to achieve maximum spatial coverage is one of the main issues in Wireless geoSensor Networks (WSN). The model of the environment is a crucial parameter that influences the accuracy of geosensor coverage. In most recent studies, the environment has been modeled by a Digital Surface Model (DSM). However, advances in technology for collecting 3D vector data at different levels, especially in urban models, can enhance the quality of geosensor deployment in order to achieve more accurate coverage estimations. This paper proposes an approach to calculate geosensor coverage in 3D vector environments. The approach is applied to some case studies and compared with DSM-based methods.
... A component that stores electric charge in an electric field. 5 It is the smallest addressable physical point of a digital image [Foley, Dam et al. 1982] ...
Implementation of a computer vision algorithm by means of convolutional neural networks, using the Keras framework with the YoLo analysis method
... This is an image-processing tool for automatized analysis of avian coloration that solves the need for linearizing the camera's response to subtle changes in light intensity [51]. Mean red, green and blue (RGB) values measured from the lateral area of the bill (upper and lower mandibles) were used to calculate hue values following Foley & van Dam [52]. Repeatability calculated on a set of digital photographs measured twice (n = 30) was r = 0.99, p < 0.001. ...
Ornaments can evolve to reveal individual quality when their production/maintenance costs make them reliable as “signals” or if their expression level is intrinsically linked to condition by some unfalsifiable mechanism (“indices”). The latter has been mostly associated with traits constrained by body size. In red ketocarotenoid-based colourations, that link could, instead, be established with cell respiration at the inner mitochondrial membrane (IMM). The production mechanism could be independent of resource (yellow carotenoids) availability, thus discarding costs linked to allocation trade-offs. A gene coding for a ketolase enzyme (CYP2J19) responsible for converting dietary yellow carotenoids to red ketocarotenoids has recently been described. We treated male zebra finches with an antioxidant designed to penetrate the IMM (mitoTEMPO) and a thyroid hormone (triiodothyronine) with known hypermetabolic effects. Among hormone controls, MitoTEMPO downregulated CYP2J19 in the bill (a red ketocarotenoid-based ornament), supporting the mitochondrial involvement in ketolase function. Both treatments interacted when increasing hormone dosage, indicating that mitochondria and thyroid metabolisms could simultaneously regulate colouration. Moreover, CYP2J19 expression was positively correlated to redness but also to yellow carotenoid levels in the blood. However, treatment effects were not annulled when controlling for blood carotenoid variability, which suggests that costs linked to resource availability could be minor.
... In this method, a region is filled in all directions starting from a single point within the region. The method searches for an unlabeled foreground pixel, labels it, marks it as "visited", and proceeds to all the neighboring pixels in the region [71,72]. ...
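A minimal sketch of that seed-based region filling (a flood fill): starting from one seed pixel, all 4-connected foreground pixels reachable from it are labeled and marked visited. The toy grid and the choice of 4-connectivity are our assumptions for illustration.

```python
from collections import deque

# Flood fill from a single seed: label every 4-connected foreground pixel
# reachable from the seed and mark it as visited. Grid values: 1 = foreground,
# 0 = background; the array below is a toy example.
def flood_fill(grid, seed, label):
    h, w = len(grid), len(grid[0])
    visited = [[False] * w for _ in range(h)]
    labels = [[0] * w for _ in range(h)]
    queue = deque([seed])
    visited[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        labels[r][c] = label
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbourhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr][nc] and grid[nr][nc] == 1:
                visited[nr][nc] = True
                queue.append((nr, nc))
    return labels

if __name__ == "__main__":
    image = [[0, 1, 1, 0],
             [0, 1, 0, 0],
             [0, 1, 1, 1],
             [0, 0, 0, 1]]
    print(flood_fill(image, seed=(0, 1), label=7))
```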
This research paper focuses on providing an algorithm by which Unmanned Aerial Vehicles (UAVs) can be used to provide optimal routes for agricultural applications, such as fertilizer and pesticide spraying, in crop fields. To utilize a minimum amount of inputs and complete the task without a revisit, one needs to employ optimized routes and optimal points for delivering the inputs required in precision agriculture (PA). First, stressed regions are identified using VegNet (Vegetative Network) software. Then, methods are applied for obtaining optimal routes and points for the spraying of inputs with an autonomous UAV for PA. This paper reports a unique and innovative technique to calculate the optimum location of spray points required for a particular stressed region. In this technique, the stressed regions are divided into many circular divisions whose centers are the spray points of the stressed region. These circular divisions ensure a more effective dispersion of the spray. Then an optimal path is found which connects all the stressed regions and their spray points. The paper also describes the use of methods and algorithms including travelling salesman problem (TSP)-based route planning and a Voronoi diagram, which allows applying precision agriculture techniques.
... Accordingly, for each animal, the average of the red, green, and blue components of the rump surface was calculated. We then determined hue (°) values through the Foley and van Dam (1982) algorithm. High values of hue indicated pale traits. ...
The mechanisms involved in the production of red carotenoid‐based ornaments of vertebrates are still poorly understood. These colorations often depend on enzymatic transformations (ketolation) of dietary yellow carotenoids, which could occur in the inner mitochondrial membrane (IMM). Thus, carotenoid ketolation and cell respiration could share biochemical pathways, favoring the evolution of ketocarotenoid‐based ornaments as reliable indices of individual quality under sexual selection. Captive male red crossbills (Loxia curvirostra Linnaeus) were exposed to redox‐active compounds designed to penetrate and act in the IMM: an ubiquinone (mitoQ) or a superoxide dismutase mimetic (mitoTEMPO). MitoQ can act as an antioxidant but also distort the IMM structure, increasing mitochondrial free radical production. MitoQ decreased yellow carotenoid and tocopherol levels in blood, perhaps by being consumed as antioxidants. In contrast, mitoTEMPO‐treated birds showed raised circulating levels of the second most abundant ketocarotenoid in crossbills (i.e. canthaxanthin). It also increased feather total red ketocarotenoid concentration and redness, but only among those birds exhibiting a redder plumage at the start of the study, that is, supposedly high‐quality individuals. The fact that mitoTEMPO effects depended on original plumage color suggests that the red‐ketocarotenoid‐based ornaments indicate individual quality as mitochondrial function efficiency. The findings would thus support the shared pathway hypothesis.
... Graphics is about making data understandable and the task of turning computer generated 3-D data into a picture that looks three dimensional to the human eye is not trivial (Foley and van Dam, 1982). Particularly complex is the calculation of depth and lighting and making the objects in the picture move smoothly. ...
Three dimensional information about the human face is of importance, not only for medical diagnostic purposes, but also as input to facial recognition systems and, more recently, for the entertainment industry. The orthodontic surgeon in particular needs quantitative information about the average sizes and relations of the constituent parts of the human face as a whole or parts of it. A number of methods, including a matrix of mechanical probes, lasers, holography, Moiré fringe patterns and stereophotogrammetry, have been investigated as possible ways in which three dimensional records of human heads could be made. Each method has its own merits and demerits, ranging from accuracy requirements and safety for the subject to the complexity and cost of analysis. The aim of this research was to develop components of a non-contact stereophotogrammetric system to acquire, process, display and replicate the surface of the human face. In contrast to previous work in photogrammetry that used two film based camera stations, this system's key requirements were all-round coverage of the complex facial surface, accuracy and surface quality and a data acquisition time of less than 2 seconds (approximate time for which a person can remain expressionless and motionless). The resulting design is unique in its combination of the complex three dimensional surface coverage, accuracy and speed. The data are acquired using four Pulnix T526 CCD cameras mounted around a semi-circular steel rig. Using an area based stereomatcher, dense disparity models are generated automatically. With the use of control points imaged in the scene, the three resulting models from the stereomatcher can be combined through transformations to form one complete facial model which can then be manipulated to yield profiles, distances between features of interest, and angles in a graphics visualisation suite. Planimetric accuracy is 0.1 mm and accuracy in height is 0.3 mm.
... The second approach focuses on the analysis of objects located in a user interface. Vanderdonckt and Gillo [1994], building on Foley and Van Dam [1982], recognize two kinds of objects: interaction and interactive objects. Interaction objects (also widgets or controls) represent static (e.g., labels or separators) and dynamic (e.g., buttons, text fields) objects of a user interface. ...
Using metrics and quantitative design guidelines to analyze design aspects of user interfaces (UI) seems to be a promising way toward the automatic evaluation of the visual quality of user interfaces. While this approach cannot replace user testing, it can provide additional information about possible design problems in early design phases and save time and expenses in the future. Analyses of the colors used or of the UI layout are examples of such evaluation. UI designers can use known pixel-based (e.g., Colorfulness) or object-based (e.g., Balance or Symmetry) metrics which measure chosen UI characteristics, based on the raster or structural representation of the UI.
The problem of the metric-based approach is that it does not usually consider users' subjective perception (e.g., subjective perception of color and of the graphical elements located on a screen). Today's user interfaces (e.g., dashboards) are complex. They consist of several color layers and contain overlapping graphical elements, which may increase the ambiguity of users' perception. It may be complicated to select graphical elements for the metric-based analysis of a UI so that the selection reflects users' perception and the principles of visual grouping of the perceived shapes (as described by Gestalt psychology). The development of objective metrics and design guidelines usually requires a sufficiently large training set of user interface samples annotated by a sufficient number of users.
This thesis focuses on the automatic evaluation of dashboard design. It combines common knowledge about dashboards with the findings in the field of data visualization, visual perception and user interface evaluation, and explores the idea of the automatic evaluation of dashboard design using the metric-based approach. It analyzes chosen pixel-based and object-based metrics. It gathers the experience of users manually segmenting dashboard screens and uses the knowledge in order to analyze the ability of the object-based metrics to distinguish well-designed dashboards objectively. It establishes a framework for the design and improvement of metrics and proposes an improvement of selected metrics. It designs a new method for segmentation of dashboards into regions which are used as inputs for object-based metrics. Finally, it compares selected metrics with user reviews and asks questions suggesting future research tasks.
... Some of them are hardly applicable to deep learning, such as polygon soup [21,6], sweep-CSG [16], and spline-based representations [35]. Some of them need extra data processing, such as polygon mesh [12], point cloud [15], and Octree/Quadtree-based representations [4], for application in Graph CNN on Mesh [7], PointNet [26], and Octree-based O-CNN [34]. Multi-View and Voxel representations are leading others in the deep learning era, as they can be directly applied to CNNs such as MVCNN [32] and VoxNet [23]. ...
For the problem of 3D object recognition, researchers using deep learning methods have developed several very different input representations, including "multi-view" snapshots taken from discrete viewpoints around an object, as well as "spherical" representations consisting of a dense map of essentially ray-traced samples of the object from all directions. These representations offer trade-offs in terms of what object information is captured and to what degree of detail it is captured, but it is not clear how to measure these information trade-offs since the two types of representations are so different. We demonstrate that both types of representations in fact exist at two extremes of a common representational continuum, essentially choosing to prioritize either the number of views of an object or the pixels (i.e., field of view) allotted per view. We identify interesting intermediate representations that lie at points in between these two extremes, and we show, through systematic empirical experiments, how accuracy varies along this continuum as a function of input information as well as the particular deep learning architecture that is used.
... One of the most extensively used hidden-surface techniques is the Z-buffer algorithm, which can also be implemented as a scan-line method. Relying upon estimated timing data of Sutherland, Sproull and Schumacker (1974), both Newman and Sproull (1979) and Foley and van Dam (1982) conclude that the algorithm takes constant time. Perhaps one reason for its popularity is this good promise of performance. ...
Many authors postulate the requirement that the execution times of visibility computations grow linearly with the number of scene data. It is shown that, in general, this requirement cannot be met: An Ω(N log N) expected lower bound is provided for scan-line algorithms, i.e. for determining the visibility of N line segments in the plane, assuming the algebraic computation tree model. Introducing the precise notion of depth complexity, it is demonstrated that the expected running time of a widely used hidden-surface method, the z-buffer algorithm, can be Ω(N²). An alternative to the z-buffer algorithm may be the NlogN algorithm that is known to be worst-case optimal, and now it is proved to be also expected-time optimal. If the endpoints of the line segments have integer coordinates, the running time of the NlogN algorithm is O(N log D) for D > 1, where D, D ≤ N/2, is the average depth complexity of the points of the x-axis corresponding the endpoints of the line segments. If the x-coordinates of the segment endpoints are independent identically distributed random variables with a common density f, the expected time of the NlogN algorithm is still O(N log E(D)) for any smooth f with compact support.
... The visible parts can be projected into the picture elements in O(R) time for each scan plane; therefore the total time is O(RN log N + R²) in the worst case. It can be demonstrated that other methods, such as the z-buffer and the Warnock algorithms [7,19,21], take Θ(R²N) time in the worst case. ...
A classification of polygons is proposed together with a new class of connected polygons, called ordinary polygons. Ordinary polygons include simple polygons possibly with holes. The determination of the intersection of a line segment and an ordinary polygon with N edges requires Ω(N log N) time in the worst case. A linear-time algorithm is given, however, if a planar subdivision of the polygon in trapezoids is allowed as a preprocessing. As the minimal trapezoidal subdivision of an ordinary polygon is NP-complete, we propose a subdivision that, although not minimal, has at most 3N vertices and 5N edges, and can be computed in optimal Θ(N log N) time in the worst case. The intersection of an M-edge ordinary polygon with an N-edge ordinary polygon can be obtained in Θ(M log M + M N + N log N) time, which is also worst-case optimal. Applications to worst-case optimal clipping and scan-conversion algorithms and efficient hidden-line and hidden-surface algorithms that use only elementary data structures are demonstrated.
... Abstraction mechanisms. The basis of most conceptual models is the separate consideration of layers of interaction built upon one another (lexical, syntactic, semantic level) following Morris [15], which Foley et al. [4] ... State-transition diagrams [17] do not exhibit the disadvantage of static fixation. They are based on finite automata, which are formally equivalent to regular grammars. ...
Higher-level specification methods are required for the design of action-oriented user interfaces. Conceptual models of human-computer interaction provide both a suitable level of abstraction for complex relationships and the necessary consideration of knowledge from different disciplines (computer science, ergonomics, cognitive psychology, communication theory). The interaction management net, as a conceptual representation scheme, serves to describe computer-supported problem solving. It integrates both problem-specific and interaction-related aspects. In addition to task-oriented considerations, further user-specific concepts (comprehensive help, learning strategies, etc.) are incorporated, without having to represent implementation details.
This chapter describes important photography principles that are relevant to providing the highest level of microscope documentation. The required camera components and equipment are covered. Ergonomic practices teaching teams of operators and assistants how to take top quality microscope photographs are included.
Along with the times, humans have succeeded in discovering various technologies useful for everyday life. The use of projectors is one of today's technological developments. There are two very popular projector technologies at present: Digital Light Processing (DLP) projectors and 3-Liquid Crystal Display (3LCD) projectors. Which of the two technologies is the best? This study collected data from reliable sources in the form of the instruction manuals of the two projector technologies to be compared. In addition, the researchers also distributed questionnaires to collect data for comparing the two technologies, and carried out observations in order to see the differences between the two projectors directly. The advantages of DLP technology include smoother video, less visible pixel structure, a film-like appearance as on HDTV, deeper blacks, higher contrast, and a portable projector size. The advantage of LCD is more efficient use of light, so it can produce higher ANSI lumens than projectors with DLP technology. The 3LCD and DLP projector technologies differ in their light-processing pipeline, from the light input to its processing and projection into an image or video. Each has its own advantages and disadvantages.
“The Origins of Computer Graphics in Europe,” is being published in two parts: Part 1, in this issue of IEEE Computer Graphics and Applications, is subtitled “The Beginnings in Germany”; Part 2, to be published in the May/June issue, is subtitled “The Spreading of Computer Graphics in Europe.” I was a participant, contributor, and witness to the events reported here and I relate my personal story along with the broader history. Part 1 describes the origins and successful evolution of computer graphics in Germany, starting in 1965, and includes details of the people and subject matter of the earliest research groups. It describes the efforts undertaken to establish computer graphics as a proper academic discipline, including the founding of EUROGRAPHICS, and creation of institutes for both basic and applied research in computer graphics. Part 2 continues the story with a focus on activities contributing to the growth of the academic and industrial computer graphics communities across Europe and documents the two IFIP workshops at Seillac and the development of the GKS Graphics Standard. Over these years, computer graphics gained respect and importance as a component of the computer science curricula and became an important tool and enabling technology for applications for industry and for the IT market in Europe.
This collection consists of the texts of abstracts submitted to the republic-level scientific and practical conference on "Theoretical foundations and applied problems of modern mathematics", held on 28 March 2022 at Andijan State University in accordance with the "Plan of scientific and scientific-technical events to be held at the international and republic level in 2022", approved by Resolution No. 101-F of the Cabinet of Ministers of the Republic of Uzbekistan dated 7 March 2022.
This research paper contains a brief introduction to 2D reflection transformations along with a simpler, less complex algorithm in computer graphics. 2D reflection is used for manipulating, repositioning, resizing, and rotating objects, and also for obtaining the mirror image of real-world objects that are stored as images in a computer. A less complex algorithm is suggested to make it easier to understand and to manipulate images more efficiently according to user needs.
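A small sketch of the 2D reflections described above, expressed as homogeneous 3×3 matrices so that reflection about the x-axis, the y-axis, or the line y = x can all be applied the same way to a point; the sample point is arbitrary and the formulation is ours, not the paper's specific algorithm.

```python
import numpy as np

# 2D reflections as homogeneous 3x3 matrices, applied to a sample point (3, 2).
reflect_x_axis = np.array([[1,  0, 0],
                           [0, -1, 0],
                           [0,  0, 1]])
reflect_y_axis = np.array([[-1, 0, 0],
                           [ 0, 1, 0],
                           [ 0, 0, 1]])
reflect_y_eq_x = np.array([[0, 1, 0],
                           [1, 0, 0],
                           [0, 0, 1]])

p = np.array([3.0, 2.0, 1.0])                 # point (3, 2) in homogeneous form
for name, M in [("about x-axis", reflect_x_axis),
                ("about y-axis", reflect_y_axis),
                ("about y = x", reflect_y_eq_x)]:
    print(name, (M @ p)[:2])
```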
Standing as the first unified textbook on the subject, Liquid Crystals and Their Computer Simulations provides a comprehensive and up-to-date treatment of liquid crystals and of their Monte Carlo and molecular dynamics computer simulations. Liquid crystals have a complex physical nature, and, therefore, computer simulations are a key element of research in this field. This modern text develops a uniform formalism for addressing various spectroscopic techniques and other experimental methods for studying phase transitions of liquid crystals, and emphasises the links between their molecular organisation and observable static and dynamic properties. Aided by the inclusion of a set of Appendices containing detailed mathematical background and derivations, this book is accessible to a broad and multidisciplinary audience. Primarily intended for graduate students and academic researchers, it is also an invaluable reference for industrial researchers working on the development of liquid crystal display technology.
Air plasma sprayed (APS) thermal barrier coatings (TBCs) are a widely used technology in the gas turbine industry to thermally insulate and protect underlying metallic superalloy components. These TBCs are designed to have intrinsically low thermal conductivity while also being structurally compliant to withstand cyclic thermal excursions in a turbine environment. This study examines yttria‐stabilized zirconia (YSZ) TBCs of varying architecture: porous and dense vertically cracked (DVC), which were deposited onto bond‐coated superalloys and tested in a novel CO2 laser rig. Additionally, multilayered TBCs: a two‐layered YSZ (dense + porous) and a multi‐material YSZ/GZO TBC were evaluated using the same laser rig. Cyclic exposure under simulative thermal gradients was carried out using the laser rig to evaluate the microstructural change of these different TBCs over time. During the test, real‐time calculations of the normalized thermal conductivity of the TBCs were also evaluated to elucidate information about the nature of the microstructural change in relation to the starting microstructure and composition. It was determined that porous TBCs undergo steady increases in conductivity, whereas DVC and YSZ/GZO systems experience an initial increase followed by a monotonic decrease in conductivity. Microstructural studies confirmed the difference in coating evolution due to the cycling.
Recently, deep learning-based convolutional neural network methods for image super-resolution have achieved remarkable performance in fields such as security surveillance, satellite imaging, and medical image enhancement. Although these approaches improve performance on medical images, many existing works rely only on a pre-processing step and hand-designed filters to enhance image quality. Images reconstructed with such pre-processing and hand-designed methods are often blurry and introduce new noise, which can lead medical practitioners to wrong and potentially dangerous decisions. In this chapter, the authors review hand-designed as well as deep learning-based single-image approaches, together with image quality assessment metrics that allow results from the different approaches to be verified. They also discuss several important types of medical images and their properties.
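Since the chapter relies on image quality assessment metrics, the snippet below computes PSNR, a standard full-reference metric, with NumPy. It is a generic sketch of one such metric, not the specific metric set used by the chapter's authors; the example images are synthetic.

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    reconstructed/super-resolved image, both given as NumPy arrays."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example with random 8-bit images (illustrative only).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rec = np.clip(ref.astype(np.int16) + rng.integers(-5, 6, size=ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, rec):.2f} dB")
```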
Giant Unilamellar Vesicles (GUVs) are cell-sized aqueous compartments enclosed by a phospholipid bilayer. Due to their cell-mimicking properties, GUVs have become a widespread experimental tool in synthetic biology to study membrane properties and cellular processes. In stark contrast to the experimental progress, quantitative analysis of GUV microscopy images has received much less attention. Currently, most analysis is performed either manually or with custom-made scripts, which makes analysis time-consuming and results difficult to compare across studies. To make quantitative GUV analysis accessible and fast, we present DisGUVery, an open-source, versatile software that encapsulates multiple algorithms for automated detection and analysis of GUVs in microscopy images. With a performance analysis, we demonstrate that DisGUVery's three vesicle detection modules successfully identify GUVs in images obtained with a wide range of imaging sources, in various typical GUV experiments. Multiple pre-defined analysis modules allow the user to extract properties such as membrane fluorescence, vesicle shape and internal fluorescence from large populations. A new membrane segmentation algorithm facilitates spatial fluorescence analysis of non-spherical vesicles. Altogether, DisGUVery provides an accessible tool to enable high-throughput automated analysis of GUVs, and thereby to promote quantitative data analysis in GUV research.
Architecture has evolved over time: designers sought pragmatic, spontaneous, and appropriate solutions that met people's needs in urban and architectural spaces. In modern architecture, an intense and varied competition arose between architects through different currents of thought, schools, and movements, with creativity as the ultimate goal; at the same time, every architect distinguishes himself, individually or collectively, through tools of architectural expression and design representation, adopting a school of thought and using, for example, sheets of various sizes and diverse technical drawing tools so that the design can be read accurately by professionals or craftsmen beyond the geographical area to which he belongs. With the rapid technological development that accompanied digital craft in the contemporary world, digital tools condensed time, distance, and instruments and gave the concept greater accuracy, as virtualization has become the most effective tool in architecture for reaching ideal and typical results at the practical level or in pure research. At the level of residential design, housing plays an important role in government policies and is a basic unit common to all urban communities; the use of different programs to represent it in two or three dimensions (for example, software such as "AutoCAD", "3D Max", or "ArchiCAD") has given virtualization smart, creative, and attractive forms that lead to a better understanding of the residential spaces in use or to be used. One can thus examine the living system of a dwelling under design or under study and, in particular, recognize the spatial structure of a housing design using digital software applying "Space Syntax", for example, against the background of steadily growing digital and creative development supported by high-speed computers. The morphological structure of the dwelling is considered the most important object of investigation in contemporary residential design, through which researchers in this area aim to understand behavioral relations and social structures within the projected residential area, using Space Syntax techniques. Through the structural morphology of dwellings one can infer the quality of networks, levels of connectivity and depth, and places of openness or closure within the dwelling under study or under design. How, then, has this digital craft intelligently contributed to the perception of these spaces?
New prospects for the use of interactive systems have opened up thanks to the evolution of communication means at every level and to technological advances leading, in particular, to new mobile work terminals. Pervasive computing points to a new generation of interactive systems and requires new modes of human-computer interaction. Interactive systems must now adapt to their context of use while preserving their usability, without costly redesign and re-implementation. Research addresses new kinds of user interfaces, whether described as context-aware or as plastic, with varying degrees of integration of the notion of plasticity. In most methods, however, the adaptation is statically predefined by the designer at design time; when an adaptation is then needed after a contextual change, the UI must be sent back to the design phase. Moreover, the ability to evaluate the quality of the adaptation at runtime is often missing. In this thesis we are interested in human-computer interfaces capable of adapting dynamically to their context of use, taking contextual changes into account without returning to the design phase. Within this line of research, and starting from the concept of plastic UIs, our contribution consists of generating such a UI from an abstract UI model already specified in a method for the specification and design of interactive systems and/or from a task model. Our method relies on the notion of design patterns, which are used in the transition to the concrete interface and during adaptation. The system architecture is based on a composition of business components that can dynamically change their presentation facet; this principle is adopted as the solution for dynamic adaptation to the context of use. Our method also relies on the notion of learning: integrating a learning technique makes it possible to keep developing the system's knowledge base at runtime so as to preserve the usefulness of the adaptation. The proposed method is illustrated on two case studies: the first concerns a mobile tourist-guidance application; the second is taken from the field of industrial supervision.
The electric scooter is an eco-friendly transport system that will be useful for present and, more broadly, future generations, allowing them to power their vehicles with renewable energy instead of fossil fuels and to produce few or no pollutants. The frame is the foundation, often described as the skeleton of a vehicle, which supports the structure and protects the parts integrated into it. The electric scooter frame designed here is based on a set of requirements derived from data analysis, reviews, suggestions, and experience, which improve the accuracy and precision of the overall scooter while respecting established engineering principles. The main objective of this paper is to design the frame for the scooter and to perform impact and weight analyses using finite element analysis software.
This course, given in two modules, presents an overview of recent developments in image synthesis. The approach is based on computational geometry, with emphasis on determining the inherent complexity of problems in the field. Computational geometry serves as a background for a large number of application areas such as computer graphics, image processing, computer-aided design (CAD), robotics, operations research, and statistics. In the first module, selected problems related to graphics and CAD, such as convex hulls, line arrangements, searching, intersection, and proximity problems, are discussed. Algorithmic techniques including line sweeping, incremental construction, divide-and-conquer, and geometric transformations are also demonstrated as problem-solving paradigms. Topics discussed in the second module include quadratic upper and lower bounds for the hidden-line and hidden-surface problems, intersection-sensitive algorithms, and analyses of approximation algorithms. Two main directions can be observed in image synthesis: the development of fast methods for engineering applications and the pursuit of realism. Parallel processing offers one way to satisfy the resource requirements of both directions. While recently emerging parallel architectures are based on approximation algorithms, determining the parallel complexity of problems requires the analysis of exact algorithms. The final topic of the course is a recent result in this direction: the determination of the parallel complexity of the hidden-line problem.
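To give a concrete flavor of the computational-geometry techniques the course covers, the sketch below computes a 2D convex hull with Andrew's monotone chain algorithm, an O(n log n) sweep-style incremental method. It is a generic textbook implementation, not material taken from the course itself.

```python
def convex_hull(points):
    """Andrew's monotone chain: return the convex hull of 2D points
    in counter-clockwise order, excluding collinear points on edges."""
    pts = sorted(set(map(tuple, points)))          # sort by x, then y
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(sequence):
        hull = []
        for p in sequence:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]                  # drop duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
# [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]
```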