Conceptual diagram of the workflows related to the knowledge process described in LLGG 2011.

Source publication
Thesis
The aim of the thesis is the definition of a parametric modelling methodology that allows, in a short time and at a sustainable cost, the digital acquisition, modelling and analysis of urban aggregates, in order to facilitate seismic vulnerability mapping actions in historic centres. The research involves the use of direct data (site surveys)...

Contexts in source publication

Context 1
... and BIM (Building Information Modeling) methodologies allow scales ranging from the territorial one (GIS) to the individual building organism (BIM) to be considered in relation to specific and multipurpose data management activities (fig. 2). These methodologies are well suited to the needs imposed by the activities required for seismic risk assessment. Unfortunately, they are often adopted separately, without regard to the required level of knowledge, which leads to a wider approximation at the urban scale and insufficient detail to establish the level of seismic safety ...
Context 2
... of digital culture, the practice of manual drawing obliged one to develop these hypotheses and transform them into theses during surveying and restitution operations. Today, on the other hand, there is too often a tendency to accumulate a quantity of data that is often redundant and sometimes insufficient for the intended purposes (Coppo, 2010) (fig. 2). The surveying of urban centers has always represented, in the history of surveying techniques, one of the most important applications. Even today, the production of cartographic instruments still constitutes the majority of applications of surveying techniques. The cartographies of the Roman era, drawn up for cadastral purposes, ...
Context 3
... in a two (or more) dimensional fashion. Conventional textual languages are not considered two-dimensional since the compiler or interpreter processes it as a long, one-dimensional stream" (Myers, 1986). The first VPLs for geometry modeling purposes can be found in the late '80s: PRISMS (nowadays known as Houdini) and ConMan (Haeberli, 1988) (fig. 23). In the 2000s, parametric design saw renewed success, with a subsequent spread of programming tools (e.g. Grasshopper, Dynamo, Marionette) for design purposes. The applications went far beyond that, as the new VPLs allowed the management of entire workflows (and data) even between different BIM environments, thus enabling a high ...
Context 4
... of interoperability. VPLs for architecture began to be recognized as programming languages capable of facilitating operations that designers, engineers, and architects used to carry out manually (Rutten, 2012). Together with the BIM revolution, these topics started to be included in the training of young architects (Boeykens et al., 2009) (fig. ...
Context 5
... the most widely used methods for analyzing the seismic vulnerability of entire territorial districts, there are statistical analyses. These focus on determining vulnerability mainly with reference to several main features of building units (such as building/construction type) in order to analyze their distribution over the territory (fig. 25). The object of study then becomes an entire territory, without considering the morphological characteristics of individual urban fabrics (at the cost of accuracy), allowing, however, an expeditious and broad assessment for the definition of emergency plans covering one or more territorial areas. The data generally used for this type of ...
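A minimal sketch of how such a statistical screening can be organised is given below, assuming a tabular inventory of building units; the column names, vulnerability classes and classification rule are hypothetical and purely illustrative, not taken from the thesis.

import pandas as pd

# Hypothetical inventory: one row per building unit (illustrative data only).
units = pd.DataFrame({
    "unit_id": [1, 2, 3, 4, 5],
    "district": ["A", "A", "B", "B", "B"],
    "construction_type": ["masonry", "masonry", "rc_frame", "masonry", "rc_frame"],
    "construction_period": ["<1919", "1919-45", ">1980", "<1919", "1946-60"],
})

# Placeholder rule assigning a coarse vulnerability class from the main
# typological features of the unit.
def vulnerability_class(row):
    if row["construction_type"] == "masonry" and row["construction_period"] == "<1919":
        return "high"
    if row["construction_type"] == "masonry":
        return "medium"
    return "low"

units["vulnerability"] = units.apply(vulnerability_class, axis=1)

# Distribution of vulnerability classes per territorial district, i.e. the kind
# of aggregate layer used for expeditious emergency planning.
distribution = units.groupby(["district", "vulnerability"]).size().unstack(fill_value=0)
print(distribution)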
Context 6
... analyses follow a mechanistic procedure where structural behavior is investigated in detail by simulating seismic actions on the building unit, from which limit values of the structure's strength are derived with great accuracy (fig. 26). Such an investigation requires a considerably higher level of information than statistical surveys: geometric surveys (from the structural scheme to the internal distribution scheme), analysis of the historical chronology of interventions, and the performance of on-site tests to determine the mechanical and physicochemical characteristics of ...
Context 7
... describe through numerical quantities and therefore makes use of descriptive tables and technical reports. This methodology is also considered by the 2011 LLGG; in particular, the reference is to the level 1 assessment, which takes these analyses into account and whose results can be qualitative (Predari et al., 2019) (fig. 27). ...
Context 8
... (they contain only themselves). LOD2, which requires the semantic division between the parts of the building, consists of deconstructing the prisms generated in LOD1. The objects obtained from this deconstruction have, as their first index, the index of the parent prism they derive from and, as their second index, one proper to the second depth level reached (fig. 2). This semantic mechanism was maintained in all modeling phases. However, although this approach makes it possible to trace all the necessary semantic levels of detail, the system does not take into account the standard CityGML data structure and remains limited to the experience of this ...
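A minimal sketch of this indexing logic is shown below, assuming a simple object tree whose identifiers concatenate the parent prism index with a second-level index; class and method names are illustrative and do not reproduce the modelling code developed in the thesis.

from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    index: str                    # e.g. "3" for a LOD1 prism, "3.2" for one of its parts
    children: list = field(default_factory=list)

    def deconstruct(self, n_parts: int):
        """Split the object into n_parts LOD2 children, keeping the parent
        index as the first component of each child's index."""
        self.children = [
            SemanticObject(index=f"{self.index}.{i + 1}") for i in range(n_parts)
        ]
        return self.children

# LOD1 prism n. 3 of an urban block, deconstructed into three semantic parts.
prism = SemanticObject(index="3")
for part in prism.deconstruct(3):
    print(part.index)             # 3.1, 3.2, 3.3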
Context 9
... depth filtering. The latter was preferred to the 'Aggressive' one to avoid a relevant loss of data during the reconstruction. The final dense cloud consists of 43,422,818 points. The entire pipeline, from acquisition to the processing and cleaning phases, required about 3 hours and 15 minutes. The final weight of the point cloud is equal to 639 MB (fig. 20). Regarding the SLAM survey, the new BLK2GO sensor from Leica Geosystems was chosen. As already mentioned in paragraph 3, the characteristics of the sensor determine a specific procedure for using the instrument. In particular, great care must be taken during the initialization phase of the sensor, since lifting it too suddenly from the ...
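As a sketch of the photogrammetric processing part of such a pipeline, the lines below use the Agisoft Metashape Python API; the software is not named in this excerpt, so this is an assumption, parameter names differ slightly between API versions, and the image paths and quality settings are placeholders.

import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])   # placeholder image list

# Sparse reconstruction (image matching and camera alignment).
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()

# Depth maps with 'Mild' filtering, preferred over 'Aggressive' to limit the
# loss of data during reconstruction.
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()   # renamed buildPointCloud() in Metashape 2.x

doc.save("survey.psx")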
Context 10
... inaccuracy due to cloud roughness. The result of the analysis can be directly represented on the point cloud used as a reference (TLS). Therefore, a colour scale was applied to display the deviations in a range from -30 cm to +30 cm. For the SFM scan analysis, a mean deviation of 3.3 cm and a higher standard deviation of 1.05 m were calculated (fig. 23). From the visual analysis of the comparison, it is possible to notice higher deviation values near the ledges, in the noisiest areas and in the upper part of the facades (22.5 to 30 cm). For the road, ground floor and facade planes, the deviation values are in the range of +/- 3 cm. Regarding the analysis with the SLAM scan, the mean ...
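A minimal sketch of such a cloud-to-cloud check is given below, using Open3D as a stand-in for the comparison software, which is not named in this excerpt; it computes unsigned nearest-neighbour distances (the thesis reports signed deviations), and the file names are placeholders.

import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("tls_reference.ply")   # TLS reference cloud
evaluated = o3d.io.read_point_cloud("sfm_scan.ply")        # SFM (or SLAM) cloud

# Nearest-neighbour distance from each evaluated point to the reference cloud.
distances = np.asarray(evaluated.compute_point_cloud_distance(reference))
print(f"mean deviation: {distances.mean():.3f} m")
print(f"std deviation:  {distances.std():.3f} m")

# Colour the evaluated cloud on a fixed 0-30 cm scale for visual inspection.
clipped = np.clip(distances, 0.0, 0.30) / 0.30
colors = np.stack([clipped, np.zeros_like(clipped), 1.0 - clipped], axis=1)
evaluated.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("sfm_deviation_colored.ply", evaluated)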
Context 11
... observation that emerges is that for the SFM scan the lowest offset values are found in the proximity of wider and well-lit urban canyons, while the SLAM scan performs better in narrower and shaded urban canyons. It is worth highlighting that the SLAM scan has a drift whose maximum deviation reaches 20 cm over a distance of about 200 m (fig. 24). The local analysis focused on a four-level portion of the building that has architectural elements typical of Italian historic centers, with an exception for the planarity analysis: in this case, a plain wall (3.50 x 2.50) at ground level was chosen. With regard to the analysis of planarity, the SFM and SLAM ...
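For the planarity part, a minimal sketch of a best-fit-plane check is shown below: a plane is fitted to the wall sample by SVD and the residuals are reported as deviations. The input file name is a placeholder and the tooling used in the thesis is not specified in this excerpt.

import numpy as np

points = np.loadtxt("wall_patch.xyz")          # N x 3 array of wall points (metres)
centroid = points.mean(axis=0)

# The plane normal is the right-singular vector with the smallest singular value.
_, _, vt = np.linalg.svd(points - centroid)
normal = vt[-1]

# Signed distances of each point from the best-fit plane.
residuals = (points - centroid) @ normal
print(f"RMS deviation from plane: {np.sqrt((residuals ** 2).mean()) * 100:.2f} cm")
print(f"max |deviation|:          {np.abs(residuals).max() * 100:.2f} cm")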
Context 12
... the Number of Neighbours algorithm was used, which returns the number of points contained within a sphere of a specified radius. In this work, a sphere radius of 5 cm was used. Visual analysis of the scans reveals a more homogeneous density in the SFM scan compared to the SLAM scan, where there are more data gaps between overlapping parts (fig. 25). The table below (tab. 2) shows the mean deviations from the three scan ...
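A minimal sketch of this density measure is given below, using scipy as a stand-in for the point cloud software (not named in the excerpt); the input file name is a placeholder.

import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("scan.xyz")                # N x 3 point cloud (metres)
tree = cKDTree(points)

# Number of neighbours within a 5 cm sphere around each point
# (the point itself is excluded from the count).
radius = 0.05
counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(points, r=radius)])

print(f"mean density:   {counts.mean():.1f} points per sphere")
print(f"median density: {np.median(counts):.1f} points per sphere")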
Context 13
... to understand the level of detail achievable by the two clouds in relation to the recognition of openings in the facades. In the SLAM scan, edges can be clearly identified with a 15 cm kernel sphere, while the radius has to be doubled for the SFM scan. However, the higher noise level makes it difficult to clearly identify edges in the SFM scan (fig. ...
Context 14
... analyses conducted so far allow qualitative and quantitative comparisons between the two tested technologies (fig. 27). First of all, it is important to keep in mind that the results have to be assessed against the purpose of the work: in this case, a fast urban 3D acquisition for the creation of a CIM of the historical center, to be updated over time. The table below (tab. 3) compares the main characteristics that emerged from the technical ...
Context 15
... operation). After this, the windows are grouped by horizontal bands of openings. Thanks to a clustering algorithm from Cockroach, these sub-clouds can be clustered until the individual windows are obtained. In this way, each window is a separate cloud which is, however, semantically linked to the succession floor - facade - building - city block (fig. 52). The objective to be achieved with this clustering is to create a bounding box parallel to the facade in order to obtain a surface with which to trim the facade and thus recreate the opening. However, this process is not without its imperfections. Indeed, the noise present between one opening and the next results in the creation of ...
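A minimal sketch of the clustering step is shown below, using DBSCAN in Open3D as a stand-in for the Cockroach component used inside Grasshopper; the input file, eps and min_points values are assumptions, and an axis-aligned box is used here in place of the facade-parallel bounding box described above.

import numpy as np
import open3d as o3d

band = o3d.io.read_point_cloud("openings_band.ply")   # one horizontal band of openings

# Cluster the band into individual windows (thresholds are illustrative).
labels = np.asarray(band.cluster_dbscan(eps=0.10, min_points=50))

for label in range(labels.max() + 1):
    window = band.select_by_index(np.where(labels == label)[0])
    # Bounding box of the window cloud, later used to trim the facade surface.
    box = window.get_axis_aligned_bounding_box()
    print(f"window {label}: {len(window.points)} points, extent {box.get_extent()}")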
Context 16
... for level 1 and level 2 is the same. They collect the input data, reflecting the order of the key-value pairs described in the CityJSON specification, and organise them in data lists in Grasshopper. For the purpose of the application, the CIM urban block model described in section 4.2.3 was used. Below is an image of the developed VPL code (fig. 72). Compared to the structure of CityJSON, there are two main differences in the CityGH proposal. The first one relates to the geometries: as this format is currently designed for passing 3D city models within Grasshopper, a system for deconstructing geometries has not been developed. The geometries in each list created ...
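As a reference for the structure being mirrored, below is a minimal sketch of a CityJSON fragment for a single building written as a Python dictionary; the object identifier, attributes and coordinates are illustrative, and only two of the solid's surfaces are listed for brevity.

import json

city_model = {
    "type": "CityJSON",
    "version": "1.1",
    "transform": {"scale": [0.001, 0.001, 0.001], "translate": [0.0, 0.0, 0.0]},
    "CityObjects": {
        "block4_building2": {                       # illustrative identifier
            "type": "Building",
            "attributes": {"construction_type": "masonry"},
            "geometry": [
                {
                    "type": "Solid",
                    "lod": "1",
                    # Boundaries reference indices into the shared vertex list:
                    # shells > surfaces > rings > vertex indices.
                    "boundaries": [[[[0, 1, 2, 3]], [[4, 5, 6, 7]]]],
                }
            ],
        }
    },
    # Integer vertices, converted to metres by the transform above.
    "vertices": [
        [0, 0, 0], [10000, 0, 0], [10000, 8000, 0], [0, 8000, 0],
        [0, 0, 6000], [10000, 0, 6000], [10000, 8000, 6000], [0, 8000, 6000],
    ],
}

print(json.dumps(city_model, indent=2))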