Questions related to Photogrammetry
If you have an area of several tens of acres (several 100,000 m²) and your only source of GNSS information is the drone's own receiver, can the distortion of the 3D model / orthomosaic be so large that calculations based on the model cannot be trusted?
In other words: do GCPs not only add global georeferenced accuracy, but also reduce the scale error of the result (for example, if you want to measure a landfill, the surface area, or the volume of rocks or debris)?
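A back-of-the-envelope way to see why scale matters: a relative scale error in the model propagates roughly quadratically to areas and cubically to volumes. A minimal sketch (the 1 % figure is just an illustrative assumption):

```python
# Propagation of a linear (scale) error to area and volume estimates.

def scale_error_effects(relative_scale_error):
    """Return (area_error, volume_error) for a given linear scale error."""
    s = 1.0 + relative_scale_error
    area_error = s**2 - 1.0      # areas scale with s^2
    volume_error = s**3 - 1.0    # volumes scale with s^3
    return area_error, volume_error

# Hypothetical 1 % linear scale error in the unreferenced model
area_err, vol_err = scale_error_effects(0.01)
print(f"area error ~ {area_err:.2%}, volume error ~ {vol_err:.2%}")
```

So even a modest scale error, which GCPs would constrain, inflates volume estimates noticeably.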
Hi, I'm going to use a Ricoh THETA V 360° camera for fast survey and 3D photogrammetry in a research project documenting built heritage sites at the territorial scale.
Does anyone have suggestions to improve the results and the quality of the models? So far I have run a few tests on archaeological sites, processing the data with Agisoft Metashape, but I'm not fully satisfied with the quality of the results (bad alignments, sparse dense clouds, rough meshes).
My questions regard:
- Which processing software is the most suitable (in terms of semi-automatic presets) for 3D photogrammetry from 360° photos (e.g., Metashape, RealityCapture, others)? Are there particular settings to apply manually in this case (beyond selecting spherical cameras as the source format)?
- If anyone has experience with Ricoh THETA cameras, which camera model is the most suitable for 3D photogrammetry? I'm currently testing a THETA V, but I need to buy a new one and I'm deciding between the Z1 and X models. Can someone suggest a review of these models for digital documentation purposes?
- For my research, geo-referencing of the processed models with GPS coordinates is also relevant. Considering the point above, which model would you suggest?
Thank you to all!
How can remote sensing and photogrammetry be applied to solve flooding problems in a community? Or, how can remote sensing and photogrammetry be used to predict areas that are likely to flood?
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
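If the image is north-up (or the pixel grid is otherwise related to the ground by an affine model), the mapping is x = a·col + b·row + c (and similarly for y), and with at least three known, non-collinear pixels you can recover the six parameters by least squares and then compute coordinates for every pixel. A sketch with NumPy, using made-up sample values:

```python
import numpy as np

def fit_affine(pixels, coords):
    """Fit x = a*col + b*row + c and y = d*col + e*row + f by least squares.

    pixels: (N, 2) array of (col, row); coords: (N, 2) array of (x, y).
    Needs at least 3 non-collinear known pixels.
    """
    cols, rows = np.asarray(pixels, float).T
    A = np.column_stack([cols, rows, np.ones_like(cols)])
    px = np.linalg.lstsq(A, np.asarray(coords, float)[:, 0], rcond=None)[0]
    py = np.linalg.lstsq(A, np.asarray(coords, float)[:, 1], rcond=None)[0]
    return px, py  # coefficients (a, b, c) and (d, e, f)

def apply_affine(px, py, col, row):
    """Ground coordinates of any pixel from the fitted coefficients."""
    return (px[0] * col + px[1] * row + px[2],
            py[0] * col + py[1] * row + py[2])

# Hypothetical known pixels on a 2 m/px north-up image
known_px = [(0, 0), (10, 0), (0, 10), (5, 7)]
known_xy = [(100.0, 500.0), (120.0, 500.0), (100.0, 480.0), (110.0, 486.0)]
px, py = fit_affine(known_px, known_xy)
print(apply_affine(px, py, 3, 4))  # ~ (106.0, 492.0)
```

If the image is rotated or only georeferenced via GCPs, the same six-parameter model still applies; only the fitted coefficients change.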
I have two STL files: one from a 3D scanner and the other made by photogrammetry. I want to know the surface deviation between these two STLs, and I need the values on a colour scale too. I have tried GOM Inspect, but it can only compare a CAD model with an STL, not two STLs.
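One open-source route: CloudCompare computes mesh/cloud distances directly, but the core computation can also be sketched by sampling points from both meshes (e.g. their vertices) and taking nearest-neighbour distances, which approximates the unsigned surface deviation. A sketch with SciPy, assuming you have already extracted the vertex arrays from the STLs (the toy planes below are stand-ins):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(points_a, points_b):
    """Unsigned nearest-neighbour distance from each point of A to cloud B."""
    tree = cKDTree(np.asarray(points_b, float))
    distances, _ = tree.query(np.asarray(points_a, float))
    return distances  # one value per point of A; colour-map these for a scale

# Hypothetical example: two slightly offset planar grids
a = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
b = a + [0.0, 0.0, 0.2]  # B lies 0.2 units above A
dev = surface_deviation(a, b)
print(dev.mean())  # ~0.2
```

Note this is point-to-point, not true point-to-surface distance; for dense samplings the difference is small, and CloudCompare offers the exact mesh-to-mesh variant.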
As is well known, in photogrammetric camera calibration using bundle adjustment with self-calibration, the coordinates of the principal point cannot be recovered from parallel images. This situation calls for convergent images to recover the principal point coordinates. A common explanation attributes this to the algebraic correlation between the exterior orientation parameters and the calibration parameters. Now, the question in other words: is there any deep explanation of the nature or type of this algebraic correlation? Is there any analytical proof for it? Or do we have to accept the empirical finding that we need convergent images for camera calibration?
GSD is commonly measured for aerial surveys and is influenced by many factors. I am interested in measuring the GSD of natural images (say, an image of a building). My goal is to measure the size of an object in an image based on its GSD.
I would appreciate your help.
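For a roughly fronto-parallel view of a planar object, the GSD follows the same pinhole relation used in aerial surveying: GSD = distance × pixel pitch / focal length (consistent units), and an object spanning N pixels is about N × GSD in size. A minimal sketch (the camera numbers are illustrative assumptions):

```python
def ground_sample_distance(distance_m, pixel_pitch_mm, focal_length_mm):
    """GSD in metres per pixel for a pinhole camera facing the object."""
    return distance_m * pixel_pitch_mm / focal_length_mm

def object_size(pixel_count, gsd_m):
    """Approximate real size of an object spanning pixel_count pixels."""
    return pixel_count * gsd_m

# Hypothetical numbers: 50 m to the facade, 4 um pixels, 24 mm lens
gsd = ground_sample_distance(50.0, 0.004, 24.0)
print(object_size(1200, gsd))  # building feature spanning 1200 px
```

The caveat for "natural" images is that the object distance varies across the frame and with obliquity, so the GSD is only constant if the object is roughly planar and perpendicular to the camera axis.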
Within the 3DForEcoTech COST Action, we want to create a workflow database of all solutions for processing detailed point clouds of forest ecosystems. Currently, we are collecting all solutions out there.
So if you are a developer, tester or user do not hesitate to submit the solution/algorithm here: https://forms.gle/xmeKtW3fJJMaa7DXA
You can follow the project here: https://twitter.com/3DForEcoTech
SfM would need a lot of photos. Is there any software that performs the rectification automatically using only a few photos?
UAV datasets (orthomosaics and DTMs) have very good spatial resolution, but a big challenge is the file size of this data. The large size causes hassles when you are working in certain computer applications or want to transfer the data.
Is there any procedure to reduce the size of the data without compromising its quality?
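Lossless compression (e.g. DEFLATE or LZW in GeoTIFF, plus internal tiling and overviews) reduces file size without altering pixel values; where some resolution loss is acceptable, block-averaging is the simplest resampling. A NumPy sketch of 2× block averaging (the tiny array stands in for a DTM tile):

```python
import numpy as np

def block_average(raster, factor):
    """Downsample a 2-D array by averaging factor x factor blocks.

    Height and width must be divisible by factor; crop beforehand if not.
    """
    h, w = raster.shape
    blocks = raster.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

dtm = np.arange(16, dtype=float).reshape(4, 4)
print(block_average(dtm, 2))  # 2x2 output, 4x fewer cells
```

In practice you would do this (or the lossless options) through GDAL or your GIS rather than by hand, but the trade-off is the same: lossless keeps the values, resampling trades resolution for size.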
We are aware of the advantages and limitations of both satellite-based remote sensing and aerial photogrammetry. However, when analyzing land-use/land-cover (LULC) changes induced by a natural disaster, such as a forest fire or a flood, which method, in your experience, is more suitable than the other for the aforesaid analysis? Why?
I plan to test the horizontal and vertical accuracy of photogrammetric products generated by a direct georeferencing (DG) process (without ground control points, GCPs) using a UAV system (Matrice 210 RTK V2, D-RTK 2 Mobile Station, Zenmuse X7 16 mm).
The official DJI website states that, with the Matrice 210 RTK V2 + Zenmuse X7 at a GSD of 2 cm, the produced orthoimages can achieve an absolute horizontal accuracy of less than 5 cm using DJI Terra as the mapping software.
If you have used this or a similar UAV system, I would appreciate hearing your experiences with the DG process. Specifically, can the RTK option for the above-mentioned UAV system deliver sufficient results (total (XYZ) error < 5 cm) if no GCPs are introduced?
Also, feel free to suggest papers about this or a similar UAV system (assessment of DG accuracy in a specific case study).
Thank you in advance.
Below are the few that are known to me. Which devices have you used to acquire building point clouds? What technology does each device use? I think many researchers are looking for device options for their various projects, and this post can help put the list on the table.
- iPad Pro 2020
- Google Tango
- ZED Cam
I am adding below the choices suggested by respondents:
- Geo SLAM - ZEB Go
- FARO Freestyle 3D hand scanner
- Dotproduct DPI-10
I did close-range photogrammetry on the surface of a joint. The photos were loaded into Agisoft and the 3D model of the joint surface was reconstructed.
Now I want to calculate the roughness of these models and also obtain the coordinates of the surface points.
- I am currently researching SfM for its use on rock core samples, fossils, and hand rock samples. SfM is widely used in many areas of both science and animation; I'm hoping to receive guidance, suggestions, and resources for geology applications.
- My project aims to create a recommended practice for SfM in geology, so if you have any experiences with SfM, positive or negative, I would love to hear about them.
- I would also like to inquire about newer, cutting-edge technologies and software for 3D imaging/photogrammetry.
I have some questions; can anybody help me with references?
- Which formulas are used for aerial triangulation in Pix4D Mapper?
- Which algorithms are used for the computer vision pipeline, bundle block adjustment, and stereo matching in Pix4D Mapper?
Thanks in advance!
As is well known, the dependent and independent relative orientation models for near-vertical images, derived from the collinearity model, provide linear solutions for the relative orientation parameters. I have looked through my collection of photogrammetry books and papers, and I did not find an exact history of their development. I would appreciate your help in providing historical information about their development.
There is a lot of research on photogrammetry, but very little of it is focused on cultural heritage inside museum settings.
As is well known, feature-based photogrammetry started as a very promising research program and direction in the last century, and it led to several algorithms that utilize lines, free-form curves, splines, and conic sections. In addition, features are promoted in the context of automation as a driving force. Nevertheless, we are not seeing practical systems that utilize features as computational entities for the photogrammetric orientation procedures or for the 3D reconstruction of object-space features. The simple question: why are we not seeing real progress? Is the problem at the conceptual level, the algorithmic level, and/or the implementation level? Does the complexity of photogrammetry itself contribute to this lack of progress? Do we need more mathematics to handle the features? Your comments could open the door for feature-based photogrammetry to penetrate the market in the form of practical systems.
I just started my Masters in Geo-informatics and have a massive interest in learning photogrammetry. Now I want to know how to get image coordinates using the collinearity equations (case of a diapositive).
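For reference, the collinearity equations project a ground point (X, Y, Z) into image coordinates given the exterior orientation (camera position and rotation matrix R) and the principal distance f. A minimal NumPy sketch, with the principal point assumed at (0, 0) as a simplification:

```python
import numpy as np

def collinearity(ground, camera, R, f):
    """Image coordinates (x, y) of a ground point via the collinearity equations.

    ground, camera: (X, Y, Z) triples; R: 3x3 rotation matrix (object->image);
    f: principal distance. Principal point assumed at (0, 0).
    """
    dX = np.asarray(ground, float) - np.asarray(camera, float)
    u, v, w = R @ dX
    x = -f * u / w
    y = -f * v / w
    return x, y

# Vertical photo: identity rotation, camera 1000 m above, point offset 100 m in X
R = np.eye(3)
x, y = collinearity((100.0, 0.0, 0.0), (0.0, 0.0, 1000.0), R, 0.152)
print(x, y)
```

With f = 152 mm and flying height 1000 m, the 100 m offset maps to x = 15.2 mm, as expected from the scale 1:6579.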
Dear 3D scanning and photogrammetry lovers,
As you all know, 3D acquisition facilities are (very) expensive, and some brands are omnipresent in publications because we are sure of their performance. Yet, technological progress is rapid, and cheaper options may be "sufficient" to answer many questions explored via geometric morphometrics (especially at the interspecific level).
What do you think about the Revopoint POP project?
or about the Einscan Sp? https://it3d.com/en/3d-scanners/einscan-sp/?fbclid=IwAR0BSQUqdmq-WnziPehZCtknrva5LFBEqgdlhefKbTb0pylpq93odOoGPds
In your view, do these devices offer reliable opportunities for applying 3D geometric morphometric techniques? Or is their precision not sufficient?
Thanks in advance for your feedback
What are the major applications, uses, and purposes of civilian drones? How is the world being transformed by the advancement of technology in the mapping sector? How are drones aiding mapping?
I am performing a simple close-range photogrammetry test and comparing the outcomes of photogrammetry tools such as VisualSFM, Meshroom, COLMAP, etc.
Kindly advise: what evaluation tests/criteria (such as point cloud density) can be used to compare the point clouds from the various photogrammetry tools, and what tools/software can assist in such an evaluation (such as CloudCompare or MATLAB)?
Test objects: e.g., a water bottle and other small objects.
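Beyond CloudCompare's cloud-to-cloud distance tool, some simple quantitative criteria can be computed directly from the clouds: point count, completeness, and mean nearest-neighbour spacing (the inverse of local density). A SciPy sketch of the spacing metric (the regular grid is a synthetic stand-in for an exported cloud):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_point_spacing(points):
    """Mean distance from each point to its nearest neighbour.

    Smaller spacing = denser cloud; useful for comparing SfM tools
    on the same test object.
    """
    pts = np.asarray(points, float)
    tree = cKDTree(pts)
    distances, _ = tree.query(pts, k=2)  # k=2: first hit is the point itself
    return distances[:, 1].mean()

# Hypothetical regular grid with 0.5-unit spacing
grid = np.array([[x * 0.5, y * 0.5, 0.0] for x in range(10) for y in range(10)])
print(mean_point_spacing(grid))  # 0.5
```

For accuracy (rather than density), compare each cloud against a reference scan of the same object, e.g. with CloudCompare's cloud-to-cloud or cloud-to-mesh distance.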
I'm working on SAR image co-registration. I want to split my image into two looks with the spectral diversity method, but I have not found its complete formulas in any article.
Does anyone know how I can do this? Or do you know the paper that fully explains the algorithm?
I have to find the shape, orientation, and size of a flat piece of material (simple shape) which is not aligned on the CNC table (active area).
I thought about transforming a picture into an STL file.
I have many ideas, but I don't know how to program (yet), and I am trying to find the most reliable solution.
Any suggestion is welcome
Hi dear community,
I am currently working on a project where I want to generate a 3D reconstructed model of rubble piles (resulting from a building collapse) via remote sensing. In my case, I employ an aerial LiDAR laser scanner as well as aerial photogrammetry for point cloud generation of the disaster scene. The problem is that only the surface of the scene that lies in the field of view can be reconstructed. However, in order to evaluate the structural behavior of the debris with regard to structural stability, I need to know how the collapsed elements are arranged beneath the debris surface. Does somebody have an idea how I can proceed, or has anybody conducted a related study? Is my objective even feasible?
Thank you in advance!
- I am interested in the field of photogrammetry: with which package can we extract information about the cross-section of a 3D model (i.e., where can we build a cross-section of a model)?
- How can I find out which coordinate is where?
- And how can I export to .obj format?
Thanks in advance!
We are using UAV multispectral data acquired by a Micasense RedEdge-M camera for mapping the concentration of suspended sediments and distribution of aquatic vegetation in shallow estuarine waters and are wondering if there is simple way to automatically georeference the data using Agisoft Metashape or any free software, while keeping the original pixel value.
I am actively working on creating HD maps using LiDAR and photogrammetry workstations. The final delivery of this data is in vector and raster formats. I am not familiar with the AI/ML concepts related to ADAS, and I would like to understand how the data is read and processed while the AV is in travelling mode.
I humbly request that you share related material and links to help me understand this technology better, so that I can prepare deliverables with better accuracy; we all know that driving safety is a prime concern for the success of AV technology.
It seems impossible, since most analyses are based on a 3×3 cell neighbourhood, but I would like to hear your experiences. I know that some hydrodynamic models like TELEMAC are able to perform this.
I have pictures, taken every 30 seconds by an HD camera fixed on a tripod, of a concrete beam onto which I have drawn a grid in thin white paint. The test lasts 30 minutes, and I would like to find the change in distance (or in 2D coordinates x and y) of specific points I marked on the grid in every picture. What program can do that? I am not happy with the one I have. Thanks.
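Dedicated DIC tools (e.g. Ncorr or GOM Correlate) do this, but the underlying idea is template matching: cut a small patch around each marked grid point in the first frame and search for its best match in each later frame. A NumPy sketch using a sum-of-squared-differences search (synthetic images stand in for the photos):

```python
import numpy as np

def track_patch(ref, cur, top, left, size, search=5):
    """Locate a size x size patch from `ref` (at top/left) in `cur`.

    Searches +/-`search` pixels around the original location and returns
    the (dy, dx) displacement minimising the sum of squared differences.
    """
    patch = ref[top:top + size, left:left + size]
    best, best_dyx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy:top + dy + size, left + dx:left + dx + size]
            ssd = float(((cand - patch) ** 2).sum())
            if ssd < best:
                best, best_dyx = ssd, (dy, dx)
    return best_dyx

# Synthetic test: shift a random texture by (2, 1) pixels
rng = np.random.default_rng(0)
ref = rng.random((50, 50))
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
print(track_patch(ref, cur, 20, 20, 8))  # (2, 1)
```

Real DIC implementations refine this to sub-pixel precision (correlation interpolation), which matters for small concrete strains; the fixed camera makes the problem much easier than full photogrammetry.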
I want to estimate glacier surface lowering using UAV data acquired on glacier frontal areas where it is too difficult and dangerous to measure GCPs.
What do you think about how orthophotos are made from historical analogue aerial imagery?
Suppose you have a set of analogue aerial images from several flight lines.
What software and methods would you use for orthophoto production?
How do you choose photo control points?
How do you measure the precision of the final orthophoto?
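On the last point: a standard precision measure for the final orthophoto is the RMSE at independent check points, i.e. points surveyed on the ground but not used in the adjustment. A minimal sketch (the residuals are made up):

```python
import numpy as np

def rmse_check_points(measured, reference):
    """Planimetric RMSE between orthophoto-derived and surveyed check points.

    measured, reference: (N, 2) arrays of (E, N) coordinates.
    """
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Hypothetical check-point residuals of (0.3, 0.4) m at every point
meas = np.array([[10.3, 20.4], [30.3, 40.4]])
ref = np.array([[10.0, 20.0], [30.0, 40.0]])
print(rmse_check_points(meas, ref))  # 0.5
```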
In a multi-temporal photogrammetric project, in order to monitor changes between 3D reconstructions of the same scene, the respective models must be co-registered robustly. Given a more-than-sufficient number of Ground Control Points (GCPs), fixed over time and usable to geo-reference the multi-temporal 3D models, which approach to co-registration is theoretically more correct:
- georeferencing all the models with the same GCPs (checking that they have comparable RMSE and reprojection errors) and then overlaying them?
- or setting up one georeferenced reference model and then co-registering the subsequent models to it (e.g., in CloudCompare > Align tool), aligning them with the first on the same GCPs (maintaining a low RMS alignment error)?
In a recent photogrammetric test of change detection I noticed some differences in the results obtained by adopting the two strategies just described.
I came across some studies where the camera settings differ, as do the opinions on them. When I set the camera to manual mode, I set the shutter speed, ISO, and f-number in a combination that avoids blurry images while still gathering enough light. But do you think using the auto-focus setting can cause the alignment process to fail? In terrestrial photogrammetry the focus can differ greatly, and in some UAV photogrammetry cases as well. And what if I use auto ISO or auto shutter speed? What is your opinion?
Hello. In principle, commercial software is nowadays used to produce orthophotos from drone imagery. What do you think is the major difference between these commercial packages? Do they have significant advantages and disadvantages beyond the algorithms used? Which commercial software would you recommend for UAV work?
I am trying to orthorectify IRS-1C and IRS-1D multispectral images using a DTM/DEM. I have the Cartosat-1 DEM at 1 arc-second (30 m). There are no RPCs provided by the vendor, and I was told that the product is neither orthorectified nor geometrically corrected.
I use ERDAS Imagine 2015 for working with satellite data. Any help on this will be highly appreciated.
Known are the edge lengths of triangle ABC of a tetrahedron ABCD, i.e., all angles between A, B, and C are also known. The position of D is unknown, but the angles between all edges to D, i.e., between DA and DB, DA and DC, and DB and DC, are known. Is it possible to determine the position of D, or do I need additional information? If the solution is not unique, how many solutions exist?
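This is the classical three-point resection problem (Grunert, 1841), which also underlies P3P in computer vision: writing dA, dB, dC for the unknown edge lengths to D, the law of cosines gives three quadratic equations, e.g. |AB|² = dA² + dB² − 2·dA·dB·cos∠ADB. The system has up to four real solutions, and even once dA, dB, dC are fixed, D can still be mirrored across the plane ABC, so extra information (such as an approximate position) is needed for uniqueness. A numerical sketch with SciPy, using a synthetic tetrahedron to generate consistent inputs:

```python
import numpy as np
from scipy.optimize import least_squares

def grunert_residuals(d, AB, AC, BC, cosADB, cosADC, cosBDC):
    """Law-of-cosines residuals for the three edge lengths to D."""
    dA, dB, dC = d
    return [dA**2 + dB**2 - 2*dA*dB*cosADB - AB**2,
            dA**2 + dC**2 - 2*dA*dC*cosADC - AC**2,
            dB**2 + dC**2 - 2*dB*dC*cosBDC - BC**2]

# Synthetic tetrahedron to generate consistent angle inputs
A, B, C, D = map(np.array, ([0, 0, 0], [4.0, 0, 0], [0, 3.0, 0], [1.0, 1.0, 5.0]))
dA, dB, dC = (np.linalg.norm(D - P) for P in (A, B, C))
cosADB = np.dot(A - D, B - D) / (dA * dB)
cosADC = np.dot(A - D, C - D) / (dA * dC)
cosBDC = np.dot(B - D, C - D) / (dB * dC)

sol = least_squares(grunert_residuals, x0=[4.0, 4.0, 4.0],
                    args=(4.0, 3.0, 5.0, cosADB, cosADC, cosBDC))
print(sol.x)  # one of up to four (dA, dB, dC) solutions, initial-guess dependent
```

Closed-form treatments (Grunert's quartic and its many rediscoveries) enumerate all solutions instead of converging to one.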
I am planning to add a polarization filter to my bathymetric SfM setup (DJI m600, Sony a6000, 20mm) in order to reduce sun glint.
Do I have to keep a constant angle of the UAV towards the sun?
Will I have to adjust the rotation angle of the filter for every flight? Do you see/know any other issues?
I am looking for papers related to computer vision, not medical science. I have an understanding of the basic principles and the math involved in stereo vision. I am looking for papers related to:
- How various factors, such as baseline, affect the performance of the stereo vision.
- Techniques to solve correspondence/ matching problem.
- Multiple Baseline Stereo or other prominent variants of stereo especially related to distance or position estimation.
I am looking for a basic set of 5-10 papers, and maybe 10-20 more to build upon the knowledge from the basic set.
I'm writing a dissertation on the use of artificial intelligence to enhance the cultural heritage applications of photogrammetry.
I was inspired by Yves Ubelmann's work in Palmyra (as featured in the Microsoft AI Services advertisement). However, I have been unable to locate any background information about the technologies used.
There seems to be a dearth of literature available that covers this area; has anybody any suggestions?
I work with photogrammetry and LiDAR data, but now I have a problem with some data.
I have a LiDAR point cloud (4.5 points/m²) that I need to filter to obtain a DTM. The problem is that the zone contains a lot of dense forest as well as some treeless areas (glades).
How can I filter this point cloud correctly? I have access to Global Mapper, but I don't know exactly which parameters I should use. Alternatively, perhaps you can recommend other software.
Thank you and have a nice weekend.
1 - My thesis is about extracting poles from mobile laser scanner point clouds. First, the point cloud has to be separated into ground and non-ground points; the pole points lie in the non-ground section. To do that, the 2D projection of the point cloud onto the XY plane is gridded (m × m). Then, in each square, the minimum height is calculated, and points whose elevation is below the cell minimum plus a threshold height are considered ground points. With this simple method, the ground points are identified.
2 - For extracting traffic signs from the LiDAR data, they can simply be extracted with an intensity filter (near-maximum intensity) and a shape filter.
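The grid-minimum ground filter described in point 1 can be sketched in a few lines (cell size and height threshold below are placeholder values):

```python
import numpy as np

def grid_ground_filter(points, cell=1.0, height_threshold=0.3):
    """Label points as ground using the per-cell minimum height.

    points: (N, 3) array; a point is ground if its height is within
    `height_threshold` of the minimum height of its XY grid cell.
    """
    pts = np.asarray(points, float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    keys = [tuple(k) for k in ij]
    zmin = {}
    for k, z in zip(keys, pts[:, 2]):
        zmin[k] = min(zmin.get(k, np.inf), z)
    return np.array([z <= zmin[k] + height_threshold
                     for k, z in zip(keys, pts[:, 2])])

# Flat ground at z ~ 0 with one pole point at z = 4
pts = np.array([[0.2, 0.2, 0.0], [0.7, 0.4, 0.05], [0.5, 0.5, 4.0]])
print(grid_ground_filter(pts))  # [ True  True False]
```

The pole points then live in the non-ground subset, ready for the intensity and shape filtering of point 2.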
I'm a media historian and theorist without extensive technical expertise, looking to better understand how digital photogrammetry functions to support current work in VR (for example in Google Earth VR). I can find sources about how image data is captured. I'd like to find more information about how the data is then dynamically used by the VR engine to render an interactive/navigable environment that remains immersive across multiple scales. I'd appreciate any recommendations, advice, or possible conversations. Thank you! -Brooke
Are the benefits of photogrammetry fairly communicated to those outside the mapping community? I understand the historical evolution of photogrammetry, but I'm looking for ways to expand the use of photogrammetry and, in turn, its market.
In my project "CC-technology" I simulated a ceiling, and in post-processing I generated animations of stress and strain.
We will run the experiment soon and measure the strain with photogrammetry.
How can I compare the animated strains from Ansys with the strains measured by photogrammetry? The dimensions of the ceiling are 4.50 m × 4.50 m × 0.16 m.
Is there software for comparing the simulated and experimental strains over this 4.50 × 4.50 m area that shows the difference between them? With this data I could see where my simulation still needs to be improved.
By hand, I could only compare the animations visually.
When I process my photos in Agisoft, I sometimes encounter a "not enough memory" error.
Can I solve this problem by increasing virtual memory instead of upgrading my physical memory, accepting that the speed will be significantly reduced?
Photogrammetry is the science of making measurements from photographs, relying on many mathematical formulas. Ultimately, a 3D model can be produced in many formats such as FBX, OBJ, etc.
So, how do you think the produced 3D models can benefit industry?
I am trying to understand, theoretically, how effective pixel resolution scales with the SfM geometric considerations in an FB-HTP crop scenario, and specifically how commercial computational products, for example Pix4D or ReCap, handle pixels, or could handle raw pixels at higher orders.
Hello and greetings from Germany,
Did you already publish your method for measuring pelvic rotation? I am interested in this topic because we can currently only use 3D photogrammetry, which is very expensive and not mobile. In diagnosing pain in the groin or symphysis region in soccer players, pelvic rotation seems to me to be a very important influencing factor.
Hello folks. Need a hand identifying the species of this Anuran. The specimen came into the collections at the Oxford University Museum of Natural History with the rather unhelpful label of 'Spadefoot toat'.
As part of other work, I've done some photogrammetry and CT scanning, and the specimen appears to be female based on the presence of eggs in the scans.
3D Model of Skin: https://skfb.ly/MXDH
3D Model of Skeleton: https://skfb.ly/RzyK
Any help would be greatly appreciated.
Specifically looking for techniques and applications but related research would be of interest.
Many thanks Tony
The effort to generate underwater 3D maps using photogrammetry has increased over recent years, and it is now possible to produce high-resolution models. We are currently working on a method to create georeferenced 3D maps. The next step would be to analyse these maps of marine seascapes. My first thought is a landscape approach, since such tools already exist for land analyses. Do you know whether such methods already exist for the marine context?
Normally photogrammetry follows a simple pattern; however, this pattern tends to be intuitive. I have been working on a few formulas to avoid the intuitive method. Is there a generalised formula for this purpose? I mean: a way to plan the distribution of camera positions around an object before taking the photos and guarantee 100% overlap.
Thank you very much.
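For a nadir view of a roughly planar object, the spacing between exposures that achieves a target forward overlap has a closed form: footprint = distance × sensor size / focal length, and spacing = footprint × (1 − overlap). A minimal sketch (illustrative numbers, not a replacement for a full flight-planning tool):

```python
def camera_spacing(distance, sensor_size_mm, focal_length_mm, overlap):
    """Distance between consecutive exposures for a target overlap (0-1)."""
    footprint = distance * sensor_size_mm / focal_length_mm
    return footprint * (1.0 - overlap)

# Hypothetical: 30 m range, 24 mm sensor dimension, 24 mm lens, 80 % overlap
print(camera_spacing(30.0, 24.0, 24.0, 0.8))  # ~6 m between shots
```

For convergent or close-range configurations around a 3D object the geometry is more involved, but the same footprint-minus-overlap reasoning generalises to angular steps around the object.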
Is it possible to have different point densities within a point cloud file of a large area? If so, which parameters play role in it?
Also, does it matter if the point cloud is created from image photogrammetry or airborne laser scanning (ALS)?
I mean anything about lighting, perspective, etc.
The analysis will be applied to some fine aggregates (smaller than 0.075 mm), and the change in colour is of great importance.
Has anyone found an effective way to rotoscope dynamic objects out of a scene filmed with a moving camera? I'm looking for any research on this topic as well.
I am working on a project mapping a wreck site using photogrammetry and am running into problems rotoscoping fish. Does anyone know of a way to automatically rotoscope the fish into a separate matte using After Effects? I am researching other methods that would let me photoscan the wreck while removing the fish from tracking.
One method I'm considering is tracking the camera path in a separate software package and exporting the camera position, orientation, and focal length into PhotoScan via XML ("import camera"). While this would not help with matting the fish, would known correct camera positions help PhotoScan's processing?
I'm looking for references and information about digital frame-type sensors and their specifications, or any related information regarding these systems, their corrections, etc.
Thank you in advance.
This questionnaire is being carried out to support my master's thesis: "The impact of user-defined parameters on DEM accuracy". Using feedback from users who work with DEMs, conclusions will be drawn about users' perception of the importance of user-defined parameters in digital terrain modelling.
Thanks in advance to all!
I'm currently following a BSc in Surveying Science with specialization in photogrammetry and remote sensing. In my general degree, I'm interested in doing research on forest canopy loss using airborne C-band SAR images.
So I kindly request that anyone send me articles relevant to my research interest; it would be very helpful to my study and a pleasure to me.
How can photogrammetry be used to trace the geometry of arches in historical buildings with the least error?
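Once the arch is reconstructed as a point cloud, one way to trace its geometry is to fit analytic primitives to sampled profile points; for a circular arch, the algebraic (Kåsa) circle fit reduces to a single linear least-squares solve. A NumPy sketch on synthetic arch points:

```python
import numpy as np

def fit_circle(points):
    """Kasa algebraic circle fit: returns (cx, cy, radius).

    points: (N, 2) array sampled along the arch intrados or extrados.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly, with c = r^2 - cx^2 - cy^2.
    """
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, float(np.sqrt(c + cx**2 + cy**2))

# Points on a semicircular arch of radius 2 centred at (1, 0)
t = np.linspace(0, np.pi, 20)
arch = np.column_stack([1 + 2 * np.cos(t), 2 * np.sin(t)])
print(fit_circle(arch))  # ~ (1.0, 0.0, 2.0)
```

The residuals of the fit then quantify how far the real arch deviates from the ideal shape; pointed or elliptical arches need a different primitive (two arcs, or a conic fit).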
Where can I find good literature on photogrammetric error sources and impacts on modeling in engineering applications ? Thanks in advance for your replies.
As is well known, in photogrammetry as well as in other disciplines, we have to deal with non-linear functions that need initial approximations for their unknown parameters. In general, these initial values have to be close to their true or best-estimate values within a certain percentage. Although there are several linear solutions for these problems, and some empirical guidelines, the interesting question is: is there any deep mathematical theory for analyzing the non-linearity of a function?
The 2D projective transformation is used in photogrammetry for 2D rectification, and it accounts for several motions such as translations, rotations, and scale. Now the question is:
Could the 2D projective transformation be seen as equivalent to the 3D similarity transformation for a plane?
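One standard way to see the relationship (from multi-view geometry, e.g. Hartley & Zisserman): if all object points lie on a plane with unit normal $n$ at distance $d$ from the first camera, then any rigid or similarity motion $(R, t)$ between two calibrated views induces the plane homography

```latex
H \sim K_2 \left( R - \frac{t\,n^{\top}}{d} \right) K_1^{-1}
```

so the 2D projective transformation is exactly what a 3D motion restricted to a plane looks like in the images. The correspondence is not one-to-one, however: a general 2D homography has 8 degrees of freedom, while a 3D similarity has only 7, so homographies form a strictly larger family than those induced by similarities of the plane.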
During the last three decades, surveying technologies such as total stations, GPS, and close-range photogrammetry have been used extensively in deformation measurement and monitoring. Now the question is:
How do we compare surveying engineering technologies for deformation measurement with other tools such as dial gauges and LVDTs? Surveying competes on several fronts, such as portability, ease of use, and richer information (3D measurement vs. 1D in the case of the dial gauge). On the other hand, the dial gauge is more accurate in terms of the number of decimal digits. So, once again: how do we compare them in terms of accuracy, acceptance in practice, and cost, in light of the other tools and technologies? The answer has some competing aspects. It would also be interesting to analyze where the division of labor between the surveying technologies and the other tools will occur, as well as the complementary aspects of their mutual use.
Is there any difference between the linear solution of the Direct Linear Transformation (DLT) in photogrammetry and its non-linear version?
I have not run enough experiments yet to address this question, but my initial findings indicate that there are some differences in the residuals obtained from the linear and non-linear solutions.
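For context: the linear DLT minimises an algebraic error (the residual of a homogeneous system, usually solved by SVD), whereas the non-linear version minimises reprojection error in image space; since the objective functions differ, somewhat different residuals are expected with noisy data. A NumPy sketch of the linear step (synthetic, noise-free data, where both versions agree):

```python
import numpy as np

def dlt(object_points, image_points):
    """Linear DLT: estimate the 3x4 projection matrix from >= 6 points."""
    rows = []
    for (X, Y, Z), (x, y) in zip(object_points, image_points):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)  # singular vector of the smallest singular value

def project(P, point):
    """Pinhole projection of a 3D point with projection matrix P."""
    h = P @ np.append(point, 1.0)
    return h[:2] / h[2]

# Synthetic check: a known camera and noise-free, non-coplanar points
P_true = np.array([[800, 0, 320, 10.0], [0, 800, 240, 20.0], [0, 0, 1, 5.0]])
obj = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1], [2, 1, 2], [1, 2, 4.0]])
img = np.array([project(P_true, p) for p in obj])
P_est = dlt(obj, img)
print(np.abs(project(P_est, [0.5, 0.5, 2.0]) - project(P_true, [0.5, 0.5, 2.0])).max())
```

With noise, the non-linear refinement (iterating on reprojection error, starting from this linear estimate) generally yields slightly different, and statistically preferable, residuals.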
Kite Aerial Photography (KAP) has so many advantages that kites are at least as suitable as UAVs for monitoring coastal dynamics.
Think about it, kites are:
- less regulated, which means higher altitudes thus wider footprints
- extremely inexpensive and portable
- non-intrusive, licensing-free
- wind-friendly, the more wind the more payload, thus, more sensors (RGB camera, micro-Lidar, Multispectral sensors, IMUs, GPS)
- less stable than UAVs, which is good for Structure from Motion algorithms, because the same point is seen at different angles and scales, and more off-nadir images mean less doming effect
Obviously, zero wind means no kites. But coastal areas are windy by nature.
Moreover, if you set target points, record accurate location (dGPS), then use the targets network to orthorectify the KAP imagery, Structure from Motion algorithms produce DSMs and Orthoimages as good as UAVs.
One of the most important coastal issues tackled with a KAP approach received international attention in 2014, when KAP was used for the world-famous Dutch project "Zandmotor".
The point is:
Help me find at least 5 robust arguments that could prevent kites from becoming the next coastal monitoring tool.
Especially in Least Developed Countries or in Pacific countries, where low-lying atolls are drowning and UAVs or fine-resolution satellite imagery are just too expensive to use.
I would also like to know if anyone has compared the gain and offset of different bands from the same camera (green, red, and NIR): would it be safe to assume that the gain and offset of the green band and the NIR band are proportional, given a known exposure?
Are they the same thing (or do they share the same principle)? I am somewhat confused by these two. If they are different things, what sets them apart? And are there any other 3D reconstruction techniques besides these two that I should know about?