Science topic

Photogrammetry - Science topic

Photogrammetry is the science of making measurements by the use of stereoscopic photographs.
Questions related to Photogrammetry
  • asked a question related to Photogrammetry
Question
2 answers
Dear RG-Community,
I have been using Agisoft Metashape for UAV imagery processing for quite a while now. Also a while ago, I stumbled upon the Micasense GitHub repository and saw that individual radiometric correction procedures are recommended there (e.g., vignetting and row gradient removal -> https://micasense.github.io/imageprocessing/MicaSense%20Image%20Processing%20Tutorial%201.html). Now I was checking which of those radiometric corrections are also performed during processing in Agisoft Metashape. In the manual (Professional version) I could only find that vignetting is mentioned.
Does anyone know how to learn more about the detailed process in Agisoft Metashape, or, even better, how to perform a thorough radiometric image correction that removes any radiometric bias without running the risk that it collides with the Agisoft Metashape process?
Thanks for your help,
Rene
Relevant answer
Answer
Thanks for responding, Fulgence Hatangimana. Your answer provides a brief overview of first doing the radiometric correction by following the GitHub repo by Micasense. However, my question was trying to get into the details of the radiometric correction that is done by Metashape. For example, if I apply the row gradient correction before processing my images in Metashape, how do I know that Metashape is not trying to repeat the same correction?
  • asked a question related to Photogrammetry
Question
2 answers
Hello,
I am studying the new ASPRS guidelines, version 2, published in 2023. I am stuck with an equation prescribed for estimating the horizontal accuracy of a LiDAR dataset using the error values published by the GNSS and IMU manufacturers.
In the equation for Horizontal RMSE, there is a scaling term (1.478) used for the IMU errors. This is different in the two different versions of the guidelines. I'm attaching screenshots of both the equations as well as the link for the two documents.
If anyone who is an expert in this subject could provide some insight, I would be grateful, as I'm really stumped by this.
ASPRS guidelines version 1, 2014: https://shorturl.at/aHM36 page no A7
ASPRS guidelines version 2, 2023: https://shorturl.at/s5679 page no 16
If I need to provide further information, I'm happy to do so.
Relevant answer
Answer
Aahed Alhamamy, the author, Dr. Abdullah, had this to say: "The new equation in edition 2 differs from the one we published in edition 1 as it is based on two accuracies for the IMU. In the earlier version, the equation assumes one accuracy for the roll, pitch, and heading. Since the accuracy of the heading is always worse than the ones for roll and pitch, we revised the equation, so the user can enter two values for the IMU accuracy."
  • asked a question related to Photogrammetry
Question
6 answers
If you have an area of several tens of acres (several hundred thousand m²) and your only source of GNSS information is that of the drone, can there be distortions of the 3D model / orthomosaic so large that calculations based on this model cannot be trusted?
In other words: Do GCPs not only add global georeferenced accuracy, but also decrease the error of the scale of the result (for example if you want to measure landfill, the surface area or the volume of some rocks or debris) ?
Relevant answer
Answer
Yes, without GCPs and RTK/PPK, it is highly possible to obtain wrong or inaccurate geometric information in UAV-based photogrammetric mapping. When dealing with large areas, relying solely on the drone's GNSS information can lead to distortions in the 3D model or orthomosaic. GCPs not only add global georeferenced accuracy but also help decrease errors in the scale of the results, making them essential for reliable measurements.
The accuracy of UAS-based photogrammetric mapping depends on several factors, including Ground Control Points (GCPs), flight height, camera resolution, GNSS accuracy of the device, weather conditions, processing software, user experience, etc. While it is possible to obtain satisfactory 3D models without GCPs or RTK/PPK, for precise measurements such as surface area or volume calculations, GCPs are essential. For more in-depth information, you can refer to my MSc thesis and the following articles. In the following studies, you can find comparisons of processing with and without GCPs as well.
MSc Thesis:
Good luck on your journey of exploration and innovation!
  • asked a question related to Photogrammetry
Question
3 answers
Hi, I'm going to use a 360° camera (Ricoh THETA V model) for fast surveying and 3D photogrammetry in a research project on documentation of built heritage sites at the territorial scale.
Does anyone have suggestions to improve the results and the quality of the models? So far I have made a few tests on archaeological sites, processing the data with Agisoft Metashape software, but I'm not fully satisfied with the quality of the results (bad alignments, sparse dense clouds, rough meshes).
My questions regard:
- Which processing software is the most suitable (in terms of semi-automatic presets) for 3D photogrammetry from 360° photos (e.g., Metashape, RealityCapture, others)? Are there any particular settings to provide manually in this case (beyond setting spherical cameras as the source format)?
- If someone has experience with Ricoh THETA cameras, which camera model is the most suitable for 3D photogrammetry? Currently I'm testing a THETA V model, but I need to buy a new one, and I'm torn between the Z1 and X models. Can someone suggest a review of these models for digital documentation purposes?
- For my research, the georeferencing of processed models with GPS coordinates is also relevant. Considering the point above, which model would you suggest?
Thank you to all!
Best,
Relevant answer
Answer
Here are some best practices to follow when using these cameras for 3D photogrammetry:
  1. Optimal camera settings: Make sure your Ricoh THETA camera is set to capture images at the highest possible resolution and quality. Turn off any automatic exposure or white balance settings and, if possible, use manual settings to ensure consistent exposure and color balance across all images.
  2. Overlap and coverage: For accurate 3D reconstruction, ensure there is a sufficient overlap between images, ideally around 60-80%. Capture images from various angles and heights to ensure adequate coverage of the scene or object. This will increase the chances of successful feature matching and alignment in the photogrammetry software.
  3. Image stability: Use a tripod or monopod to stabilize the camera and minimize motion blur. This is particularly important when capturing images in low-light conditions or when using longer exposure times.
  4. Even lighting: Try to capture images under consistent and even lighting conditions. Avoid direct sunlight or harsh shadows, as they can cause issues with feature matching and 3D reconstruction. If possible, use overcast days or controlled indoor lighting for best results.
  5. Georeferencing: If you require accurate geolocation data, consider using a GPS-enabled Ricoh THETA camera or adding Ground Control Points (GCPs) to your dataset. This can help improve the accuracy of the final 3D model.
  6. Multiple passes: For complex scenes or objects, consider capturing multiple passes from different angles and distances. This can help ensure adequate coverage and improve the quality of the final 3D reconstruction.
  7. Image processing software: Use a dedicated photogrammetry software like Agisoft Metashape, Pix4D, or RealityCapture to process the images captured with the Ricoh THETA camera. These software packages are specifically designed to handle 360° images and can optimize the processing pipeline for the best results.
  8. Calibration: If possible, calibrate your Ricoh THETA camera to obtain more accurate lens distortion parameters. This can help improve the quality of the final 3D reconstruction.
  9. Experiment and iterate: 3D photogrammetry often requires some trial and error to achieve the best results. Don't hesitate to experiment with different settings, techniques, and software to find the optimal workflow for your specific use case.
  • asked a question related to Photogrammetry
Question
4 answers
Extended Reality (XR) includes AR, VR etc.
Relevant answer
Answer
Ziheng Zhang In the engineering profession, there are several possible uses for extended reality (XR). Among the options are:
- Engineers can utilize virtual reality (VR) to construct and test virtual prototypes of items before building them in the physical world. This can save engineers time and money by allowing them to detect and correct design flaws before manufacturing physical prototypes.
- Training and education: Engineers and other technical workers can benefit from immersive training simulations created with virtual reality (VR) and augmented reality (AR). This may make learning complicated topics and procedures more entertaining and successful.
- Remote collaboration: XR may be used to connect engineers in various places so that they can collaborate in a shared virtual environment. This can help with project communication and cooperation regardless of location.
- Maintenance and repair: Using AR, engineers may get real-time information and directions while doing maintenance or repairs on complicated equipment.
- Site inspections: With XR, engineers may conduct virtual site inspections and identify potential concerns before they materialize.
Overall, the application of XR in engineering can help simplify processes, enhance efficiency, and promote accuracy and safety in the field.
  • asked a question related to Photogrammetry
Question
4 answers
How can remote sensing and photogrammetry be applied to solve flooding problems in a community? Or how can remote sensing and photogrammetry be used to predict areas that are likely to flood?
Relevant answer
Answer
Helpful to your analysis would be LiDAR coverage taken during low flow, to obtain some channel morphology as well as topographic detail. Dr. David Rosgen has developed a stream classification system that uses features such as gradient, entrenchment, bankfull elevation (top of point bars, which might be visible with LiDAR), braided vs. non-braided channels, sinuosity, etc. The infrared aerial photos are often useful in separating upland and bottomland species if forested. LiDAR is very helpful in obtaining channel detail that is obscured by dense forest vegetation. Soil scientists familiar with mapping soil types on landscapes in your area would be helpful, as they are used to reading landform, vegetation, and hydrology indicators. But they too need experience and ground truthing (checking, transects, site visits, etc.).
As suggested by Dr. Klebs, securing available rainfall and stream/river data for your physiographic area(s), developing a detailed DEM, classifying vegetation and channel types, and compiling watershed sizes for areas of interest, land use practices, and land cover types can also be helpful. Braided and highly sinuous streams almost always have significant flooding. Highly entrenched and high-gradient streams tend to have no floodplain or a narrow one. If a flat area is adjacent to a stream or river, it could be floodplain; if the channel is deeply entrenched, however, it is likely a terrace (abandoned floodplain). Braided streams are typically a result of excess sediment loading, which fills channel capacity and aggrades the channel, resulting in more frequent flooding and channel instability.
Due to the potential for land valuation and regulation, as well as human hazards, it may be best to have licensed engineer and appropriate agency review and approval before publishing results that potentially could have legal or human ramifications. If not, perhaps publishing as guideline or remote estimates needing field checking and verification would help limit liability for misinterpretation. Without LiDAR, there are certainly things one can obtain from aerial photos and photo pair interpretation, such as landform, vegetation type and some channel indicators perhaps. Soil maps if available typically separate out flood prone and hydric soils that would be useful information.
  • asked a question related to Photogrammetry
Question
9 answers
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
Relevant answer
Answer
Therefore, you have 20 × 20 = 400 control points. If you do georeferencing in QGIS, you can use all control points or only some of them, e.g. every 5 km (16 points). During resampling, all pixels get coordinates in the ground system.
If you do not do georeferencing (no resampling), you can calculate the coordinates of the unknown pixels by interpolation. Suppose a pixel size of a [m]; then in one km you have p = 1000/a pixels, and the first (x1, y1) and last (x2, y2) pixels have known coordinates. The bearing between the first and last pixel is s = arctan[(x2 − x1)/(y2 − y1)], so a pixel at distance d from the first pixel has coordinates x = x1 + d·sin(s) and y = y1 + d·cos(s). You can do either row or column interpolation, or both, and take the average.
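The interpolation described above can be sketched in a few lines of Python. This is a minimal illustration only; it assumes the known pixel coordinates and the distance d are all in the same ground units (e.g. metres):

```python
import math

def interpolate_pixel_coords(p1, p2, d):
    """Linearly interpolate ground coordinates along the line from the
    first known pixel p1 = (x1, y1) toward the last known pixel
    p2 = (x2, y2), for a pixel at ground distance d from p1."""
    x1, y1 = p1
    x2, y2 = p2
    # bearing of the line p1 -> p2, as in the answer: s = arctan(dx/dy)
    s = math.atan2(x2 - x1, y2 - y1)
    return (x1 + d * math.sin(s), y1 + d * math.cos(s))
```

For example, halfway along a 500 m line from (0, 0) to (300, 400), the pixel at d = 250 lands at (150, 200).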
  • asked a question related to Photogrammetry
Question
7 answers
I intend to survey a construction site by photogrammetry. I will use VisualSFM software to generate the point cloud.
Relevant answer
Answer
Me neither, I'm sorry about that.
Regards
  • asked a question related to Photogrammetry
Question
7 answers
I have two STL files: one is from a 3D scanner and the other one was made by a photogrammetry method. I want to know the surface deviation between these two STLs, and I need the values on a scale, too. I have tried GOM Inspect, but it can only compare a CAD model with an STL, not two STLs.
Relevant answer
Answer
Are there any videos showing the steps for comparing two STL models using MeshLab?
  • asked a question related to Photogrammetry
Question
14 answers
As is well known, in camera calibration in photogrammetry using bundle adjustment with self-calibration, the coordinates of the principal point cannot be recovered from parallel images. This situation calls for convergent images to recover the coordinates of the principal point. A common explanation attributes this to the algebraic correlation between the exterior orientation parameters and the calibration parameters. Now the question, in other words: is there any deep explanation of the nature or type of this algebraic correlation? Is there any analytical proof of this correlation? Or do we have to accept this empirical finding (that we need convergent images for camera calibration)?
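Not a proof, but a minimal numeric illustration of the correlation in the simplest case may help: for a strictly vertical image of a flat scene, a shift of the principal point is exactly absorbed by a shift of the projection centre, so the two parameters cannot be separated from such imagery alone. The simplified one-dimensional collinearity form below is an assumption made for illustration only:

```python
# Numeric illustration (not a proof): for a strictly vertical image of a
# flat scene at height Z, one collinearity equation reduces to
#     x = x0 + f * (X - X0) / (Z0 - Z)
# Shifting the principal point by delta while moving the projection
# centre by delta * (Z0 - Z) / f reproduces identical image coordinates,
# which is the algebraic correlation in question.
f, Z0, Z = 100.0, 1000.0, 0.0      # focal length, flying height, ground height
x0, X0 = 0.0, 500.0                # principal point offset, camera X
delta = 2.5                        # an arbitrary principal-point shift

for X in (0.0, 250.0, 500.0, 900.0):          # ground points on the plane
    x_a = x0 + f * (X - X0) / (Z0 - Z)        # original parameters
    x_b = (x0 + delta) + f * (X - (X0 + delta * (Z0 - Z) / f)) / (Z0 - Z)
    assert abs(x_a - x_b) < 1e-9              # projections are identical
print("principal-point shift fully compensated")
```

Convergent images break this compensation because the tilted geometry makes the correction depend on the viewing direction, not just on a constant offset.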
Relevant answer
  • asked a question related to Photogrammetry
Question
7 answers
GSD is commonly measured for aerial surveys and is influenced by many factors. I am interested in measuring the GSD of natural images (say, an image of a building). My goal is to measure the size of an object in an image based on GSD.
I would appreciate your help.
Relevant answer
Answer
For an aerial image,
Ground Sampling Distance (GSD) = pixel side dimension (on the sensor) × (average camera height / focal length)
i.e., GSD = pixel side dimension × (1 / average imaging scale).
Check this paper for an illustration:
Low Cost Technique of Enhancement Georeferencing for UAV Linear Projects | Proceedings of the 2020 3rd International Conference on Geoinformatics and Data Analysis (acm.org)
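The formula above can be sketched directly; the numbers in the example below are hypothetical, and for a natural (non-aerial) image the object distance takes the place of the flying height:

```python
def ground_sampling_distance(pixel_size_m, flying_height_m, focal_length_m):
    """GSD = sensor pixel pitch x (flying height / focal length).
    All inputs in metres; result is metres of ground per pixel."""
    return pixel_size_m * flying_height_m / focal_length_m

# Hypothetical example: 4.4 um pixel pitch, 100 m flying height,
# 8.8 mm focal length -> 0.05 m (5 cm) per pixel
gsd = ground_sampling_distance(4.4e-6, 100.0, 8.8e-3)
```

Once the GSD is known, an object spanning n pixels in the image is roughly n × GSD across on the ground (ignoring tilt and relief displacement).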
  • asked a question related to Photogrammetry
Question
2 answers
Within the 3DForEcoTech COST Action, we want to create a workflow database of all solutions for processing detailed point clouds of forest ecosystems. Currently, we are collecting all solutions out there.
So if you are a developer, tester or user do not hesitate to submit the solution/algorithm here: https://forms.gle/xmeKtW3fJJMaa7DXA
You can follow the project here: https://twitter.com/3DForEcoTech
Relevant answer
Answer
Dear Martin Mokros,
Got this Project! I‘ll share this with corresponding workmates
Thanks for sharing this info. Kind Regards!
  • asked a question related to Photogrammetry
Question
5 answers
SfM would need a lot of photos. Is there any software that achieves the rectification automatically using few photos?
Relevant answer
Answer
If the area is relatively flat, you can use a single photograph with a projective transformation, which is an option in georeferencing using QGIS. The accuracy depends on the magnitude of the elevation differences from a plane fitted to the earth's surface. Such elevation differences create the so-called relief displacement that produces the error. Such an error analysis is shown in the photogrammetry chapter of the book Topographic Mapping, Universal Publishers, 2008, p. 349.
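A projective transformation of the kind mentioned above can be estimated from four or more control points with a direct linear transform (DLT). The sketch below is a generic homography fit, not QGIS's internal implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform mapping src -> dst from
    >= 4 point correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two homogeneous equations
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null-space vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Given the four image corners and their ground coordinates, `apply_homography` then rectifies any pixel; the residual error grows with relief displacement, exactly as the answer notes.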
  • asked a question related to Photogrammetry
Question
4 answers
Can anyone help me with this, please?
Relevant answer
Answer
In RS, you can develop some methods to extract features from satellite images.
  • asked a question related to Photogrammetry
Question
5 answers
Greetings,
I plan to test the horizontal and vertical accuracy of photogrammetric products generated by a direct georeferencing (DG) process (without ground control points, GCPs) using a UAV system (Matrice 210 RTK V2, D-RTK 2 Mobile Station, Zenmuse X7 16 mm).
The official DJI website states that with the Matrice 210 RTK V2 + Zenmuse X7 at a GSD of 2 cm, the produced orthoimages can achieve an absolute horizontal accuracy of less than 5 cm using DJI Terra as the mapping software.
If you have used this or a similar UAV system, I would appreciate hearing your experiences with the DG process. Specifically, can the RTK option for the above-mentioned UAV system lead to sufficient results (total (XYZ) error < 5 cm) if no GCPs are introduced?
Also, feel free to suggest papers about this or a similar UAV system (assessment of DG accuracy in a specific case study).
Thank you in advance.
Relevant answer
Answer
Dear Ivan Marić,
Please check my ResearchGate profile and you will find a paper that I published with my students using a DJI Phantom 4 for 3D mapping of part of the railway station at Khartoum, Sudan. We used RTK GPS to provide the ground control points. And we do have a case study in the paper.
Regards,
Gamal
  • asked a question related to Photogrammetry
Question
10 answers
Below are the few that are known to me. Which devices have you used to acquire building point clouds? What technology does the device use? I think many researchers are looking for device choices for their various projects, and this post can help them put a list on the table.
  • iPad Pro 2020
  • Google Tango
  • ZED Cam
I am adding below the Suggested Choices by respondents:
  • Geo SLAM - ZEB Go
  • FARO Freestyle 3D hand scanner
  • Dotproduct DPI-10
Relevant answer
Answer
Please take a look at the Dotproduct DPI-10. It uses the latest Intel sensors and SLAM algorithms.
  • asked a question related to Photogrammetry
Question
8 answers
I did close-range photogrammetry on the surface of a joint. The photos were loaded into Agisoft software and the 3D model of the joint surface was reconstructed.
Now I want to calculate the roughness of these models and also have the coordinates of the surface points.
Relevant answer
Answer
Hi Amir,
the following papers provide some interesting info on the topic of roughness and how to compute it:
* Kushunapally, R., Razdan, A., Bridges, N., 2007. Roughness as a Shape Measure. Computer-Aided Design and Applications 4 (1-4), 295–310. DOI: 10.1080/16864360.2007.10738550 (download here: http://www.cad-journal.net/files/vol_4/Vol4Nos1-4.html)
* Nocerino, E., Stathopoulou, E.K., Rigon, S., Remondino, F., 2020. Surface Reconstruction Assessment in Photogrammetric Applications. Sensors 20 (20). Doi: 10.3390/s20205863 (Download here: https://www.mdpi.com/1424-8220/20/20/5863)
As these papers state, one must first define whether a local or a global roughness measure is needed. Recently, machine learning approaches have been leveraged to estimate roughness, but the more common approach is to use 3D-capable tools to compute and visualize roughness. Two good and free tools are:
* CloudCompare, which can directly compute local roughness (info here: http://www.cloudcompare.org/doc/wiki/index.php?title=Roughness, download here: https://www.danielgm.net/cc/)
* MeshLab, like CloudCompare a tool that is capable of many mesh and point cloud operations (download here: https://www.meshlab.net/#download). Although there is no direct “roughness” measure in MeshLab, the software offers several ways to compute surface curvature, which can be used as a proxy for roughness. You will find some useful information here: https://stackoverflow.com/questions/59194552/meshlab-does-the-topology-of-my-mesh-effect-the-curvature-results
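As a rough numpy sketch of the kind of local roughness measure CloudCompare computes (the distance of each point to the least-squares plane through its neighbours); this is an illustrative re-implementation under that definition, not CloudCompare's code, and the brute-force neighbour search is only suitable for small clouds:

```python
import numpy as np

def local_roughness(points, radius):
    """For each point, roughness = distance to the least-squares plane
    fitted through its neighbours within `radius`.
    points: (N, 3) array. Returns (N,) array; NaN where < 3 neighbours."""
    pts = np.asarray(points, dtype=float)
    out = np.full(len(pts), np.nan)
    for i, p in enumerate(pts):
        nbrs = pts[np.linalg.norm(pts - p, axis=1) <= radius]
        if len(nbrs) < 3:
            continue
        c = nbrs.mean(axis=0)
        # plane normal = singular vector of the smallest singular value
        _, _, Vt = np.linalg.svd(nbrs - c)
        normal = Vt[-1]
        out[i] = abs((p - c) @ normal)
    return out
```

On a perfectly planar patch the measure is zero everywhere, which is a quick sanity check before applying it to a reconstructed joint surface.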
Hope that helps. Cheers!
Geert
  • asked a question related to Photogrammetry
Question
3 answers
  • Currently researching SfM for its use on rock core samples, fossils, and hand rock samples. There is wide use of SfM in multiple aspects of both science and animation; I'm hoping to receive guidance or suggestions for research and resources for geology purposes.
  • My project is related to creating a recommended practice for SfM in geology, so if you have any experiences with SfM, positive or negative, I would love to hear about them.
  • I would also like to inquire about newer, cutting-edge technology and software(s) for 3D imaging/photogrammetry.
Relevant answer
Answer
Here are a few resources that should be helpful:
  2. Free short courses from UNAVCO. Here is one example short course with materials that you can view and download: https://kb.unavco.org/kb/article/gsa-2019-short-course-510-high-resolution-topography-and-3d-imaging-ii-introduction-to-structure-from-motion-sfm-photogrammetry-869.html
  3. Typically, UNAVCO and OpenTopography hold a short course on SfM at the GSA Annual Meeting (see the link above for the 2019 GSA Annual Meeting short course). I would check the GSA Annual Meeting website to see if UNAVCO/OpenTopography will be holding another short course this year.
Hope this helps.
  • asked a question related to Photogrammetry
Question
3 answers
I have some questions, anybody can help me with references!
  1. Which formula is being used for Aerial Triangulation in Pix4D Mapper?
  2. Which formula is being used for the Computer vision algorithm, bundle block adjustment and stereo matching algorithm in Pix4D Mapper?
Thanks in advance!
  • asked a question related to Photogrammetry
Question
4 answers
As is well known, the dependent and independent models of relative orientation for near-vertical images or photographs, derived from the collinearity model, provide linear solutions for the relative orientation parameters. I looked through my collection of books and papers in photogrammetry and did not find an exact history of their development. I would appreciate your help in providing historical information about their development.
Relevant answer
Answer
Thank you very much. I will check both of them.
  • asked a question related to Photogrammetry
Question
6 answers
There is so much research about photogrammetry, but very little of that research focuses on cultural heritage inside museum settings.
Relevant answer
Answer
Hi Heather,
Can you clarify whether by photogrammetry for conservation you mean applications for a) documenting heritage objects for dissemination/virtual display/augmented exhibition, b) contactless study and/or virtual reconstruction, or c) identifying and mapping decay, previous conservation interventions, etc.? Because these are usually different categories of documentation in terms of quality and product requirements.
+1 to Arnadi Murtiyoso 's comment
regards,
Efstathios
  • asked a question related to Photogrammetry
Question
5 answers
As is well known, feature-based photogrammetry started as a very promising research program and direction in the last century, and it led to several algorithms that utilize lines, free-form curves, splines, and conic sections. In addition, features are promoted in the context of automation as a driving force. Nevertheless, we are not seeing practical systems that utilize features as computational entities for the photogrammetric orientation procedures or for the 3D reconstruction of object-space features. The simple question: why are we not seeing real progress? Is the problem at the conceptual level, the algorithmic level, and/or the implementation level? Does the complexity of photogrammetry itself contribute to this lack of progress? Do we need more mathematics to handle the features? Your comments could open the door for feature-based photogrammetry to penetrate the market in the form of practical systems.
Relevant answer
Answer
I would say there is significant progress in feature-based photogrammetry.
I can think of two reasons why it might be thought there is none. First, is there a market need for it? Most systems are developed in conjunction with industry to meet a need, and then spin off as a product serving that type of industry. Second, current applications require a knowledge base that spans different fields of study, and this may be a major limiting factor to conceptualization, algorithmic formulation, and system implementation such that it gains acceptance within the framework of a particular industry; hence development of such systems occurs mostly at the MSc-PhD level. A simple example is the development of a mapping system for underground mines. Such systems exist, and they require the ability to merge knowledge from computer science, robotics, geomatics/surveying, mining, and perhaps other fields to be acceptable to a traditional industry that requires accuracies of a certain degree. Data inputs for an underground system will differ drastically from those of a terrestrial mapping system or an airborne system, even though the concept is similar. Again, there most likely is no common solution because of the differing data inputs from the different instruments that can be used to collect data.
Essentially, the need for a system is supported by available technology and the human knowledge base to bring the system into being.
  • asked a question related to Photogrammetry
Question
5 answers
I just started my Master's in Geo-informatics and have a massive interest in learning photogrammetry. Now I want to know how to get image coordinates using collinearity (the case of a diapositive).
Relevant answer
Answer
As stated in your question, you need to measure the image coordinates from a diapositive (assuming this is a classical aerial image). First, determine the coordinates of the fiducial marks and use their known values to compute the parameters of an affine transformation. The end result of this work is to define the mathematical center of the image. Then, during your collinearity computation, include the coordinates of the principal point as unknowns, in order to perform an implicit transformation to the physical center of the perspective geometry, which is defined by the coordinates of the principal point. In some cases you may know the coordinates of the principal point from a previous camera calibration; in that case you need to impose their values explicitly in a reduction step, in order to transform from the fiducial center to the principal-point center. On the other hand, if your object space is not planar, you can use the Direct Linear Transformation (DLT) to work directly, without any concern about the center of your image coordinates, since the DLT performs both transformations simultaneously (affine + collinearity).
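The first step, fitting the affine transformation from measured fiducial-mark coordinates to their calibrated values, can be sketched as a small least-squares problem. This is a generic illustration; the function and variable names are mine, not from any particular photogrammetric package:

```python
import numpy as np

def fit_affine(measured, calibrated):
    """Least-squares 2D affine transform taking measured fiducial-mark
    coordinates (e.g. comparator/scanner units) to their calibrated
    values (e.g. mm). Needs >= 3 marks.
    Returns (A, t) such that calibrated ~= A @ measured + t."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calibrated, dtype=float)
    # design matrix with rows [x, y, 1]; solved jointly for both outputs
    G = np.column_stack([m, np.ones(len(m))])
    params, *_ = np.linalg.lstsq(G, c, rcond=None)
    A = params[:2].T        # 2x2 linear part
    t = params[2]           # translation
    return A, t

def to_image(A, t, pts):
    """Apply the fitted transform to measured points."""
    return np.asarray(pts, dtype=float) @ A.T + t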
  • asked a question related to Photogrammetry
Question
9 answers
Dear 3D scanning and photogrammetry lovers,
As you all know, 3D acquisition facilities are (very) expensive, and some brands are omnipresent in publications because we are sure of their performance. Yet, technological progress is rapid, and cheaper options may be "sufficient" to answer many questions explored via geometric morphometrics (especially at the interspecific level).
What do you think about the Revopoint POP project?
In your view, do these devices offer reliable opportunities for applying 3D geometric morphometrics techniques? Or is the precision not sufficient?
Thanks in advance for your feedback
Colline
Relevant answer
Answer
You can capture 3D scans using a mobile-phone camera and simple software. Today there is software to which you can feed images from different views, and it turns them into a three-dimensional object. This is the cheapest option possible; it's actually free.
  • asked a question related to Photogrammetry
Question
14 answers
What are the major applications, uses, and purposes of civilian drones? How is the world being transformed by the advancement of technology in the mapping sector? How are drones aiding mapping?
Relevant answer
Answer
demography or urban extensions
  • asked a question related to Photogrammetry
Question
5 answers
I am performing a simple close range photogrammetry test and comparing outcomes of photogrammetry tools such as VisualSFM, Meshroom, Colmap etc.
Kindly advise: what post-evaluation tests/criteria (such as point cloud density) can be performed to compare point clouds from various photogrammetry tools, and what tools/software can assist in such an evaluation (such as CloudCompare, MATLAB)?
Test objects: a water bottle, small objects, etc.
Relevant answer
Answer
Dear Abdul,
I did similar research some time ago, in which I used a laser-scan-derived mesh model as the reference against which to compare the results of several photogrammetric software packages and/or techniques. Alternatively, you can create an average surface mesh from all your photogrammetric point clouds to use as a reference.
I used the cloud/mesh distance and M3C2 tools in CloudCompare to generate a Gaussian curve of the error between my reference and the compared point clouds. From the Gaussian curve you can then infer the average error and standard deviation with regard to a common reference, thus deducing the quality of each software package.
You should be careful when comparing photogrammetric point cloud density, because it depends on the settings/parameters you used for each software package.
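A minimal sketch of the cloud-to-cloud comparison idea is below: nearest-neighbour distances from each evaluated point to the reference, summarised by mean and standard deviation. It uses a brute-force search that is only suitable for small test objects; CloudCompare's dedicated tools are far more sophisticated:

```python
import numpy as np

def cloud_to_cloud_stats(cloud, reference):
    """Nearest-neighbour distance from every point of `cloud` to the
    `reference` cloud (brute force), plus the mean and standard
    deviation that summarise the error distribution."""
    c = np.asarray(cloud, dtype=float)
    r = np.asarray(reference, dtype=float)
    # (N, M) pairwise distances, reduced to the closest reference point
    d = np.sqrt(((c[:, None, :] - r[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d, d.mean(), d.std()
```

Running this per software package against the same reference gives directly comparable mean/std figures, independent of each package's point density.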
Here are some of my papers on the subject:
(last paper is in Bahasa Indonesia).
Hope this helps.
Best,
arnadi
  • asked a question related to Photogrammetry
Question
5 answers
Hello
I'm working on SAR image co-registration. I want to divide my image into two looks with the spectral diversity method, but I did not find its complete formulas in any article.
Does anyone know how I can do this? Or do you know the paper that fully explains the algorithm?
Relevant answer
Answer
Thank you for your answer.
Is it necessary to center the images to the Doppler centroid before using the Fourier transform?
  • asked a question related to Photogrammetry
Question
3 answers
I have to find the shape, orientation, and size of a flat material (simple shape) that is not aligned on the CNC table (active area).
I thought about transforming a picture into an STL file.
I have many ideas, but I don't know how to program (yet), and I am trying to find the most reliable solution.
Any suggestion is welcome
Relevant answer
Answer
For simple geometric shapes, a number of metrics such as area, perimeter, eccentricity, and bounding box can be easily calculated through image processing techniques to determine shape characteristics. For irregular shapes, techniques like the centroid-radii model can be used.
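The centroid-radii model mentioned above can be sketched in a few lines. This is a generic illustration; extracting the contour itself (e.g. by thresholding the camera image) is assumed to be done beforehand:

```python
import numpy as np

def centroid_radii_signature(contour):
    """Centroid-radii shape descriptor: distances from the shape centroid
    to each contour point, normalised by the maximum radius so the
    signature is scale-invariant. contour: (N, 2) array of boundary points."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    radii = np.linalg.norm(pts - centroid, axis=1)
    return radii / radii.max()
```

A circle yields a flat signature of ones regardless of its size, while the cyclic shift of the signature encodes the part's rotation on the table.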
  • asked a question related to Photogrammetry
Question
5 answers
Hi dear community,
I am currently working on a project where I want to generate a 3D reconstructed model of rubble piles (resulting from a building collapse) via remote sensing. In my case, I employ an aerial LiDAR laser scanner as well as aerial photogrammetry for point cloud generation of the disaster scene. The problem is that only the surface of the scene that lies in the field of view can be reconstructed. However, in order to evaluate the structural behavior of the debris with regard to structural stability, I need to know how the collapsed elements are structured beneath the debris surface. Does somebody have an idea how I can proceed, or has anybody conducted a related study? Is my objective even feasible?
Thank you in advance!
Relevant answer
Answer
Dear Amir,
I have worked with ground-based LiDAR and with terrestrial GPR (Ground Penetrating Radar). LiDAR cannot penetrate a solid rubble pile, and GPR does not work well on rubble, especially if the surface is irregular...
Respectfully yours, Joel
  • asked a question related to Photogrammetry
Question
15 answers
Imaging has distinct advantages over data acquisition by direct, on-site measurement, but what are its limitations?
Relevant answer
Answer
@Qayssar Ajaj
It is undeniable that photogrammetry is a very effective non-contact observation method. However, it also has some limitations:
1. Hardware equipment, such as the image pixel quality of the camera;
2. Matching algorithms for stereo image pairs, such as batch image matching and scale conversion, are difficult;
3. Photogrammetry does not have penetration ability for observation with occlusion, for example, canopy occlusion makes it difficult for photogrammetry to obtain tree factors above and below the canopy at the same time;
4. For terrestrial photogrammetry, the operator's photography skills are required.
5. In addition, photogrammetry also has certain mandatory requirements on the observation environment, such as illumination, rainfall, cloud and fog.
  • asked a question related to Photogrammetry
Question
1 answer
  • Interested in the field of photogrammetry: which package can we use to extract information about the cross-section of a 3D model (i.e., where can we build a cross-section of a model)?
  • How can I find out which coordinate is where,
  • and how can I export to .obj format?
Thanks in advance!
Relevant answer
Answer
Hopefully you have solved it already, but if not, try CloudCompare.
  • asked a question related to Photogrammetry
Question
15 answers
We are using UAV multispectral data acquired by a Micasense RedEdge-M camera for mapping the concentration of suspended sediments and distribution of aquatic vegetation in shallow estuarine waters and are wondering if there is simple way to automatically georeference the data using Agisoft Metashape or any free software, while keeping the original pixel value.
Relevant answer
Answer
Hi Ghasem Askari , I will try your suggestion... but with such an approach, the resulting georeferenced pictures will have good accuracy only in their central parts, won't they?
  • asked a question related to Photogrammetry
Question
1 answer
I am actively working on creating HD maps using LiDAR and photogrammetry workstations. The final delivery of this data is in vector and raster formats. I am not familiar with the AI/ML concepts related to ADAS, and I would like to understand how data is read and processed while the AV is in travel mode.
I humbly request you to share related material and links so I can understand this technology better and prepare deliverables with better accuracy, because we all know that driving safety is a prime concern for the success of AV technology.
Relevant answer
Answer
Dear Kasiviswandham Ponnapalli,
what I recommend first is to understand the difference between a 2D and a 3D map. 2D maps are mainly for path planning and the integration of objects; 3D maps are mainly for the localization of the vehicle. The papers I recommend are the following:
  • asked a question related to Photogrammetry
Question
9 answers
It seems impossible, since most analyses are based on a 3x3 cell neighborhood, but I would like to hear your experiences. I know that some hydrodynamic models like TELEMAC are able to do this.
Cheers
Relevant answer
Answer
Dear Mirko,
Please have a look at the attached paper, which may address your question. It is about using multi-resolution image representation for texture analysis. I used MATLAB to implement the presented idea, but it is not difficult to implement it in ArcGIS. There are many applications that can benefit from the instantaneous use of multi-resolution image representation, such as image matching and the detection of the intrinsic scale of image features.
  • asked a question related to Photogrammetry
Question
3 answers
Hello everyone,
I have a few pictures, taken every 30 seconds from an HD camera fixed onto a tripod, of a concrete beam onto which I have drawn a grid in thin white paint. The test lasts 30 minutes, and I would like to find the change in distance (or in 2D coordinates x and y) of specific points I marked on the grid for every picture. What program can do that? I am not happy with the one I have. Thanks.
Kind Regards,
Hasanain
Relevant answer
Answer
Many thanks Al Mouayed, you have saved me a lot of time. Always grateful!
Kind Regards,
Hasanain
  • asked a question related to Photogrammetry
Question
10 answers
I want to estimate glacier surface lowering using UAV data acquired on glacier frontal areas where it is too difficult and dangerous to measure GCPs.
Relevant answer
Answer
Jorge, you may be interested in this article by Cook and Dietze (2019)
They propose a system to compare UAV-derived point clouds without ground control, essentially by using the sparse point cloud from Survey 1 as control for Survey 2. As photogrammetric datasets can be subject to distortion, this essentially gives the same distortion to both datasets, so change-detection results are more representative of process than of reconstruction error.
As mentioned by Edgar, automated or pick-point registration can be a nice approach too, but since you are working with a glacier where surfaces could potentially retreat at a similar rate, you might overfit your data and estimate lower retreats than actually happened. To get around that, you could focus on bounding rocky sections of the valley to get your translation matrix (the geometric matrix used to move your data to the best fit in CloudCompare) and apply the best translation to your entire dataset for change detection.
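A minimal sketch of that workflow, assuming you have paired points picked on the stable rocky sections: estimate the best-fit rigid transform on those points only (the Kabsch/Umeyama least-squares fit that registration tools such as CloudCompare solve internally), then apply it to the entire cloud. `rigid_transform` is an illustrative name, not a library function:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch/Umeyama least-squares fit over paired 3D points)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Estimate R, t on stable (rocky) control points only, then apply the
# same transform to the entire survey for change detection:
#   cloud_aligned = cloud @ R.T + t
```

Fitting only on terrain assumed to be unchanged avoids the overfitting problem described above, where a whole-cloud fit absorbs real glacier lowering into the registration.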
  • asked a question related to Photogrammetry
Question
8 answers
hello
What do you think about how orthophotos are made from historical analogue aerial imagery?
Suppose you have a set of analogue aerial images from several flight lines.
What software and methods would you use for orthophoto production?
How do you choose photo control points?
How do you measure the precision of the final orthophoto?
Relevant answer
Answer
thanks Aysar Jameel Abdalkadhum
  • asked a question related to Photogrammetry
Question
12 answers
In a multi-temporal photogrammetric project, in order to monitor the changes between the 3D reconstructions of the same scenario, it is necessary to co-register the relative models robustly. Given a number of Ground Control Points (GCPs) more than sufficient and fixed over time useful to geo-reference multi-temporal 3D models, in a theoretical concept of 'Co-Registration' it is more correct:
- carry out a georeferencing of all the models with the same GCPs (checking that they have comparable RMSE and 'Reprojection errors') and then overlap them?
- or set up a georeferenced reference model and then co-register the subsequent models (eg in CloudCompare> Align Tool) aligning them with the first in the same GCPs (maintaining a low RMS alignment error)?
In a recent photogrammetric test of change detection I noticed some differences in the results obtained by adopting the two strategies just described.
Relevant answer
Answer
Referencing models with GCP is a great start, and if the GCPs are the exact same positions for both datasets, that is even better. To reiterate what I think your question is, you want to know 1) are GCPs enough for registering two datasets for topographic change detection, and if not, 2) is automated registration a better option for reducing error.
To make this decision, we need to know about GCP coordinate accuracy and the distribution and number of GCPs. I will note that when I say coordinate accuracy, I refer to the accuracy of the instrument, such as GPS or total station, and not the accuracy reported within a photogrammetric project. If you have a lot of points (>10), they are well distributed through your area of interest, and your coordinate accuracy is high (~3 cm), then GCPs are probably sufficient. If not, they may still be sufficient; you just need to test it with alignment on areas that likely have not changed (invariant or pseudo-invariant points), or adjust your accepted minimum detectable change threshold, sometimes defined as the propagated error of your GCPs (SQRT(error1^2 + error2^2)).
This error can be improved with automated registration, like the cloudcompare fine registration tool, but it can also exaggerate errors while telling you that registration improved. From personal experience, there can be some topographic distortion from photogrammetric datasets, especially if GCP are not well dispersed within your area of interest. This distortion will make automated registration adjust your clouds to have lower (improved) RMSE while giving them a poorer registration for change detection.
A good test for any registration is to look for apparent spatial trends in your data using a cloud to cloud comparisons. For example, in cloudcompare, run a cloud to cloud comparison and use the slider bar in the properties to look for patterns of vertical offsets. Recently I had one that showed the center of my area of interest had the lowest vertical offset and the amount of vertical offset increased as I moved away from the center. My options are to rebuild the cloud, choose a spot within the cloud that I want to focus change detection on or raise my accepted minimum detection threshold.
If you use an automated registration to improve registration error (I do this frequently), it is a good idea to do it on a subset of your data that you expect to have not changed during the detection interval. That will give you more confidence that you are not "over fitting" your point cloud, or trying to reduce RMSE at the cost of reducing topographic change where it has actually taken place. In cloudcompare you can test the original registration this way by unchecking the rotation boxes and set the drop down box to "Z". In general the cloud will not move, but will give you the same RMSE estimation. You can use that value to see if allowing rotation and translation improves RMSE significantly (as defined by you).
  • asked a question related to Photogrammetry
Question
11 answers
I came across some studies, and the camera settings differ, as do the opinions on them. When I set the camera in manual mode, I will set the shutter speed, ISO, and f-number in a combination where I avoid blurry images but still get enough light. But do you think that using the auto-focus setting can lead to the alignment process failing? In the case of terrestrial photogrammetry the focus can differ greatly, but also in some cases of UAV photogrammetry. And what if I use auto ISO or auto shutter speed? What is your opinion?
Relevant answer
Answer
It depends on the height you are flying at, but in general for aerial applications (>100 m) the focus should be set to infinity, so to answer your question: you should not use auto-focus. What would be the reason for having auto-focus?
Regarding the other settings, I would rather keep the shutter speed (1/1000 - 1/2000) and aperture fixed and allow the ISO to vary based on the light conditions; if you have a decent camera, it should not cause too much noise even in low light.
Good luck!
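One quick sanity check behind the fixed-shutter advice is motion blur: the ground distance travelled during one exposure, which a common rule of thumb keeps below roughly one GSD. A small sketch with hypothetical numbers (the speed and GSD values are illustrative, not from the question):

```python
def motion_blur_m(ground_speed_ms, shutter_s):
    """Ground distance travelled during one exposure, in metres."""
    return ground_speed_ms * shutter_s

# Hypothetical fixed-wing survey: 15 m/s ground speed, 1/1000 s shutter
blur = motion_blur_m(15.0, 1 / 1000)   # 0.015 m
# At a 2 cm GSD this passes the ~1 GSD rule of thumb;
# at a 1 cm GSD a faster shutter (e.g. 1/2000 s) would be needed.
```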
  • asked a question related to Photogrammetry
Question
7 answers
Hello. In principle, commercial software is nowadays used to produce orthophotos from drone imagery. What do you think is the major difference between these commercial packages? Do they have significant advantages and disadvantages beyond the algorithms used? Which commercial software would you recommend for UAV work?
Relevant answer
thank you Reuben Reyes
What do you think about software such as VisualSFM or ...?
  • asked a question related to Photogrammetry
Question
4 answers
hello
My project area is located in two zones of the UTM system. How do I triangulate it in Inpho software?
How will overlap problems be resolved between zones?
Should each zone be triangulated separately? Or will it be triangulated together?
Relevant answer
thank you Marco Baldo
  • asked a question related to Photogrammetry
Question
8 answers
I am trying to orthorectify an IRS 1C and an IRS 1D multispectral image using a DTM/DEM. I have a Cartosat 1 DEM of 1 arc-sec (30 m). There are no RPCs provided by the vendor, and I was told that the product isn't orthorectified or geometrically corrected.
I use Erdas Imagine 2015 for working with Satellite Data. Any help on this will be highly appreciated.
Relevant answer
Hi. When you do not have the RPC coefficients, you can approach the work in several ways. One solution is to change the software used: I think you can use PCI Geomatica and do it with a DEM and a rational polynomial function. Another solution is to use topographic maps, either DTM or DEM, to extract photo control points. In this case you have to rectify your image with software like PCI or ERDAS.
  • asked a question related to Photogrammetry
Question
2 answers
hello
Why is Match-AT better suited to planes compared to UASMaster?
What about fixed-wing UAVs?
Relevant answer
Smaller UAV images in general (fixed-wing and multicopter) are quite different from those of classic aerial cameras in big planes.
Orientation angles are larger and their changes are more extreme.
Furthermore, UAV imagery often shows more motion blur, and of course the ground resolution is much higher.
All this requires different strategies and matching techniques in order to successfully match tie points.
  • asked a question related to Photogrammetry
Question
3 answers
Known are the side lengths of the triangle ABC of a tetrahedron ABCD, i.e., all angles between A, B and C are also known. The position of D is unknown, but the angles between all edges to D, i.e., between DA and DB, DA and DC, and DB and DC, are known. Is it possible to determine the position of D, or do I need additional information? If the solution is not unique, how many solutions exist?
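For reference, this is the classical three-point space resection (P3P) problem. With unknown edge lengths $a = \overline{DA}$, $b = \overline{DB}$, $c = \overline{DC}$ and known apex angles $\alpha = \angle BDC$, $\beta = \angle ADC$, $\gamma = \angle ADB$, the law of cosines in each face through D gives three equations in three unknowns:

```latex
\begin{aligned}
b^2 + c^2 - 2bc\cos\alpha &= \overline{BC}^2 \\
a^2 + c^2 - 2ac\cos\beta  &= \overline{CA}^2 \\
a^2 + b^2 - 2ab\cos\gamma &= \overline{AB}^2
\end{aligned}
```

This system (Grunert, 1841) reduces to a quartic and admits up to four real solutions for the distances, and for each distance solution D can additionally be mirrored through the plane ABC, so the position is not unique without further information.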
Relevant answer
Answer
@ Stan: I have to admit that I am only interested in mathematical solutions which are possible :-), i.e., the triangle ABC and the angles from D to A, B and C have meaningful values. Unfortunately, so far I have not found any analytical solution. Better said: I found one (from photogrammetry: Killian, K., Über das Rückwärtseinschneiden im Raum, Österr. Z. Vermess., 1955, 43, 97-104, 171-179) which describes 3 variables with 4 equations (which I would call an overdetermined system), but if the derived quantities (two angles) are inserted back into the 4 initial equations in order to determine the z-coordinate of D, it delivers 2 slightly different solutions, which can be optimized to a single one, but this needs numerical refinement. I hope that I simply need an additional condition or a totally different approach.
  • asked a question related to Photogrammetry
Question
1 answer
I am planning to add a polarization filter to my bathymetric SfM setup (DJI m600, Sony a6000, 20mm) in order to reduce sun glint.
Do I have to keep a constant angle of the UAV towards the sun?
Will I have to adjust the rotation angle of the filter for every flight? Do you see/know any other issues?
Relevant answer
Answer
Yes, polarization filters vary constantly with the angle to the sun; the maximum effect is achieved at 90 degrees to the sun (photographers have a trick using their thumb and index finger). Linear polarizers are more effective than circular polarizers, but most autofocus cameras that use through-the-lens focusing require circular polarizers. That said, for aerial SfM work at infinity focus distance, manual focus is preferred if you can fix (tape) the lens in place, so linear polarizers can be used (they are also less expensive to purchase). Lastly, even if you use a circular polarizer, be sure to tape the adjustment ring so it doesn't rotate during your photo survey.
  • asked a question related to Photogrammetry
Question
3 answers
I am looking for papers related to computer vision, not medical science. I have an understanding of the basic principles and the math involved in stereo vision. I am looking for papers related to:
  1. How various factors, such as baseline, affect the performance of the stereo vision.
  2. Techniques to solve correspondence/ matching problem.
  3. Multiple Baseline Stereo or other prominent variants of stereo especially related to distance or position estimation.
I am looking for a basic set of 5 - 10 papers and may be 10 - 20 more to build upon the knowledge from the basic set.
  • asked a question related to Photogrammetry
Question
6 answers
I'm writing a dissertation on the use of artificial intelligence to enhance the cultural heritage applications of photogrammetry.
I was inspired by Yves Ubelman's work in Palmyra (as featured on the Microsoft AI Services advertisement). However, I have been unable to locate any background information about the technologies used.
There seems to be a dearth of literature available that covers this area; has anybody any suggestions?
Relevant answer
Answer
Hi Matthew,
I don't know anything about this, but we had a guest lecture with this guy, Peter Jensen, who uses photos to make a 3D model of archaeological sites for documentation. I hope it helps a bit, if not, just ignore it.
  • asked a question related to Photogrammetry
Question
3 answers
Hello!
I work with photogrammetry and LiDAR data, but now I have a problem with some data.
I have this LiDAR point cloud (4.5 points/m2) that I need to filter and get a DTM. The problem is that the zone has a lot of dense forest and some places without trees (glades).
How can I filter this point cloud correctly? I have access to Global Mapper, but I don't know exactly what parameters I should use. Or maybe you can recommend other software to me.
Thank you and have a nice weekend.
Sebastián
Relevant answer
Answer
Sebastián, I recommend that you use MicroStation and TerraSolid and its modules. You can create decision rules to clean up the point cloud...
  • asked a question related to Photogrammetry
Question
2 answers
1 - In my thesis, which is about the extraction of poles from mobile laser scanner point clouds, the point cloud must first be separated into ground points and non-ground points; pole points lie in the non-ground section. To do this, the 2D projection of the point cloud onto the XY plane is gridded (m x m). Then, in each square, the minimum height is calculated, and points whose elevation is less than the cell's minimum elevation plus a height threshold are considered ground points. With this simple method, ground points are identified.
2 - For the extraction of traffic signs from LiDAR data, they can simply be extracted with intensity (almost maximum intensity) and a shape filter.
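The grid-minimum ground filter described in step 1 can be sketched in a few lines (cell size and height threshold are illustrative parameters; `ground_filter` is not a library function):

```python
from collections import defaultdict

def ground_filter(points, cell=1.0, h_thresh=0.3):
    """Split (x, y, z) points into ground and non-ground using the
    per-cell minimum-height rule: grid the XY plane, find each cell's
    lowest z, and keep points within h_thresh of that minimum as ground."""
    cell_min = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cell_min[key] = min(cell_min[key], z)
    ground, nonground = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        (ground if z <= cell_min[key] + h_thresh else nonground).append((x, y, z))
    return ground, nonground
```

A pole return a couple of metres above the cell minimum lands in the non-ground set, which is then searched for sign and pole candidates.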
Relevant answer
Answer
Hi Dinesh, I use the Point Data Abstraction Library (http://pdal.io) for tasks like ground extraction. Check filters.pmf (https://pdal.io/stages/filters.pmf.html) and filters.smrf (https://pdal.io/stages/filters.smrf.html), there are links to the research papers describing the methods also. They’re pretty robust and can be used for mobile LIDAR applications.
I see you like MATLAB (based on tags) - you can expose MATLAB functions to PDAL as well (https://pdal.io/stages/filters.matlab.html#filters-matlab), although I prefer the Python API (PDAL can be called as a Python library as well as calling Python functions inside PDAL pipelines).
The neat thing about either approach is that you can use any object-finding methods developed in MATLAB or Python directly in the processing pipeline.
…and you can take advantage of point cloud library tools as well (https://pdal.io/stages/filters.pclblock.html#filters-pclblock).
I like PDAL because it's easy to create recyclable processing pipelines, especially when automating processes, treating the JSON pipelines as templates which can have parameters passed in, or mixing JSON configurations and command-line overrides. And I don't have to write my own file readers and writers (yet) :D
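As a concrete sketch, a minimal PDAL pipeline for the ground-extraction case discussed above might look like this (filenames are placeholders; filters.smrf classifies ground returns as class 2, which filters.range then keeps):

```json
{
    "pipeline": [
        "input.las",
        { "type": "filters.smrf" },
        {
            "type": "filters.range",
            "limits": "Classification[2:2]"
        },
        "ground_only.las"
    ]
}
```

Saved as e.g. ground.json, it can be run with `pdal pipeline ground.json`.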
Extracting street signs seems like a cool project - keep posting updates; and good luck!
  • asked a question related to Photogrammetry
Question
5 answers
...
Relevant answer
Answer
Thank you Martin, Florent and especially Reuben for these very helpful suggestions.
  • asked a question related to Photogrammetry
Question
8 answers
Are the benefits of photogrammetry fairly communicated to those outside the mapping community? I do understand the historical evolution of photogrammetry, but I'm looking for ways to expand the use of photogrammetry and, in turn, its market.
Relevant answer
Answer
The advantages of photogrammetry are more or less understood among professionals. But even among them there are those who treat an orthophoto just like a picture, a nice background, not using it for any measurements. On the other hand, for an ordinary citizen an orthophoto is the simplest way to do and understand things in the context of space, because almost everybody recognizes their home, road, or anything else around them on the orthophoto.
  • asked a question related to Photogrammetry
Question
3 answers
In my project "CC-technology" I simulated a ceiling; in the post-processing I generated animations of stress and strain.
We will do the experiment soon and measure the strain with photogrammetry.
How can I compare the animated strains from Ansys with the measured strains from photogrammetry? The dimensions of the ceiling are 4.50 m x 4.50 m x 0.16 m.
Is there software for comparing the strains from simulation and experiment within this 4.50 m x 4.50 m area and showing me the differences? With this data I could recognize where my simulation still needs to be improved.
By hand, I could only compare the animations optically.
Relevant answer
Answer
Photogrammetric measurements are the X, Y, Z coordinates of your targeted points on the structure. First, compute by photogrammetry Xo, Yo, Zo, the coordinates of the structure targets at zero strain; then, after applying strain i, compute by photogrammetry the coordinates of the same targets as Xi, Yi, Zi. The displacements will be Dx = Xi - Xo, Dy = Yi - Yo, Dz = Zi - Zo.
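Following that recipe, the per-target displacement computation is a one-liner over paired coordinate lists (the coordinate values below are made-up illustrative numbers, not measured data):

```python
def displacements(ref, cur):
    """Per-target displacement components (Dx, Dy, Dz) between the
    zero-strain coordinates (Xo, Yo, Zo) and the strained (Xi, Yi, Zi)."""
    return [(xi - xo, yi - yo, zi - zo)
            for (xo, yo, zo), (xi, yi, zi) in zip(ref, cur)]

# Two hypothetical targets on a 4.50 m ceiling span:
ref = [(0.0, 0.0, 0.0), (4.5, 0.0, 0.0)]
cur = [(0.0, 0.0, -0.002), (4.5, 0.001, -0.004)]
d = displacements(ref, cur)   # [(0.0, 0.0, -0.002), (0.0, 0.001, -0.004)]
```

Interpolating these displacements onto the FE mesh nodes then allows a node-by-node comparison with the Ansys result.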
  • asked a question related to Photogrammetry
Question
4 answers
When I want to process my photos in Agisoft, I sometimes encounter a "not enough memory" error.
Can I solve this problem by increasing virtual memory instead of upgrading my physical memory, accepting that the speed is significantly reduced?
Relevant answer
Answer
Hi. Generally this is a problem with PhotoScan which happens either when you create a mesh with a large number of points or when you create an orthophoto. In the first case, try using chunks, NOT using Ultra High quality for your dense cloud, or using a lower-quality mesh. Alternatively, you can create your points, bring them into Global Mapper, and do the meshing there. If you need texture maps, you can upload them back to PhotoScan and do the texturing then.
For the second, use tiles to create a set of orthophoto pieces. Then bring them all into Global Mapper and put them together in the format you need. Global Mapper has no problem doing so.
  • asked a question related to Photogrammetry
Question
4 answers
Photogrammetry is the science of making measurements from photographs, based on many mathematical formulas. Ultimately, a 3D model can be produced in many formats such as FBX, OBJ, etc.
So, how do you think the produced 3D models can benefit industry?
Relevant answer
Answer
Dear Mohamed
Photogrammetry is already used in progress monitoring on construction sites due to its cost efficiency (cf. Scaioni et al., 2014). In addition, a practical example of an integrated solution is merging 3D terrestrial laser scan data with photogrammetry for post-asset-inspection purposes (Monserrat and Crosetto, 2008). El-Omari and Moselhi (2009) have, for example, demonstrated improved site progress capture and monitoring with the combination of photogrammetry, laser-scanned point cloud data and the post-processing software PHIDIAS. As these technologies continue to coalesce, the management of built-environment assets will become increasingly reliant upon automated asset capture to fully exploit the vast array of structured geometric and semantic data extractable from an automatically generated 3D model (or mesh). Photogrammetry, as such, offers to expand the current capabilities of facilities managers during O&M of assets within the built environment and, in turn, improve business profitability and performance. Hopefully this has helped your query.
The following paper may be of interest: cy_of_optoelectronic_technology_developments_in_the_AECO_sector
Kind regards
Erika
  • asked a question related to Photogrammetry
Question
7 answers
I am interested in theoretically understanding the proportionality between SfM geometric considerations and effective pixel resolution in an FB-HTP crop scenario, and specifically how the commercial computational products, for example Pix4D or ReCap, handle pixels, or could handle raw pixels at higher orders.
Relevant answer
Answer
Hi Matthew,
just a hint: if you want to deepen your understanding of this, I recommend reading the book "Multiple View Geometry in Computer Vision" by Hartley and Zisserman, or at least some chapters about the basics of projective geometry. Actually, my explanation was inspired by that book. It describes everything very well and is suitable for beginners, too.
Best regards
  • asked a question related to Photogrammetry
Question
2 answers
Hello and greetings from Germany,
did you already publish your method to measure pelvic rotation? I am interested in this topic because we can currently only use 3D photogrammetry, which is very expensive and not mobile. In the diagnosis of pain in the groin region or symphysis in soccer players, pelvic rotation seems to me a very important influencing factor.
Yours sincerely
Oliver
Relevant answer
Answer
We currently have a project funded by the Royal College of Chiropractors. Although we are developing a protocol using optoelectronic methods, we can also use inertial motion sensors. This should help you.
  • asked a question related to Photogrammetry
Question
2 answers
Hello folks. Need a hand identifying the species of this Anuran. The specimen came into the collections at the Oxford University Museum of Natural History with the rather unhelpful label of 'Spadefoot toat'.
As part of other work, I've done some photogrammetry and CT scanning, and the specimen appears to be female, based on the presence of eggs in the scans.
3D Model of Skin: https://skfb.ly/MXDH
3D Model of Skeleton: https://skfb.ly/RzyK
Any help would be greatly appreciated.
Paul
Relevant answer
Answer
looks like Pelobates cultripes (western spadefoot or Iberian spadefoot)
Best regards
Alfonso
  • asked a question related to Photogrammetry
Question
5 answers
Specifically looking for techniques and applications but related research would be of interest.
Many thanks Tony
Relevant answer
Answer
Dear Tony,
I am an archaeologist and have published quite a lot on these topics. You can find all my papers here at ResearchGate. I have even grouped them in a special project: https://www.researchgate.net/project/Image-Based-3D-modelling
Cheers,
Geert
  • asked a question related to Photogrammetry
Question
6 answers
The effort to generate underwater 3D maps using photogrammetry has increased over recent years, and it is now possible to produce high-resolution models. We are currently working on a method to create georeferenced 3D maps. The next step would be to analyse these maps of marine seascapes. My first thought is a landscape approach, since tools already exist for land analyses. Do you know if such methods already exist?
Relevant answer
Answer
The resulting data from the photogrammetry is a 3D point cloud. This point cloud can be used to measure or extract 3D structures. Adding control points will help increase precision and accuracy. The software used for viewing and processing point clouds includes Agisoft PhotoScan, MeshLab, ParaView, CloudCompare, and LAStools. The point clouds can also be converted into DEMs that can then be used for analysis in ArcGIS or any GIS software. Hope this helps.
  • asked a question related to Photogrammetry
Question
4 answers
Normally photogrammetry follows a simple pattern. However, this pattern tends to be intuitive. I have been working on a few formulas to avoid the intuitive method. Is there a generalised formula for this purpose? I mean: a way where we can adjust the distribution applied to an object before taking the photos and guarantee 100% overlap.
Thank you very much. 
Relevant answer
Answer
Hi Javier,
If you are doing a parallel geometry (for example, building façades, inscription on walls, drone aerial photography), you can follow the classical formulas developed for large-scale aerial photogrammetry (for starters, look for a book called the Manual of Photogrammetry by Kraus and Waldhausl). You have formulas to calculate GSDs, number of photos to be taken for a specific overlap percentage, number of image bands, etc. I did that for a project concerning a building façade. This way, I can estimate the (minimal) number of photos to be taken and how much time I need to complete the photo acquisition. 
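The classical planning formulas mentioned above (GSD, exposure spacing, photo count for a given overlap) can be sketched as follows; all numbers in the example are hypothetical, and the function names are illustrative:

```python
import math

def gsd_m(pixel_size_m, focal_m, distance_m):
    """Ground sample distance for a nadir/parallel camera geometry."""
    return pixel_size_m * distance_m / focal_m

def photo_spacing(footprint_m, overlap):
    """Distance between exposures for a given forward overlap (0-1)."""
    return footprint_m * (1.0 - overlap)

def n_photos(strip_length_m, footprint_m, overlap):
    """Minimum exposures needed to cover a strip at the given overlap."""
    base = photo_spacing(footprint_m, overlap)
    return math.ceil(strip_length_m / base) + 1

# Hypothetical facade survey: 4.4 um pixels, 24 mm lens, 10 m stand-off
gsd = gsd_m(4.4e-6, 0.024, 10.0)           # ~1.8 mm/pixel
# 6000-pixel-wide sensor -> ~11 m footprint; spacing at 60% overlap:
spacing = photo_spacing(6000 * gsd, 0.60)  # ~4.4 m between stations
```

This gives a lower bound on photos and acquisition time before going on site; as noted below, taking more photos than the minimum rarely hurts.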
In order to adapt this classical method for a convergent geometry (e.g. statues, artefacts, persons...) I agree with Nicolas that it depends on your camera-to-object distance and the type of your sensor, although I would say 20% overlap is a bit too small... I always go for at least 60% in order to have multi-overlaps between the photos. Lower overlap percentage will also present a problem for the dense matching part, if you ever need to generate a dense point cloud.
That being said, the beauty of digital photography is that you can take virtually as many photos as you want. I always think that taking more photos is better than limiting them, as you can always sort the photos afterwards. On the other hand, on some cases you may not be able to return to the site to take additional photos.
Hope this helps.
Best,
Arnadi
  • asked a question related to Photogrammetry
Question
5 answers
Hi,
Is it possible to have different point densities within a point cloud file of a large area? If so, which parameters play a role in it?
Also, does it matter if the point cloud is created from image photogrammetry or airborne laser scanning (ALS)?
Thanks,
Mohammad
Relevant answer
Answer
Hi,
Thank you all for your answers.
all the best,
Mohammad
  • asked a question related to Photogrammetry
Question
4 answers
I mean anything about lighting, perspective, or ...
The analysis is going to be performed on some fine aggregates (smaller than 0.075 micrometers), and the change in color is a matter of great importance.
Relevant answer
Answer
Dear Ebrahim,
It is important to say what geomaterial you intend to image.
For noise reduction, you can acquire images several times under the same camera conditions and average their pixel values.
I suggest taking several images in each situation and at each intensity, comparing the corresponding results, and then setting your system to the situation and intensity that give the best results before going ahead with your next experiments.
Regards,
  • asked a question related to Photogrammetry
Question
4 answers
Has anyone found an effective way to rotoscope dynamic objects from a scene with a moving camera? I am looking for any research on this topic as well.
I am working on a project mapping a wreck site using photogrammetry and am running into a problem with rotoscoping fish. Does anyone know of a way to automatically rotoscope fish into a separate matte using After Effects? I am researching other methods to be able to photoscan the wreck while removing fish from tracking.
One method I'm considering is camera-tracking the camera path in a separate software package and exporting the camera position, orientation, and focal length into PhotoScan via XML ('import camera'). While this would not help with matting the fish, would it help PhotoScan processing by providing known correct camera positions?
Relevant answer
Answer
I am leaning towards a pipeline which would use the camera-tracking program SynthEyes to export marker and camera data to PhotoScan for more exact tracking. Does anyone know of any research that has been done on this?
  • asked a question related to Photogrammetry
Question
1 answer
I'm looking for references and information about digital frame-type sensors and their specifications, or any related information regarding these systems, their corrections, etc.
Thank you in advance.
Relevant answer
Answer
See this website for your answer.
  • asked a question related to Photogrammetry
Question
12 answers
This questionnaire is being carried out to support my master's thesis: "The impact of user-defined parameters on DEM accuracy." Using feedback from users who work with DEMs, conclusions will be drawn about users' perception of the importance of user-defined parameters in digital terrain modelling.
Thanks in advance to all!
Relevant answer
Answer
  • asked a question related to Photogrammetry
Question
1 answer
I'm currently following a BSc in Surveying Science with a specialization in photogrammetry and remote sensing. For my degree I'm interested in doing research on forest canopy loss using airborne C-band SAR images.
I would kindly request that anyone send me articles relevant to this research interest; it would be very helpful to my study and a pleasure to me.
Thank you.
Relevant answer
  • asked a question related to Photogrammetry
Question
5 answers
How can photogrammetry be used to trace the geometry of arches in historical buildings with least error?
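One common step after digitizing points along an arch from a rectified photogrammetric image is fitting an analytic curve to them. As an illustration (my own sketch, not from the question), here is a minimal algebraic least-squares circle fit in Python, suitable for circular arches; pointed or elliptical arches need other models. `fit_circle` is a hypothetical helper name:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to 2D points digitized
    along an arch.

    Solves x^2 + y^2 + D*x + E*y + F = 0 linearly for (D, E, F), then
    recovers center (cx, cy) = (-D/2, -E/2) and radius r."""
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r
```

The residual distances of the digitized points from the fitted circle then give a direct measure of how well the arch matches the assumed geometry.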
Relevant answer
Answer
Dear Nicolas,
Thank you very much for your contribution. It has given me a clear fundamental idea of the subject. For more advanced knowledge, I will search the internet as you suggested.
Regards
  • asked a question related to Photogrammetry
Question
2 answers
Where can I find good literature on photogrammetric error sources and impacts on modeling in engineering applications ? Thanks in advance for your replies.
Relevant answer
Answer
Thank you, Sir, but I read this one before and I need clearer literature, especially for the field of electrical high-voltage equipment.
  • asked a question related to Photogrammetry
Question
5 answers
As is well known, in photogrammetry as well as in other disciplines we have to deal with non-linear functions that need initial approximations for their unknown parameters. In general, these initial values have to be within a certain percentage of their true or best-estimate values. Although there are several linear solutions for these problems and some empirical guidelines, the interesting question is: is there any deep mathematical theory for analyzing the non-linearity of a function?
Relevant answer
Answer
Let me quote the late Professor Aleksander Pelczynski (one of the best functional analysts ever) from the summer of 1972: "Important transformations from a Banach space into itself are only those which are close to the identity." He did not specify the nature of "closeness."
  • asked a question related to Photogrammetry
Question
4 answers
Relevant answer
Answer
I believe it is a great opportunity for the dissemination of information and for new ways of doing research. It is also fundamental in the sense of a "digital archive" of heritage. The risks are, I guess, no larger than the ones currently faced by heritage sites worldwide...
I may be wrong, but I don't believe that many people would stop wanting to visit "the real deal" just because they can move/play/discover around in the world's most detailed reconstruction. They are different experiences and need not be mutually exclusive.
  • asked a question related to Photogrammetry
Question
3 answers
The 2D projective transformation is used in photogrammetry for 2D rectification, and it accounts for several motions such as translations, rotations, and a scale factor. Now the question is:
Could the 2D projective transformation be seen as equivalent to the 3D similarity transformation for a plane?
Relevant answer
Answer
Absolutely not!
There is a profound difference:
The 2D projective transformation used in photogrammetry is not a one-to-one mapping, or "bijective" mapping to use the precise mathematical term. As a mapping from points in 3D (object space) to points in 2D (the photo plane), it maps all points on a line passing through the focal point to the same image point.
On the contrary, the 3D similarity transformation is a one-to-one mapping from 3D to 3D.
Furthermore, there is an equivalent of the 3D similarity transformation for the plane: the planar 2D similarity transformation, consisting of a single rotation, a translation with two components, and a scale factor.
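Written out for the plane, the two transformations being contrasted take these standard forms (parameter names are the usual textbook conventions):

```latex
% 2D projective transformation (plane-to-plane homography), 8 degrees of freedom:
x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1}, \qquad
y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1}

% Planar 2D similarity transformation, 4 degrees of freedom
% (rotation \theta, translation (t_x, t_y), scale s):
x' = s\,(x\cos\theta - y\sin\theta) + t_x, \qquad
y' = s\,(x\sin\theta + y\cos\theta) + t_y
```

The similarity is the special case of the homography with $h_{31} = h_{32} = 0$ and an orthogonal upper-left block scaled by $s$, which makes the difference in degrees of freedom (8 versus 4) explicit.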
  • asked a question related to Photogrammetry
Question
5 answers
During the last three decades, surveying technologies such as total stations, GPS, and close-range photogrammetry have been used extensively in the area of deformation measurement and monitoring. Now the question is:
How do we compare surveying engineering technologies for deformation measurement with other tools such as dial gauges and LVDTs? Indeed, surveying competes on several fronts, such as portability, ease of use, and richer information (e.g., 3D measurement versus 1D in the case of the dial gauge). On the other hand, the dial gauge is more accurate in terms of the number of decimal digits. So, once again, the question is: how do we compare them in terms of accuracy, acceptance in practice, and cost in light of other tools and technologies? Indeed, the answer to this question has some competing aspects. It will also be interesting to analyze where the division of labor will happen between surveying technologies and the other ones, as well as the complementary aspects of their mutual use.
Relevant answer
Answer
Your question: in general, compare the surveying engineering technologies for deformation measurement with the other tools such as dial gauges & LVDT.
I would say that:
1. Surveying technology is 1D, 2D, or 3D, as you wish; non-surveying technologies usually provide 1D results.
2. Surveying technology can be related to a reference system located outside the influence of the object under study, thus providing dimensions in an "absolute" reference system; non-surveying technology usually provides relative measurements (for instance, the opening of a crack).
3. The accuracy of these relative measurements is usually better (<1 mm) than that provided by surveying technologies (usually accuracy above 1 mm; exception: high-accuracy spirit levelling).
4. Surveying technology usually requires a specialized team (surveyors) to perform a measurement campaign; non-surveying instruments don't need an operator specialized in that type of technology.
5. Surveying technology is an indirect method: you read angles, distances, photo coordinates, etc., in order to derive the actual displacement (there is a function between the observables and what you actually need); non-surveying equipment usually provides directly what you need (for instance, you read a relative displacement directly from an extensometer).
6. From the points above you can conclude that you have two types of instrumentation for monitoring; these two types complement each other and check each other.
7. Surveying campaigns are performed less frequently than those using directly measured observables, mainly because surveying campaigns are more complex and specialized in every aspect (personnel, equipment, processing, etc.).
I hope this helps...
  • asked a question related to Photogrammetry
Question
3 answers
Is there any difference between the linear solution of the Direct Linear Transformation (DLT) in photogrammetry and its non-linear version?
I have not done enough experiments yet to address this question, but my initial findings indicate that there are some differences in the residuals obtained from the linear and non-linear solutions.
Relevant answer
Answer
Strictly speaking there is no "linear version of the direct linear transformation"; this would in fact be a contradiction in terms. I presume that by non-linear version you mean the original standard projective equations of photogrammetry. Despite its name, the DLT transformation from unknown object coordinates and exterior and interior orientation elements is a nonlinear mapping (in fact a rational one). Linearity here simply refers to the fact that when the object coordinates are known, the DLT model can be converted to equations which are linear in the exterior-orientation elements, which are none other than the coefficients of the rational transformation (usually denoted by L1, L2, …, L11).
Now about the difference:
In the simple case of projective equations (i.e. the one without additional calibration parameters) we have 9 unknowns (3 coordinates for camera position, 3 angles for camera orientation, 2 coordinates of principal point plus 1 focal distance).
For the DLT method to be exactly equivalent to the projective equations, one must incorporate (11 − 9 =) 2 conditions on the coefficients. These conditions, however, are nonlinear functions of the DLT coefficients, and this destroys the advantage of using linear equations and thus avoiding the approximations of linearization in order to use linear-model (least squares) methods for data analysis.
For this reason the 2 conditions are not used in practice. The 2 additional parameters in the DLT method are accounted for as additional calibration parameters. However, there is no clear projective-equations model with 2 additional calibration parameters whose 11 unknowns are explicitly related to the 11 DLT parameters by well-defined mathematical relations.
In conclusion the DLT method can be considered as equivalent to the projective equations with implicit additional 2 parameter calibration.
If you send me your email address, I can send you my class notes (in English) from a course I taught last year at the Milan Polytechnic, where the above is explained in detail.
Best regards
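As an illustration of the "linear in the coefficients" point above, here is a minimal sketch (my own, not from the answer) of the linear DLT solution in Python, given known object coordinates; `dlt_linear` and `dlt_project` are hypothetical helper names:

```python
import numpy as np

def dlt_linear(obj_pts, img_pts):
    """Linear DLT: estimate the 11 coefficients L1..L11 from six or more
    known, non-coplanar 3D object points and their 2D image coordinates.

    Each correspondence (X, Y, Z) -> (x, y) yields two equations that are
    linear in the coefficients, solved here by ordinary least squares."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L  # L1..L11

def dlt_project(L, pt):
    """Apply the rational DLT mapping: object point (X, Y, Z) -> image (x, y)."""
    X, Y, Z = pt
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return x, y
```

Note that `dlt_project` itself is the nonlinear (rational) mapping; the practical appeal of the linear solution is that `dlt_linear` needs no initial approximations, whereas minimizing the same residuals nonlinearly requires linearization around a starting estimate.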
  • asked a question related to Photogrammetry
Question
8 answers
Kite Aerial Photography (KAP) has so many advantages that kites are at least as suitable as UAVs for monitoring coastal dynamics.
Think about it, kites are:
  1. less regulated, which means higher altitudes and thus wider footprints
  2. extremely inexpensive and portable
  3. non-intrusive and licensing-free
  4. wind-friendly: the more wind, the more payload, and thus more sensors (RGB camera, micro-LiDAR, multispectral sensors, IMUs, GPS)
  5. less stable than UAVs, which is good for Structure from Motion algorithms because the same point is seen from different angles and scales, and more off-nadir images mean less doming effect
Obviously, zero wind means no kites. But coastal areas are windy by nature.
Moreover, if you set target points, record their accurate locations (dGPS), and then use the target network to orthorectify the KAP imagery, Structure from Motion algorithms produce DSMs and orthoimages as good as those from UAVs.
One of the most important coastal issues tackled with a KAP approach received international attention in 2014, when it was used for the world-famous Dutch project "Zandmotor".
The point is:
Help me find at least 5 robust arguments that could prevent kites from becoming the next coastal monitoring tool.
This matters especially in least developed countries, or in Pacific countries where low-lying atolls are drowning and UAVs or fine-resolution satellite imagery are just too expensive to use.
Cheers,
Nic
Relevant answer
Answer
Kites are already used for coastal monitoring. You are right, it is an inexpensive method. However, I agree with the previous comment: they are difficult to control and sometimes crash. And if the activity develops, it might become regulated.
Coastal monitoring is not only beach monitoring... What about cliffs? You can pilot a drone from the top of a cliff, with the drone flying at a lower altitude than you. This is not possible with a kite.
Moreover, some beaches are surrounded by cliffs, and therefore the flight plan of the kite may be restricted...
  • asked a question related to Photogrammetry
Question
6 answers
I would also like to know if anyone has compared the gain and offset of different bands from the same camera (Green, Red, and NIR): would it be safe to assume the gains and offsets of the green and NIR bands are proportional given a known exposure?
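As a minimal sketch of how per-band gains and offsets are typically applied, assuming a hypothetical linear sensor model (the actual sign conventions and exposure normalization depend on the specific camera's calibration report, so treat the formula below as an assumption):

```python
import numpy as np

def dn_to_radiance(dn, gain, offset, exposure_s):
    """Convert raw digital numbers of one band to at-sensor radiance.

    Assumed linear model: radiance = (gain * DN + offset) / exposure_time.
    'gain' and 'offset' are per-band values from the camera calibration."""
    return (gain * np.asarray(dn, dtype=np.float64) + offset) / exposure_s

# Example: two pixels of a single band, hypothetical calibration values.
radiance = dn_to_radiance([100, 200], gain=0.02, offset=-0.5, exposure_s=0.001)
```

Whether the green and NIR calibrations are proportional can then be checked empirically: convert repeated images of a grey reference panel in both bands and regress one band's radiance against the other's across exposures.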
Relevant answer
Answer