Science topic

Cameras - Science topic

Explore the latest questions and answers in Cameras, and find Cameras experts.
Questions related to Cameras
  • asked a question related to Cameras
Question
4 answers
Hi!
Can anybody recommend an optimal (cheap, easy to buy and use) solution to scan a small (200k specimen) herbarium collection?
I am in doubt whether to look at scanners or choose a camera. In the first case, specialized scanners are super expensive. In the second case, I am not sure about image quality (i.e., images will be affected by distortion). I know that Kew applied HerbScan, which is in fact an Epson 10000 XL system. Maybe I could construct something like this myself if somebody could provide me with technical sketches. On the other hand, there is the relatively cheap overhead scanner Fujitsu SV600, but I am not sure how well it handles volumetric objects such as herbarium sheets.
Any advice will be highly appreciated!
Relevant answer
Answer
Good day Andriy
Please, did you find a solution for this? I have the same problem and definitely cannot afford a flatbed scanner. I am thinking of using my Nikon D3100 camera but don't know whether it will work.
  • asked a question related to Cameras
Question
2 answers
Hi dear researchers,
In everyday life we make decisions: buying a mobile phone, buying a car, choosing an investment, choosing suppliers, hiring employees, etc. To make a better selection, we need alternatives, criteria, a decision model, and so on. However, not all decision criteria have the same importance or preference. For instance, when buying a mobile phone you consider storage capacity, looks, price, camera, etc. How do you calculate the weight of each criterion?
Learn how to estimate criteria weights objectively using the mean, standard deviation, and variance in this video.
Best
Dr. Niyungeko
Relevant answer
Answer
Thanks, colleague, I have visited your YouTube lecture.
Regards,
Emad
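To make the method concrete, here is a minimal Python sketch of standard-deviation-based objective weighting, as discussed in the question above; the decision matrix, alternatives, and criterion values are all hypothetical examples:

```python
import numpy as np

# Decision matrix: rows = alternatives (phones), columns = criteria
# (storage in GB, camera in MP, price in USD) -- illustrative values only
X = np.array([[128, 48, 699],
              [256, 64, 899],
              [ 64, 12, 399]], dtype=float)

X[:, 2] = X[:, 2].max() - X[:, 2]  # invert price: lower cost should score higher

# Min-max normalize each criterion so standard deviations are comparable
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

sigma = Xn.std(axis=0)
weights = sigma / sigma.sum()  # each weight is that criterion's share of total spread
print(weights)                 # sums to 1
```

The intuition is that a criterion whose values barely differ across alternatives carries little information for ranking them, so it receives a small weight.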
  • asked a question related to Cameras
Question
3 answers
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the temperature reading of the food was quite accurate (i.e., close to the environment's temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's reported temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
As I suspected the background temperature of the hot pan could be the cause of the error, I made a hole in a piece of paper and put it in front of the camera's lens, ensuring that the camera could only see the food through the hole and no longer the hot pan. The food temperature reported by the camera was then correct again, but it became wrong again when I took the paper away.
I would appreciate any explanation of this phenomenon, and a solution, from either a physics or an optics perspective.
Relevant answer
Answer
Thanks for all the comments.
A short update: we talked to the manufacturer, and they confirmed the phenomenon as Ang Feng explained. We are going through possible solutions.
  • asked a question related to Cameras
Question
1 answer
I have thermal images captured with an infrared thermal camera (FLIR, model A655sc). I have to perform a 2D DWT on the thermal images, as the 2D DWT takes numeric values as input.
Relevant answer
Answer
You can try it in Python; sample code is here: https://pywavelets.readthedocs.io/en/latest/
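For instance, a minimal PyWavelets sketch, assuming the thermal image has been exported as an ordinary grayscale file (the file name is hypothetical):

```python
import numpy as np
import pywt
from PIL import Image

# Load the exported thermal image as a 2D array of values
img = np.asarray(Image.open("thermal.png").convert("L"), dtype=float)

# Single-level 2D discrete wavelet transform
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
# cA: approximation; cH/cV/cD: horizontal/vertical/diagonal detail coefficients
print(cA.shape, cH.shape, cV.shape, cD.shape)
```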
  • asked a question related to Cameras
Question
8 answers
Dear all, I accidentally found an old (not too old) Olympus DP-12 microscope camera; unfortunately, the driver was lost. Can anybody help me? Thanks in advance.
Relevant answer
Answer
Hey Ruben, I hope you are great! I have an old Olympus U-CMAD3 camera that works well with the software NIS Elements F version 5.21.00 64-bit from Nikon.
I have used multiple cameras with this software.
Good luck!
  • asked a question related to Cameras
Question
4 answers
Hi
I have 3 robots and I know the start and end coordinates of each of them. For example, for Robot 1 I have the coordinate pairs (Xr1, Yr1) and (Xr2, Yr2), and the same for Robots 2 and 3. I have lateral cameras, so I can obtain visual information. I would like to get the robot-to-robot relative position over time. The camera updates every 0.5 seconds. How can I approach this problem, and what are possible solutions? Would Hidden Markov Model chains be a good approach, or could semantic mapping be helpful?
Thanks
Relevant answer
Answer
With the development of computer technology, deep learning is gradually being applied to the field of image recognition. It can deal with two-dimensional image data naturally and extracts features automatically. Compared with traditional machine learning methods, deep learning is popular for its good learning ability and low generalization error.
  • asked a question related to Cameras
Question
1 answer
I am looking for a publicly available video/image dataset from surveillance cameras for detecting whether violence, such as a fight, has occurred. A surveillance-camera dataset containing such a class would also be helpful. I have collected the UCF-Crime and NTU-CCTV datasets; I want to know how to get more datasets like these, or how to collect such videos myself.
  • asked a question related to Cameras
Question
2 answers
A CCD camera with a given pixel size captures the images. I want to calibrate the images and find a constant magnification factor for all of them.
Relevant answer
Answer
In this paper, practical and simple methods for characterizing the optical performance of a digital CCD camera equipped with a zoom lens using test targets projected by a Newtonian collimator coupled to an integrating sphere and a halogen light source are presented. The presented methods include the evaluation of the CCD camera system modulation transfer function (MTF), uniformity, dynamic range, linearity, and signal-to-noise ratio (S/N) properties in the visible region of the spectrum.
Regards,
Shafagat
  • asked a question related to Cameras
Question
4 answers
Hi all,
I am trying to deploy a neural network to the FPGA board mentioned in the question. So far I have succeeded with simple projects provided by MathWorks, such as:
and right now I would like to replace images sent from a PC with images from a camera (ideally live video) connected directly to the FPGA board, making it a standalone machine, from image acquisition and preprocessing to image postprocessing.
The best example I found is:
But it requires a sister card which I don't have, so before purchasing I would like to explore other options. If possible, I would like to stick with MATLAB/Simulink and HDL Coder.
Thank you in advance.
Relevant answer
Answer
Recently I decided to abandon solutions provided by Mathworks and explore Xilinx tools. After some work with both Mathworks toolboxes and Vitis AI I would like to share my experience.
While MATLAB and the toolboxes that come with it seem easy to use on the surface, they are poorly documented and overflowing with bugs. Moreover, the support provided by MathWorks is very poor and slow.
By contrast, the Vitis AI environment is rich in detailed documentation. Something that took me a month with little to no success using MATLAB, I accomplished in days with Vitis AI, not to mention the far more helpful community that helped me solve issues in a matter of days.
With confidence I can say that Mathworks tools are not ready for the described type of applications.
  • asked a question related to Cameras
Question
8 answers
Dears,
I am facing a technical issue: how to measure the uniformity of infrared radiation on a target surface.
The setup is as follows. A halogen lamp is mounted at the end of a pipe light guide (see Fig. 1). When the halogen lamp is on, radiation (visible light and infrared) is delivered to the target surface (ca. 4x4 cm). The total power of the radiation delivered to the surface is high (ca. 200 W, which gives about 13 W per cm2). What I need to know is the spatial distribution of the radiation on the target surface (to be able to find out how uniform it is). The measuring unit must withstand several seconds of measurement.
Several ideas and related issues:
1. An IR camera could be placed at the end of the pipe. However, any reasonable camera will be burnt at once, or at least oversaturated, as the power delivered is extreme.
2. Cover the IR camera with a filter. This solution does not really work, as the filter heats up and then emits radiation itself, i.e., I do not observe the real distribution which would occur with no filter applied.
3. IR photodiodes. I imagine I could place several IR photodiodes at the end of the pipe but, again, they are highly oversaturated at once, and filtering, again, does not really work.
4. Thermally reactive paper/material(?). I am having trouble finding such a material that could withstand the power involved and provide any reasonable resolution.
5. ...?
Do you have any ideas on that?
Relevant answer
Answer
Gerhard, Aparna, thank you for the suggestions!
First, bolometers. As far as I have gone through the topic, it looks like a bolometer can measure the power of incident radiation, but not really its distribution. For power measurement we successfully use much cheaper equipment (water-cooled sensors are enough).
The chopper wheel makes sense, of course. However, to use such a wheel I need some space around the output of the pipe, so the wheel would have to be mounted not under the pipe but at its side. This is very problematic, as the system presented in Fig. 1 is an internal part of a larger machine and access to the output of the pipe is very limited in terms of space.
A pinhole diaphragm: we have tested this idea in practice. However, we faced several issues (e.g., the mirrored surface of the diaphragm may change the distribution of radiation as it changes the light paths inside the pipe, so the real target may "see" a different light distribution than the detector behind the pinholed diaphragm). But we will test this further.
Thank you!
  • asked a question related to Cameras
Question
3 answers
Hello,
We need to purchase a new infrared camera to image brain slices during patching using a DIC system. Any recommendation?
Best regards,
Relevant answer
Answer
Thanks a lot 👍
  • asked a question related to Cameras
Question
1 answer
Dear all friends,
I need a fiber optic camera with the following features:
  1. Waterproof
  2. Resistant to a maximum temperature of about 120 °C
  3. Resistant to a maximum pressure of 100 bar
I would be glad and appreciative if anyone could send me some information about the brand, and other details of the camera he/she used.
Relevant answer
Answer
Search the AliExpress website.
  • asked a question related to Cameras
Question
2 answers
I am about to start working on 6D pose estimation for object grasping based on point clouds. In our lab we have the following:
- AUBO i5 industrial manipulator.
- COBOT 3D camera that gives us a point cloud of the scene. The camera will be attached to the manipulator in the eye-in-hand configuration (mounted on the manipulator's gripper/end effector).
A deep-learning-based method will be used for 6D pose estimation of the target object.
The 6D pose estimation will be calculated on my laptop. How can I send the final pose estimate to the robot in order to control it and eventually pick and place the target object?
Relevant answer
Answer
Hello Abdulrahman Abdo Ali Alsumeri, if you want to control an industrial robotic arm from your own laptop, you can check whether the model you are using has a ROS connection. If so, you can simply subscribe to the topics that are useful for your task and send the required information for each of the robot's joints (see the sketch below).
If there is no ROS interface, you can try some other controller created specifically for industrial robots, for example one that works using a 7-step language, but normally those programs are controlled by the company that owns the robot.
In this case, your model does have a ROS interface that you can use. Here is the link to the ROS wiki for your model: http://wiki.ros.org/aubo_robot
I hope my comment has helped you.
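To illustrate the ROS route, here is a minimal Python (rospy) sketch that publishes an estimated pose from the laptop. The topic name and frame are assumptions for illustration; the actual topic or action interface expected by the aubo_robot driver should be taken from its documentation:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("pose_sender")
# Topic name is hypothetical; check the aubo_robot driver docs for the real one
pub = rospy.Publisher("/target_pose", PoseStamped, queue_size=1)
rospy.sleep(1.0)  # give subscribers time to connect

msg = PoseStamped()
msg.header.frame_id = "base_link"          # assumed robot base frame
msg.header.stamp = rospy.Time.now()
# 6D pose from the network: position in meters, orientation as a quaternion
msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = 0.4, 0.1, 0.3
msg.pose.orientation.w = 1.0               # identity orientation for the sketch
pub.publish(msg)
```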
  • asked a question related to Cameras
Question
1 answer
In the attached file I have shown a thermal image captured during live preview from my camera.
I am able to fetch the minimum, maximum, and mean graylevel (GL) values, along with the GL value at the center (blue circle), continuously at the backend during live video.
I want to know whether I can use these GL values to get the corresponding temperature with some equation or function.
Suggestions for anything else I could use for this purpose are also welcome.
Relevant answer
Answer
Normally, thermal cameras digitize (long-wave) infrared radiation into graylevel values, which are then mapped to some (absolute) temperature using the internal camera calibration. The temperatures then get transformed into the RGB preview you have shown.
Hence, I guess you'll need to find out how the camera or its software produces the previews, and I don't think there is a generic formula you could use. In the best case, the graylevels you have already correspond to temperature values, and you only have to find out how the temperature translates to the GL value obtained.
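If the camera's response does turn out to be approximately linear over the range of interest, a simple two-point calibration against a contact thermometer is one way to map GL values to temperature. A sketch with hypothetical reference values; this must be verified per device, since many thermal cameras apply non-linear or scene-dependent mappings:

```python
# Two known references: graylevel and thermometer reading (hypothetical values)
GL_COLD, T_COLD = 30, 20.0    # e.g. object at room temperature
GL_HOT, T_HOT = 220, 100.0    # e.g. boiling water

def gl_to_temp(gl):
    """Linear interpolation between the two calibration points."""
    return T_COLD + (gl - GL_COLD) * (T_HOT - T_COLD) / (GL_HOT - GL_COLD)

print(gl_to_temp(128))  # estimated temperature for graylevel 128
```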
  • asked a question related to Cameras
Question
1 answer
Reality is likely a frame-stack: light particles hit the eye at different times, but all these individual frames impact the receptors one after another and are stacked together.
High-frequency variable-focus cameras can pick up a 3D image with a single lens.
This means it is not so strange to still see in 3D with one eye closed.
Frames on salvinorin A can shake and bend vigorously.
Relevant answer
Answer
Hi,
A few of these references and cross-references may be of help to you:
Iglesias JE, Insausti R, Lerma-Usabiaga G, et al. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. Neuroimage. 2018;183:314-326. doi:10.1016/j.neuroimage.2018.08.012
Calvey T, Howells FM. An introduction to psychedelic neuroscience. Prog Brain Res. 2018;242:1-23. doi:10.1016/bs.pbr.2018.09.013
Duerler P, Brem S, Fraga-González G, et al. Psilocybin Induces Aberrant Prediction Error Processing of Tactile Mismatch Responses-A Simultaneous EEG-FMRI Study. Cereb Cortex. 2021;32(1):186-196. doi:10.1093/cercor/bhab202
Books:
Barnard, George William; Richards, William - Sacred Knowledge: Psychedelics and Religious Experiences. 2016.
Peter Silverstone - The Promise of Psychedelics: Science-Based Hope for Better Mental Health. Ingenium Books, 2022.
Tao Lin - Trip: Psychedelics, Alienation, and Change. Vintage Books, 2018.
  • asked a question related to Cameras
Question
8 answers
I am measuring the tensile strength of thin (<1cm) dumbbell-shaped nanofiber membranes.
I have developed a custom tensile testing setup which primarily consists of:
1. A movable stage.
2. A load cell of appropriate force rating is attached to the moving stage.
3. A DC power supply to excite the load cell.
4. A digital multimeter connected to the DC power supply to measure the output voltages from the load cell.
5. A DSLR camera to record photos of the nanofiber membrane at regular intervals during the tensile testing to measure strain.
I am using a camera to record photos of the nanofiber membrane to calculate strain using a GUI-based MATLAB program that tracks the edge of the moving end. I am using an optical method to measure strain because I cannot attach an extensometer to the fragile nanofiber membranes; it would not be feasible.
The target of my experiments is to obtain voltage outputs from the digital multimeter every 1 sec and photos from the DSLR camera also every 1 sec with the measurements from the digital multimeter and the DSLR camera being in perfect sync.
I am currently triggering the digital multimeter using an Excel script provided by the manufacturer of the digital multimeter and I am using an intervalometer to trigger the DSLR camera. The problem is I am triggering both instruments manually and therefore the measured voltage and recorded photos are not in sync.
Can anyone please give me suggestions on how to sync both hardware which are from different manufacturers?
Relevant answer
Answer
No, sadly I haven't been able to solve my issue yet. The model of my DMM is: ADCMT 7461A (https://www.adcmt.com/en/products/dmm/7451)
The 7461A model has a USB interface and a GPIB interface only.
Yes, the commands for the 7461A DMM are in its reference manual; they are stated as:
Select from SCPI, ADC, and R6552:
- SCPI: SCPI commands (GPIB only)
- ADC: commands used by ADC CORPORATION (GPIB/USB only)
- R6552: commands used by ADC CORPORATION's R6552 (7451A only, GPIB only)
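Since the 7461A has a USB interface, one option worth testing is to drive both instruments from a single script, so each voltage reading and camera trigger share one timestamp. Below is a minimal Python sketch using pyvisa for the DMM and the gphoto2 command-line tool for the DSLR; the VISA resource string and the exact read command are assumptions that must be checked against the 7461A manual and your camera's gphoto2 support:

```python
import subprocess
import time
import pyvisa

rm = pyvisa.ResourceManager()
# Resource string is hypothetical; list attached devices with rm.list_resources()
dmm = rm.open_resource("USB0::0x1234::0x5678::INSTR")

with open("log.csv", "w") as log:
    for i in range(60):                 # one synchronized sample per second
        t0 = time.time()
        volts = dmm.query("READ?")      # generic SCPI read; confirm the exact
                                        # command for the 7461A's command set
        subprocess.Popen(["gphoto2", "--capture-image-and-download",
                          "--filename", "frame_%04d.jpg" % i])
        log.write("%f,%s\n" % (t0, volts.strip()))
        time.sleep(max(0.0, 1.0 - (time.time() - t0)))
```

Any remaining offset between the two measurements (e.g., camera shutter lag) would then be roughly constant and could be measured once and subtracted.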
  • asked a question related to Cameras
Question
2 answers
Hi,
I would like to understand the link between the GRD (ground resolved distance) experimental value and the GSD (ground sample distance) theoretical value. I saw somewhere the following formula used: GRD = 2*k*GSD, where the factor 2 accounts for a two-pixel cycle (cyc/mm), and k represents a factor that includes all other influences such as turbulence, aerosol, or camera aberrations.
Normally k > 1, and if k = 1 we talk about an ideal system.
I would like to know whether there is a formula to calculate k directly, to be able to find the GRD. Is there a maximum value of k below which one can say that the only influence is the atmosphere and the camera is diffraction-limited?
And are there sources discussing this factor? I have not found any on the internet.
Thank you very much
Relevant answer
Answer
Thanks for your answer, but do you know this formula for GRD as a function of GSD via the k factor?
What is the acceptable value that allows us to say that the camera is good?
And is there a paper discussing this k factor?
Thanks
  • asked a question related to Cameras
Question
3 answers
Hi all,
In my patch clamp rig we have an IR-1000 camera set up with a BNC-video to VGA converter, which is attached to a monitor through a VGA cable.
This setup worked fine, but recently we get no signal on the monitor, not even in the test function. We tried replacing all the cables, the video converters, the monitor, and the IR-1000 controller; nothing worked. The camera is working, since we get the image when we plug the BNC video cable directly into our old CCTV monitors.
The lab member who previously used this rig said he had the same issue, but that the setup started working again by itself after a while.
Has anyone had this type of issue before? Is there any way to resolve it, or any idea as to why this might be happening?
Relevant answer
Hi all,
In case anyone is going through a similar issue: I managed to recover the signal by changing the BNC-video to VGA converter to a DAGE-MTI-supplied composite to USB-C converter. It is more expensive, but everything is working fine now.
  • asked a question related to Cameras
Question
1 answer
I have two interferograms (containing Young's fringes). One interferogram was recorded without any perturbation in the path of the light rays. While recording the second interferogram, I placed a glass plate in the path of one light beam. Now I want to extract information about the refractive index of the glass plate by analysing the interferograms. Both interferograms were recorded using a CCD camera. Can anyone help me?
Relevant answer
Answer
You can do this analysis with MATLAB or any other programming environment, such as LabVIEW.
Best regards.
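One standard route is Fourier-transform fringe analysis: the phase shift of the fringe carrier between the two interferograms gives the extra optical path introduced by the plate, and hence the refractive index via delta_phi = 2*pi*(n - 1)*t / lambda. A minimal Python sketch, where the file names, wavelength, and plate thickness are assumptions; note the recovered phase is wrapped modulo 2*pi, so a thick plate only yields the fractional fringe order:

```python
import numpy as np
from PIL import Image

# Load the two recorded interferograms (file names hypothetical)
ref = np.asarray(Image.open("interferogram_ref.png").convert("L"), dtype=float)
test = np.asarray(Image.open("interferogram_plate.png").convert("L"), dtype=float)

# Take a horizontal profile across the fringes at the image center
row = ref.shape[0] // 2
p_ref, p_test = ref[row], test[row]

# Locate the fringe carrier frequency from the reference spectrum
F_ref = np.fft.rfft(p_ref - p_ref.mean())
F_test = np.fft.rfft(p_test - p_test.mean())
k = np.argmax(np.abs(F_ref[1:])) + 1       # dominant (carrier) frequency bin

# Phase shift of the carrier between the two patterns (wrapped to [-pi, pi])
dphi = np.angle(F_test[k]) - np.angle(F_ref[k])

wavelength = 632.8e-9   # HeNe laser (assumption)
t = 1.0e-3              # plate thickness in meters (assumption)
n = 1 + dphi * wavelength / (2 * np.pi * t)  # fractional-order estimate only
print(n)
```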
  • asked a question related to Cameras
Question
1 answer
I set up a number of control experiments indoors, placing soil samples in boxes with openings above them, according to different treatments. I would like to use this multispectral sensor to take pictures of these soil samples indoors, but it seems that it must be used in conjunction with a GPS tool. Is it possible to use it without GPS, like a digital camera that takes pictures manually?
Relevant answer
Answer
I think you can use it. If it has been installed on a UAV, I suggest you use the UAV to take the pictures and avoid damage during uninstallation. But you may face one problem: the camera's focus distance is set for UAV flying, so when you try to take a picture at a short distance, the image may not be sharp enough. Also, the offset between the different camera sensors will make the pictures shift. I have only tested the P4M (DJI); it is hard to fix these problems.
  • asked a question related to Cameras
Question
2 answers
Any suggestions for a camera module with:
- configurable shutter speed,
- clear photos at 1 ms exposure,
- color, global shutter, 720p?
Thank you.
Relevant answer
Answer
Oguz Sen Decide based on the following factors:
1. Consider and determine the pixel count, size, and kind of camera module early in the design process.
2. Determine whether the camera module you choose is compatible with the platform, or more specifically the SoC (system on chip) that the device employs.
  • asked a question related to Cameras
Question
5 answers
I used beam profilers such as the camera-based Cinogy beam profiler and the BP209 beam profiler from Thorlabs to measure the beam profile of the laser output directly from a DFB laser's output fiber. The beam profile is a perfect Gaussian shape. I then used an industrial CCD camera to replace the original beam profiler. The beam profile looks somewhat different: it has some stripes which don't change much even when I adjust the position and layout of the output fiber and cleave the fiber over and over again. It looks like some kind of interference, but I don't know exactly how it occurs and why it is only visible to the industrial CCD camera and not the professional beam profiler. Can anyone explain it? The attached files are the beam profiles captured by the CCD camera and the spectrum of the DFB laser.
Relevant answer
Answer
Let me add:
- Some CCD cameras have an optical window on the sensor chip, while others do not. Beam profilers often have no windows, to avoid exactly such interference problems with coherent (laser) radiation. Best check your case(s) against the data sheets of the sensor chips/manufacturers.
- Other sources of interference stripes may be plane-parallel optical elements such as optical filters, etc.
- In order to find the origin, you may slightly touch each optical element in the beam path while monitoring the beam profile and carefully observe the impact on it.
- Generally: parallel interference stripes are often caused by interference from "plane-parallel" interfaces, while circular ring patterns may arise from damage or perhaps just particles on an optical element. However, for a ring pattern to develop, some subsequent beam propagation distance is required. So dust particles sitting directly on the sensor chip do not cause such ring patterns, and they will not leave their pixel position when other optical elements are touched.
  • asked a question related to Cameras
Question
3 answers
The fourth wall is a term that television borrowed from its origin (theater), represented by the interface of the screen displaying the work. A scene on television is a theatrical clip within a group of scenes that confines the subject between four walls, the fourth of which is removed and replaced by the camera, which represents the viewing angle for the viewer. A difference can be noted between the two media: in the theater, the distance and angle of view between the audience and the performance are constant and the message does not change, whereas on television this distance is redefined with each shot, even while the room remains closed.
Relevant answer
Answer
One of the most common ways to break the fourth wall is by speaking to the viewer, that is, an interpellative shot or in theater talk with the audience.
  • asked a question related to Cameras
Question
6 answers
I'm looking at conducting a thermographic survey of a building to identify thermal bridging etc. There seem to be two options when taking a series of thermal images. The first is to let the camera automatically set the temperature range for each image. This gives a nice rainbow range of colours and makes thermal bridging more obvious. The second option is to fix the temperature range for all of the images. This allows the images to be directly compared to one another, but can mean the images are more bland since the fixed range is likely to be wider than the variable range.
Which is the best approach?
Relevant answer
Answer
You can also read the book
Astarita, Carlomagno: Infrared thermography for convective heat transfer, Springer 2013.
Good luck
  • asked a question related to Cameras
Question
3 answers
I am looking for an open-source software tool to handle hyperspectral images acquired using drones with a Headwall camera. Otherwise, suggest how to go about pre-processing the raw data using programming languages such as MATLAB, Python, R, etc. Please help me find one. (For example, I have all the data, such as frame details, IMU data, and the raw image. How do I georectify the data?)
Relevant answer
Answer
Were you able to find an open-source solution for this problem?
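One possible open-source starting point is the Spectral Python (SPy) library, which reads the ENVI-format cubes that Headwall systems typically export; a minimal sketch (file names hypothetical, and georectification from the IMU data is beyond this snippet):

```python
import spectral

# Open an ENVI cube via its header file (paths hypothetical)
img = spectral.open_image("flightline_raw.hdr")
cube = img.load()                     # numpy array: (lines, samples, bands)
print(cube.shape)

# Simple dark-current subtraction as a first pre-processing step,
# assuming a dark reference cube was recorded with the lens capped
dark = spectral.open_image("dark_ref.hdr").load().mean(axis=0)
corrected = cube - dark
```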
  • asked a question related to Cameras
Question
3 answers
Hello!
For my internship I'm looking to do an experiment which involves the analysis of step speed and step length and the variability in these properties. I'm looking to use a camera system for ease of use (we want to keep the setup time for each participant as low as possible), though simple markers can be added.
Which (preferably cheap or free) software would be able to analyze footage from (multiple) cameras and determine the factors mentioned above? Ideally the software is relatively easy to use and able to export the data to MATLAB.
Thanks in advance!
Relevant answer
Answer
Kinovea is a good software option for gait analysis. It contains all the essential motion analysis tools, including slow motion and reverse playback.
  • asked a question related to Cameras
Question
3 answers
We have collected some data from an automotive radar sensor and a camera sensor simultaneously, and we wish to fuse the track data from both sensors. Could anyone point us to methods, approaches, and techniques pertaining to this important problem, and any sample code that is available? That would help us significantly.
Relevant answer
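While specific references accumulate here, a common baseline for track-to-track fusion is a covariance-weighted combination of the two state estimates, exploiting the complementary error profiles of radar (good range, poorer lateral accuracy) and camera (good lateral accuracy, poorer range). A minimal sketch with illustrative numbers; note this simple form ignores cross-correlation between the tracks, which methods such as covariance intersection address:

```python
import numpy as np

# Two estimates of the same target position [x, y] with their covariances
x_radar = np.array([12.0, 3.1])
P_radar = np.diag([0.2, 1.5])   # radar: accurate range, poor lateral accuracy
x_cam = np.array([11.4, 2.8])
P_cam = np.diag([2.0, 0.1])     # camera: poor range, accurate lateral position

# Information-form (covariance-weighted) fusion
I_r, I_c = np.linalg.inv(P_radar), np.linalg.inv(P_cam)
P_fused = np.linalg.inv(I_r + I_c)
x_fused = P_fused @ (I_r @ x_radar + I_c @ x_cam)
print(x_fused, np.diag(P_fused))  # fused estimate is tighter in both axes
```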
  • asked a question related to Cameras
Question
4 answers
Hi there,
I hope you are doing well.
Do you know which camera is currently used for the GRENOUILLE technique to measure the pulse length of ultra-short pulses?
What features should the camera provide? Should it be a fast camera, or does a regular camera work as well?
This article names a camera, but I am not sure whether it is still used in recent experiments or is out of date.
Attached is a document in this regard.
I appreciate your time and any feedback.
Best regards,
Aydin
Relevant answer
Answer
Dear Hassan Nasser I really appreciate your response.
Thank you!!
  • asked a question related to Cameras
Question
8 answers
I tried to use an industrial CCD camera to capture the pulsed laser beam profile from an optical fiber. When I use the camera to view the fiber directly, the beam profile is perfect. But when I add two lenses, one for collimating the laser from the fiber and one for focusing onto the camera, the laser profile becomes distorted and quite unstable in shape and diameter. However, when I used a professional camera-based beam profiler in place of the CCD camera, with the lenses unchanged, the beam profile was again perfect and stable. Also, the CCD camera works fine when the pulsed laser is replaced by a continuous-wave light source such as ASE (amplified spontaneous emission). Can anyone help explain the CCD camera's strange behavior when imaging the pulsed laser?
Relevant answer
Answer
Did I understand you correctly that the same CCD camera works stably without the two lenses, and vice versa?
If so, a possible reason may relate to the level of measured light intensity: if the direct image intensity is too high, the camera is saturated, and the image will appear stable while in reality it is variable. You can verify this hypothesis by decreasing the incident laser light intensity.
Another source of instability can be interference (beating) between the laser pulse repetition frequency and the CCD camera's image acquisition frequency. This would give you a stable image for continuous light and an unstable one for pulsed light. Verification can be done by changing the laser pulse frequency.
  • asked a question related to Cameras
Question
2 answers
Hello,
I would like to measure provisioning rates in Eurasian blue tits. My plan is to color-ring them so that I can distinguish between male and female.
In my system they breed in nest boxes; however, I do not have the possibility to place a camera within a nest box. PIT tags are also not an option for me. Visual surveys can be problematic because if the bird does not land on the nest box hole, it would be impossible to know whether it was a male or female.
I need a camera that can film for at least 6 h (on batteries). I plan to film at least 30 nests and to move the cameras around, so I may need about 12 cameras.
Of course, I have a quality vs. quantity vs. cost trade-off.
Somebody suggested using a security camera, but the problem is that there is no screen to see what you are filming and no Wi-Fi in the field for me to connect the camera to my phone.
Others have suggested camcorders, but those models are no longer being sold and they can get pretty expensive.
Any technical suggestions will be super helpful.
Sincerely,
Chase
Relevant answer
Answer
Well Done !!! interesting...
  • asked a question related to Cameras
Question
2 answers
How can I observe moving cells without visible light? I tried an IR LED with an IR-sensitive Bresser ocular camera, but the IR light was too weak.
Relevant answer
Answer
Primary answer:
Digital cameras have infrared-blocking filters.
First, remove the filter which sits on the digital camera close to the sensor.
The microscope must also have high-quality optical elements.
Hope this solves the problem,
Albert
  • asked a question related to Cameras
Question
5 answers
I have been using an RGB camera with a fish-eye lens to automatically take pictures of the sky every 5 min.
I would like to use the data to estimate the aerosol properties. This is possible because the amount of scattered light into each pixel depends on the scattering angle of the incoming Sun light, and on the aerosol properties (e.g. phase function, size distribution, etc.). However, I need to know the pointing of each pixel in the image (i.e. azimuth and zenith) to better than 0.1 deg.
That is why I need to know the lens equation (the relation between pixels and viewing angle), which is measured from the optical axis (which is not necessarily the central pixel, due to misalignment).
Any ideas or suggestions?
Any good references?
Thanks in advance,
Henrique
Relevant answer
Answer
If the camera has simple radial distortion, an easy method to determine the optical center is to use a target that is a grid. The lines that are straightest will pass roughly through the optical axis. Note that this works for any angle of the grid relative to the camera focal plane. So you could rotate your image 90 degrees and overlay the two images. Now translate to align the images. Lines should align at the optical axis if there isn't any tilt between the focal plane and the lens.
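Complementing the grid method above, OpenCV's fisheye module can estimate the full lens model (principal point plus distortion coefficients) from a set of checkerboard images; whether it reaches the 0.1 deg requirement would have to be verified from the reprojection error. A minimal sketch (pattern size and file names hypothetical):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the checkerboard (hypothetical)
objp = np.zeros((1, pattern[0] * pattern[1], 3), np.float64)
objp[0, :, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob("calib_*.jpg"):  # hypothetical calibration shots
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        objpoints.append(objp)
        imgpoints.append(corners.reshape(1, -1, 2).astype(np.float64))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
rms, K, D, _, _ = cv2.fisheye.calibrate(
    objpoints, imgpoints, gray.shape[::-1], K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW)
print("RMS reprojection error [px]:", rms)
print("Optical center (cx, cy):", K[0, 2], K[1, 2])
```

OpenCV's fisheye model is equidistant at its core (r = f*theta with a polynomial correction in theta), so once K and D are known, the azimuth and zenith of every pixel follow directly.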
  • asked a question related to Cameras
Question
4 answers
We are doing a study where airborne drones are used to collect images of blue mussel beds along the coastline. To quantify the cover of these beds, we are looking for a method to obtain the scale of each image. Conventional methods (ground sampling distance or using an object of known size) are not suitable in our case, as you’ll understand from below.
Some details:
- an airborne drone is flown along the coastline at low tide (when blue mussel beds are above water level). The drone pilot is located in a small boat that follows the drone
- the slope of the substrate where the blue mussels are attached varies between 20 degrees and vertical rock walls; hence the camera angle varies between 0 degrees (horizontal view) and roughly -70 degrees, aiming to point the camera directly towards the object.
- to obtain satisfactory pixel resolution of the images (to identify mussels as small as 3 millimetres), distance to the object (blue mussels, substrate) is about 1 to maximum 2 meters, and so is the altitude above water. None of these can be kept constant for all images (due to differing wave height, wind conditions, and slope of substrate, among others)
- the number of images collected along each transect is about 1000. Hence placing an object of known size into each image frame will be very inefficient...
We have thought about using lasers for scale: either one laser with a constant laser-point size independent of the distance to the object, or two lasers mounted parallel so that the distance between them is constant independent of the distance to the object. These lasers could be attached to the drone camera, though they would then need to be of minimal size (so as not to obstruct the camera's movement) and weight (so as not to reduce the drone's flight time too much). Another option is to attach the laser(s) to the drone control console, which is often pointed towards where the images are taken anyway.
Any ideas or input on this, or other options to obtain image scale, are welcome 😊
Regards,
Barbro Haugland
Relevant answer
Answer
Reuben Reyes Besides, I suggest generating the DOM from the achieved 3D point clouds when processing in Agisoft or Pix4D, etc. To do this, the operator should collect some GCPs (ground control points) or image poses from an RTK drone.
  • asked a question related to Cameras
Question
3 answers
Hello everyone, good time.
This is the first time I am choosing a camera; if you can, please guide me.
Do cameras perform software optimizations (on the image captured through the lens), for example to increase image contrast?
Is software optimization generally done by the manufacturer in the camera, or does the user have to perform the desired optimization himself?
Is it possible for the user to apply software changes to increase the image quality of a camera?
Thank you for your attention.
Best regards.
  • asked a question related to Cameras
Question
6 answers
Camera conditions:
Object: integrating sphere
Camera: 4096 x 3000 pixels, 3.45 µm pixel pitch
Relevant answer
Answer
Maybe the data I got from your image will be useful to you.
  • asked a question related to Cameras
Question
4 answers
I'm looking for a dataset of natural images, with no specific requirement on the type of scene, designed for the task of detecting (automatically or visually) the presence and position of small objects. This could come, for example, from an eye-tracking experiment where participants are tasked to look for, e.g., dogs in a picture taken with a camera, or from an image segmentation dataset. The point is that the task is challenging, the category rare, or the segmentation area small and hard to find. Each scene would ideally be associated with a label or set of labels indicating the object(s) to be searched for; explicit positions or segmentation masks are a plus but not required.
Relevant answer
Answer
Mask COCO dataset
  • asked a question related to Cameras
Question
1 answer
Hello, I am a student currently working on an MLA (microlens array) for a multi-focused camera. One question has arisen, so I am asking it here.
In most papers, multi-focused plenoptic cameras adjust the focal lengths by giving the three lens types the same diameter and different RoCs (radii of curvature). However, my MLA is made with lenses of different diameters and different RoCs.
Does this cause any imaging problems? Thank you for your help.
I attached the fabricated MLA's Optical microscopy picture
-sincerely-
Relevant answer
Answer
I created a quantitative rendering of the image you want using an algorithm.
I hope it helps you solve your problem.
With best wishes
  • asked a question related to Cameras
Question
4 answers
Hello
I have to perform localization of an underwater autonomous vehicle (UUV), and for that purpose have a couple of sensors available on board. I know that ultrasonic (sonar) sensors perform well in water. I also have an IMU and a monocular camera available. So my question is: is visual-inertial localization a better solution underwater compared to sonar (ultrasound)? I am thinking of fusing the IMU and the monocular camera for localization and depth calculation instead of ultrasound. Which solution is more appropriate and has more advantages?
Thanks
Relevant answer
Answer
As an affordable and accurate solution I would recommend the ”Underwater GPS“, that we are using with a BlueRov2: https://shorturl.at/floIJ
  • asked a question related to Cameras
Question
3 answers
The goal of the project will be to use time-lapse camera traps (trail cameras) to monitor and enumerate salmon spawning past reference points in several streams. We will be aiming to identify when the peak spawning period occurs for each stream. Looking for recommendations for remote camera set-ups as well as subsequent processing and analysis of images.
Thanks!
  • asked a question related to Cameras
Question
6 answers
Hello everyone, good time.
I want to select a camera (lens + sensor) for a machine vision system. According to machine vision books, the modulation transfer function of the camera is an important factor that affects algorithm performance.
How can I select a suitable MTF value for the camera of my machine vision system?
Are there any criteria for determining suitable image contrast for machine vision algorithms?
Can you please tell me other factors that I must consider when selecting a camera for a machine vision system?
Thank you very much in advance.
Relevant answer
Answer
Mahdi Mansouri Maybe you misunderstood me, or maybe we're talking about different things: the "minimum feature" to be discerned should not be smaller than 10 pixels in height/width; the target itself might require more pixels. It depends.
To clarify for me: what would be your scene, and what would be the target?
Pixel size itself is not important, provided the lens and sensor are matched: for a given pixel count, a larger sensor (i.e., larger pixel area) requires a lens with a longer focal length than a smaller sensor with the same number of pixels.
  • asked a question related to Cameras
Question
4 answers
As you can see, the image was taken by tilting the camera to include the whole building in the scene. The problem is that measurements are not accurate in this perspective view.
How can I correct this image to the right (centered) perspective?
Thanks.
Relevant answer
Answer
Jabbar Shah Syed To correct the perspective, select Edit>Perspective Warp. When you do this, the pointer changes to a different icon. When you click on the image, it generates a grid with nine pieces. Manipulate the grid's control points (on each corner) and create the grid to encompass the whole structure.
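The same correction can also be scripted. A minimal OpenCV sketch (all pixel coordinates hypothetical): choose four image points that correspond to a rectangle on the real facade and warp them to a true rectangle:

```python
import cv2
import numpy as np

img = cv2.imread("building.jpg")  # hypothetical input image

# Four corners of a facade feature in the source image (hand-picked)
src = np.float32([[120, 80], [900, 140], [880, 700], [100, 640]])
# Where those corners should land in a fronto-parallel (centered) view
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, H, (800, 600))
cv2.imwrite("rectified.jpg", rectified)
```

Measurements in the rectified image are then consistent within the warped plane, though anything off that plane remains distorted.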
  • asked a question related to Cameras
Question
4 answers
Using an image, can the height of an object in it be measured if I know the focal length of the camera used to capture the image?
Relevant answer
Answer
If the object of interest is in focus and its distance from the camera is known, you can estimate: object_height [mm] = apparent_height [pixels] x pixel_pitch [mm/pixel] x distance [mm] / focal_length [mm] (the pin-hole camera model). The focal length alone is not enough; the object distance must also be known.
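A worked numeric example of that pin-hole relation (all values hypothetical):

```python
# object_height = pixels * pixel_pitch * distance / focal_length (pin-hole model)
pixels = 500             # apparent height of the object in pixels
pixel_pitch_mm = 0.004   # 4 micron sensor pixels
distance_mm = 5000       # camera-to-object distance -- must be known
focal_length_mm = 35

object_height_mm = pixels * pixel_pitch_mm * distance_mm / focal_length_mm
print(object_height_mm)  # ~285.7 mm
```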
  • asked a question related to Cameras
Question
1 answer
Any recommendations for a good camera (with underwater housing available) to get detailed photoquadrats (10 x 10 cm) of Dendropoma cristatum encrustations; detailed enough to be able to zoom from the photographs and recognize the gastropods? Aim is to measure abundance and calculate ratio of living gastropods to vacant shells.
I have tried with an Olympus TG-6 (point-and-shoot, no manual control) and a Canon G7X Mark II (compact camera, full manual control), yet I cannot get sufficiently sharp images. For the latter camera, I have used manual mode and a flash (with diffuser) to keep a fast shutter speed (1/125) whilst maintaining the smallest aperture (f/11) possible for maximum depth of field.
Relevant answer
Answer
I would suggest using an old Nikonos V camera with the 35 mm lens and the original Nikonos macro kit.
To illuminate the field of view you could use two underwater LED lights like those used with the GoPro system.
For these purposes the Nikonos system is still unsurpassed today.
  • asked a question related to Cameras
Question
4 answers
Hello everyone,
I am trying to obtain actual distances (in meters) using a 3D point cloud. I cannot directly use the z-axis value, as the point cloud was obtained from a monocular camera and is therefore only defined up to scale.
It would be great to get some ideas on how to proceed.
Best regards!
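Since a monocular reconstruction is only defined up to an unknown global scale, the usual fix is to rescale the cloud with an object of known size visible in the scene. A minimal sketch (file name, point indices, and marker size all hypothetical):

```python
import numpy as np

points = np.load("cloud.npy")        # hypothetical N x 3 monocular point cloud

# Two cloud points spanning an object of known real size, e.g. a 30 cm marker
known_size_m = 0.30
p1, p2 = points[100], points[200]    # hypothetical indices of its endpoints

scale = known_size_m / np.linalg.norm(p1 - p2)
points_metric = points * scale       # cloud is now in meters
```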
  • asked a question related to Cameras
Question
1 answer
Hello all,
We are looking at some multispectral images that were captured over a period of 90 minutes with the exposure settings kept fixed. The sensor (Ultracam-D, Vexcel) has been flat-fielded, i.e., the pixel values measure at-sensor radiance consistently across the focal plane. But absolute calibration is missing, and we cannot convert the pixel values into wide-band at-sensor radiance data (offset and gain are unknown). It's a CCD sensor, so a simple radiance = offset + gain * pixel_value relationship should work.
Now, by using repeated observations (same camera position) with a temporal lag of natural targets with reference reflectance values (HCRF observations), we could derive estimates for the offset and gain terms, and the average path radiance.
The bands are centered at wavelengths of 450 (BLU), 520 (GRN), 620 (RED), and 800 (NIR) nm. The spectral response functions are wide and overlap slightly. Its an aerial camera (eight cameras).
It was a sunny morning and aerosol optical thickness values increased mildly during the 90-minute campaign. Radiance measurements of downwelling irradiance (400-700 nm) show an increase of 25% during the campaign owing to the Sun climbing higher.
With the help of the reference reflectance targets, dark pixels (shaded waterways), and repeated observations, we could deduce that
if Lpath (BLU) is 100 units, GRN was 90, RED was 60, and NIR was 40 (on average). The camera was 1.5 km above ground and view zenith angles were 0-25 degrees.
Are we on the right track? We don't have access to MoDTRAN or other software to verify that the proportions are of the right magnitude.
We study differences between tree species in how they respond to increasing solar illumination (and decreasing solar zenith angle), and would like to turn our pixel values into ratio-scale observations of target radiance, so that the differences we observe could be expressed as proportions (the response of species X is 10% of that of species Y). We should be OK if the Lpath ratios above are of the right magnitude.
br ilkka korpela
Now, our estimates
Relevant answer
Answer
Well, answering to myself. Textbook by Iqbal (1983) and Kaufman's works (e.g. 1993) suggest that we are on the right track.
  • asked a question related to Cameras
Question
9 answers
Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos, and deep learning models, machines can accurately identify and classify objects.
Relevant answer
Answer
Dr. Manoharan Subramanian areas such as the evolution of privacy-preserving techniques in fields such as federated learning and federated analytics should provide new directions.
  • asked a question related to Cameras
Question
11 answers
The software will be used for UX and marketing studies where the participants attend remotely using their laptop or PC (with a camera).
What is the best and most comprehensive software company offering eye tracking and facial expression analysis?
Relevant answer
Answer
  • asked a question related to Cameras
Question
6 answers
Hello,
I have a dataset of an objective color measure (color harmony, a numerical value) for images of 24 different objects. Each object was captured under 8 different light sources with 3 different cameras.
What I want to do is study the impact of the camera (3), light source (8), and choice of object (24) on the color harmony scores (576 scores), both their individual impacts and the interactions between them.
Since my color harmony data are not normally distributed, I have to use non-parametric tests.
I tried using the aligned-rank transform, but I get the error df=0 (which makes sense, as I have 576 scores in total = 3x8x24); I don't have any repetition in the data. I can't use a generalized linear mixed model either, as I do not have any random-effect grouping factor.
Can somebody please help me in choosing the right statistical test, and a follow-up post-hoc test as well?
A big thanks in advance!
  • asked a question related to Cameras
Question
10 answers
I'm working on an RGB image classification algorithm for identifying blueberry ripeness stages, for which I must collect photos for the algorithm's training dataset. Is it valid to take the photos using a smartphone camera with enough resolution (e.g., a Xiaomi Redmi Note 10), considering this research is intended to be published in a scientific journal? I've read some papers referring to the use of GoPro cameras for collecting image datasets. I have compared the image quality between my smartphone and GoPro pictures, and the former is considerably better.
Thanks for the answers!
Relevant answer
Answer
Even low-end smartphones can take pictures with sufficient resolution for most analyses. I invite you to read our paper, in which we also tested the impact of image resolution on the accuracy of disease severity measurements using RGB images.
  • asked a question related to Cameras
Question
4 answers
Many tasks performed by autonomous vehicles such as road marking detection, pavement crack detection and object tracking are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at further distance, due to the resolution of the camera, limiting applicability.
Is there an improved version of this technique, i.e., an improved IPM? How does it work? I need this to validate my new automatic road crack detection approach.
Greatly appreciate your review
Relevant answer
Answer
You're welcome, Wissam Kaddah.
Stay happy, stay healthy.
  • asked a question related to Cameras
Question
1 answer
Can I get an electroluminescence camera, an IR camera, and a PV analyzer on rent for research work in India?
Can I get access or work permission for a solar simulator lab in any institute in India for research work?
And what can I use instead of an EL camera?
Please share the link...
Relevant answer
Answer
Maybe.
  • asked a question related to Cameras
Question
4 answers
We are working on a project involving smartphones as measuring devices for environmental stimuli. We are considering using iPhone-based lux meter apps.
Relevant answer
Answer
Has anybody tried to compare a well-calibrated lux meter against a smartphone lux app?
  • asked a question related to Cameras
Question
1 answer
Hello,
I am currently using ZebTrack to analyse some open field data and have managed to analyse ~110 videos that are 17 minutes in length with no problem. However, due to a camera malfunction, 14 videos are only 5 minutes in length, and ZebTrack won't allow me to track the fish, as it produces lots of error messages in the MATLAB console. Does anyone know whether there is a minimum video length for analysis in ZebTrack, and/or whether there is a way around this problem?
Thanks in advance!
Megan
Relevant answer
Answer
  • asked a question related to Cameras
Question
4 answers
I have been entrusted with the task of measuring the cross-sectional area of a furrow (made by farm implements) through image processing or a two-camera system, but since I lack exposure to these methods, I need help with this task.
Relevant answer
Answer
Thank you, sir, for your suggestion; it is going to help me a lot. I will let you know if I have any further queries.
  • asked a question related to Cameras
Question
3 answers
Hello
I have a trinocular Olympus BX40 equipped with a very old low-resolution digital camera. I would like to adapt my Olympus SX50 HS in place of that camera. Is it possible? Has someone tried something similar with other digital camera or microscope models? Just to be clear, I do not want to adapt the camera to the eyepiece. Also, the SX50 HS is not a DSLR.
Thanks for all the help!
Relevant answer
Answer
It is possible. However, instead of trying to fix the camera at the microscope's camera port, focusing the image at an eyepiece using a mobile camera may work better. Good photos can be taken with mobile phones just by placing the phone camera on the eyepiece: fix the phone camera on the eyepiece and adjust the magnification.
  • asked a question related to Cameras
Question
3 answers
I am performing some primary pulp cell cultures, and I have noticed some black dots in my petri dishes.
I use DMEM 10% with antibiotics and an antifungal. I apologize for the poor quality of the video, but I had some issues with the microscope camera as well.
Could you please check the video and the pictures and give me your opinion on them?
Relevant answer
Answer
Could easily be cell debris. Some (most) cells just sometimes push out little blebs of rubbish that float around thereafter.
Of course, it could be mycoplasma, which has a habit of looking a lot like cell debris (and which tolerates most antibiotics relatively well).
What it most likely isn't is...any other kind of bacterial or fungal infection. Most bacterial infections are rapid and dramatic (you'll be able to see millions of tiny dots whooshing around, and the media will turn visibly cloudy), and most fungal infections are either obviously yeast (you can see the actual yeast cell shapes under light microscopy) or obviously filamentous (you'll see the hyphae spreading through your culture).
You could always try a mycoplasma testing kit (it's basically PCR for mycoplasma DNA), but that is usually reserved for cell lines that you're planning to keep for the foreseeable future. For primaries, which are generally prepared fresh for each experiment, it might be a waste of resources.
My advice would be to assume it's cell debris, and just carry on with your experiments.
Obviously, keep all your primaries in a separate incubator to any long-term cell lines, because if it IS mycoplasma, you really don't want that spreading to your cell lines.
  • asked a question related to Cameras
Question
15 answers
Hello
I have to build a deep learning model (with TensorFlow and OpenCV) and connect it to a surveillance camera to detect non-conforming products in a company.
It's my first project and I'm lost; I don't know where to start or where to get a good database (knowing that I've been asked to make a generic model that can be used for any kind of product).
I need your advice to guide me, or recommendations for tutorials that can help.
Thanks in advance
  • asked a question related to Cameras
Question
3 answers
I want to score disease symptoms across several genotypes grown in a greenhouse. Will RGB-D sensors be useful for this purpose?
Relevant answer
Answer
Have you researched the Xbox 360 camera (Microsoft Kinect)?
A few years ago, I used this camera in my master's thesis and it worked well for me.
I used it with Java and also with C++ to reconstruct 3D faces for teleconferencing applications.
I wrote a little about it in these papers:
and
I hope this helps!
  • asked a question related to Cameras
Question
3 answers
...that I should use for my project (a security system using face recognition techniques).
I want the system to be open source and programmable using an easy method or language, with the ability to load my own database into it. What would be a suitable algorithm for it?
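As one open-source starting point, OpenCV ships a pretrained Haar cascade for face detection; actual recognition against your own database requires an additional embedding/matching step on top of this. A minimal webcam sketch:

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```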
  • asked a question related to Cameras
Question
3 answers
Based on the illumination on the camera detector, how can I calculate the angle of the emitted beams? We are given the numerical aperture of the lens as well as the working distance.
Relevant answer
Answer
I believe this can be done if the beam angle is small. In such a case, it could be calculated with a simple geometric proportion, the ratio between the radial position of the beam edge to the focal length of the lens. The radial position may be calculated with a proportionality factor for pixels. The proportionallity factor can be calculated with the size of pixels, or the ratio of the number of pixels in the detector to the size of the detector.
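A numeric version of that proportion (all values hypothetical):

```python
import math

# Small-angle estimate: angle = atan(edge radius on sensor / focal length)
edge_radius_px = 350        # radial pixel position of the beam edge
pixel_size_mm = 0.00345     # 3.45 micron pixels
focal_length_mm = 50.0      # lens focal length

angle = math.atan(edge_radius_px * pixel_size_mm / focal_length_mm)
print(math.degrees(angle))  # half-angle of the beam in degrees, ~1.4 deg
```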
  • asked a question related to Cameras
Question
1 answer
I was doing a digestion of some of the older plasmids in our lab, and after I ran the gel, the bands were very fuzzy. (This wasn't a camera issue; this is how the bands really looked.) Has anyone had this happen before?
(I know the ladder looks odd, it's new and I added too much of it.)
Thanks, everyone!
Relevant answer
Answer
I think all your samples are overloaded. Try loading 3-fold less of everything and run your gel more slowly.
  • asked a question related to Cameras
Question
1 answer
Looking to measure stress in cats using IRT. One study used a FLIR camera, but it is quite pricey. Any other good options? I think FLIR is a good brand, but I don't know enough about the specs to determine what I need.
Relevant answer
Answer
Hello Kathryn, FLIR makes very high-quality thermal cameras; this company specializes in this area, but they are expensive. We use a Testo camera to measure stress in animals, specifically the testo 890-2 model. It is a camera for scientific purposes, has suitable accuracy and many useful functions, and is cheaper than FLIR. The price corresponds to the quality.
  • asked a question related to Cameras
Question
2 answers
I'm looking for methods to perform Timed Up and Go testing in a way that is faster than a camera-based analysis system. I thought about the possibility of using GAITRite because, as far as I know, it identifies several gait parameters. However, as I didn't find an answer to my question, I decided to share it with you. I could be wrong, but GAITRite would correctly capture only the phases from "getting up (from the chair)" to the "180° turn"; the others would be superimposed. Is that correct?
Relevant answer
Answer
Thank you so much for your answer. Certainly, the reading you indicated will help me with this question.
  • asked a question related to Cameras
Question
6 answers
Hello all,
We are using the Inspire 2 drone/Zenmuse x7 camera to capture aerial videos of cetaceans. From these videos, we extract still images from which we then calculate the length and width of individual animals using a software that calculates morphometric measurements based on the pixel dimensions of the images, the lens focal length, and the height of the drone.
The videos are recorded at 5280x2160 resolution at an altitude of 45-50 meters, using the DJI DL 50 mm F2.8 LS ASPH lens, shutter speed 1/500, 29.97 fps. For some of our images, the pixel dimension appears to be 0.00445 mm/pixel, while in other images it is around 0.0034 mm/pixel (pixel dimensions calculated using a control object). This change in pixel dimension does not appear to be correlated with either the height or the tilt of the drone, and we have tried the camera in both auto-focus and manual-focus modes. We are not sure if this is an issue with the camera settings or a sign of a larger issue, but we would very much appreciate any suggestions as to what might be causing this problem!
Thank you all for your time.
Relevant answer
Answer
It depends on a combination of factors, including the equipment (camera, lens, and filters), film used, altitude, camera angle, and weather conditions. There are six primary sources of aerial image distortion: terrain, camera tilt, film deformation, camera lens, atmospheric bending, and other camera errors. Terrain and camera tilt are considered the major sources, contributing the largest amount of error.
Kind Regards
Qamar Ul Islam
  • asked a question related to Cameras
Question
3 answers
Hi
Im using raspberry Pi 4 B and have installed ROS melodic. I have Raspberry Pi Camera V2.1. Would like to send a compressed video with a lower latency as much as possible to the Microrcontroller (ESP32) via sonar . So its important to have a very low latency as the sonar has a low bandwidth. I look at this github camera node raspberry pi camera node for Pi Camera V2 (https://github.com/UbiquityRobotics/raspicam_node) but the compressed video has a latency of more than 2 seconds. Any other way or other approaches or other help to overcome the issue with the latency when sending compressed video in real time?
Thanks
Relevant answer
Answer
What happens if you reduce the resolution of the recorded video? This should reduce the latency. I don't know if this would be suitable for you, but it would be a good test to see if the overall performance improves.
  • asked a question related to Cameras
Question
3 answers
I am working on speed estimation of vehicles in front of our vehicle using video processing.
Relevant answer
Answer
SiaSearch provides a list of 15 open datasets here: https://www.siasearch.io/blog/best-open-source-autonomous-driving-datasets
Also check out DIYRoboCars' list of datasets here:
  • asked a question related to Cameras
Question
9 answers
To get a rough estimate of the depth accuracy of a stereo camera system, I want to use the formula shown in the attached picture. What I am not sure about is what is usually taken as the assumption for the disparity error delta_d. Is it 1 pixel? A certain fraction of a pixel? Why?
Relevant answer
Answer
Dear Victor Sluiter, I found the formula in https://people.inf.ethz.ch/pomarc/pubs/GallupCVPR08.pdf
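For reference, that formula evaluates directly once the assumption on the disparity error is fixed; 1 pixel is a common conservative choice for integer matching, while subpixel algorithms are often credited with roughly 0.1-0.5 pixel, depending on texture and method. A small numeric sketch (values hypothetical):

```python
# dZ = Z**2 * d_disp / (f * B) -- standard pin-hole stereo depth error model
Z = 5.0        # depth in meters
f_px = 1400.0  # focal length in pixels
B = 0.12       # baseline in meters
d_disp = 0.5   # assumed disparity error in pixels (subpixel matching)

dZ = Z ** 2 * d_disp / (f_px * B)
print(dZ)  # ~0.074 m error at 5 m; grows quadratically with depth
```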
  • asked a question related to Cameras
Question
4 answers
Hello,
I am not very familiar with hyperspectral imaging and agriculture, but I am very interested in machine learning models built from data generated by hyperspectral cameras flying over farm fields for agricultural analysis, for instance to support crop dusting or to analyze the quality of the harvest. So far I understand that for the most detailed research it is necessary to place the camera on a drone in order to reach sufficient resolution. However, for certain techniques, satellite data from such cameras seem to be sufficient.
Do you know any literature which discusses whether drone data is really necessary for a certain kind of agricultural analysis?
Do you know whether there is any publicly or commercially available database with such satellite data, from which such models could be produced, in order to help farmers who might not be able to afford hiring a start-up or another company to fly a drone over their fields?
  • asked a question related to Cameras
Question
2 answers
I'm working with a long-term camera dataset to assess activity patterns and overlap of multiple species. The data I have is in a csv/excel format consisting of the raw data from the camera study (every photo observation).
I need to subset this dataset to just the temporally independent observations so that I can input it into the overlap package for analysis. However, the only package I'm aware of for this preparation is camtrapR, which requires access to the directory of tagged photos (which I do not have).
I'm wondering if anyone has a recommendation of a package or code I can use to identify the independent detections, so that I can prepare the data for input into the overlap package, starting with a csv/excel file of all detections.
Thanks in advance for your help!
Relevant answer
Answer
In addition to "Overlap" package, I recommend "behaviouR" package (https://bookdown.org/djc426/behaviouR-R-package-tutorials/computer-lab-6-analyzing-camera-trap-data-.html#part-2-analyze-your-serengeti-camera-trap-data). Here you can produce very nice density activity curve. I also suggest clock24.plot function in package "plotrix" (http://cran.r-project.org/web/packages/plotrix/index.html). It produces radial plot of activity pattern of animals based on hourly counts of events.
  • asked a question related to Cameras
Question
8 answers
We have already completed a project where we have done image processing using raspberry pi. Now we want to do the project using Arduino / NodeMCU.
Relevant answer
Answer
The OV2640 mini camera module can be used for your requirement.