Questions related to Cameras
Can anybody recommend the optimal (cheap/easy to buy and use) solution to scan a small (200k) herbarium collection?
I am unsure whether I should look at scanners or choose a camera. In the first case, dedicated scanners are very expensive. In the second case, I am not sure about image quality (i.e. images may be affected by distortion). I know that Kew used HerbScan, which is in fact an Epson 10000 XL system. Maybe I could construct something like this myself if somebody could provide me with technical sketches. On the other hand, there is the relatively cheap overhead scanner Fujitsu SV600, but I am not sure how well it handles volumetric objects such as herbarium sheets.
Any advice will be highly appreciated!
Hi dear researchers,
In everyday life we make decisions: buying a mobile phone, buying a car, choosing an investment, choosing suppliers, hiring employees, etc. To make a better selection, we need alternatives, criteria, a decision model, and so on. However, not all decision criteria have the same importance or preference. For instance, when buying a mobile phone you consider storage capacity, looks, price, camera, etc. How do you calculate the weights for each criterion?
Learn how to estimate criteria weights objectively using the mean, standard deviation and variance in this video.
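The standard-deviation weighting idea described above can be sketched in a few lines. This is a minimal illustration, assuming scores have already been normalized to a common 0-1 scale; the phone-buying numbers are hypothetical:

```python
import statistics

def criteria_weights(scores):
    """Objective criteria weights via the standard-deviation method.

    `scores` is a list of alternatives, each a list of normalized
    scores per criterion; a criterion that varies more across the
    alternatives receives a larger weight.
    """
    n_criteria = len(scores[0])
    stdevs = [statistics.pstdev([row[j] for row in scores])
              for j in range(n_criteria)]
    total = sum(stdevs)
    return [s / total for s in stdevs]

# Hypothetical phone-buying example: columns = storage, looks, price, camera
alternatives = [
    [0.8, 0.6, 0.4, 0.9],
    [0.5, 0.7, 0.9, 0.4],
    [0.2, 0.6, 0.7, 0.6],
]
weights = criteria_weights(alternatives)
print(weights)  # weights sum to 1; higher-variance criteria weigh more
```

Replacing `pstdev` with `pvariance` gives the variance-based variant; the entropy method follows the same normalize-then-weight pattern.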
I am using a Seek thermal camera to track cooked food, as in this video.
As I observed, the measured food temperature was quite good (i.e. close to the ambient temperature of ~25 °C, confirmed by a thermometer). However, when I placed the food on a hot pan (~230 °C), the food's apparent temperature suddenly jumped to 40 °C, while the thermometer confirmed the food's temperature was still around 25 °C.
Suspecting that the background temperature of the hot pan was the source of the error, I made a hole in a sheet of paper and put it in front of the camera's lens, so that the camera could only see the food through the hole and no longer the hot pan. The food temperature read by the camera was then correct again, but it became wrong again when I took the paper away.
I would appreciate any explanations of this phenomenon and solution, from either physics or optics
I have 3 robots and I know the start and end coordinates of each of them. For example, for Robot 1 I have the coordinate pairs (Xr1, Yr1) and (Xr2, Yr2), and the same for Robots 2 and 3. I have lateral cameras, so I can obtain visual information. I would like to get the robot-to-robot relative position over time. The camera updates every 0.5 seconds. How can I approach this problem, and what are possible solutions? Would Hidden Markov Models be a good approach, or could semantic mapping be helpful?
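Before reaching for HMMs, a simple baseline is worth sketching: if each robot moves roughly in a straight line between its known start and end points, the relative position at each 0.5 s camera update follows from linear interpolation. The coordinates below are hypothetical:

```python
def interpolate(start, end, t, total_time):
    """Linear position estimate at time t between start and end points."""
    a = min(max(t / total_time, 0.0), 1.0)
    return (start[0] + a * (end[0] - start[0]),
            start[1] + a * (end[1] - start[1]))

def relative_positions(r1, r2, total_time, dt=0.5):
    """Relative vector (robot2 - robot1) at every camera update (every dt s).

    r1, r2: ((x_start, y_start), (x_end, y_end)) for each robot.
    """
    steps = int(total_time / dt) + 1
    out = []
    for k in range(steps):
        t = k * dt
        x1, y1 = interpolate(r1[0], r1[1], t, total_time)
        x2, y2 = interpolate(r2[0], r2[1], t, total_time)
        out.append((t, x2 - x1, y2 - y1))
    return out

# Hypothetical coordinates: robot 1 goes (0,0)->(4,0), robot 2 goes (0,2)->(4,2)
track = relative_positions(((0, 0), (4, 0)), ((0, 2), (4, 2)), total_time=2.0)
print(track[0], track[-1])  # relative offset stays (0, 2) for parallel paths
```

In practice the camera detections would replace the interpolated positions, and a filter (e.g. Kalman) would smooth the relative track between updates.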
I am looking for a publicly available video/image dataset from surveillance cameras for detecting violent events such as fights. A surveillance-camera dataset that merely contains such a class would also be helpful. I have collected the UCF-Crime and NTU CCTV-Fights datasets; I want to know how to get more datasets like these, or how to collect such videos myself.
A CCD camera with a given pixel size captures the image. I want to calibrate the image and find a constant magnification factor valid for all the images.
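If the camera-to-object distance is fixed, the magnification can be calibrated once from a target of known size. A minimal sketch, with hypothetical numbers (a 10 mm target and a 4.65 µm pixel pitch are assumptions):

```python
def magnification(object_size_mm, image_size_px, pixel_pitch_mm):
    """Magnification from a calibration target of known physical size.

    object_size_mm : real size of the target (e.g. a ruler graduation)
    image_size_px  : size of the target in the image, in pixels
    pixel_pitch_mm : physical size of one CCD pixel (from the datasheet)
    """
    image_size_mm = image_size_px * pixel_pitch_mm
    return image_size_mm / object_size_mm

# Hypothetical numbers: a 10 mm target spans 500 px on a 4.65 µm-pitch CCD
m = magnification(10.0, 500, 0.00465)
mm_per_px = 10.0 / 500  # scale factor for measuring objects in the images
print(m, mm_per_px)
```

The same factor holds for all images only while the working distance and lens settings stay unchanged.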
I am trying to deploy a neural network to the FPGA board mentioned in the question. So far I have succeeded with simple projects provided by MathWorks, such as:
and right now I would like to replace the images sent from the PC with images from a camera (ideally live video) connected directly to the FPGA board, making it a standalone machine covering everything from image acquisition and preprocessing to image postprocessing.
The best example I found is:
But it requires a sister card which I don't have, so before purchasing I would like to explore other options. If possible, I would like to stick with MATLAB/Simulink and HDL Coder.
Thank you in advance.
I face a technical issue of how to measure uniformity of infrared radiation on the target surface.
Namely, the setup is as follows. A halogen lamp is mounted at the end of a pipe light guide (see Fig. 1). When the halogen lamp is on, radiation (visible light and infrared) is delivered to the target surface (ca. 4 x 4 cm). The total power of the radiation delivered to the surface is high (ca. 200 W, which gives 13 W per cm2). What I need to know is the spatial distribution of the radiation on the target surface (to be able to find out how uniform it is). The measuring unit must withstand several seconds of measurement.
Several ideas and related issues:
1. An IR camera could be placed at the end of the pipe. However, any reasonable camera will be burnt at once, or at least oversaturated, as the power delivered is extreme.
2. Cover the IR camera with a filter. This does not really work, as the filter heats up and then emits radiation itself, i.e., I do not observe the real distribution that would occur with no filter applied.
3. IR photodiodes. I imagine I could place several IR photodiodes at the end of the pipe, but again they are heavily oversaturated at once, and again filtering does not really work.
4. Thermally reactive paper/material(?). I am struggling to find such a material that could withstand the power involved and provide any reasonable resolution.
Do you have any ideas on that?
Dear all friends,
I need a fiber optic camera with the following features:
- Resistant to a maximum temperature of about 120 °C
- Resistant to a maximum pressure of 100 bar
I would be glad and appreciative if anyone could send me some information about the brand, and other details of the camera he/she used.
I am about to start working on 6D pose estimation for object grasping based on point clouds. In our lab we have the following:
- AUBO i5 industrial manipulator.
- COBOT 3D camera that will give us a point cloud of the scene. The camera will be attached to the manipulator in the eye-in-hand configuration (mounted on the manipulator's gripper / end effector).
A deep-learning-based method will be used for 6D pose estimation of the target object.
The 6D pose will be estimated on my laptop. How can I send the final result, i.e. the pose estimate, to the robot in order to control it and eventually pick and place the target object?
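One common pattern is to send the pose from the laptop to a small server on the robot controller (or on a PC driving it through the vendor SDK) over TCP. This is only a generic sketch, not the AUBO SDK's own API; the host, port and pose convention (x, y, z plus a rotation triplet) are assumptions:

```python
import socket
import struct

def pack_pose(x, y, z, rx, ry, rz):
    """Serialize a 6D pose (position + orientation) as six doubles."""
    return struct.pack("!6d", x, y, z, rx, ry, rz)

def unpack_pose(payload):
    """Inverse of pack_pose, for the robot-side listener."""
    return struct.unpack("!6d", payload)

def send_pose(host, port, pose):
    """Send the estimated pose to a robot-side controller over TCP."""
    with socket.create_connection((host, port)) as s:
        s.sendall(pack_pose(*pose))

# The robot-side script would listen on the same port, unpack the six
# doubles, and turn them into a motion command via the vendor SDK.
payload = pack_pose(0.40, -0.10, 0.25, 0.0, 3.14, 0.0)
print(unpack_pose(payload))
```

If the lab already runs ROS, publishing a `geometry_msgs/Pose` topic is the more idiomatic route; the socket version is just the dependency-free fallback.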
In the attached file I have shown the thermal image captured during live preview from my camera.
I am able to continuously fetch the minimum, maximum and mean grayscale level (GL) values, along with the GL value at the center (blue circle), at the backend during live video.
I want to know whether I can use these GL values to get the corresponding temperature via some equation or function.
Suggestions for anything else I could use for this purpose are also welcome.
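If the camera does not expose a radiometric API, a common workaround is a two-point linear calibration against references of known temperature. This is only a sketch under the assumption of a locally linear GL response (real microbolometer response is only approximately linear); the reference numbers are hypothetical:

```python
def gl_to_temp(gl, gl_ref1, t_ref1, gl_ref2, t_ref2):
    """Map a grey-level value to temperature by linear interpolation
    between two reference points of known temperature (e.g. a blackbody
    source, or objects measured with a contact thermometer).
    """
    slope = (t_ref2 - t_ref1) / (gl_ref2 - gl_ref1)
    return t_ref1 + slope * (gl - gl_ref1)

# Hypothetical calibration: GL 5000 at 20 °C and GL 9000 at 100 °C
print(gl_to_temp(7000, 5000, 20.0, 9000, 100.0))  # -> 60.0
```

Emissivity differences between targets will shift the result, so the calibration only holds for surfaces similar to the references.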
Reality is likely a frame-stack: light particles hit the eye at different times, but all these individual frames impact the receptors one after another and are stacked together.
High-frequency variable-focus cameras can pick up a 3D image with a single lens.
This means it is not so strange that we still see in 3D with one eye closed.
Under salvinorin A, frames can shake and bend vigorously.
I am measuring the tensile strength of thin (<1cm) dumbbell-shaped nanofiber membranes.
I have developed a custom tensile testing setup which primarily consists of:
1. A movable stage.
2. A load cell of appropriate force rating is attached to the moving stage.
3. A DC power supply to excite the load cell.
4. A digital multimeter connected to the DC power supply to measure the output voltages from the load cell.
5. A DSLR camera to record photos of the nanofiber membrane at regular intervals during the tensile testing to measure strain.
I am using a camera to record photos of the nanofiber membrane to calculate strain using a GUI based MATLAB program that tracks the edge of the moving end. I am using an optical method to measure strain because I cannot attach an extensometer to the nanofiber membranes as they are fragile and it won't be feasible.
The target of my experiments is to obtain voltage outputs from the digital multimeter every 1 sec and photos from the DSLR camera also every 1 sec with the measurements from the digital multimeter and the DSLR camera being in perfect sync.
I am currently triggering the digital multimeter using an Excel script provided by the manufacturer of the digital multimeter and I am using an intervalometer to trigger the DSLR camera. The problem is I am triggering both instruments manually and therefore the measured voltage and recorded photos are not in sync.
Can anyone please give me suggestions on how to synchronize the two instruments, which are from different manufacturers?
I would like to understand the link between the GRD (ground resolved distance), an experimental value, and the GSD (ground sample distance), a theoretical value. I saw the following formula used somewhere: GRD = 2*k*GSD, where the factor 2 converts one sample into a resolvable pair (cyc/mm) and k represents a factor that includes all other influences, such as turbulence, aerosol or camera aberrations.
In general k > 1; if k = 1, we speak of an ideal system.
I would like to know whether there is a formula to calculate k directly, in order to find the GRD. Is there a maximum value of k beyond which one can say that the only influence is the atmosphere and that the camera is diffraction-limited?
Also, are there sources discussing this factor? I have not found any on the internet.
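In code form the relation and its inversion are straightforward; the numbers below are hypothetical and k is normally estimated empirically (e.g. from resolution targets) rather than from a closed formula:

```python
def grd(gsd, k):
    """Ground resolved distance from ground sample distance:
    GRD = 2 * k * GSD, where the factor 2 converts one sample into a
    resolvable line pair and k >= 1 lumps together turbulence, aerosol
    and camera aberrations (k = 1 is the ideal, sampling-limited case).
    """
    return 2.0 * k * gsd

def k_from_measurement(grd_measured, gsd):
    """Invert the relation to estimate k from a measured GRD."""
    return grd_measured / (2.0 * gsd)

print(grd(0.30, 1.4))                  # e.g. 0.30 m GSD, k = 1.4 -> 0.84 m
print(k_from_measurement(0.84, 0.30))  # -> 1.4
```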
Thank you very much
In my patch-clamp rig we have an IR-1000 camera connected to a BNC-video-to-VGA converter, which is attached to a monitor through a VGA cable.
This setup worked fine, but recently we get no signal on the monitor, not even in the test function. We tried replacing all the cables, the video converters, the monitor and the IR-1000 controller; nothing worked. The camera itself is working, since we get the image when we plug the BNC video cable directly into our old CCTV monitors.
The lab member who previously used this rig said he had the same issue, but that the setup started working again by itself after a while.
Has anyone had this type of issue before? Is there any way to resolve it, or any ideas as to why this might be happening?
Actually, I have two interferograms (containing Young's fringes). One interferogram was recorded without any perturbation in the path of the light rays. While recording the second interferogram, I placed a glass plate in the path of one light beam. Now I want to extract information about the refractive index of the glass plate by analysing the interferograms. Both interferograms were recorded using a CCD camera. Can anyone help me?
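The underlying relation is that the plate adds an optical path difference (n - 1)·t in one arm, which displaces the fringes by N periods. A minimal sketch of the inversion, with hypothetical values (fringe shift, wavelength and thickness are assumptions, and the plate is assumed perpendicular to the beam):

```python
def refractive_index(fringe_shift, wavelength, thickness):
    """Refractive index of a plate from the fringe shift it introduces:

        (n - 1) * t = N * lambda   =>   n = 1 + N * lambda / t

    fringe_shift : N, fringe displacement in units of the fringe period
                   (measured by comparing the two interferograms)
    wavelength   : light wavelength (same length unit as thickness)
    thickness    : plate thickness
    """
    return 1.0 + fringe_shift * wavelength / thickness

# Hypothetical values: 790-fringe shift, 633 nm light, 1 mm thick plate
print(refractive_index(790, 633e-9, 1e-3))  # ≈ 1.50
```

In practice N is extracted from the two recorded interferograms, e.g. by Fourier-based fringe analysis, and fractional shifts count too.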
I set up a number of control experiments indoors, placing soil samples in a number of boxes with openings above them according to different treatments. I would like to use this multispectral sensor to take pictures of these soil samples indoors, but it seems that it must be used in conjunction with a GPS tool? Is it possible to use it without GPS? It is like a digital camera that can take pictures manually.
I used beam profilers such as the camera-based Cinogy beam profiler and the BP209 beam profiler from Thorlabs to measure the beam profile of the laser output directly from a DFB laser's output fiber. The beam profile is a perfect Gaussian shape. I then replaced the original beam profiler with an industrial CCD camera. The beam profile looks somewhat different: it has some stripes which don't change much even when I adjust the position and layout of the output fiber and cleave the fiber over and over again. It looks like some kind of interference, but I don't know exactly how it occurs and why it is only visible to the industrial CCD camera and not to the professional beam profilers. Can anyone explain this? The attached files are the beam profiles captured by the CCD camera and the spectrum of the DFB laser.
The fourth wall is a term that television borrowed from its origin, the theater, where it is represented by the interface of the screen displaying the work. A scene on television is a theatrical clip within a group of scenes that confines the subject between four walls, the fourth of which is removed and replaced by the camera, which represents the viewer's angle of view. A difference between the two media can be noted: in the theater, the distance and viewing angle between the audience and the stage are fixed and do not change, whereas on television this distance and angle are redefined with every shot, even though the room remains closed.
I'm looking at conducting a thermographic survey of a building to identify thermal bridging etc. There seem to be two options when taking a series of thermal images. The first is to let the camera automatically set the temperature range for each image. This gives a nice rainbow range of colours and makes thermal bridging more obvious. The second option is to fix the temperature range for all of the images. This allows the images to be directly compared to one another, but can mean the images are more bland since the fixed range is likely to be wider than the variable range.
Which is the best approach?
I am looking for an open-source software tool to handle hyperspectral images acquired with a Headwall camera on a drone. Otherwise, please suggest how to go about pre-processing the raw data using programming languages such as MATLAB, Python, R, etc. Please help me find one. (For example, I have all the data, such as frameDetails, IMU data and, finally, the raw image. How do I georectify the data?)
For my internship I'm looking to do an experiment which involves the analyses of step speed and step length and variabilities in these properties. I'm looking to use a camera system for ease of use (we want to keep setup time for each participant as low as possible) though simple markers can be added.
Which (preferably cheap or free) software would be able to analyze footage from (multiple) cameras and determine the factors mentioned earlier? Ideally the software is relatively easy to use and able to export the data to MATLAB.
Thanks in advance!
We have collected data from an automotive radar sensor and a camera sensor simultaneously, and we wish to fuse the track data from these two sensors. Could anyone point us to methods, approaches and techniques pertaining to this important problem, and to any sample code that is available? That would help us significantly.
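The simplest track-to-track combination is inverse-variance (maximum-likelihood) fusion of the two sensors' estimates of the same state component; full systems use Kalman filters with association, but this shows the core idea. The variances below are hypothetical:

```python
def fuse_estimates(x_radar, var_radar, x_cam, var_cam):
    """Inverse-variance fusion of two independent estimates of the same
    quantity, e.g. the longitudinal position of a tracked object.

    Radar typically has low range variance while the camera has low
    lateral/angular variance, so in practice fusion is done per axis.
    """
    w_radar = 1.0 / var_radar
    w_cam = 1.0 / var_cam
    x_fused = (w_radar * x_radar + w_cam * x_cam) / (w_radar + w_cam)
    var_fused = 1.0 / (w_radar + w_cam)
    return x_fused, var_fused

# Hypothetical single-axis example: radar says 50.2 m (var 0.04 m²),
# camera says 51.0 m (var 0.36 m²)
x, v = fuse_estimates(50.2, 0.04, 51.0, 0.36)
print(x, v)  # fused estimate lies closer to the more certain radar value
```

Note this assumes independent errors; correlated tracks call for covariance intersection instead.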
I hope you are doing well.
Do you know which camera is currently used for the GRENOUILLE technique to measure the pulse length of ultra-short pulses?
What features should the camera provide? Does it need to be a fast camera, or does a regular camera work as well?
This article names a camera, but I am not sure whether it is still used in recent experiments or is out of date.
Attached is a document in this regard.
I appreciate your time and any feedback.
I tried to use an industrial CCD camera to capture the pulsed laser beam profile from an optical fiber. When I pointed the camera at the fiber directly, the beam profile was perfect. But when I added two lenses, one to collimate the laser from the fiber and one to focus it onto the camera, the laser profile became distorted and quite unstable in shape and diameter. However, when I replaced the CCD camera with a professional camera-based beam profiler, keeping the lenses unchanged, the beam profile was again perfect and stable. Also, the CCD camera works fine when the pulsed laser is replaced by a continuous-wave light source such as ASE (amplified spontaneous emission). Can anyone help explain the CCD camera's strange behavior when imaging the pulsed laser?
I would like to measure provisioning rates in Eurasian Blue tits. My plan is to color ring them, so that I can distinguish between male and female.
In my system they breed in nest boxes; however, I do not have the possibility to place a camera within a nest box. PIT tags are also not an option for me. Visual surveys can be problematic, because if the bird does not land on the nest-box hole, it is impossible to know whether it was a male or a female.
I need a camera that could film at least 6h (with batteries). I plan to film at least 30 nests and to move the cameras around, so I may need about 12 cameras.
Of course I have a quality vs quantity vs cost trade off.
Somebody suggested using a security camera, but the problem is that there is no screen to see what you are filming and no WIFI for me to connect the camera to my phone in the field.
Others have suggested camcorders, but their models are no longer being sold and they can get pretty expensive.
Any technical suggestions will be super helpful.
How can I observe moving cells without visible light? I tried with an IR LED and an IR-sensitive Bresser ocular camera, but the IR light was too weak.
I have been using an RGB camera with a fish-eye lens to automatically take pictures of the sky every 5 min.
I would like to use the data to estimate the aerosol properties. This is possible because the amount of scattered light into each pixel depends on the scattering angle of the incoming Sun light, and on the aerosol properties (e.g. phase function, size distribution, etc.). However, I need to know the pointing of each pixel in the image (i.e. azimuth and zenith) to better than 0.1 deg.
That is why I need to know the lens equation (relation between pixels and viewing angle) which is measured from the optical axis (which might not necessarily be the central pixel due to misalignment).
Any ideas or suggestions?
Any good references?
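One practical route is to fit an odd-polynomial projection model r(θ) = c1·θ + c3·θ³ to calibration points whose true directions are known (e.g. the Sun's azimuth/zenith at the image timestamps, or stars at night); the principal point can then be refined by minimizing the residuals over candidate centres. A minimal sketch with synthetic data (the model order and coefficients are assumptions):

```python
import numpy as np

def fit_lens_equation(theta, r):
    """Fit a fisheye projection r(theta) = c1*theta + c3*theta**3.

    theta : angles from the optical axis (radians) of calibration points
    r     : their radial distances from the principal point (pixels)
    """
    A = np.column_stack([theta, theta**3])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs  # (c1, c3)

# Synthetic check: generate points from a known model and recover it
theta = np.linspace(0.05, 1.5, 30)
r = 700.0 * theta - 25.0 * theta**3
c1, c3 = fit_lens_equation(theta, r)
print(c1, c3)  # ≈ 700, -25
```

To reach 0.1 deg accuracy, many well-distributed calibration points and a joint fit of the centre and the coefficients are likely needed; OpenCV's fisheye calibration module implements a closely related model.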
Thanks in advance,
We are doing a study where airborne drones are used to collect images of blue mussel beds along the coastline. To quantify the cover of these beds, we are looking for a method to obtain the scale of each image. Conventional methods (ground sampling distance or using an object of known size) are not suitable in our case, as you’ll understand from below.
- an airborne drone is flown along the coastline at low tide (when blue mussel beds are above water level). The drone pilot is located in a small boat that follows the drone
- the slope of the substrate where the blue mussels are attached varies between 20 degrees to vertical rock walls, hence the camera angle varies between 0 degrees (horizontal view) and negative ~70 degrees, aiming to point the camera directly towards the object.
- to obtain satisfactory pixel resolution of the images (to identify mussels as small as 3 millimetres), distance to the object (blue mussels, substrate) is about 1 to maximum 2 meters, and so is the altitude above water. None of these can be kept constant for all images (due to differing wave height, wind conditions, and slope of substrate, among others)
- the number of images collected along each transect is about 1000. Hence placing an object of known size into each image frame will be very inefficient...
We have thought about using lasers for scale: either one laser with a constant laser-point size independent of distance to the object, or two lasers mounted parallel to each other so that the distance between their points is constant independent of distance to the object. These lasers could be attached to the drone camera, though they would then need to be of minimal size (to not obstruct movement of the camera) and weight (to not reduce the flight time of the drone too much). Another option is to attach the laser(s) to the drone control console, which is often pointed towards where the images are taken anyway.
Any ideas or input on this, or other options to obtain image scale, are welcome 😊
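With the two-parallel-laser option, recovering the scale of each frame reduces to measuring the pixel distance between the two dots. A minimal sketch (the 50 mm separation and pixel coordinates are hypothetical; validity assumes the dots lie on roughly the same plane as the mussels):

```python
import math

def mm_per_pixel(laser_separation_mm, p1, p2):
    """Image scale from two parallel lasers a known distance apart.

    p1, p2: pixel coordinates (x, y) of the two laser dots in the image.
    """
    pixel_dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return laser_separation_mm / pixel_dist

# Hypothetical frame: dots 50 mm apart, imaged at (1000, 800) and (1360, 800)
scale = mm_per_pixel(50.0, (1000, 800), (1360, 800))
print(scale)       # mm per pixel for this frame
print(scale * 22)  # a feature spanning 22 px is then about 3 mm
```

Detecting the dots automatically (e.g. by thresholding on the laser colour) would let the scale be computed for all ~1000 images per transect without manual work.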
Hello everyone, good time.
This is the first time I am choosing a camera; please guide me if you can.
Do cameras perform software optimizations on the image captured through the lens, for example to increase the image contrast?
Is software optimization generally done by the manufacturer in the camera, or does the user have to do the desired optimization himself?
Is it possible for the user to apply software changes to increase the image quality of a camera?
Thank you for your attention.
I'm looking for a dataset of natural images, with no specific requirement on the type of scene, designed for the task of detecting (automatically or visually) the presence and position of small objects. This could come for example from an eye-tracking experiment where participants are tasked to e.g. look for dogs in a picture taken with a camera, or an image segmentation dataset. The point is that the task is challenging/the category is rare/the segmentation area is small and hard to find. Each scene would ideally be associated to a label or set of labels indicating the object/s to be searched for, an explicit position or the segmentation masks are a plus but not required.
Hello, I am a student currently working on an MLA (microlens array) for a multi-focus camera. One question has arisen, so I am asking it here.
In most papers, multi-focus plenoptic cameras adjust the focal lengths by giving the three lens types the same diameter but different RoCs. My MLA, however, is made with lenses of different diameters and different RoCs.
Does this cause any imaging problems? Thank you for your help.
I have attached an optical-microscopy picture of the fabricated MLA.
I have to perform localization of an unmanned underwater vehicle (UUV) and, for that purpose, have a couple of sensors available on board. I know that ultrasonic (sonar) sensors perform well in water. I also have an IMU and a monocular camera available. So my question is: is visual-inertial localization a better solution underwater than sonar (ultrasound)? I am considering fusing the IMU and the monocular camera for localization and depth calculation instead of using ultrasound. Which solution is more appropriate and has more advantages?
The goal of the project will be to use time-lapse camera traps (trail cameras) to monitor and enumerate salmon spawning past reference points in several streams. We will be aiming to identify when the peak spawning period occurs for each stream. I am looking for recommendations for remote camera setups as well as for the subsequent processing and analysis of the images.
Hello everyone, good time.
I want to select a camera (lens + sensor) for a machine vision system. According to machine vision books, the modulation transfer function (MTF) of the camera is an important factor that affects algorithm performance.
How can I select a suitable MTF value for the camera of my machine vision system?
Are there any criteria for determining suitable image contrast for the machine vision algorithms?
Can you please tell me other factors that I must consider for selecting a camera for a machine vision system?
thank you very much in advance.
As you can see, the image was taken by tilting the camera to include the whole building in the scene. The problem is that measurements are not accurate in this perspective view.
How can I correct this image to the right (centered) perspective?
Using an image, can the height of an object present in it be measured if I know the focal length of the camera used to capture the image?
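Under the pinhole model, the focal length alone is not enough: the similar-triangles relation also needs the camera-to-object distance (or, equivalently, a reference object of known size at the same distance). A minimal sketch with hypothetical numbers (pixel pitch, distance and pixel count are assumptions):

```python
def object_height(height_px, pixel_pitch_mm, focal_length_mm, distance_mm):
    """Pinhole-model height estimate:

        H / D = h_sensor / f   =>   H = h_sensor * D / f

    height_px      : object height in the image, in pixels
    pixel_pitch_mm : physical size of one sensor pixel
    distance_mm    : camera-to-object distance D (must be known!)
    """
    h_sensor_mm = height_px * pixel_pitch_mm
    return h_sensor_mm * distance_mm / focal_length_mm

# Hypothetical: object spans 1200 px, 4 µm pixels, f = 24 mm, D = 10 m
print(object_height(1200, 0.004, 24.0, 10000.0))  # -> 2000 mm (2 m)
```

If the distance is unknown, a reference object of known height in the same plane gives the scale directly: H = H_ref * (h_px / h_ref_px).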
Any recommendations for a good camera (with underwater housing available) to get detailed photoquadrats (10 x 10 cm) of Dendropoma cristatum encrustations; detailed enough to be able to zoom from the photographs and recognize the gastropods? Aim is to measure abundance and calculate ratio of living gastropods to vacant shells.
I have tried with an Olympus TG-6 (point and shoot, no manual control) and Canon G7X Mark II (compact camera, full manual control), yet I cannot get sufficiently sharp images. For the latter camera, I have used manual mode and a flash (with diffuser) to keep a fast shutter speed (1/125) whilst maintaining the smallest possible aperture (f/11) possible for maximum depth of field.
I am trying to obtain actual distances (in meters) from a 3D point cloud. I cannot use the z-axis value directly, as the point cloud is obtained from a monocular camera and is therefore only defined up to scale.
It would be great if I could get some ideas on how to proceed.
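A common way out is to recover the metric scale from one reference of known real size visible in the cloud (or from known camera height, an IMU, etc.), then apply that single factor to all distances. A minimal sketch with hypothetical coordinates (the doorway reference is an assumption):

```python
import math

def metric_scale(ref_p1, ref_p2, ref_real_m):
    """Monocular reconstructions are only defined up to scale; recover
    the metric scale factor from one object of known real size."""
    return ref_real_m / math.dist(ref_p1, ref_p2)

def scaled_distance(p1, p2, scale):
    """Euclidean distance between two cloud points, in meters, after
    applying the recovered scale factor."""
    return scale * math.dist(p1, p2)

# Hypothetical: a 0.9 m-wide doorway spans 0.30 units in the point cloud
s = metric_scale((0.0, 0.0, 2.0), (0.0, 0.3, 2.0), 0.9)
print(scaled_distance((0, 0, 1.0), (0, 0, 2.0), s))  # -> 3.0 m
```

Several references spread through the scene let you average the scale and check for drift in the reconstruction.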
We are looking at some multispectral images that were captured over a period of 90 minutes with exposure settings kept fixed. The sensor (UltraCam-D, Vexcel) is flat-fielded, i.e. the pixel values measure at-sensor radiance similarly across the focal plane. But absolute calibration is missing, and we cannot convert the pixel values into wide-band at-sensor radiance data (offset and gain are unknown). It is a CCD sensor, so a simple relationship radiance = offset + gain * pixelvalue should hold.
Now, by using repeated observations (same camera position) with a temporal lag of natural targets with reference reflectance values (HCRF observations), we could derive estimates for the offset and gain terms, and the average path radiance.
The bands are centered at wavelengths of 450 (BLU), 520 (GRN), 620 (RED), and 800 (NIR) nm. The spectral response functions are wide and overlap slightly. Its an aerial camera (eight cameras).
It was a sunny morning and aerosol optical thickness values increased mildly during the 90-minute campaign. Radiance measurements of downwelling irradiance (400-700 nm) show an increase of 25% during the campaign owing to the Sun climbing higher.
With the help of the reference reflectance targets, dark pixels (shaded waterways) and repeated observations, we could deduce that
if Lpath (BLU) is 100 units, GRN was 90, RED was 60 and NIR was 40 (on average). The camera was 1.5 km above ground and view zenith angles were 0-25 degrees.
Are we on the right track? We don't have access to MoDTRAN or other software to verify that the proportions are of the right magnitude.
We study differences between tree species in how they respond to increasing solar illumination (and decreasing solar zenith angle), and we would like to turn our pixel values into ratio-scale observations of target radiance, so that the differences we observe can be expressed as proportions (the response of species X is 10% of that of species Y). We should be OK if the Lpath ratios above are of the right magnitude.
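The per-band offset/gain estimation described above amounts to a small least-squares fit over the reference targets. A sketch with synthetic numbers (the DN values, offset 12 and gain 0.05 are assumptions, purely to show the fit recovers them):

```python
import numpy as np

def calibrate_band(dn, radiance):
    """Recover offset and gain for one band from reference targets.

    dn       : pixel values over targets with known at-sensor radiance
    radiance : corresponding reference radiances (from HCRF targets
               plus modelled/estimated path radiance)
    Solves radiance = offset + gain * DN in the least-squares sense;
    at least two well-separated targets (e.g. a dark waterway and a
    bright reference surface) are needed per band.
    """
    A = np.column_stack([np.ones(len(dn)), dn])
    (offset, gain), *_ = np.linalg.lstsq(A, radiance, rcond=None)
    return offset, gain

# Synthetic check with an assumed offset of 12 and gain of 0.05
dn = np.array([40.0, 120.0, 800.0, 2600.0])
radiance = 12.0 + 0.05 * dn
print(calibrate_band(dn, radiance))  # -> (≈12.0, ≈0.05)
```

With the fitted offset and gain per band, the 25% rise in downwelling irradiance over the 90 minutes can then be divided out before computing the species ratios.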
br ilkka korpela
Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects.
The software will be used for UX and marketing studies where the participants attend remotely by their laptop or PC (with camera on it)
What is the most comprehensive and the best software company offering eye tracking and facial expression analysis ?
I have a dataset of an objective color measure (color harmony, a numerical value) of images of 24 different objects. Each object was captured under 8 different light sources via 3 different cameras.
What I want to do is study the impact of the camera (3), light source (8) and the choice of object (24) on the color harmony scores (576 scores), both individual impact and interactions between them.
Since my color harmony data is not normally distributed, I have to use non-parametric tests.
I tried using the aligned rank transform, but I get the error dF = 0 (which is expected, as I have 576 scores in total = 3 x 8 x 24 with no replication). I cannot use a generalized linear mixed model either, as I do not have any random-effect grouping factor.
Can somebody please help me in choosing the right statistical test and a follow-up post-hoc test as well?
A big thanks in advance!
I'm working on an RGB image classification algorithm for identifying blueberry ripeness stages, for which I must collect photos for the algorithm's training dataset. Is it valid to take the photos using a smartphone camera with enough resolution (e.g. Xiaomi Redmi Note 10), considering that this research is intended to be published in a scientific journal? I've read some papers reporting the use of GoPro cameras for collecting image datasets. I have compared the image quality between my smartphone and GoPro pictures, and the former is actually better.
Thanks for the answers!
Many tasks performed by autonomous vehicles such as road marking detection, pavement crack detection and object tracking are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at further distance, due to the resolution of the camera, limiting applicability.
Is there an improved version of this technique? An improved IPM? How does it work? I need this to validate my new automatic road crack detection approach.
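At its core, IPM is a homography determined by four (or more) point correspondences between the front-facing image and the ground plane, under the flat-road assumption. A dependency-free sketch of the direct linear transform; the road-quad pixel coordinates below are hypothetical:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: homography H mapping image points
    (e.g. corners of a known road rectangle in the camera view) to
    bird's-eye-view coordinates; IPM is exactly this warp."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical road quad in the camera image and its top-down target
src = [(420, 700), (860, 700), (560, 420), (720, 420)]
dst = [(0, 0), (400, 0), (0, 1200), (400, 1200)]
H = homography_from_points(src, dst)
print(warp_point(H, 420, 700))  # ≈ (0, 0)
```

The far-field blurring comes from the warp stretching few pixels over many output cells; "improved" variants mitigate it with adaptive output resolution or learned bird's-eye-view transforms rather than changing the homography itself.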
Greatly appreciate your review
Can I rent an electroluminescence (EL) camera, an IR camera and a PV analyzer for research work in India?
Can I get access to, or permission to work in, a solar simulator lab at any institute in India for research work?
And what can I use instead of an EL camera?
Please share a link.
We are working on a project involving smartphones as measuring devices for environmental stimuli. We are considering using iPhone-based lux meter apps.
I am currently using ZebTrack to analyse some open field data and have managed to analyse ~110 videos that are 17 minutes in length with no problem. However, due to a camera malfunction, 14 videos are only 5 minutes in length and ZebTrack won't allow me to track the fish as it comes up with lots of error messages in the MATLAB console. Does anyone know if there is a minimum video length limit for video analysis in ZebTrack and/or whether there is a way around this problem?
Thanks in advance!
I have been entrusted with the task of measuring the cross-sectional area of a furrow (made by farm implements) through image processing or a two-camera system, but since I lack experience in this area, I need help with the above.
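Once the furrow profile has been segmented from an image, the area computation itself is just the shoelace formula on the outline, scaled by the image resolution. A minimal sketch (the V-shaped profile and the 0.05 cm/pixel scale are hypothetical; the scale would come from a reference object of known size in the scene):

```python
def polygon_area(points):
    """Shoelace formula for the area of a closed polygon given its
    vertices in order (e.g. the furrow cross-section outline extracted
    from a segmented image); result is in squared input units."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def area_cm2(points_px, cm_per_px):
    """Convert the pixel-space area to cm² using the image scale."""
    return polygon_area(points_px) * cm_per_px ** 2

# Hypothetical V-shaped furrow profile (pixels), 0.05 cm per pixel
furrow = [(0, 0), (200, 0), (100, 150)]
print(area_cm2(furrow, 0.05))  # triangle: 15000 px² * 0.0025 = 37.5 cm²
```

The segmentation step (thresholding or edge detection on the furrow cross-section, e.g. with OpenCV) produces the `points_px` outline that feeds this calculation.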
I have a trinocular Olympus BX40 equipped with a very old low-resolution digital camera. I would like to adapt my Olympus SX50 HS in place of that camera. Is it possible? Has someone tried something similar with other digital camera or microscope models? Just to be clear, I do not want to adapt the camera to the eyepiece. Also, the SX50 HS is not a DSLR.
Thanks for all the help!
I am performing some primary pulp cell cultures, and I have noticed some black dots in my petri dishes.
I use DMEM 10% with antibiotics and antifungal. I apologize for the poor quality of the video, but I had some issues with the microscope camera as well.
Could you please check the video and the pictures and give me your opinion regarding.
I have to build a deep learning model (with TensorFlow and OpenCV) and connect it to a surveillance camera to detect non-conforming products in a company.
It's my first project and I'm lost; I don't know where to start or where to get a good database (knowing that I've been asked to make a generic model that can be used for any kind of product).
I need your advice to guide me, or if you can recommend tutorials that can help me.
Thanks in advance
I want to score disease symptoms across several genotypes grown in a greenhouse. Will RGB-D sensors be useful for this purpose?
that I should use for my project (a security system using face recognition techniques).
I want the system to be open source and programmable with an easy method or language, and to be able to load my database into it. Also, what is a suitable algorithm for this?
On the basis of the illumination on the camera detector, how can I calculate the angle of the emitted beams? We are given the numerical aperture of the lens as well as the working distance.
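Since NA = n·sin(θ), the cone half-angle follows directly from the numerical aperture, and the working distance then gives the size of the illuminated region on the detector. A sketch assuming the beam diverges from a point focus in air (the NA and working-distance values are hypothetical):

```python
import math

def half_angle_deg(na, n_medium=1.0):
    """Half-angle of the emission cone from the numerical aperture:
    NA = n * sin(theta)  =>  theta = asin(NA / n)."""
    return math.degrees(math.asin(na / n_medium))

def spot_radius(na, working_distance_mm, n_medium=1.0):
    """Radius of the illuminated region at the working distance,
    assuming the beam diverges from a point focus."""
    theta = math.asin(na / n_medium)
    return working_distance_mm * math.tan(theta)

# Hypothetical lens: NA = 0.25 in air, 10 mm working distance
print(half_angle_deg(0.25))     # ≈ 14.5 degrees
print(spot_radius(0.25, 10.0))  # ≈ 2.58 mm
```

Conversely, measuring the illuminated radius on the detector and dividing by the distance gives tan(θ) for the actual beam.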
I was doing a digestion of some of the older plasmids in our lab, and after I ran the gel, the bands were very fuzzy (this wasn't a camera issue; this is how the bands really looked). Has anyone had this happen before?
(I know the ladder looks odd, it's new and I added too much of it.)
I'm looking for methods to perform the Timed Up and Go test in a way that is faster than a camera analysis system. I thought about using GAITRite because, as far as I know, it identifies several gait parameters. However, as I didn't find an answer to my question, I decided to share it with you. I could be wrong, but GAITRite would correctly capture only the phases from "get up (from the chair)" to the "180° turn"; the others would be superimposed. Is that correct?
We are using the Inspire 2 drone/Zenmuse x7 camera to capture aerial videos of cetaceans. From these videos, we extract still images from which we then calculate the length and width of individual animals using a software that calculates morphometric measurements based on the pixel dimensions of the images, the lens focal length, and the height of the drone.
The videos are recorded at 5280x2160 resolution at an altitude of 45-50 meters, using the DJI DL 50 mm F2.8 LS ASPH lens, shutter speed 1/500, 29.97 fps. For some of our images the pixel dimension appears to be 0.00445 mm/pixel, while in other images it is around 0.0034 mm/pixel (pixel dimensions calculated using a control object). This change in pixel dimension does not appear to be correlated with either the height or the tilt of the drone, and we have tried the camera in both auto-focus and manual-focus modes. We are not sure if this is an issue with the camera settings or a sign of a larger problem, but we would very much appreciate any suggestions as to what might be causing it!
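As a sanity check, the pinhole model predicts how the object-space pixel size should scale with altitude; if the measured values deviate by more than the altitude variation can explain, the recorded altitude, the camera tilt (distance along the optical axis is not the vertical altitude) or a cropping/scaling video mode become suspects. A sketch with assumed sensor numbers (the 3.9 µm pitch is an assumption, not a confirmed X7 specification):

```python
def ground_pixel_size_mm(pixel_pitch_um, focal_length_mm, distance_m):
    """Object-space size of one pixel under the pinhole model:

        GSD = pixel_pitch * distance / focal_length

    distance_m is measured along the optical axis, so for a tilted
    camera it exceeds the vertical altitude by a 1/cos(tilt) factor.
    """
    return (pixel_pitch_um / 1000.0) * (distance_m * 1000.0) / focal_length_mm

# Hypothetical sensor values: 3.9 µm pitch, 50 mm lens, 45 m vs 50 m
print(ground_pixel_size_mm(3.9, 50.0, 45.0))  # ≈ 3.51 mm/pixel
print(ground_pixel_size_mm(3.9, 50.0, 50.0))  # ≈ 3.90 mm/pixel
```

Note the 45 m vs 50 m altitudes change the prediction by only ~11%, smaller than the ~30% spread you observe, which supports looking beyond altitude for the cause.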
Thank you all for your time.
I'm using a Raspberry Pi 4 B with ROS Melodic installed, and I have a Raspberry Pi Camera V2.1. I would like to send compressed video with as low a latency as possible to a microcontroller (ESP32) via sonar. Low latency is important because the sonar link has very low bandwidth. I looked at this GitHub camera node for the Pi Camera V2 (https://github.com/UbiquityRobotics/raspicam_node), but the compressed video has a latency of more than 2 seconds. Is there any other way, approach or help to overcome this latency issue when sending compressed video in real time?
I am working on speed estimation of vehicles in front of our vehicle using video processing.
To get a rough estimate of the depth accuracy of a stereo camera system, I want to use the formula shown in the attached picture. What I am not sure about is what is usually taken as the assumption for the disparity error delta_d. Is it 1 pixel? A certain fraction of a pixel? Why?
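Assuming the usual relation dz = z²·Δd / (f·b) (with f in pixels and Δd in pixels), the choice of Δd enters linearly, so it is easy to bracket the accuracy with a few candidate values; subpixel matchers are often credited with around 0.25-0.5 px while plain integer matching gives up to 1 px, but the right value depends on the matcher and scene texture. The rig numbers below are hypothetical:

```python
def depth_error(z_m, baseline_m, focal_px, disparity_error_px):
    """Depth uncertainty of a stereo pair:

        dz = z^2 * delta_d / (f * b)

    z_m        : depth of the point (m)
    baseline_m : stereo baseline (m)
    focal_px   : focal length expressed in pixels
    disparity_error_px : assumed matching error delta_d (pixels)
    """
    return (z_m ** 2) * disparity_error_px / (focal_px * baseline_m)

# Hypothetical rig: 12 cm baseline, f = 800 px, object at 5 m
for dd in (1.0, 0.5, 0.25):
    print(dd, depth_error(5.0, 0.12, 800.0, dd))
```

Measuring the actual disparity noise on a flat target at known distance is the reliable way to pin down Δd for a specific matcher.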
I am not very familiar with hyperspectral imaging and agriculture, but I am very interested in machine learning models built from data generated by hyperspectral cameras flown over farm fields for agricultural analysis, for instance to support crop dusting or to analyze the quality of the harvest. So far I understand that, for the most detailed research, it is necessary to place the camera on a drone in order to reach sufficient resolution. However, for certain techniques, satellite data from such cameras seems to be sufficient.
Do you know any literature which discusses whether drone data is really necessary for a certain agricultural analysis?
Do you know whether there is any publicly or commercially available database with such satellite data, from which such models could be built, in order to help farmers who might not be able to afford to hire a start-up or another company to fly a drone over their fields?
I'm working with a long-term camera dataset to assess activity patterns and overlap of multiple species. The data I have is in a csv/excel format consisting of the raw data from the camera study (every photo observation).
I need to subset this dataset to just the temporally independent observations so that I can input it into the overlap package for analysis. However, the only package I'm aware of for this preparation is camtrapR, which requires access to the directory of tagged photos (which I do not have).
I'm wondering if anyone has a recommendation of a package or code I can use to identify the independent detections, so that I can prepare the data for input into the overlap package, starting with a csv/excel file of all detections.
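The filtering itself is simple enough to do directly on the csv rows, without the tagged-photo directory. A sketch in Python (the 30-minute independence threshold and the column names `station`, `species`, `datetime` are assumptions to adapt to your file; the same logic is a few lines of `dplyr` in R):

```python
from datetime import datetime, timedelta

def independent_detections(rows, min_gap_minutes=30):
    """Filter raw camera-trap records down to temporally independent
    detections: within each (station, species) group, keep a record
    only if it falls at least `min_gap_minutes` after the last kept one.

    rows: list of dicts with 'station', 'species' and 'datetime'
    (ISO-format string) keys -- i.e. one row per photo in the csv.
    """
    gap = timedelta(minutes=min_gap_minutes)
    last_kept = {}
    kept = []
    for row in sorted(rows, key=lambda r: r["datetime"]):
        key = (row["station"], row["species"])
        t = datetime.fromisoformat(row["datetime"])
        if key not in last_kept or t - last_kept[key] >= gap:
            kept.append(row)
            last_kept[key] = t
    return kept

# Hypothetical rows; the second photo (5 min after the first) is dropped
rows = [
    {"station": "C1", "species": "fox", "datetime": "2021-06-01T22:00:00"},
    {"station": "C1", "species": "fox", "datetime": "2021-06-01T22:05:00"},
    {"station": "C1", "species": "fox", "datetime": "2021-06-01T23:00:00"},
]
print(len(independent_detections(rows)))  # -> 2
```

The kept rows can then be written back to csv and their time-of-day values fed straight into the overlap package.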
Thanks in advance for your help!
We have already completed a project where we did image processing using a Raspberry Pi. Now we want to do the same project using an Arduino / NodeMCU.