
Image Post-Processing - Science topic

Explore the latest questions and answers in Image Post-Processing, and find Image Post-Processing experts.
Questions related to Image Post-Processing
  • asked a question related to Image Post-Processing
Question
2 answers
I need to plot wind shear from WRF-NMM outputs. Despite defining the "shr" field as described in the RIP user guide, no file was generated after running "./ripdp-wrfnmm file.in all wrfoutput_file".
Would you please help me in this regard?
Relevant answer
Answer
I found the solution. Use:
feld=bsh5
  • asked a question related to Image Post-Processing
Question
5 answers
I am looking for MATLAB code for image deblurring using, e.g., Tikhonov regularization. I am focused on implementing the regularization method. Any help is appreciated.
Relevant answer
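A minimal sketch of frequency-domain Tikhonov deblurring in MATLAB, assuming the blur kernel (point-spread function) is known; the file name, kernel, and lambda value below are illustrative assumptions, not part of the original question:

% Tikhonov-regularized deconvolution in the frequency domain.
g      = im2double(imread('blurred.png'));   % hypothetical grayscale input
psf    = fspecial('gaussian', 9, 2);         % assumed (known) blur kernel
lambda = 0.01;                               % hand-tuned regularization weight
H = psf2otf(psf, size(g));                   % transfer function of the blur
G = fft2(g);
% Regularized inverse filter: F = conj(H).*G ./ (|H|^2 + lambda)
F = (conj(H) .* G) ./ (abs(H).^2 + lambda);
f = real(ifft2(F));                          % restored estimate
imshow(f, []);

Larger lambda suppresses noise amplification at the cost of residual blur; in practice it is tuned by eye or with an L-curve criterion.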
  • asked a question related to Image Post-Processing
Question
4 answers
Hi All,
I am using variational autoencoders as a machine learning model for UI layout generation. The UI layouts are given to the model as images (matrices).
The problem is that after the model has learned the data and is supposed to generate new examples, the generated data is somewhat continuous.
So my professor suggested applying quantize pooling as a post-processing transformation to get discrete images.
It also helps to remove the blurring, i.e. the closely graduated pixel values in the image.
Can anyone point me to a useful link or paper on how this quantization works on images?
I will implement it later using Python.
Relevant answer
Answer
I suggest adding some generated images to your question so that someone can take a look and easily suggest a solution.
There may be many solutions other than quantization, because as far as I know, quantization simply maps pixel values onto a limited set of discrete levels within some range.
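A minimal uniform-quantization sketch in MATLAB (a Python version would be analogous); the number of levels is an illustrative choice:

% Snap continuous pixel values in [0,1] onto L discrete levels.
img = rand(64, 64);                    % stand-in for a generated layout image
L   = 4;                               % assumed number of output levels
q   = round(img * (L - 1)) / (L - 1);  % quantized image

Applied to a VAE output, this removes the smooth gradations between regions and forces each pixel onto one of a few discrete values.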
  • asked a question related to Image Post-Processing
Question
5 answers
A solution in MATLAB is preferred.
Relevant answer
Answer
Hi
Amin Jaberi
Thank you for your response.
Could you please point me to any relevant references in which these limits are defined?
Cheers
  • asked a question related to Image Post-Processing
Question
3 answers
I have some images taken by AFM of a surface; my purpose is to find the coordinates of any point together with its roughness.
The file format is *.mif, and the recommended software is DME-SPM.
Relevant answer
Answer
Check the software's "Help" option; under the File menu there should be an option to export the data as ...
  • asked a question related to Image Post-Processing
Question
7 answers
Hello,
While working with images taken by the Elios drone, I noticed strange chromatic aberrations along some white edges.
  • This halo seems to appear just on one side of the edge!
  • It can happen anywhere in the image, not just near the borders.
  • It doesn't appear on every edge even if they are equally white.
  • It can be as large as 5-6 pixels wide.
Looking at the attached images, I hope people can help me diagnose the exact cause of this and possibly suggest image-processing improvements.
Regards,
Bruno Martin
Relevant answer
Answer
Dear Bruno,
regardless of the explanation, I think an easy way to reduce the aberration before or after image acquisition would be optical or digital color filtering. Thus, you could place a color filter in front of the camera, or use only one color channel (for example, the red channel) of the digital photo.
Kind regards,
Sascha Grusche
  • asked a question related to Image Post-Processing
Question
15 answers
Using the spectrometer, I obtained the raw graph (image attached), which has wavelength on its x-axis. I understand that it needs to be converted to wavenumber in order for me to obtain the Raman spectrum.
How do I go about this? Do I have to open the data in MATLAB and write a program to convert every point within the data?
Relevant answer
Answer
Courtney:
First, that's a rather odd looking spectrograph. I don't think I've ever seen such a linear slope in baseline like that. However, I'll ignore that for the moment. Also, your spectrum looks featureless, except for that narrow line at about 632nm, also rather odd. In Raman spectroscopy, the sample is (usually) illuminated with a laser. It is even often brought to a focus point with a lens to maximize intensity on the surface or within the sample. Light from the laser scatters both elastically (without wavelength shift) and in-elastically (with wavelength shift). This in-elastic scattering is the Raman effect.
On the detection end, the spectrometer is normally modified by adding a very narrow 'notch' filter to reject the very bright light scattered at the original laser wavelength. What remains are the photons that were in-elastically scattered at wavelengths other than the original laser wavelength.
Just how much these in-elastically scattered photons are shifted in frequency depends on the vibrations within the molecule. The higher the frequency of the vibration, the larger the shift. In essence, a Raman spectrum is a vibrational spectrum of the molecule. It is very similar to, but not identical with, a standard IR or FT-IR spectrum. The difference is, it is 'observed' at the wavelengths in the vicinity of the laser wavelength. So, if your laser is in the visible spectrum, you can get a vibrational spectrum of an aqueous specimen, something impossible to do with a standard IR or FT-IR spectrometer because water is opaque to IR.
In the Raman effect there are two sets of spectral 'lines' generated. The most intense lines (but still very very dim) are those that lie on the red (or longer wavelength or smaller wavenumber) side of the laser line. These are called the 'Stokes' lines. There are another mirror image set of lines on the blue (or shorter wavelength) side of the laser line. These lines are called 'anti-Stokes' lines and are typically very much dimmer than the Stokes lines. Hence one normally focuses on the Stokes lines.
As for data processing, one starts by converting the laser wavelength to wavenumbers. I use a simple trick: wavenumbers (cm-1) = 1e7 / wavelength (nm). Say your laser is at 532nm. Converting that to wavenumbers: laser wavenumber = 1e7/532 = 18797 (cm-1). Now, using that same formula, convert all the other wavelength data over to wavenumbers. In a final step, subtract all the wavenumbers of the spectrum data from the laser wavenumber. All the data that fall on the positive side of zero are the Stokes-line data; all the data that fall on the negative side of zero are the anti-Stokes-line data. One normally drops the anti-Stokes data, but not always. Normally, the wavenumber data are displayed on a 0 to 4000 cm-1 scale, similar to a standard IR spectrum.
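A minimal MATLAB sketch of that conversion, assuming wl_nm holds the measured wavelengths in nm, counts holds the intensities, and the laser is at 532 nm (replace with your actual excitation wavelength):

% Convert the wavelength axis to wavenumbers and form the Raman shift axis.
laser_wn = 1e7 / 532;              % laser line in cm^-1
data_wn  = 1e7 ./ wl_nm;           % spectrum axis in cm^-1
shift_wn = laser_wn - data_wn;     % Raman shift; positive side = Stokes lines
stokes   = shift_wn > 0;           % keep the Stokes data only
plot(shift_wn(stokes), counts(stokes));
xlabel('Raman shift (cm^{-1})'); ylabel('Intensity');
xlim([0 4000]);                    % conventional display range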
Now you have a vibrational spectrum (similar to an IR spectrum) where the peaks are in terms of wavenumber shift from the excitation (the laser). Hence one often speaks of the "Raman shift" of a vibrational line.
So Courtney, because your spectrum looks so odd to me, could you tell us how you made the measurement? I just wonder about that odd sloped baseline and the relatively featureless spectrum, which normally would not occur in a conventional Raman spectrometer.
Best regards,
Pat
  • asked a question related to Image Post-Processing
Question
6 answers
Can we use digital topology to make a model for computer screen?
Relevant answer
Answer
How do you define a computer screen?
1) Physically, it is a set of pixels (picture elements), which can be on or off and display colors (to be defined: how many).
That leads to a set of discrete positions (x,y) and an associated visual vector (to keep it simple) v(x,y).
2) At the logical level, you can generalize to continuous positions (x,y) [say in the real plane R**2] and continuous vector values v(x,y).
Digital topology relevant to computer screens?
Topology is an area of mathematics coming partially from geometry, and partially from reducing to a bare minimum what is often described using distance. Neighborhoods are defined without the need for a metric or distance.
Graph theory is an interesting way to approach discrete problems or to discretize them efficiently, for computability. The inventor of graph theory seems to be Leonhard Euler, when solving an operations research problem: the 7 bridges of Koenigsberg (today Kaliningrad; to be noted that this city was the city of the leading philosopher Immanuel Kant).
Historically, computer screens and TV screens were made using electron guns shooting towards the screen.
For analogue TV there was the notion of going from left to right on a succession of lines to display an image. This led to the awkward representation of the rectangle of the screen through lines 1, 2, ..., n.
Hence the dimensionality was heavily disturbed, using one-dimensional objects to describe a two-dimensional one in (x,y).
You may match that early age with implicit ordinal ranking of lines: 1, 2, 3, etc, combined with distance along the lines (say just the x coordinate, to make it simple).
Point P(x,y) on screen was then described by:
-P belongs to line D(i) for a certain i
-P has coordinate x from the origin (left, say O(i)) of D(i).
You find a lot of engineering methods for detection in TV signals using this awkward representation closest to the process of producing images in analog TV.
For instance the early scrambling of PayTV "Canal +" in France would segment the picture in lines, and fragment each line in segments, and perform a permutation of these segments to display them again "scrambled". This was meant to let people get a vague view of what the programme was about, and if they wanted to see it in clear, they would have to pay a subscription.
Now you may use other, inherently digital representations: (x,y) on a discrete set, say (x(i), y(j)) with i in I and j in J, subsets of the natural numbers N...
You may also use an origin O on the screen (say, the intersection of the two diagonals of the rectangle of the screen) and turn around it along discrete shapes, going right horizontally, down vertically, left horizontally, up vertically, in a discrete square or rectangular spiral (use a homothetic shape of the rectangle defined by the screen; if it's 16:9, keep the L:l ratio).
What metric do you want to use? The taxicab metric, also called Manhattan or, better, L1, is a good candidate; it is very efficient.
You may use a more streamlined topology, with neighborhoods defined as you wish (shapes suitable for the problem addressed).
A classical picture-processing approach in computer imaging divides the rectangle of the screen into smaller rectangles, and within each, tasks of smaller scale are performed. This allows for parallel processing, with the classical issue that an object may lie across several such zones, and gluing everything back together is not easy...
Projective geometry and the computer screen?
Projective geometry may also be interesting.
Each point of the screen can be seen as the intersection of a line and the rectangular surface of the screen. Behind the screen, think of a cone constructed on any shape; then beyond the shape seen on the screen, there may be a cone...
Etc...
If this is to represent what we see from the world as in painting or photography, a lot has been done long ago.
Canaletto, the famous Italian painter of Venice, invented a great method for the systematic painting of views of his city, which are close to photography (focal distance, etc.). This would be worth revisiting, if you're interested, maybe this time with different mathematical tools from your topology background...
Ref:
You may also look for publications by Frank Nielsen on digital imaging. Frank, being a topologist, will have addressed your concerns in much more depth than my summary above.
  • asked a question related to Image Post-Processing
Question
4 answers
I mean anything about lighting, perspective or ....
The analysis is going to be performed on some fine aggregates (smaller than 0.075 micrometers), and the change in color is of great importance.
Relevant answer
Answer
Dear Ebrahim,
It is important to say which geomaterial you intend to image.
For noise reduction, you can acquire several images under the same camera conditions and average their pixel values.
I suggest taking several images at each setup and illumination intensity, comparing the corresponding results, and then fixing your system at the setup and intensity that give the best results before going ahead with your next experiments.
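A minimal MATLAB sketch of such frame averaging; the file names and frame count are hypothetical, and the shots are assumed to be registered (identical camera position):

% Average N repeated acquisitions to suppress uncorrelated noise.
N   = 8;
acc = 0;
for k = 1:N
    acc = acc + im2double(imread(sprintf('img_%d.png', k)));
end
avgImg = acc / N;   % noise standard deviation drops roughly as 1/sqrt(N)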
Regards,
  • asked a question related to Image Post-Processing
Question
3 answers
Dear Researchers,
Please guide me: how can I delete the black parts of an image after it has been rotated in MATLAB?
When an image is rotated in MATLAB, black parts appear on all sides (up, down, left, right). I need the image without these black parts. Is there any MATLAB code that could help me solve this problem?
I need sample MATLAB code for solving this problem.
Thanks
Relevant answer
Answer
I think you can do a bilinear or bicubic interpolation, e.g. interp2(Img).
Otherwise, you can use morphological operations (imopen or imclose).
You can send me your images if you need my help manipulating them.
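A minimal MATLAB sketch of two common workarounds; the test image and 30-degree angle are arbitrary examples, and the crop factor is a conservative geometric bound rather than anything from the answers above:

% Option 1: rotate while keeping the original frame size ('crop');
% fill still appears in the corners, but the output does not grow.
img = imread('peppers.png');
ang = 30;
r   = imrotate(img, ang, 'bilinear', 'crop');

% Option 2: cut a centred window guaranteed to contain no fill pixels.
[h, w, ~] = size(img);
s  = 1 / (abs(cosd(ang)) + abs(sind(ang)) * max(h/w, w/h));
ch = floor(h * s);  cw = floor(w * s);
y0 = floor((h - ch)/2) + 1;  x0 = floor((w - cw)/2) + 1;
cropped = r(y0:y0+ch-1, x0:x0+cw-1, :);   % black-free central region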
  • asked a question related to Image Post-Processing
Question
5 answers
I initially import RAW files into Lightroom, convert them to JPEG, and then edit them in Photoshop. If possible, please tell us more about the loss of quality during image processing: in general, what quality may be lost during processing (including when converting from RAW to JPEG)?
Relevant answer
Answer
Even using 100% quality in converting raw to jpeg, there are "decisions" made by the software in the process.  If one wants to subsequently modify the tonal values, hues, or chromatic intensity of all or parts of the resulting jpeg, this can entail some loss of quality (e.g., limitation of tonal spectrum); there's a reason these changes are sometimes called "destructive."  The more one modifies, the more potential loss of quality there is.  Preserving the file in raw format avoids this loss, and many of the changes described can be made directly to the raw file, including in Photoshop.  In Photoshop, you can also convert a layer (presumably the primary image, in this case) to a "smart object," which preserves the original quality, should you want to reverse or change earlier modifications.  There is also new software available--Picktorial--in which any and all modifications are non-destructive.  Anything can be reversed without loss of quality.
  • asked a question related to Image Post-Processing
Question
2 answers
How can Tesseract, along with OCRFeeder, be integrated into a web application? I need help with the integration.
Relevant answer
Answer
Hi Sruthi,
Please try this YouTube video; it may be helpful: https://www.youtube.com/watch?v=Mjg4yyuqr5E
  • asked a question related to Image Post-Processing
Question
15 answers
Hi,
I have a few SEM images from a sample that was serially sliced and imaged. Now I need to align them and construct a 3D image. Most software packages ask for data saved in special formats, but I could only save the SEM images.
Is this sufficient to construct a 3D tomogram at the end? I know the spacing of the planes at which each image was obtained.
Thanks
Relevant answer
Answer
Hi Chiththaka,
Another piece of software which can do this really well is IMOD (University of Colorado). There is an option to align series of images, and it also allows you to do segmentation. You will just need to convert your TIFF files to an MRC file, which is easy (command tif2mrc).
Best wishes,
Petr
  • asked a question related to Image Post-Processing
Question
21 answers
Hello, I have followed the VASP recommendations for density-of-states calculations, but now I need to make some graphs from the results.
I used LORBIT=11 and ISPIN=2.
My output files are DOSCAR, EIGENVAL, PROCAR, and also vasprun.xml, but I'm not very sure which of these files I should use.
For example, I have an AB2C4 structure and I need to get the total DOS, partial DOS, and local DOS (e.g., for one B atom in AB2C4).
Which of these files should be used for my plots, and how do I perform them?
Thanks in advance!
Relevant answer
Answer
Hi Maximo Ramirez,
The DOSCAR file contains the DOS information, and the vasprun.xml file also contains all the information about the DOS and bands.
For plotting the total DOS, you have to plot the first two columns of the DOSCAR file. Generate the DOSCAR file without the LORBIT=11 tag; for PDOS and LDOS, generate the DOSCAR file with the LORBIT=11 tag.
You can plot the total DOS in any plotting software: Origin, gnuplot, or any other data-plotting software on Linux.
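A minimal MATLAB sketch of that total-DOS plot, assuming a DOSCAR generated without LORBIT=11 (6 header lines, then NEDOS data rows; with ISPIN=2 the first columns are energy, DOS(up), DOS(down)):

% Read DOSCAR past its header and plot the spin-resolved total DOS.
raw = readmatrix('DOSCAR', 'FileType', 'text', 'NumHeaderLines', 6);
E   = raw(:, 1);                      % energy (eV)
plot(E, raw(:, 2), E, -raw(:, 3));    % spin-down drawn as negative
xlabel('Energy (eV)'); ylabel('DOS (states/eV)');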
Plotting the PDOS and LDOS directly from DOSCAR is difficult because it contains many columns: first three columns, and then further columns related to the partial DOS of the elements present in the system.
So for plotting the PDOS and LDOS, I would suggest using the p4vasp software.
p4vasp supports the vasprun.xml file: open p4vasp and then open the vasprun.xml file of your PDOS calculation. After that, click on the DOS or bands options to visualize the total DOS and the total band structure. To visualize the PDOS of individual elements, you have to go to the other settings in p4vasp.
After visualizing the PDOS or LDOS, you have to export your data in .dat format, which will come in two columns per curve. The first data set will be the total DOS, followed by the individual elements' data, according to your selection when visualizing the elements. For example, if you visualize element A first in p4vasp and then element B, the data will come as the total DOS first, then the A data, and finally the B data. So arrange the data manually and plot it in any plotting software on Windows or Linux.
For any other query ask me without hesitation.
I hope it will be helpful to you !
Shilendra
  • asked a question related to Image Post-Processing
Question
13 answers
Image processing
micromechanics
breakage 
Relevant answer
Answer
Hi Salhi,
Would you be willing to share an image? I'd love to give it a try with MIPAR and see what it can do. We have found it quite powerful and are releasing a free beta trial tomorrow through MIPAR.us if you are interested! It is actually written in MATLAB!
  • asked a question related to Image Post-Processing
Question
1 answer
Is anyone using Helicon Focus plus Helicon Remote in combination with a StackShot macro rail (and a Canon DSLR), and does anyone have experience with unintentionally rotated images and what may cause them? In one stack, some images (RAW and JPEG format) are unintentionally rotated to portrait format, and consequently stacking isn't possible anymore...
Thanks in advance,
Arne
Relevant answer
Answer
Finally, I was able to solve the problem (with the helpful support of the Helicon team...). I figured out that only RAW files are affected, so a converter has to be installed, in this case the freeware Adobe DNG Converter. It then has to be chosen by clicking the Raw Development Settings button in Helicon Focus, and additionally selected in the Helicon Focus preferences -> Integration by setting the Adobe DNG Converter path. After doing this once, everything works fine again!
  • asked a question related to Image Post-Processing
Question
4 answers
Hi community,
I have a group of related images and I want to perform intra-image clustering, to find all possible objects in each image, and then inter-image clustering to determine the common segments. But I am confused about the choice of a suitable image descriptor.
Any help please?
thanks to all of you.
Relevant answer
Answer
You could try independent component analysis, or an SOM (self-organizing map) classifier.
  • asked a question related to Image Post-Processing
Question
10 answers
I would like help finding a simple algorithm to automatically detect black cracks within an image during the inpainting process.
Relevant answer
Answer
Dear Mohammed,
The first and primary method for crack detection is binary segmentation (thresholding), using techniques such as Kapur's or Otsu's method.
If the primary method doesn't work properly, you should use better methods such as fuzzy segmentation, neural nets, etc.
Among learning-based techniques, I prefer the SVM (support vector machine); it has high accuracy and is less time-consuming.
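A minimal MATLAB sketch of the thresholding route described above, assuming an RGB input in which the cracks are darker than the background; the file name and cleanup sizes are illustrative:

% Otsu threshold to isolate dark cracks, then a light morphological cleanup.
img = im2double(rgb2gray(imread('painting.png')));
t   = graythresh(img);                % Otsu's threshold
bw  = img < t;                        % cracks = dark pixels
bw  = bwareaopen(bw, 20);             % remove specks smaller than 20 px
bw  = imclose(bw, strel('disk', 1));  % bridge small gaps along the cracks
imshow(bw);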
  • asked a question related to Image Post-Processing
Question
4 answers
What algorithm should I use?
I'm currently able to achieve the same for other colour clothes.
Relevant answer
Answer
Dear Vimala.
This is indeed ongoing research in the skin segmentation domain and two approaches are common:
1. Detect the face (or hand or something else) of each individual and extract skin regions from it to improve the skin model. This topic is called "adaptive skin model" and a good paper to start with is from Mittal: "Hand detection using multiple proposals" (in BMVC). A recent paper from Prof. Kawulok, "Self-Adaptive Skin Segmentation in Color Images" (in CIARP), does not even need a face detector. The skin model is learned on-line. But the idea remains the same: adapt the skin model to the current picture.
2. Detect the texture of the skin along with its color. Mr Kitani wrote a cvpr paper about "Pixel-level Hand Detection in Ego-Centric Videos". He clustered the images based on its overall appearance and trained for each cluster of images a separate skin model.
I also wrote a paper about skin segmentation: "Color-based skin segmentation: An Evaluation of the State of the Art" (ICIP). But I evaluated color-based methods only (no texture, no adaptation). These methods are really fast and already provide good results. Adding texture improves the detection a bit but always slows down the detection process a lot. Nevertheless, if you want to try e.g. the Bayesian method, you can try my source code (which is written in C++, using OpenCV and the Boost library).
  • asked a question related to Image Post-Processing
Question
2 answers
This is required for thinning algorithms.
Relevant answer
Answer
Thanks for your answer; I am trying the same. Sir, can you please help me with what new contributions I could make in the thinning area?
  • asked a question related to Image Post-Processing
Question
6 answers
If the background and foreground are similar, but we can specify some points for both of them, how could we segment the object out?
In this case, GrabCut does not work, even when we adapt it.
Can anybody give some hints?
Thanks a lot.
Relevant answer
Answer
Depending on what your ultimate goal is, you could draw the bounding box, capture the coordinates, and write some code to crop the image to just the ROI. That may remove enough of the background for your needs, or you can try GrabCut from that point on the cropped image... although I see the challenge in low contrast.
  • asked a question related to Image Post-Processing
Question
3 answers
For binary (black/white) images, many thinning algorithms exist. However, few methods are available for grayscale images in 3D. I implemented the grayscale thinning algorithm for 2D [1], but now need to extend it to 3D. The key problem in this extension is to guarantee connectivity of the skeleton by computing a connectivity number. Does anybody know an algorithm to compute a connectivity number for a 3D grayscale image?
[1] Kim et al., LNCS, 2006.
Relevant answer
Answer
Doesn't it fall out naturally from the Euler equation? If you count branches, nodes, and ends, that should (I think) give you a measure of the number of loops, which is related to the connectivity; for a skeleton treated as a graph, the number of independent loops is E - V + C (edges minus vertices plus connected components).
  • asked a question related to Image Post-Processing
Question
4 answers
I need to classify images in MATLAB. I want to know which type of image is better for classification: RGB or grayscale?
Relevant answer
Answer
It depends on your classification objective. If color has no significance in the images you want to classify, then it is better to use grayscale images to avoid false classifications and extra complexity.
  • asked a question related to Image Post-Processing
Question
5 answers
For two images I1(m,n) and I2(m,n), the cross-correlation function can be used to estimate the displacement between them to pixel accuracy. How can sub-pixel accuracy be reached?
Relevant answer
Answer
You can have a look at Section 3.3 of this paper:
C. Sun, Fast Optical Flow Using 3D Shortest Path Techniques. Image and Vision Computing, 20(13/14):981-991, December 2002. http://vision-cdc.csiro.au/changs/doc/sun02ivc.pdf
With two windows or subimages, you can use least square based matching for subpixel accuracy. E.g., this paper: http://www.isprs.org/proceedings/XXXVII/congress/7_pdf/6_WG-VII-6/03.pdf
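A minimal MATLAB sketch of the classic parabolic (3-point) refinement around the integer correlation peak; img and tpl are assumed grayscale double images, with the peak away from the border of the correlation surface:

% Integer peak of the cross-correlation surface, then sub-pixel refinement.
c = normxcorr2(tpl, img);
[~, idx] = max(c(:));
[py, px] = ind2sub(size(c), idx);
% Fit a parabola through the peak and its two neighbours per axis;
% the vertex offset is the sub-pixel displacement.
dx = (c(py, px-1) - c(py, px+1)) / (2*(c(py, px-1) - 2*c(py, px) + c(py, px+1)));
dy = (c(py-1, px) - c(py+1, px)) / (2*(c(py-1, px) - 2*c(py, px) + c(py+1, px)));
peak_subpix = [py + dy, px + dx];   % sub-pixel peak location in c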
  • asked a question related to Image Post-Processing
Question
3 answers
Here I attach my result image after processing. From this, I don't know how to code the calculation of the rain distribution to get the percentage of raindrop pixels in the image. Thank you in advance.
Relevant answer
Answer
A combination of normalized cross-correlation and least-squares matching is used. Since the latter needs good starting values, an image-pyramid strategy is used, starting at the level of lowest resolution with a one-dimensional cross-correlation process that works without operator assistance. Correlation results are used as approximate values at the next pyramid level, which makes matching a totally automatic process. On each level, the point raster is densified until the bottom level is reached. In order to keep computation time short, the least-squares method is used to provide sub-pixel results only in this original image. The found matches undergo a quality control, where several criteria are tested and it is decided which results are accepted. Recently, the two steps that consume the lion's share of computation time, i.e. resampling and matching, have been sped up considerably by the mutation of the DSS into an Advanced DSS. In the current configuration this system uses a 14-transputer Paracom MultiCluster hosted on a SUN workstation.
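Once the processed result is a binary image, the percentage itself is straightforward; a minimal MATLAB sketch, assuming raindrops appear as white pixels and using a hypothetical file name:

% Percentage of raindrop (white) pixels in a binary result image.
bw  = imbinarize(im2gray(imread('rain_result.png')));
pct = 100 * nnz(bw) / numel(bw);
fprintf('Raindrop coverage: %.2f %%\n', pct);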
  • asked a question related to Image Post-Processing
Question
11 answers
Does anyone know of a paper that clarifies this point?
Relevant answer
Answer
As others have said, the size of the edge-enhancement filter will depend on the application. I did publish a paper (in the list within ResearchGate) about using the variance / standard deviation of the horizontal first difference / derivative to determine the optimum window / kernel size for edge enhancement. Basically, the lower the variance, the larger the window; for example, an image with low variance in the first difference might require a window size of 11 by 11 pixels, while one with a high amount of variance might need a 3 by 3 window. The work was done quite a while ago (early to mid 80s), so I am sure other and perhaps better methods have been published since.
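A minimal MATLAB sketch of that heuristic; the variance thresholds and window choices are illustrative guesses, not values from the paper, and the input is assumed grayscale:

% Pick an edge-enhancement window size from the variance of the
% horizontal first difference, then sharpen by Laplacian-of-Gaussian subtraction.
img = im2double(imread('scene.png'));   % hypothetical input
v   = var(reshape(diff(img, 1, 2), [], 1));
if v < 0.002
    win = 11;                           % low variance -> large window
elseif v < 0.01
    win = 7;
else
    win = 3;                            % high variance -> small window
end
sharpened = img - imfilter(img, fspecial('log', win, win/6), 'replicate');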
Pat
  • asked a question related to Image Post-Processing
Question
4 answers
I want to apply an image-averaging technique to a particular image. Is there an algorithm available for this process? What information can I gather from image averaging?
Relevant answer
Answer
Image averaging:
h = fspecial('average', 3);
K = imfilter(img, h);
fspecial is a filter-design function, with 'average' as the filter type and 3 as the filter size (a 3-by-3 kernel, not a radius).
imfilter applies this filter to a 2-dimensional image; note that the image comes first and the kernel second. For a color image you can apply it separately to the three channels, or to the Y channel of a YCbCr image. (Here img is the input image; avoid calling the variable "image", which shadows a built-in MATLAB function.)
However, I would recommend median filtering instead, as it preserves the edges of the image:
K = medfilt2(img, [3 3]);
  • asked a question related to Image Post-Processing
Question
3 answers
In my study, I am planning to use an experimental task whose duration depends on the subjects' performance. The duration can vary by as much as 6 minutes between good and bad performers. My question is: can I use fMRI data of unequal length (from different subjects) within one analysis (a standard GLM analysis)? Is it methodologically and statistically valid? Or should I somehow make the task equal in length for each subject (for example, by extending the inter-trial interval or adding a rest period at the end)?
Relevant answer
Answer
Self-paced fMRI experiments are quite rare. You need to make sure that the self-pacing will not bias your ability to detect effects in one group relative to another. For example, if one group has a longer ITI (say 10 seconds vs. 5 seconds), you will be better able to estimate the BOLD response to those trials. Having a rest period at the end that is just dead time wouldn't buy you anything. Feel free to take behavioral data that you already have on this task and put it into FSL as if you were building a single-subject design. You don't have to have any real functional data; take time series you have lying around to make a 4D file that matches your behavioral data. You aren't going to actually run the analysis. You can look at the efficiency of the design when you set up your statistics to judge how the self-pacing is affecting your model.
Can you provide more information about the task? That would help clarify whether a fixed ITI would work. Some tasks are just not optimal to transfer over to the scanner.
  • asked a question related to Image Post-Processing
Question
2 answers
How can I find the diameter of the rings using image-processing software?
Relevant answer
Answer
You can use "rotational average" option in DiffTools package of digital micrograph software. 
  • asked a question related to Image Post-Processing
Question
12 answers
I need efficient methods to increase the accuracy of classification of medium-resolution hyperspectral images (between 17 and 30 meters) based on spectral signatures collected with an ASD FieldSpec 4 Hi-Res portable spectroradiometer.
I used the methods available in ENVI, such as spectral angle mapper, spectral information divergence, SVM, etc., but they are not efficient enough to separate different features in the image. Sometimes even improving the spatial resolution of the images to 10 m does not show significant improvement.
Relevant answer
Please read Chapter 7 in Signal Theory Methods in Multispectral Remote Sensing by David A. Landgrebe; maybe you will find your answer there.
  • asked a question related to Image Post-Processing
Question
2 answers
I would like to separate the relevant and irrelevant features of an image using a neural network. How do I do that?
Relevant answer
Answer
It is computationally expensive, but you could simulate the fovea: one sensor in the middle, responsible for one pixel; in the neighborhood, other sensors, each responsible for a bigger number of pixels. The farther from the midpoint, the greater the area one sensor is responsible for.
A sensor measures the brightness of a pixel, either from a grey-value image or from the 3 channels of RGB, and puts the measured value into the input layer of a NN.
Learning could be done supervised: you click with the mouse pointer on a pixel and classify it as interesting or non-interesting. The computer stores a training pattern, a pair of sensor data and desired output. After the NN has learned all the training data, i.e. learned to classify input vectors, it can be used to scan the whole image and map each point via feed-forward.
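A minimal MATLAB sketch of this foveated sampling; the grid step and the block-growth rule are illustrative assumptions:

% Build an NN input vector from block means whose block size grows
% with distance from the image centre (coarser sensors in the periphery).
img = im2double(imread('cameraman.tif'));   % standard grayscale test image
[h, w] = size(img);
c = [h, w] / 2;
feat = [];
for by = 1:8:h
    for bx = 1:8:w
        ecc = norm([by, bx] - c) / norm(c);   % eccentricity in [0, 1]
        s   = max(1, round(8 * ecc));         % sensor area grows outward
        blk = img(by:min(by+s-1, h), bx:min(bx+s-1, w));
        feat(end+1) = mean(blk(:));           % one input-layer value
    end
end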
Regards,
Joachim
  • asked a question related to Image Post-Processing
Question
2 answers
I wonder whether anyone has organized a computational-science conference where results were presented using 3D techniques. I think that, in collaboration with local cinemas that support 3D, this should be possible quite easily. I think this is the direction to take, because it might help us understand many complicated situations better, at least once the hype is over.
Relevant answer
Answer
Thank you for the comment. The idea came from personal 3D glasses: http://www.sony.co.uk/hub/personal-3d-viewer . One view goes to one eye and another to the other. I think it could be relatively easy to convert result figures into this format. Cinemas use practically the same idea, maybe with different glasses (synchronizing the eyes and views correctly). I don't know how costly this is and what kind of practical difficulties there might be, but if we have 3D movies, why not 3D results?
Btw, Prezi is a very nice tool, I like it :).