Real-Time Rendering - Science topic

Explore the latest questions and answers in Real-Time Rendering, and find Real-Time Rendering experts.
Questions related to Real-Time Rendering
  • asked a question related to Real-Time Rendering
Question
3 answers
I recently found a dissertation about lightmap compression: https://www.diva-portal.org/smash/get/diva2:844146/FULLTEXT01.pdf
Are there other papers about this topic? Thanks.
Relevant answer
Answer
Saba A. Tuama, thanks for replying. I noticed that the lightmap solution in this article has a temporal axis. Does this mean their lightmaps aren't static over time, i.e., some sort of animation?
  • asked a question related to Real-Time Rendering
Question
1 answer
Google's Bilateral Guided Upsampling (2016) proposed that upsampling can be guided by a full-resolution reference image to preserve high-frequency information such as edges. This reminds me of G-buffers in real-time rendering.
Do state-of-the-art super-resolution algorithms for real-time rendering, such as NVIDIA's DLSS and AMD's FSR, use a similar idea? How do they exploit G-buffers to aid the upsampling?
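To make the idea concrete, below is a minimal, illustrative sketch of joint bilateral upsampling, where a full-resolution depth buffer from the G-buffer guides the upscaling of a low-resolution color image. This is only a simplified stand-in for the general technique, not how DLSS or FSR actually work internally (DLSS is learned, and both also consume motion vectors); the function and parameter names are hypothetical.

```python
# Hypothetical sketch: joint bilateral upsampling of a low-res color image,
# guided by a full-resolution depth buffer. Illustrative only -- not the
# DLSS/FSR implementation. Pure numpy, deliberately unoptimized.
import numpy as np

def joint_bilateral_upsample(low_color, hi_depth, scale, radius=2,
                             sigma_spatial=1.0, sigma_depth=0.1):
    """low_color: (h, w, 3) low-res image; hi_depth: (H, W) full-res guide."""
    H, W = hi_depth.shape
    out = np.zeros((H, W, 3))
    for y in range(H):
        for x in range(W):
            ly, lx = y / scale, x / scale     # fractional low-res coordinate
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy = int(round(ly)) + dy
                    sx = int(round(lx)) + dx
                    if not (0 <= sy < low_color.shape[0] and
                            0 <= sx < low_color.shape[1]):
                        continue
                    # Spatial weight, measured in low-res pixel units.
                    ws = np.exp(-((sy - ly) ** 2 + (sx - lx) ** 2)
                                / (2 * sigma_spatial ** 2))
                    # Range weight from the high-res depth guide: compare the
                    # target pixel's depth with the depth at the sample's
                    # corresponding full-res position (edges stay sharp).
                    gy = min(int(sy * scale), H - 1)
                    gx = min(int(sx * scale), W - 1)
                    wd = np.exp(-(hi_depth[y, x] - hi_depth[gy, gx]) ** 2
                                / (2 * sigma_depth ** 2))
                    w = ws * wd
                    acc += w * low_color[sy, sx]
                    wsum += w
            out[y, x] = acc / max(wsum, 1e-8)
    return out
```

The depth guide plays the same role as the reference image in Bilateral Guided Upsampling: samples that cross a depth discontinuity get a near-zero range weight, so color does not bleed across geometric edges.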
  • asked a question related to Real-Time Rendering
Question
2 answers
I'm trying to segment the mitochondrial fusion process and hence would like to track the temporal difference between inner- and outer-membrane fusion. I have Tim23-RFP for the inner membrane. Any good suggestions for marking the intermembrane space would be welcome.
Best,
Roy
Relevant answer
Answer
Thanks Christian, I am in talks with the EM guys to do exactly what you mentioned, but I wanted to do live-cell imaging with the samples. Nevertheless, I shall look into the publication. Thank you very much for the input. As always, your suggestions are quite to the point, and I really do enjoy following your work.
Best,
Roy
  • asked a question related to Real-Time Rendering
Question
4 answers
I am trying to detect a 3D model in a live video stream. The model should be detectable from any face (viewpoint). How can I do that?
Relevant answer
Answer
Hi Patil,
If you have a CAD model of your target, you can extract its edges and find those edges in the image.
Less commonly, you can also use texture, if your CAD model contains texture information.
I used commercial software to find the pose of a CAD model in images, but if you intend to gather more data and fully control the process, it may be better to implement the methods yourself.
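For the edge-based route, here is a minimal, hypothetical sketch in Python/OpenCV of scoring a candidate pose by chamfer matching: edges rendered from the CAD model at that pose are compared against edges detected in the camera frame. `render_model_edges` is an assumed helper you would implement with your own renderer.

```python
# Hypothetical sketch: score a candidate pose by comparing edges rendered
# from the CAD model against edges detected in the camera frame.
import cv2
import numpy as np

def chamfer_score(frame_gray, model_edge_mask):
    """Lower is better: mean distance from model edges to image edges."""
    image_edges = cv2.Canny(frame_gray, 50, 150)
    # Distance transform on the *inverted* edge map gives, for every pixel,
    # the distance to the nearest detected image edge.
    dist = cv2.distanceTransform(255 - image_edges, cv2.DIST_L2, 3)
    ys, xs = np.nonzero(model_edge_mask)
    if len(ys) == 0:
        return np.inf
    return float(dist[ys, xs].mean())

# Usage sketch: evaluate candidate poses and keep the best-scoring one.
# render_model_edges() is an assumed helper, not an OpenCV function.
# best_pose = min(candidate_poses,
#                 key=lambda p: chamfer_score(frame, render_model_edges(p)))
```

In practice you would search or optimize over the pose space rather than enumerate candidates, but the scoring idea is the same.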
  • asked a question related to Real-Time Rendering
Question
8 answers
I'm analysing ER-mitochondria contact sites, but I cannot find a detailed protocol for obtaining 3D-reconstructed and surface-rendered images with ImageJ. Thank you.
Relevant answer
Answer
I am not an intensive ImageJ user, but in Fiji (which is essentially ImageJ bundled with plugins), the "Stacks" submenu in the Image tab lets you render in 3D ("3D Project").
Also in Fiji, you can use plugins like "3D Viewer" or "Volume Viewer" to get what you want from a stack.
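If you prefer scripting to the GUI, a maximum-intensity projection (a rough stand-in for one frame of what "3D Project" produces) is a few lines of Python; this is a hedged alternative outside ImageJ, assuming the `tifffile` package and a z-stack saved as a multi-page TIFF.

```python
# Maximum-intensity projection of a z-stack -- a simple scripted
# approximation of one "3D Project" view, not an ImageJ protocol.
import tifffile  # assumed available; reads/writes multi-page TIFF stacks

stack = tifffile.imread("stack.tif")   # shape: (z, y, x)
mip = stack.max(axis=0)                # project along the z axis
tifffile.imwrite("mip.tif", mip)
```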
Have fun
  • asked a question related to Real-Time Rendering
Question
14 answers
Graphics renders 3D scenes from render lists and scene descriptions (a render language), and computer vision seems to be advancing to the point where one can imagine a camera and co-processor (like a GPU) that would generate an object list and scene description from continuously captured camera frames (depth map, segmented/recognized objects, textures, lighting). So in some sense, CV is just the inverse of graphics rendering. 3D scanners and printers are already a reality. At what point do these fuse, so that capture-to-render is indistinguishable from a digital video stream?
Relevant answer
Answer
There is a more specific area where I think graphics and computer vision will merge. A rapidly expanding research field in robot vision is the simultaneous pose estimation and reconstruction of a scene from a video. By tracking points in the video you can calculate both the camera poses and the 3D position of the points using methods such as SLAM or PTAM. Rather than tracking points, you can do a dense reconstruction for all pixels or in a given volume - mostly referred to as DTAM. My colleague Andrew Davison at Imperial is a leader in this field.
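To make the point-based step concrete, the geometric core of SLAM/PTAM point mapping is triangulation. Below is a minimal two-view linear (DLT) triangulation sketch, assuming known 3x4 projection matrices `P1` and `P2` and a matched pixel pair; real systems refine this nonlinearly and over many views.

```python
# Minimal sketch of two-view linear triangulation (DLT), the geometric
# core behind SLAM/PTAM point mapping. Illustrative only.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Return the 3D point minimising the linear reprojection residual.

    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space vector = homogeneous 3D point
    return X[:3] / X[3]     # dehomogenise
```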
Generally, DTAM methods make very simple assumptions about surface properties - assuming purely Lambertian reflection, you can expect the projection of the same 3D point to have the same colour in all video views. However, if you incorporate other properties, such as specular reflection, you need to have an idea of what the lighting of the scene is. Some researchers are looking into this. In general, the more accurate the graphical model, the more accurate the reconstruction will be.
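As a concrete illustration of that Lambertian assumption, the sketch below compares the colour of one 3D point projected into two calibrated views; `K`, `R_ab`, `t_ab` are assumed intrinsics and relative pose, and bounds checks and sub-pixel interpolation are omitted for brevity.

```python
# Minimal sketch of the Lambertian photometric-consistency test used in
# DTAM-style dense reconstruction: the same 3D point, projected into two
# views, should have (nearly) the same colour. Illustrative only.
import numpy as np

def project(K, R, t, X):
    """Project world point X (3,) into pixel coordinates (u, v)."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

def photometric_error(img_a, img_b, K, R_ab, t_ab, X):
    """Colour difference of X seen in view A (reference) and view B."""
    ua, va = project(K, np.eye(3), np.zeros(3), X)  # view A at the origin
    ub, vb = project(K, R_ab, t_ab, X)              # view B relative to A
    ca = img_a[int(round(va)), int(round(ua))]      # rows = v, cols = u
    cb = img_b[int(round(vb)), int(round(ub))]
    return np.linalg.norm(ca.astype(float) - cb.astype(float))
```

A dense method minimises this error over per-pixel depth; under specular reflection the equal-colour assumption breaks down, which is exactly where a lighting model becomes necessary.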
An area that interests me is registration/tracking/pose estimation for surgical scenes, where specular highlights are a significant issue for methods like PTAM/DTAM. You may already have a model of the scene from preoperative images. In other fields, such as camera-based navigation, you may also have increasingly accurate predefined models of the scene. In that case, one way to solve the problem is to match renderings of the model to each video frame. Again, the more accurate the rendering, the better the pose estimation will be.
It's unlikely that these problems will require the very high end of realistic rendering, but it is an area that links graphics and computer vision. Is this perhaps the kind of thing you were thinking of, Sam?