Computer Graphics and Animation - Science topic
Questions related to Computer Graphics and Animation
I am working on an animation teaching strategy and would like to make my videos myself rather than outsourcing them, in order to save costs.
Kindly recommend any free apps, software, etc., please.
Thank you.
How can I convert a 3D model to a parametric surface?
I have a bunny mesh with vertices (a 5197×3 array) and faces (a 10390×3 array). How can I convert it to a parametric surface such as the following MATLAB example?
a=1; b=1; c=1;
u=linspace(-2,2,256);
v=linspace(0,2*pi,256);
[u,v]=meshgrid(u,v);
% Calculate x, y, and z at each pair (u, v) in the grid.
x=a*cosh(u).*cos(v);
y=b*cosh(u).*sin(v);
z=c*sinh(u);
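For what it's worth, one common way to get a (u, v) parameterization of a closed mesh like this is to map the vertices to spherical coordinates around the centroid and resample the radius on a regular (u, v) grid. A rough Python/NumPy sketch of that idea (it only behaves well for roughly star-shaped meshes, so a bunny will lose detail where the surface folds over itself; a proper surface parameterization or B-spline fit would be needed for a faithful result):

import numpy as np
from scipy.interpolate import griddata

def mesh_to_spherical_grid(vertices, n_u=256, n_v=256):
    # vertices: (N, 3) array, e.g. the 5197x3 bunny vertices from the question
    centered = vertices - vertices.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)
    u = np.arccos(np.clip(centered[:, 2] / r, -1.0, 1.0))  # polar angle in [0, pi]
    v = np.arctan2(centered[:, 1], centered[:, 0])          # azimuth in [-pi, pi]

    # Regular parameter grid, analogous to meshgrid(u, v) in the MATLAB snippet above
    uu, vv = np.meshgrid(np.linspace(0.0, np.pi, n_u),
                         np.linspace(-np.pi, np.pi, n_v))
    rr = griddata((u, v), r, (uu, vv), method='linear')

    # Back to Cartesian coordinates: x(u, v), y(u, v), z(u, v)
    x = rr * np.sin(uu) * np.cos(vv)
    y = rr * np.sin(uu) * np.sin(vv)
    z = rr * np.cos(uu)
    return x, y, z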
It is well known that computer vision is broadly concerned with acquiring knowledge from images and videos; one of its applications is deriving 3D information from 2D images.
I would like to know whether 3D imaging from IR cameras is also considered part of the computer vision field, or whether it is better described as a machine vision application.
Hi all,
When I worked in Roy Sambles' Thin Films & Interfaces Group in Exeter in 1993, I used an existing Fortran program for multilayer Fresnel modelling and improved its sluggish performance to the point where one could change the parameters of the layers and see an almost instant graphical response. This actually gave rise to the "Inverted SPR" discovery.
Ever since, I have wondered whether it would be feasible to have the computer scan a huge parameter space on its own and essentially carry out a "curve discussion" (curve analysis) by itself: a new phenomenon or quality can be described in natural-language terms, and so can known patterns (how minima and maxima are distributed, what the known limits are, etc.). The latter would have to be told to the computer, which would then only report results if it found something remarkable, something outside the known patterns. With today's computing power such a generic software tool would have produced the ISPR instantly! Now I would like to pass on the suggestion: create or find a method to describe and detect patterns in modelled data, then let a computer scan the areas you find interesting. What do you think? Or is it standard procedure nowadays to do this?
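As a very rough illustration of the idea (not any established tool), the "known patterns" could be encoded as a scoring function, and the computer could then sweep a parameter grid and report only the combinations whose response curve falls outside the expected pattern. A minimal Python sketch, where fresnel_response is a hypothetical placeholder for whatever multilayer model is being evaluated:

import itertools
import numpy as np

def is_unremarkable(curve):
    # Hand-written encoding of the "known pattern":
    # e.g. at most one interior minimum and no unusually deep dip.
    interior_minima = np.where((curve[1:-1] < curve[:-2]) & (curve[1:-1] < curve[2:]))[0]
    return len(interior_minima) <= 1 and curve.min() > 0.05

def scan(fresnel_response, angles, **param_ranges):
    # Sweep the full parameter grid and yield only the "remarkable" combinations
    names = list(param_ranges)
    for values in itertools.product(*param_ranges.values()):
        params = dict(zip(names, values))
        curve = fresnel_response(angles, **params)  # e.g. reflectivity vs angle
        if not is_unremarkable(curve):
            yield params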
Cheers from Switzerland
Marc
I need consultation sources on virtual reality technology applications in product design development, especially in the preliminary design stage and, more precisely, in relation to mechanical parts assembly, i.e., how to detect collisions between parts during virtual assembly.
I'm performing a validation study and would like to include a study using the deflected NASA Common Research Model that they have used in the 6th Drag Prediction Workshop for the Static Aero-Elastic Effect case.
I've been looking for a while now and I can't seem to find the exact geometry anywhere, so I was curious whether it has been made public or not.
Does anyone have any information or research on evaluating the differences between 3D MSCT/CBCT DICOM-converted-to-STL models (including shrinkage, distortion and Boolean defects) and the various types of 3D optically scanned models, such as dental intra-oral scan data and extra-oral scan data made by desktop scanners (laser-projection, photogrammetric and structured-light types), including meshing, registration and patching?
Appreciated.
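In case it helps to frame such a comparison: the usual quantitative measure in this kind of study is the point-to-surface (or point-to-point) deviation between the two registered models. A minimal nearest-neighbour version in Python (assuming both meshes are already registered and loaded as vertex arrays; a real evaluation would use point-to-triangle distance, e.g. in CloudCompare):

import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(reference_vertices, test_vertices):
    # Nearest-neighbour distance from each test vertex to the reference model
    tree = cKDTree(reference_vertices)
    d, _ = tree.query(test_vertices)
    return {"mean": d.mean(), "rms": np.sqrt((d ** 2).mean()), "max": d.max()}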
I'd like to know what kinds of shapes are used to make modern pixels and what their minimum physical sizes are nowadays.
Where can I find out which animation tools and techniques are used by Pixar studios for making their movies?
What are the latest application areas of image processing?
I would really like to know whether there is any good, detailed practical material on L-systems for plant growth and development apart from The Algorithmic Beauty of Plants.
I want to simulate the growth of the waterleaf plant (Talinum triangulare) and see the effect of the environment and other factors on the plant.
I have tried using cpfg in L-Studio, and I recently saw a few articles on Blender, but none of these seem to get the work done for me.
I would appreciate your comments and support in finding a solution to this task.
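For what it's worth, the core rewriting step of a simple (D0L) L-system is only a few lines, so it is easy to prototype outside cpfg/L-Studio before committing to a framework. A minimal Python sketch (the rules below are a textbook plant-like example, not a Talinum model):

def lsystem(axiom, rules, iterations):
    # Apply the rewriting rules simultaneously to every symbol, `iterations` times
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed-plant rule set in the style of The Algorithmic Beauty of Plants
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(lsystem("X", rules, 3))

Environmental effects would then enter through parametric or context-sensitive rules (which cpfg supports) or through your own turtle-graphics interpreter of the generated string.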
I would like to know a good reference (textbook and/or C library) on how to adjust/align 2D images.
I am using Dynamic Time Warping to align 2D images along one axis, but I would like to make a multi-dimensional adjustment. I also need the shift matrix associated with this alignment.
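In case it is useful, here is a compact Python sketch of the kind of row-by-row DTW alignment I understand you to be doing, extended so that it also returns a shift matrix (for each reference pixel, how far its matched pixel in the other image was displaced). This is an assumption about your setup, not a reference implementation:

import numpy as np

def dtw_path(a, b):
    # Classic O(len(a)*len(b)) DTW between two 1-D signals; returns the warping path
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def rowwise_shift_matrix(img_ref, img_mov):
    # Align each row of img_mov to the corresponding row of img_ref and record
    # the signed column displacement of the matched pixel.
    shifts = np.zeros(img_ref.shape, dtype=float)
    for r in range(img_ref.shape[0]):
        for i, j in dtw_path(img_ref[r], img_mov[r]):
            shifts[r, i] = j - i
    return shifts

For a genuinely two-dimensional (non-separable) adjustment, non-rigid registration or optical-flow methods (e.g. demons registration in SimpleITK, or scikit-image's TV-L1 optical flow) are probably closer to what you want than chaining 1-D DTW passes.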
I have tried this in CFD-Post but was not able to do it properly.
Hi,
I have a question and I truly hope you can help me. I am working on a project to develop a new dynamic cervical column. In order to finish the design and show that it works properly, I need a full 3D model of the head and neck with all the soft tissue and muscles included. I want to remove all the parts besides the bones and skeletal structure, replace the vertebrae with the design, and put the removed soft tissue back afterwards in order to set the design in motion and show that it works. I was thinking about using one of the Zygote solid models, but I have no idea whether what I want to do is even possible.
Any hint is very much appreciated.
Many thanks
Mary
I am writing code to create a 3D model of an object using epipolar geometry.
I am stuck at finding the essential matrix from the fundamental matrix: since my images are not calibrated, I don't have a calibration matrix.
Please help me if anybody knows a method or code to perform the calibration from the images alone so that I can do a 3D reconstruction.
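One common workaround when no calibration is available is to assume an approximate intrinsic matrix (principal point at the image centre, square pixels, focal length on the order of the image size) and then form E = K^T F K. It is only an approximation, but it is often good enough to bootstrap a reconstruction, which can later be refined by self-calibration or bundle adjustment. A sketch with OpenCV in Python (the focal-length guess is an assumption, not something derived from your data):

import numpy as np
import cv2

def approximate_K(image_width, image_height, focal_guess=None):
    # Guessed intrinsics: principal point at the centre, square pixels
    f = focal_guess if focal_guess is not None else 1.2 * max(image_width, image_height)
    return np.array([[f, 0.0, image_width / 2.0],
                     [0.0, f, image_height / 2.0],
                     [0.0, 0.0, 1.0]])

def essential_from_matches(pts1, pts2, K):
    # Fundamental matrix from point matches (RANSAC), then E = K^T F K
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K
    # Enforce the two-equal-singular-values constraint of a valid essential matrix
    U, S, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return E, R, t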
Hi everyone:
I need to do a 3D reconstruction from 2D images, and it has to be voxel-driven.
I think there are a couple of different ways, but I am not sure if I am right. Please give me some advice and clarification.
1. Using OpenGL -> I think this one means building the engine yourself from scratch.
2. Using OpenCL and the Point Cloud Library (PCL) -> I am not sure whether I could render in a voxel-driven way.
3. Using a commercially available engine such as Voxel Farm -> this requires a paid license.
Any advice will be welcome. Thank you
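For what it's worth, before committing to a rendering stack, the voxel-driven core itself is small enough to prototype in plain Python/NumPy: keep a 3D occupancy grid and carve away every voxel that projects outside the object silhouette in each calibrated view (classic space carving). A rough sketch, where the silhouette masks and 3x4 projection matrices are placeholders for your own segmented images and camera model:

import numpy as np

def carve(silhouettes, projections, grid_shape=(128, 128, 128), bounds=((-1, 1),) * 3):
    # silhouettes: list of 2-D boolean masks (object pixels = True)
    # projections: list of 3x4 camera projection matrices, one per silhouette
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, grid_shape)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    voxels = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)
    occupied = np.ones(len(voxels), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = voxels @ P.T                                # project homogeneous voxel centres
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)   # image column
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)   # image row
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep = np.zeros(len(voxels), dtype=bool)
        keep[inside] = mask[v[inside], u[inside]]
        occupied &= keep                                  # a voxel must survive every view
    return occupied.reshape(grid_shape)

The resulting occupancy grid can then be rendered with OpenGL, marching cubes, or PCL, so the choice among the three options above becomes mostly a rendering/tooling decision.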
I have a number of group variables that I capture on MCA plots. I also obviously have the coordinates for these plots in the X, Y and Z dimensions; i.e., most people plot the maps as two-dimensional and then descriptively outline what is going on in the analyses.
You can also do MCA plots in a third dimension (Z), which would be a 3D plot.
Obviously MCA is a visual representation. I was wondering whether there are any recognised tests for comparing group differences (e.g., male vs. female, or three-way group splits) in the 2D and 3D cases.
I asked Greenacre and received no reply as yet. I cannot find literature here answering this condition.
Brad
I am trying to get 3D images of corroded reinforcement steel bars from 2D images taken over 360 degrees. I need to measure the cross-section at different locations using software that can take dimensions from the 3D images. I am looking for a 0.01 mm measurement resolution and an accuracy in the 3D image that can capture even the small dents in the corroded steel rebar. Could you please suggest any software that can serve this purpose?
Hi everybody,
do you know a good entry point / overview of super-resolution algorithms for 3D data (implementations/libraries are also highly welcome)?
I want to produce highly accurate (ground-truth-like) 3D models from multiple partial point clouds.
Any help appreciated.
Regards,
Bernd
Which tool or API is best for creating physics- and dynamics-based procedural animation of quadruped characters: the Unity3D engine, the Open Dynamics Engine (ODE), Endorphin, the Bullet API, or any other? Please suggest.
I'm analysing ER-mitochondria contact sites, but I cannot find a detailed protocol for obtaining 3D-reconstructed and surface-rendered images with ImageJ. Thank you.
Please elaborate or refer me to a source:
How is procedurally generated animation (PGA), using procedural programming techniques as applied in the games industry, different from physics- or dynamics-based animation techniques for biped or quadruped locomotion?
I know it is unusual, but I am wondering whether there are any collaborative virtual reality projects you know of in Edmonton, Canada. My main area of interest is spatial awareness and human-material interaction. I appreciate your help.
I am interested in implementing a virtual Morris water maze task for humans in our lab during intracranial recordings. I would therefore need software that allows me to create 3D levels or environments and that outputs the virtual position during spatial navigation. Is there any research-oriented package for this purpose?
Important note: I am not interested in Oculus Rift-style virtual reality, but in simple 3D first-person-shooter-like programs.
If so then to what extent can this ‘model’ of visual perception be used to direct the development of vision science?
Is this proposition meaningful to a cross-disciplinary section of the research community?
Vision-Space is a new form of illusionary space that is based on perceptual structure rather than the fundamentals of optics (or central perspective). It models both the data structures and the dynamics of information exchange taking place within the phenomenal field (experiential vision). As such, we believe that it starts to model visual awareness. At present the programming architecture for Vision-Space is 'illustrative' in nature: it transforms the optical record to accord with our understanding of perceptual structure. Vision, it appears, is almost entirely non-photographically rendered. While this software can at present generate Vision-Space moving-image media, to move the project forward we need to replace the 'illustrative' architecture with an 'academic', 'generative' programming architecture. This architecture must take account of aspects of neural processing. Such an architecture could obviate the requirement for optical projection as the basis for image generation and produce stimuli for experiments that probe perceptual structure in detail.
A Bézier curve is a parametric curve frequently used in computer graphics, animation, modeling, CAD, CAGD, and many other related fields.
Bézier curves and surfaces are curves written in Bernstein basis form, so they have been known for many years. However, they have been used heavily in applications only in the last 30 years. Why? What are your thoughts and opinions?
Bézier curves are also used in the time domain, particularly in animation and interface design, e.g., a Bézier curve can be used to specify the velocity over time of an object such as an icon moving from A to B, rather than simply moving at a fixed number of pixels per step. When animators or interface designers talk about the "physics" or "feel" of an operation, they may be referring to the particular Bézier curve used to control the velocity over time of the move in question.
The mathematical basis for Bézier curves — the Bernstein polynomial — has been known since 1912, but its applicability to graphics was only understood half a century later. Bézier curves were widely publicized in 1962 by the French engineer Pierre Bézier, who used them to design automobile bodies at Renault. The study of these curves was, however, first developed in 1959 by the mathematician Paul de Casteljau at Citroën, another French automaker, using de Casteljau's algorithm, a numerically stable method for evaluating Bézier curves.
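To make the time-domain use concrete: a typical "ease-in-out" timing curve is just a cubic Bézier whose horizontal axis is time and whose vertical axis is progress, evaluated with de Casteljau's algorithm. A small illustrative Python sketch (real easing implementations first solve for the curve parameter that matches the elapsed time; here the parameter is used directly for brevity):

def de_casteljau(control_points, t):
    # Repeated linear interpolation between consecutive control points
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

# Slow start, fast middle, slow end (similar to a CSS "ease-in-out" control polygon)
ease = [(0.0, 0.0), (0.42, 0.0), (0.58, 1.0), (1.0, 1.0)]
samples = [de_casteljau(ease, t / 10.0) for t in range(11)]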
I am a newbie at using Perl and the Cygwin software, and I need to convert an animation that has already been made in 3ds Max into OpenGL.
But I have no idea how to start the script. Anyone?
What is the accuracy of such reconstruction methods with regard to the vibrations of the flying drones, camera quality and resolution? Is it possible to improve the results by organizing multiple flights and overlaying/accumulating the data in the point cloud? Is there any free software available?
Complex-valued images in MATLAB.
How can I convert an RGB image into a complex-valued version in MATLAB? And what do the complex angle (phase) and modulus (magnitude) represent?
Why do we sometimes convert from the RGB version to a complex-valued version?
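If I have understood the question correctly, the most common reason to move to a complex-valued representation is the Fourier transform: the modulus (magnitude) says how much of each spatial frequency is present, and the angle (phase) says where those frequency components are located in the image. You asked about MATLAB, where the relevant calls are fft2, abs and angle; the same idea in Python/NumPy, for comparison:

import numpy as np

def grayscale_spectrum(rgb):
    # Simple luminance approximation, then the complex-valued frequency image
    gray = rgb[..., :3].mean(axis=-1)
    F = np.fft.fftshift(np.fft.fft2(gray))
    return np.abs(F), np.angle(F)   # modulus (magnitude) and phase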
Looking for an up-to-date text. GPU Gems comes to mind.
The method should be aware of the geometric properties of the mesh (for example: edges, normals, curvature, ...); that is, it should perform dense or sparse sampling depending on the region being sampled. I would appreciate any suggestions.
Thanks in advance.
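One straightforward way to get that behaviour is importance sampling of the surface with a density proportional to a local curvature estimate, so flat regions receive sparse samples and edges or high-curvature regions receive dense ones. A rough Python sketch based on vertex sampling (vertex_curvature is a placeholder for whatever per-vertex curvature estimate you prefer, e.g. from libigl or trimesh):

import numpy as np

def curvature_weighted_sample(vertices, vertex_curvature, n_samples, rng=None):
    # vertices:         (N, 3) vertex positions
    # vertex_curvature: (N,) per-vertex curvature estimate (e.g. mean curvature)
    rng = np.random.default_rng() if rng is None else rng
    weights = np.abs(vertex_curvature) + 1e-6   # small epsilon keeps flat areas sample-able
    p = weights / weights.sum()
    idx = rng.choice(len(vertices), size=n_samples, replace=False, p=p)
    return vertices[idx]

A blue-noise variant (e.g. Poisson-disk sampling with a curvature-adaptive radius) gives better-distributed samples, but the weighting idea is the same.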
This is a computer graphics question. We need the mesh to dilate its concave parts while keeping its convex parts unchanged; the deformed mesh should enclose the original mesh, much like a convex hull does.
Thanks!
I'm measuring the angular movement of each finger joint in the human hand. I'm using three reflective markers across each finger joint to determine the joint angle: one marker is placed on top of the relevant joint, with one on either side of it. I'm currently computing the angle from the dot product of the recorded marker data. I noticed, however, that there is an option when exporting the movement CSV file to add kinematic values, but this does not produce angular measurements. Any suggestions?
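For reference, the dot-product angle only needs the three marker positions per frame, so if the exported kinematic values do not give you the joint angle directly, it is easy to recompute it from the raw CSV. A minimal Python sketch (the marker roles are placeholders for your own labels):

import numpy as np

def joint_angle(proximal, joint, distal):
    # Angle at `joint` between the two finger segments, in degrees
    u = proximal - joint
    v = distal - joint
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: one frame of three 3-D marker positions (right angle expected)
print(joint_angle(np.array([0.0, 1.0, 0.0]),
                  np.array([0.0, 0.0, 0.0]),
                  np.array([1.0, 0.0, 0.0])))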
We want to restore an image using additional vector constraints.
Rink-side advertising at major sports events is often not really there at the actual event, but is regionalized and added in the live TV mix of the event. This is standard practice, but does anyone know whether the same is done with the advertising on the players' suits? I.e., is the logo on a star football player's chest added in the TV mix, or is that just advertising sci-fi?
I have started a new project in which I want to include the option of searching for text strings within an SVG-formatted image on my web page. To my knowledge this is not possible in any raster image format, which is why I am using the XML-based SVG image format. This kind of feature is already available in Inkscape (software for drawing SVG images) and at www.wikipathways.com, where a user can search for a text string inside the image.
Can anybody tell me how I can include this feature in my web page? I would also like to highlight the particular matched string.
I'm looking for work done and discussions about how to define metrics that describe something like a "degree of realism" for computer-generated images. I think this can be seen from a physical perspective (pixel-by-pixel comparison between a photograph and the rendered scene) but also from a perceptual point of view (a human says "this looks realistic"). What do you think about this?
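On the physical/pixel side, the usual baseline is the full-reference image-quality metrics (MSE/PSNR, SSIM) computed between the rendering and a registered reference photograph of the same scene; the perceptual side is typically handled with observer studies or learned perceptual metrics such as LPIPS. A minimal PSNR/SSIM sketch in Python (assuming scikit-image >= 0.19 and two registered RGB images of equal size):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def realism_scores(photo, render):
    # Pixel-wise agreement between a photograph and a rendering of the same scene
    psnr = peak_signal_noise_ratio(photo, render)
    ssim = structural_similarity(photo, render, channel_axis=-1)
    return {"psnr_db": psnr, "ssim": ssim}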