Questions related to Rendering
I am conducting research in the area of heritage planning and conservation. A Heritage Impact Assessment (HIA) is required before any change or development in the built environment within the defined regulated area around a heritage site, to determine its impact on the site's heritage value. In India, the National Monument Authority (NMA) has now made it mandatory for any centrally protected monument. Visual Impact Assessment is a very important component of an HIA, used to assess any future impact on the overall landscape around the heritage site. To be precise, according to NMA guidelines, it is required to check the skyline with respect to the heritage site, any visual obstruction of views of the heritage site, shadows cast on the heritage site by the new development, and compliance with building design bye-laws.
The NMA guideline for HIA can be found here: https://www.nma.gov.in/documents/20126/51838/HIA+Report.pdf
From the available examples of HIA reports, I understand that experts use 3D software, first to model the existing structures and then to add the proposed structure, generating views in the form of images/renders to visualise the projected development. Sometimes it is done by only drawing a section and marking the human eye angle. I am not sure how they validate these views; from the images/renders alone, one cannot say definitively whether they are accurate. I am also unsure about how the view/camera points are selected.
I have not been able to find any study on the assessment of the overall visual quality of the surrounding area due to new changes.
It would be great if you know of any study or documents, or could shed some light on this.
Speaking of TRICK,
I recommend reading the following attached paper
"Euler's Trick and Second 2-Descent", which demonstrates how higher-degree Diophantine problems can be tackled with elementary techniques, such as those discovered by Leonhard Euler:
The method is based on an idea of Euler and seems to be related to unpublished work of Mordell.
In my two proofs of Fermat's Last Theorem I have done nothing but follow Euler's ideas and tricks.
The infinite descent is clearly a technique that renders Krasner's TRICK ineffective !!!
Enjoy the reading.
Andrea Ossicini, AL HASIB
Dear scholars, I hope you are all doing well.
One of the reviewers commented on my article.
The presence of structural breaks may render unreliable findings retrieved from the ARDL test. The authors are suggested to test for the possible structural breaks and augment it into ARDL model.
How can I respond to them?
In the presence of structural breaks, is there any alternative econometric technique that gives unbiased estimates other than ARDL?
If yes, please name it, along with the software package used to run it.
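As a concrete starting point, a single known break date can be checked with a Chow test; the sketch below implements it with NumPy/SciPy on synthetic data (the series, break point, and break size are illustrative assumptions, not part of the original question). Procedures such as Bai-Perron handle unknown and multiple break dates.

```python
import numpy as np
from scipy import stats

def chow_test(y, X, split):
    """Chow test for a structural break at observation `split`:
    F = ((SSR_pooled - SSR_1 - SSR_2)/k) / ((SSR_1 + SSR_2)/(n - 2k))."""
    def ssr(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    n, k = X.shape
    s_p = ssr(y, X)
    s_1 = ssr(y[:split], X[:split])
    s_2 = ssr(y[split:], X[split:])
    f = ((s_p - s_1 - s_2) / k) / ((s_1 + s_2) / (n - 2 * k))
    return f, 1 - stats.f.cdf(f, k, n - 2 * k)

# Synthetic series with an obvious intercept break at t = 50 (illustrative).
rng = np.random.default_rng(0)
t = np.arange(100.0)
X = np.column_stack([np.ones(100), t])
y = np.where(t < 50, 1.0, 10.0) + 0.1 * t + rng.normal(0, 0.5, 100)
f_stat, p_value = chow_test(y, X, split=50)
```

A significant F statistic here would support the reviewer's concern; break dummies (or a regime-switching specification) can then be added to the ARDL model.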
Rendering CGVIEW image...
java -Xmx1500m -jar cgview\cgview.jar -f jpg -i C:\Users\hhh\Desktop\output\scratch\sequence.fasta.xml -o C:\Users\hhh\Desktop\output\sequence.fasta.jpg
Error occurred during initialization of VM
Could not reserve enough space for 1536000KB object heap
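For what it's worth, this JVM error usually means a 32-bit Java cannot reserve the requested 1.5 GB of contiguous heap. The usual workaround is to lower -Xmx or switch to a 64-bit JVM; a sketch of the lowered-heap invocation, wrapped in Python purely for illustration (paths are taken from the command above):

```python
import subprocess

# The original CGView command with the heap request lowered from 1500m to
# 1000m. A 32-bit JVM on Windows often cannot reserve ~1.5 GB of contiguous
# memory; either lower -Xmx or run a 64-bit JVM.
cmd = [
    "java", "-Xmx1000m", "-jar", r"cgview\cgview.jar",
    "-f", "jpg",
    "-i", r"C:\Users\hhh\Desktop\output\scratch\sequence.fasta.xml",
    "-o", r"C:\Users\hhh\Desktop\output\sequence.fasta.jpg",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run CGView
```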
I want the DEM rendering effect shown in the picture below, but the hillshade I made in ArcGIS does not look like it. If you have a detailed production process and parameter settings, please let me know. Thank you very much.
Google's Bilateral Guided Upsampling (2016) proposed an idea that upsampling could be guided by a reference image to maintain high-frequency information like edges. This reminds me of G-buffers in real-time rendering.
Do state-of-the-art super-resolution algorithms in real-time rendering, such as Nvidia's DLSS and AMD's FSR, use a similar idea? How do they exploit G-buffers to aid the upsampling?
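As a sketch of the underlying idea (not of DLSS or FSR themselves, whose internals are largely proprietary), joint bilateral upsampling weights low-resolution samples both by spatial distance and by similarity in a high-resolution guide, such as a G-buffer depth channel; all sizes and sigmas below are illustrative:

```python
import numpy as np

def joint_bilateral_upsample(low, guide, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample `low` (H/scale x W/scale) to the resolution of `guide` (H x W),
    weighting low-res neighbours by spatial distance and by similarity in the
    high-res guide (e.g. a G-buffer depth channel)."""
    H, W = guide.shape
    h, w = low.shape
    # Guide sampled at the low-res pixel centres
    guide_lo = guide[scale // 2::scale, scale // 2::scale]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale
            y0, y1 = max(cy - 1, 0), min(cy + 2, h)
            x0, x1 = max(cx - 1, 0), min(cx + 2, w)
            acc = wsum = 0.0
            for j in range(y0, y1):
                for i in range(x0, x1):
                    ds = ((j * scale + scale / 2 - y) ** 2 +
                          (i * scale + scale / 2 - x) ** 2) / (scale * sigma_s) ** 2
                    dr = (guide[y, x] - guide_lo[j, i]) ** 2 / sigma_r ** 2
                    wgt = np.exp(-0.5 * (ds + dr))
                    acc += wgt * low[j, i]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out

# Toy example: a low-res step image upsampled with a sharp high-res guide.
guide = np.zeros((8, 8)); guide[:, 4:] = 1.0
low = np.zeros((4, 4)); low[:, 2:] = 1.0
up = joint_bilateral_upsample(low, guide, scale=2)
# The edge in `up` stays as sharp as in the guide, unlike plain bilinear.
```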
I would like to meet researchers and developers who are currently working, or have previously worked, on foveated (aka gaze-contingent) rendering. Although foveated rendering could be the next big thing in the rendering community, an active community is still missing in this domain. Perhaps we can jointly build a developer community on Reddit/Slack/Discord.
We, as authors, are expected to follow certain ethical codes laid down by journals. For instance, authors can not submit the same article in more than one journal.
On the other hand, there are hardly any ethics for journals and editors. Journals rarely make the first decision within the 'average' first-decision period mentioned in their guidelines. Similarly, some manuscripts remain under review for more than a year, and journals sometimes reject an article after keeping it under review for that long. By the time such a decision is made, the article has already lost its relevance.
I want to stress that a line of ethics should be drawn for journals and editors as well.
1. There must be a maximum time limit for making the first decision and also for review. Two weeks are enough for the first decision; the editor must go through the article and make the first decision in this period (if an article has some worth, send it to review; else desk-reject it). I understand that obtaining peer reviews is a time-consuming process, but there must still be an upper limit.
2. I have experienced that revisions are often sent to new reviewers, who suggest additional changes and sometimes even recommend rejection. Revisions should be sent to the original reviewers, and if the original reviewers are not available, the editor must make the decision on the basis of the revisions recommended by the original reviewers and the changes made by the authors.
There may be other points that fellow ResearchGate members may highlight.
In my opinion, as long as journals do not follow such ethics, I see no harm in sending the same manuscript for consideration to multiple journals. A very delayed rejection renders the manuscript useless. Why should authors be hard done by? Journals and editors have ethics to follow too.
We are trying to run experiments with the Biolog EcoPlates to characterize microbial communities from nose swabs and other body sites. After incubation (both with and without shaking), we notice color formation in some of the wells; however, the color is concentrated on the side of the well, rendering the measurement inaccurate. We have also noticed, before incubation, that the substrate seems to adhere to the side of the wells, and pipetting does not help. Biolog support did not notice anything strange with the particular batch of plates we are using, so I wonder whether I am missing some basic technique. Has anyone had similar issues?
Could anyone help me with 3D analysis in the software Icy? I cannot render all the cells in my two-channel z-stack: only part of the blue channel is shown, and the green channel, which contains the cells of interest, cannot be fully rendered from the TIFF z-stack. Could anyone guide me on how to perform a full 3D render and carry out the analysis?
How does one properly develop an integral like the Abel-Plana integral shown in this image?
I would like a set of steps for attacking the problem of developing the integral, and a criterion of convergence for any complex value s; that is, I want to know when the integral has some specific behaviour at, for example, s = 1/2 + it, which is where I am interested in studying it.
I am interested in the proper evaluation of that integral using only formal steps within complex analysis.
The Abel-Plana formula also appears at https://en.wikipedia.org/wiki/Abel%E2%80%93Plana_formula
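For reference, the Abel-Plana formula (for f analytic on Re z ≥ 0 with suitable decay at infinity) reads:

```latex
\sum_{n=0}^{\infty} f(n)
  = \frac{f(0)}{2}
  + \int_{0}^{\infty} f(x)\,dx
  + i \int_{0}^{\infty} \frac{f(it) - f(-it)}{e^{2\pi t} - 1}\,dt
```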
Adequate and effective social infrastructure is essential for the economic growth of a country. According to Sullivan:
“Social infrastructure refers to those factors which render the human resources of a nation suitable for productive work.”
A developing country is drastically different in terms of how its labour laws are regulated, how its citizens are educated, and how their health is managed. What are the unique ways to create, develop, and sustain social infrastructure in a country (particularly a developing one)?
I am currently analyzing data for the validation of a Greek translation of the Anxiety Sensitivity Index-3. I have a sample of around 200. Can I transform my variables (e.g., log or Z-scores) to approach normality, or does this render the convergent and discriminant validity evidence meaningless?
Nowadays, more and more foveated rendering is based on eye-tracker-directed gaze points, known as dynamic foveated rendering. Although fixed foveated rendering no longer attracts much scholarly attention, I need some previous work to see how methods for display-centred fixed foveated rendering worked.
I could not find enough research papers on Google Scholar; I would appreciate it if you could suggest some works on fixed foveated rendering.
I have created an NDVI from a Sentinel-2A image, and it renders correctly using the plot() command in R. However, when I export it with the writeRaster() command, it is saved as a black-and-white image. Why?
Note that this black-and-white image, if I load it with the raster() function, gets rendered accurately in RStudio.
Can anyone help me out?
Thanks in advance.
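A likely explanation: plot() applies a colour ramp on the fly, while writeRaster() stores the raw single-band NDVI values, which image viewers display as greyscale. To bake colours into the exported file, map the values to a 3-band RGB image first; a minimal sketch in Python for illustration (the array and the simple red-to-green ramp are assumptions; in R the analogue is building an RGB brick, or writing a colour table):

```python
import numpy as np

# Illustrative NDVI array in [-1, 1]; in practice read from the GeoTIFF.
ndvi = np.linspace(-1, 1, 64).reshape(8, 8)

# Normalise to [0, 1] and map through a colour ramp, as plot() does on the
# fly. Here a toy red-to-green ramp stands in for a proper colour table.
t = (ndvi + 1) / 2
rgb = np.stack([1 - t, t, np.zeros_like(t)], axis=-1)  # red -> green
rgb8 = (rgb * 255).astype(np.uint8)                    # H x W x 3 byte image
# e.g. matplotlib.pyplot.imsave("ndvi_rgb.png", rgb8) would write a
# coloured PNG instead of a greyscale single-band raster.
```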
Over the past decades, ecosystem and climate change modeling have made great advances. However, as indicated by the uncomfortably large deviations between predicted climate change and today's observed effects, there is obviously a lot of room for improvement. Today it is accepted that global warming, and a series of derived effects, are occurring at much faster rates than predicted even with the most sophisticated models just 10 years ago. Similar discrepancies exist for ecosystem models which try to predict production or the development of pest and disease populations. In the end, the complexities of non-linear behavior and multiple synergistic effects may render such complex systems impossible to model within the limits of acceptable accuracy. If models only hold under extensive lists of unrealistic assumptions (such as linear and additive effects vs non-linear synergistic effects), then their value for deriving practical recommendations must be questioned. So: what are the limits to (meaningful) modeling? I would warmly welcome pointers towards a readable account of this issue.
I have a licensed copy of PyMol (ver 2.3.1) and I am looking to 'replicate' a protein/bilayer i.e., in VMD you select "Periodic" and select from a number of x, y and z options. This, in turn, copies the unit cell along that vector.
Is there a feature similar to this in PyMol? I'm struggling to find one.
I am trying to build an ephys rig with multiple amplifiers and use the WinWCP Whole Cell Electrophysiology Analysis Program to acquire the signals. In order to do so, I was planning to use a National Instruments PCIe-6353 analog to digital converter rendering 16 channels. I have previously tried the PCIe-6351 board with this software and it worked perfectly but I was wondering if there could be any compatibility issues with the PCIe-6353. Any suggestions will be greatly welcomed!!!
We were recently informed that the STEMdiff™ Astrocyte maturation and differentiation kits are discontinued.
We have multiple projects running in parallel relying on these kits.
Are there any similar kits out there?
I am worried that switching will completely skew the results and will render our previous data useless for publication as different kits have different components.
What is commonly done to render the enzyme inactive, e.g., a mutation in the ATP-binding domain, etc.?
I am fascinated by this video made by James Stains from USC.
https://www.youtube.com/watch?v=Eql5c4m_N68 It is a brain tractography from diffusion data, but he is able to show the tracts stemming, which is impossible with all the software I know (TrackVis, MRtrix, DSI Studio...). It is clearly an animation made with a modeling tool like Blender or 3D Studio.
What do you think the steps are?
Do you generate VTK files and then open them in Blender? Or something else?
I want to know whether mesh data structures like half-edge, winged-edge, and octree are only suitable for rendering, or whether they can also be used for analysis.
Actually, for every text to be translated there is a kind of translation that is most suitable for it. For example, scientific texts should be translated semantically, because accuracy of meaning is the top priority in such cases. Literary texts, on the other hand, can be handled communicatively; in other words, meaning is not the sole priority but goes hand in hand with the form, which is very important too.
It could be added that political discourse is characterized by a great deal of play on words' meanings. In other words, politicians in most situations try to use certain words and expressions with opposite meanings. Such being the case, the pragmatic approach should be relied on when it comes to rendering political discourse.
I am interested in rendering 2 volume files on one brain, similar to the way you can add a volume file in BrainNet. However, BrainNet doesn't allow you to input 2 volume files. Do you know of any other software or toolbox (preferably Matlab based) that allows one to input 2 volume files?
I have MRIcron, but the resulting images are not as nice as BrainNet's. Are there any imaging resources that can take 2 volume files on a "pretty" brain?
In the 21st century, all kinds of skills, simple or sophisticated, need to be continuously updated and developed, or else they will become obsolete, and that may render many people jobless, discouraged, and poor.
I am conducting a study to explore the quality of mental health care services rendered by public health facilities, with the aim of developing a progress monitoring tool to improve the services offered.
I have a Phantom Omni haptic device and want to render a deformable shape. To do this, I define a plane of vertices; when the cursor collides with the shape, the vertices near the collision point are moved down, so the shape more or less deforms. The problem is that this method is too slow, and the force-feedback and rendering frequencies drop. What method would you suggest to solve this?
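One common speedup is to replace the per-vertex loop with a single vectorized update using a smooth falloff around the contact point; a minimal sketch (NumPy stands in for whatever math library the haptic loop uses; the grid size, Gaussian falloff, and depth are illustrative assumptions):

```python
import numpy as np

# Illustrative vertex grid for a deformable plane (N x N vertices, z = height).
N = 64
xs, ys = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
z = np.zeros((N, N))

def deform(z, contact, depth, radius=0.1):
    """Push vertices near `contact` down with a Gaussian falloff, updating
    the whole grid in one vectorized step instead of a per-vertex loop."""
    d2 = (xs - contact[0]) ** 2 + (ys - contact[1]) ** 2
    return z - depth * np.exp(-d2 / (2 * radius ** 2))

z = deform(z, contact=(0.5, 0.5), depth=0.02)
```

Precomputing the falloff kernel once (and only touching vertices within the contact radius) reduces the per-frame cost further, which helps keep the 1 kHz haptic loop and the visual rendering loop decoupled.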
Given the longer and longer wait for editors to give a first decision on a paper, I would like to ask you:
As a reviewer, how long do you usually take to examine a paper? And what is your rate of accepting or declining invitations to review (not the rate at which you recommend acceptance or rejection)? Thank you.
I need detailed information about ACQUIRE Algorithm used for graphical target detection, which is a solution for the line of sight problem.
Here is the basic algorithm.
FBA (Framebuffer Based ACQUIRE) Algorithm
1. Move camera to agent's eye point
2. Render frame
3. Set target agent to false color (e.g. pure red)
4. Render frame again
5. Segment natural color frame into the figure and its surrounding pixels, the ground
6. Compute figure and ground brightness values
7. Compute detection probability via ACQUIRE
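Steps 3-6 above can be sketched as follows (a toy sketch: the frames, the luminance proxy, and the exact false-colour value are assumptions; step 7, the ACQUIRE probability model itself, is not reproduced here):

```python
import numpy as np

def fba_brightness(natural, false_color):
    """Steps 5-6 of FBA: use the false-colour frame (target painted pure red)
    as a mask to split the natural frame into figure and ground, then return
    the mean brightness of each. Frames are H x W x 3 uint8 arrays."""
    mask = (false_color[..., 0] == 255) & (false_color[..., 1] == 0) \
         & (false_color[..., 2] == 0)
    gray = natural.mean(axis=-1)          # simple luminance proxy
    figure = gray[mask].mean()
    ground = gray[~mask].mean()
    return figure, ground

# Toy 4x4 frames: the target occupies the top-left 2x2 block (illustrative).
natural = np.full((4, 4, 3), 50, np.uint8)
natural[:2, :2] = 200
false_color = np.zeros((4, 4, 3), np.uint8)
false_color[:2, :2] = (255, 0, 0)
fig, gnd = fba_brightness(natural, false_color)
# Step 7 would feed the figure/ground contrast into the ACQUIRE model.
```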
For data that are not normally distributed, I used the Kruskal-Wallis test to investigate statistical significance between different variables. I performed an exercise with three different intensities (weights w1, w2, w3), then with one weight at three different speeds (s1, s2, s3). Readings were observed at three different points (p1, p2, p3).
I tested for statistical significance among p1, p2, p3 at each of w1, w2, w3 and at each of s1, s2, s3; then among w1, w2, w3 for each of p1, p2, p3; then among s1, s2, s3 for each of p1, p2, p3. So there are 36 independent Kruskal-Wallis tests.
After that, Mann-Whitney test was performed for pairs (post hoc analysis).
I received the following comment on the analysis: "the piecemeal statistical approach, consisting of a very large number of comparisons made between dependent variables during different conditions, without corrections for multiple testing, renders it probable that “significant” results may well be due to chance."
Can someone please suggest where I went wrong, and how and where to adjust the p-values?
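The usual remedy the reviewer is pointing at is a familywise correction, such as Holm-Bonferroni, applied across all the p-values in the family of tests; a self-contained sketch (the p-values below are made up for illustration, not your data):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down correction: sort p-values ascending and compare the
    i-th smallest against alpha / (m - i); once one fails, all larger fail."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

# Illustrative p-values from several tests; only clearly small ones survive.
p = [0.001, 0.04, 0.03, 0.0002, 0.2]
print(holm_bonferroni(p))
```

statsmodels' multipletests(pvals, method="holm") implements the same procedure, and Holm is uniformly more powerful than plain Bonferroni while still controlling the familywise error rate.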
I am currently working on a research proposal about ancient auralizations and I am surprised to find very little actual audio renderings. Whilst I am sure many researchers have them, it seems that it very rarely gets published onto websites and other research platforms. Can anybody point me to either their private websites/collections or public online platforms where I might find this data? Thank you in advance.
I am currently analyzing the following panel data set with STATA on a daily base:
- N = 324 companies
- T = 252 trading days
- 6 social media variables from Facebook and Twitter data (e.g., answer times, number of posts and replies)
- 2 financial performance variables from CRSP data (abnormal return, idiosyncratic risk)
I tried both fixed effects estimation (xtreg, fe) and panel vector autoregression (pvar), but neither approach yields satisfying results. I also tried the Arellano-Bond approach (xtabond) but was not quite sure about the endogenous and predetermined regressors; varying these yielded very few significant results. I also varied the operationalization of the social media and financial variables and looked at sub-samples (e.g., single industries, particular time frames, Facebook vs. Twitter sample) etc.
Apparently, for large-T panels, the bias present in fixed effects estimation - the rationale for dynamic panel analysis - declines with T and eventually becomes negligible, thus rendering the fixed effects estimator consistent. Compared with other studies, I would assume that my T is rather large, so fixed effects estimation might be more sensible than a panel vector autoregression.
Any thoughts on this topic?
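For intuition, the within (demeaning) transformation behind xtreg, fe can be illustrated on synthetic data: with T large, the firm effects are swept out and the estimate converges to the true slope (all numbers below are simulated assumptions, not the actual data set):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 252                      # companies x trading days (illustrative)
alpha = rng.normal(0, 1, N)         # unobserved firm fixed effects
x = rng.normal(0, 1, (N, T))
y = 2.0 * x + alpha[:, None] + rng.normal(0, 1, (N, T))

# Within transformation: demean each firm's series, killing the fixed effect.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()
print(beta_fe)   # close to the true slope of 2.0
```

With a lagged dependent variable the within estimator carries the Nickell bias of order 1/T, which is why it shrinks as T grows; at T = 252 it is typically small.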
I am planning to measure pressure and temperature variations in and around a metal vapour deposition chamber. The substrate is a steel strip, and I wanted to experimentally measure pressure and temperature variations, as you introduce the metal vapour in the chamber. I would aim to put these gauges in different locations of the vacuum chamber. I have been advised that because of the metallic vapour being introduced in the system, the gauge gets contaminated by the vapour and renders the gauges useless. Does anyone have any suggestions of pressure gauges that may work?
This is being undertaken as a form of validation for a computational simulation of a similar scenario.
Using 3D meshes, how can one automatically render 2D multi-view data from different viewpoints while preserving the texture in the rendered 2D images?
Any hints will be highly appreciated.
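One sketch of the camera side of this (the rendering call itself is left to a library such as pyrender or trimesh, whose availability here is an assumption): generate look-at poses on a circle around the mesh, then render each pose offscreen with the textured material.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world pose matrix for a camera at `eye` looking at `target`
    (OpenGL convention: camera looks down its local -z axis)."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, true_up, -fwd
    pose[:3, 3] = eye
    return pose

# Cameras on a circle around the object; each pose would be passed to an
# offscreen renderer (e.g. pyrender.OffscreenRenderer) to get a textured view.
poses = [look_at(np.array([2 * np.cos(a), 2 * np.sin(a), 1.0]))
         for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
```

Because the texture lives on the mesh's material, any standard rasterizing renderer preserves it in every view; only the camera pose changes between renders.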
Is it possible for yeast to express a KanR bacterial selectable marker from a plasmid? I am trying to knock out a gene in S. cerevisiae using a geneticin/G418 resistance cassette. The deletion renders the cells very sick so I first transformed them with a backup URA plasmid. The backup plasmid has a KanR cassette for selection in bacteria. However, I am getting a lawn of growth on my G418 plates after the knockout transformation (yeast + backup plasmid + KO cassette). Is it possible that my yeast are expressing the KanR cassette from the bacterial promoter somehow (maybe some kind of crossover with my KO cassette)?
I'm a media historian and theorist without extensive technical expertise, looking to better understand how digital photogrammetry functions to support current work in VR (for example in Google Earth VR). I can find sources about how image data is captured. I'd like to find more information about how the data is then dynamically used by the VR engine to render an interactive/navigable environment that remains immersive across multiple scales. I'd appreciate any recommendations, advice, or possible conversations. Thank you! -Brooke
I'm looking for a good 3D render which includes the cerebellum to display clusters (NIFTI files). Any recommendation?
Thank you for your help,
Local government in South Africa is entrenched in the Constitution as a sphere of government, as opposed to a tier. However, the financial and leadership-related challenges encountered by municipalities render them ineffective in the delivery of basic services to residents.
The inability to render services necessitates administrative and financial intervention by National Government at a wide scale, and not merely as an exception for a few municipalities.
In this context, can municipalities be considered a sphere of government with executive authority, and how can they ensure that services are rendered in an inclusive and sustainable manner that promotes the human rights of local residents?
I am interested in calculating the change in environmental impacts when switching from open dumping of slaughterhouse waste to a rendering process.
I'm looking for a nitrate test that can reliably and accurately quantify nitrate in media solutions. It is also important that it is robust and does not suffer interference from other media components, especially chloride, as the chloride concentration changes significantly during the experiment due to pH regulation, which renders my current test kit suboptimal!
If anyone has experience with nitrate tests during algae cultivation and can recommend a method, I would be grateful!
Professor Reuven Tsur proposes a cognitive-fossils approach in his 2017 book. It offers convincing explanations of some poetic conventions for which the influence-hunting approach fails to render a satisfactory account. That's amazing! But does this approach have the power to help readers realize their unique meaning construction while reading literary works?
I am about to launch some immunofluorescent staining experiments on cells that were cultured on transwell membranes, and I have seen different protocols for the staining:
1. membranes are released from the transwell apparatus before fixation and antibody incubation;
2. fixation and antibody incubation protocols are done in intact transwell systems, and membrane is released just before mounting.
Which protocol renders better results in your experience? The second one seems easier to perform, with a lower rate of cell loss during staining.
Thanks in advance.
A Digitally Reconstructed Radiograph (DRR) is created from a computed tomography (CT) data set. This image contains the same treatment-plan information, but the patient image is reconstructed from the CT data using a physics model.
There is a lot of hue and cry that rice-wheat cultivation in North India over the last 60 years has already depleted rechargeable groundwater. Since canal water is not ample, and is decreasing due to lower rainfall over the last couple of years, can we imagine this practice will render the land barren?
I am pursuing my PhD and preparing material for a white-light source. I am getting the PL results required for white light, but I am not able to calculate the CRI (Color Rendering Index).
I just use the standard SPSS function to render cluster analysis results and think/hope there are better ways to graph them.
With LEDs having high CRI values like 90+, is it possible for the human eye to detect the change?
What are the criteria for this change?
What are the variations of the CRI vs other lighting parameters?
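For the CRI calculation itself: the standard CIE method renders eight test colour samples under the test source and a reference illuminant, computes colour differences dE_i in the 1964 U*V*W* space, and then applies simple formulas. A sketch of that final step (the dE values below are made up for illustration; the full chain - CCT, chromatic adaptation, U*V*W* conversion - is easiest with a library such as colour-science):

```python
def cri_general_index(delta_e):
    """Special indices R_i = 100 - 4.6 * dE_i for the 8 CIE test colour
    samples; the general CRI (Ra) is their arithmetic mean."""
    r_i = [100 - 4.6 * de for de in delta_e]
    return sum(r_i) / len(r_i)

# Illustrative U*V*W* colour differences for TCS01-TCS08 (not measured data).
delta_e = [1.2, 2.5, 1.8, 3.0, 2.2, 1.5, 2.8, 2.0]
ra = cri_general_index(delta_e)
print(round(ra, 1))
```

So from a measured PL spectrum the work is entirely in getting the dE_i values; once those are in hand, Ra is a one-line average.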
I want to use cell lysate for a geranylgeranylation assay, with the endogenous GGTase I enzyme as the enzyme source, with added dansyl peptide and GGPP.
I've tried sonication alone in just 100 mM HEPES, pH 7.5, and 150 mM salt, with phosphatase and protease inhibitors added.
I've also tried lysis in RIPA buffer, again with phosphatase and protease inhibitors added.
The total protein in my assay is around 2 mg/mL. Could it be that my enzyme level is too low to detect activity, or is my method of lysis rendering my protein inactive?
I need to render a few ITO glass slides hydrophobic, and I was wondering what the best way to proceed is. I have done some research online, and most sources suggest silanization, but I am worried it might affect the transparency of the slide, and most of them didn't give a precise protocol. Would anyone know where I can find a protocol to make ITO hydrophobic? Thank you in advance!
I am rendering a panoramic image using an omni-directional stereo system, e.g. Facebook Surround 360, with optical flow.
When I stack the optical flow fields horizontally and visualize them with, e.g., the Middlebury colour coding or normalized vertical disparity, it is clearly visible that the image is synthesized from 14 vertical stripes - 14 because that is the number of cameras arranged equatorially.
Should the visualization of the optical flow field look like a consistent image, or is it correct that it looks like the data I have uploaded?
I am planning to use a cryopump in a physical vapor deposition system to evaporate metals with high vapor pressure. I am particularly concerned with the metal vapor finding its way to the active adsorption surfaces in the cryopump and rendering it ineffective over time.
I would guess some of these concerns could be addressed by proper design, so that the vacuum port is away from the line of sight of the evaporator. Operationally, I was considering getting the system to its baseline vacuum and shutting it off completely from the vacuum system. The hope is that, since the system would be leak-tight, I can perform the evaporation very quickly before the vacuum deteriorates. Thus the cryopump would not be exposed to the metal vapor.
This is purely a physical (thermal) vapor deposition and none of the reactants are gases.
Does anyone have ideas to almost eliminate this potential problem?
If the rendering speed is fast, the object likely uses fewer polygons and simpler parameters, which yields lower visual detail. Conversely, an object with more polygons and higher detail results in a slower rendering speed.
I am trying to reduce GPU load by introducing calculations similar to what is done for pixel-repeat mode in display modes (1440x576p).
Has this already been achieved somewhere, so that I can look into it?
This would help render low-level graphics implementations efficiently.
It would be helpful to know what calculations go into a pixel-repeat (display) mode, or at least whether it looks like a viable solution.
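Conceptually, pixel-repeat mode renders each scanline at half horizontal resolution and duplicates every pixel on scan-out, roughly halving shading work and framebuffer bandwidth; a toy sketch (sizes taken from the 1440x576p mode mentioned above, pixel data random):

```python
import numpy as np

# Half-resolution render target for a 1440x576 display mode: shade only 720
# columns per line, then repeat each pixel once on scan-out. The display
# still receives 1440 columns per line.
h, w_full = 576, 1440
rng = np.random.default_rng(0)
low = rng.integers(0, 256, (h, w_full // 2, 3), dtype=np.uint8)
scanout = np.repeat(low, 2, axis=1)      # pixel repeat along each scanline
print(scanout.shape)                     # (576, 1440, 3)
```

In hardware the repeat happens in the display controller for free, so the saving is purely on the rendering side; the cost is halved horizontal detail.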
First, by fractals I mean fractals produced by iterating a formula on complex coordinates, as when making the Mandelbrot fractal.
I have always wondered how to guess what a formula will look like as it is iterated, or how we can recover the fractal formula from an image.
It is unlikely that I will find an exact method, but how can we get closer to guessing them?
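For the Mandelbrot case specifically, the formula is the iteration z -> z^2 + c over the complex plane, and the standard way to "see" what a formula will look like is escape-time colouring; a minimal sketch:

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return how many steps until |z| > 2,
    or max_iter if the orbit stays bounded (point taken to be in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# c = 0 stays bounded forever; c = 1 escapes almost immediately.
print(escape_time(0j), escape_time(1 + 0j))
```

Sweeping c over a grid and colouring by escape time reveals the fractal's shape; going the other way, from an image back to a formula, is much harder and is usually attacked by fitting candidate iteration families to the observed boundary.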
We have been facing this problem for a couple of years. We have changed the DPT mountant, but the problem persists. The slides we prepared in the past are still good even after a decade and a half. We follow the same paraffin-embedding technique now, but the slides do not last long (a few months).
I'm looking for data exchange formats that can be used to import lens descriptions into my raytracing application. E.g., for light sources there are IES or EULUMDAT; for 3D models there are STL, STEP, and many more. Is there something similar for optical lenses or lens systems?
I have gypsum samples which I suspect have received heat treatment. Hence, the rate constants reported in the literature at room temperature may not apply to my samples. I need to come up with a more accurate value for my particular system.
For a 3D data rendering, I used the VMD modeling software to visualize the POPC lipid membrane and water molecules, which are covering the surfaces of two lipid leaflets.
In computer graphics and computer vision, depth profiles are oftentimes viewed as either plain depth maps (i.e. an image where the Intensity is proportional to the depth value) or as "shaded" and/or "rendered" images of that depth profile.
The question is now, what exactly do the terms "shaded" and "rendered" mean? And where are they different from one another?
My opinion is:
Shading -> only Lambertian / diffuse reflectance behavior, i.e. I~n*l
Rendering -> more complex reflection effects, like specularities, interreflections, subsurface scattering, etc.
That would mean that "shaded" surfaces are a subset/special case of "rendered" surfaces.
Do you agree/disagree?
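For what it's worth, the Lambertian "shaded" case in that sense fits in a few lines: normals from the depth gradient, then I ~ max(n·l, 0). A minimal sketch (unit pixel spacing and a directional light are assumptions):

```python
import numpy as np

def lambert_shade(depth, light=np.array([0.0, 0.0, 1.0])):
    """Shade a depth map with pure Lambertian reflectance, I ~ max(n . l, 0).
    Normals come from the depth gradient (unit pixel spacing assumed)."""
    gy, gx = np.gradient(depth)
    n = np.dstack([-gx, -gy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = light / np.linalg.norm(light)
    return np.clip(n @ l, 0.0, 1.0)

# A tilted plane shades uniformly: constant normal, constant intensity.
depth = np.fromfunction(lambda y, x: 0.5 * x, (8, 8))
img = lambert_shade(depth)
```

Anything beyond this single dot product - specular lobes, interreflections, subsurface scattering - falls into the broader "rendered" category under the proposed terminology.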
In Monte-Carlo based rendering algorithms, the final image is composed of the average value of all samples taken through each pixel. In the "classic" formulation of most algorithms (Path Tracing, Photon Mapping, MLT, etc.) I have always seen a fixed number of rays sampled through each pixel. However, I have the impression that clever strategies can be devised to sample a different number of rays per pixel, such that more "troublesome" pixels (which take longer to converge to the true average color) receive more samples. My first idea would be to shoot a number of rays proportional to the estimated pixel variance. Is anyone working on this subject who can point me to a state-of-the-art survey of such techniques?
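A toy version of the variance-proportional idea: a two-pass estimator that spends a fixed pilot batch everywhere, then allocates the remaining ray budget in proportion to each pixel's sample variance (the "scene" below is a stand-in random model, not a real renderer):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pixel(i, n):
    """Stand-in for tracing n rays through pixel i: noisy radiance samples,
    with pixel 0 deliberately far noisier than the rest."""
    sigma = 5.0 if i == 0 else 0.5
    return rng.normal(float(i), sigma, n)

pixels = range(4)
pilot = {i: sample_pixel(i, 32) for i in pixels}     # pass 1: fixed pilot batch
var = {i: s.var(ddof=1) for i, s in pilot.items()}
budget = 400                                          # extra rays to distribute
extra = {i: int(budget * var[i] / sum(var.values())) for i in pixels}
final = {i: np.concatenate([pilot[i], sample_pixel(i, max(extra[i], 1))]).mean()
         for i in pixels}
# The noisy pixel soaks up nearly the entire extra budget.
```

Production renderers refine this with confidence intervals or perceptual error metrics rather than raw variance, but the pilot-then-reallocate structure is the same.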