Rendering - Science topic
Explore the latest questions and answers in Rendering, and find Rendering experts.
Questions related to Rendering
I've been using LUHMES cells to differentiate into dopaminergic neurons for over 6 months now, but I always have issues when it comes to plating for differentiation. It's always an on/off kind of situation: sometimes they adhere and then begin to differentiate (ideal); other times they don't adhere at all, or they adhere but maintain a round shape rather than the typical LUHMES morphology, which renders them unable to differentiate (the scenario that happens more often). This is quite frustrating. I follow the recommended method for plating:
1- Begin culture in a T75 flask - Day 0
2- Begin differentiation - Day 1
3- Lift cells, plate them on a 96/384-well plate (Day 2), and continue differentiation.
I hemi-feed them every other day once plated.
With 96-well plates, I tried different cell densities when plating (20, 25, 30, 35k/well). When they were able to differentiate, I noticed that at anything above 35k they wouldn't adhere, or they would cluster quite a lot, which is not ideal as I'm using them for imaging.
Has anyone ever had these recurring issues with their LUHMES cells and could shed some light on my problem? I'm running out of ideas about what could possibly be happening. Cells are fine and happy while in T75/T175 flasks. Could it be the plate? I use PerkinElmer PhenoPlates. Also, for imaging, what cell density does everyone go for when using 96/384-well plates? I have also plated the LUHMES on day 3, which worked before, but still had the same issue as with day 2 plating.
Thanks in advance!
Please help me understand the aims and scopes of these two different techniques, along with their likely computational costs. Thanks.
The primary challenge in analyzing the shadow blister arises either when the transverse distance between the two edges along the X-axis is large, or when the slit width reaches zero and the secondary barrier overlaps the primary barrier, rendering the "Fresnel integral" valid. In such scenarios, the phenomenon can also be interpreted using traditional ray theory.
As the transverse distance decreases to approximately a millimeter, the validity of the "Fresnel Integral" diminishes. Regrettably, this narrow transverse distance has often been overlooked.
This article explores various scenarios where the transverse distance is either large or less than a millimeter, and where the secondary barrier overlaps the primary barrier.
Notably, complexity arises when the transverse distance is very small. In such conditions, the Fourier transform is valid only if we consider a complex refractive index, indicating an inhomogeneous fractal space with a variable refractive index near the surface of the obstacles. This variable refractive index introduces a time delay in the temporal domain, resulting in a specific dispersion region underlying the diffraction phenomenon.
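For reference, the Fresnel integrals invoked above are conventionally defined as

$$C(u)=\int_0^u \cos\!\left(\frac{\pi t^2}{2}\right)dt,\qquad S(u)=\int_0^u \sin\!\left(\frac{\pi t^2}{2}\right)dt,$$

and the near-field diffraction amplitude behind an edge is expressed through them; the question raised above is precisely when this approximation remains valid.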
Refer to:
http://www.ej-physics.org/index.php/ejphysics/article/view/304
Hello,
I have master's level training in logic and meta-logic, with high marks, A's, but I am not a practicing logician at all. I am a neo-empiricist pluralist (epistemic and ontological) who does not think any single CTM-based model of the mind can be reduced to logic, exactly because of a categorical ambiguity that gets contingently invoked.
Neurons appear, and so must first be modelled, as objects with endogenous functions rendered over many internal sub-relations describing their emergent dynamics (behaviour). BUT, when we switch to building a bit model of human cognition in terms of neural patterns and inter-dynamics, especially when thinking of the brain as composed of neural bit maps made morphic to structures in reality "outside", we then need to categorize neurons in more purely exogenous and relational terms too.
I do not believe this is allowed (it does not lead to wffs) in any model built over a single, strictly reduced, monist classical logic, but I am not versed enough in these technical terms to be certain about the argumentative or procedural complexities involved, and I would welcome any logician's insights here (I am a "fan" of semi-classical model building, though!).
I am looking for a practiced logician, with expertise in Model Theory (building structures of interpretation) who might have a peripheral interest in philosophy of mind, but who is willing to consider non-classical approaches.
Quid pro quo!
If you want a primer on my overall concern with the limits of finalizing totalized logical models (TOE's), please give this a go:
Thanks, Brian
- Contrary to the claim of "DOI:10.1007/978-3-319-77167-0" (@John David Orme), human nature is not the cause of wars or conflicts. But I argue that ...
- Acknowledgment and Agreement: “I concur that political leaders who seek dominance or control may resort to military force and wars.”
- Addressing the Challenge: “In the face of politicians who consistently prioritize their desires and engage in conflicts, what strategies can humanity employ to mitigate these tendencies?”
- Overcoming Greed and Conflict: “How can we effectively reduce politicians’ greediness and prevent them from perpetuating wars and conflicts?”
- Proposed Solutions: “What alternative approaches or substitutes exist for such politicians, ensuring a more peaceful and equitable governance?”
Indeed, the root cause of human conflicts and wars is often intertwined with politics. Political decisions, power struggles, and ideological differences can escalate tensions and lead to armed conflicts. While other factors also play a role, understanding the political dynamics is crucial for preventing and resolving conflicts.
Human conflicts and wars have been a recurring theme throughout history, shaped by complex factors. Let’s delve into this topic:
- Definition of War:
- In the popular sense, war refers to a conflict between political groups involving hostilities of considerable duration and magnitude.
- Sociologists typically apply the term “war” only if it is initiated and conducted following socially recognized forms.
- Military writers often confine the term to hostilities where the contending groups are sufficiently equal in power to render the outcome uncertain for a time.
- Causes of War:
- Politics plays a central role in the genesis of conflicts and wars. Here are some key factors:
- Territorial Disputes: Political disagreements over land, resources, or boundaries often escalate into armed conflicts.
- Ideological Differences: Clashes between opposing political ideologies (e.g., democracy vs. authoritarianism) can lead to war.
- Power Struggles: Political leaders seeking dominance or control may resort to military force.
- Nationalism: Intense patriotism and national identity can fuel aggression.
- Economic Interests: Politics influences economic policies, trade, and resource allocation, which can trigger conflicts.
- Historical Grievances: Past political injustices and unresolved issues contribute to tensions.
- Alliances and Treaties: Political alliances can drag nations into wars.
- Leadership Decisions: Political leaders’ choices impact whether conflicts escalate or de-escalate.
- Theoretical Perspectives on War:
- Realism: Emphasizes power, self-interest, and the anarchic nature of international relations. Realists argue that war is inevitable due to the pursuit of national interests.
- Liberalism: Advocates for cooperation, institutions, and diplomacy. Liberals believe that democracies are less likely to go to war with each other.
- Constructivism: Focuses on ideas, norms, and identity. Constructivists argue that war is shaped by social constructs and perceptions.
- Critical Theory: Examines power structures, inequality, and historical context. Critical theorists critique war as a product of systemic flaws.
- Preventing and Resolving Conflicts:
- Diplomacy, negotiation, and dialogue are essential tools for conflict resolution.
- International organizations (e.g., the United Nations) play a role in promoting peace.
- Addressing root causes (poverty, inequality, and injustice) can reduce the likelihood of conflicts.
- Promoting education, tolerance, and understanding can foster peaceful coexistence.
In summary, while politics and politicians are central drivers of conflicts and wars, it is essential to recognize that multiple factors intersect to create complex situations. Understanding these dynamics and working toward peaceful solutions remain critical for a better world.
Indeed, what can we do when we are facing politicians who prioritize personal gain over peace and well-being? Here are some considerations:
- Active Citizenship and Accountability:
- Stay Informed: Educate yourself about political issues, policies, and the track records of politicians. Be aware of their actions and decisions.
- Vote Wisely: Participate in elections and vote for leaders who prioritize peace, diplomacy, and the welfare of their constituents.
- Hold Politicians Accountable: Engage in peaceful protests, write letters, and use social media to express your concerns. Demand transparency and accountability from elected officials.
- Promote Alternatives to War:
- Diplomacy: Encourage dialogue and negotiation as alternatives to military conflict. Diplomacy can prevent wars and resolve disputes.
- International Cooperation: Support organizations like the United Nations that work toward global peace and cooperation.
- Conflict Resolution: Advocate for conflict resolution mechanisms that address root causes and promote reconciliation.
- Promote Ethical Leadership:
- Character Matters: Look for leaders with integrity, empathy, and a commitment to the common good.
- Reject Fear-Based Politics: Beware of politicians who exploit fear and division for personal gain. Seek leaders who unite rather than divide.
- Limit Political Power Concentration:
- Checks and Balances: Support systems that prevent excessive concentration of power. Independent judiciaries, free press, and legislative oversight are crucial.
- Term Limits: Encourage term limits for politicians to prevent entrenched power.
- Encourage Grassroots Movements:
- Community Activism: Engage in local initiatives that promote peace, justice, and equality. Grassroots movements can drive change.
- Youth Participation: Empower young people to be politically active and advocate for a better future.
- Promote Education and Critical Thinking:
- Education: Invest in education that fosters critical thinking, empathy, and understanding. Informed citizens make better choices.
- Media Literacy: Teach media literacy to discern reliable information from propaganda.
- Explore New Political Models:
- Participatory Democracy: Explore models where citizens actively participate in decision-making beyond voting.
- Technological Innovations: Leverage technology for transparent governance and citizen engagement.
Remember that change often starts at the grassroots level.
How can we create a world where politicians do not govern human societies?
Creating a world where politicians do not govern human societies requires bold shifts and collective efforts. Here are some steps we can take:
- Empower Local Communities: Decentralization: Shift power away from centralized governments. Empower local communities to make decisions that directly impact their lives.
- Participatory Democracy: Involve citizens in decision-making processes. Encourage town halls, community forums, and participatory budgeting.
- Transparency and Accountability: Open Government: Demand transparency in governance. Ensure politicians disclose their actions, finances, and affiliations.
- Independent Oversight: Strengthen independent bodies (e.g., ombudsmen, auditors) to monitor politicians’ conduct.
- Term Limits and Rotation: Limit Tenure: Implement term limits for politicians. Prevent long-lasting power accumulation. Rotation of Leaders: Encourage regular rotation of leadership positions to prevent entrenched interests.
- Merit-Based Selection: Qualifications: Select leaders based on merit, expertise, and commitment to public service.
- Competency: Assess candidates’ abilities to address societal challenges effectively.
- Education and Civic Literacy: Critical Thinking: Promote education that fosters critical thinking, ethics, and civic responsibility.
- Media Literacy: Equip citizens to discern reliable information from propaganda.
- Alternative Models: Technocracy: Explore governance by experts in relevant fields.
- Direct Democracy: Use technology for direct citizen participation in decision-making.
- Global Cooperation: International Institutions: Strengthen global organizations to address transnational issues.
- Shared Goals: Promote cooperation over competition among nations.
Remember, change begins with individual actions and collective advocacy. Let's envision a world where governance serves the common good, not personal or particular groups' interests.
- How can we establish a world where governance transcends political structures?
- Do humans, in nature, need politicians?
- What do you think?
Dear colleagues,
I have encountered a statistical issue involving the analysis of Likert scale data, specifically using the POSAS scar scale, in a paired situation. The scale comprises numerous questions, and I am considering two approaches for comparison: either evaluating each item separately between two groups, or aggregating the values to obtain a final score for subsequent comparison.
However, a challenge arises when I choose to add up the scores: obtaining significant results then does not clarify where the differences originated. Conversely, if I opt to compare each item individually, the number of tests escalates to nearly 80, making it sensible to consider a Bonferroni correction. Nevertheless, dividing the significance threshold (0.05) by 80 renders many scores non-significant.
I am contemplating whether it is reasonable to apply the Bonferroni correction by dividing by the total number of tests. Alternatively, I am considering adjusting for each Likert item separately. For instance, in a two-time-point comparison, dividing by 2 for each Likert item seems more appropriate than pooling all Likert scale comparisons, which would result in a division by 70-80 tests.
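As a side note on mechanics rather than on the design question: the Holm step-down procedure controls the same family-wise error rate as plain Bonferroni but is uniformly less conservative, so it may rescue some per-item comparisons. A minimal sketch in Python, assuming the per-item p-values have already been collected in a list (the values shown are placeholders):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical per-item p-values from the paired tests (extend to ~80 entries)
p_values = [0.001, 0.012, 0.034, 0.20, 0.047]

# Plain Bonferroni: each p-value is effectively compared to 0.05 / len(p_values)
reject_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")

# Holm step-down: same family-wise error guarantee, strictly less conservative
reject_holm, p_holm, _, _ = multipletests(p_values, alpha=0.05, method="holm")

print("Bonferroni rejections:", reject_bonf.sum())
print("Holm rejections:      ", reject_holm.sum())
```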
I would greatly appreciate your assistance and insights on this matter.
In the era of big data and artificial intelligence (AI), where aggregated data are used to learn about patterns and to support decision-making, the quality of input data is of paramount importance. Poor data quality may lead not only to wrong outcomes, which simply render the application useless, but more importantly to breaches of fundamental rights and undermined trust in the public authorities using such applications. In law enforcement, as in other sectors, the question of how to ensure that data used for the development of big data and AI applications meet quality standards remains open.
In law enforcement, as in other sectors, the key element in ensuring the quality and reliability of big data and AI applications is the quality of the raw material. However, the negative effects of flawed data quality in this context extend far beyond the typical ramifications, since they may lead to wrong and biased decisions producing adverse legal or factual consequences for individuals, such as detention, being a target of infiltration, or being a subject of investigation or other intrusive measures (e.g., a computer search).
source:
My research study is about financial price modeling. I have several time series, of which some are monthly while others are annual. My objective is to convert the annual series to monthly data, but what is the relevant statistical method in this case?
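One relevant family of methods is temporal disaggregation (Denton, Chow-Lin, Litterman), which distributes each annual value across its months, optionally guided by a related monthly indicator series; in R this is implemented in the tempdisagg package. As a naive baseline only, here is a pandas sketch that spreads annual values by linear interpolation (placeholder data; note that this smooths away intra-year variation):

```python
import pandas as pd

# Hypothetical annual observations (values are placeholders)
annual = pd.Series(
    [100.0, 108.0, 103.0],
    index=pd.to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]),
)

# Re-index at month-start frequency, then interpolate between the annual anchors
monthly = annual.resample("MS").asfreq().interpolate(method="linear")
print(monthly.head(13))
```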
In my research study, I'm trying to model financial series using two series as data sources: the first series is monthly, and the second series is daily.
I've tried to render the second series monthly as well, by taking the average of each month's values, but I've noticed that the wide fluctuations that make my research so interesting disappear.
How can we render a daily time series as a monthly series while retaining the effect of extreme values in the series?
What is the appropriate measure or statistical technique in this case?
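Since a monthly mean will always wash out spikes, one workaround is to aggregate several statistics per month so the extremes survive as their own columns. A minimal pandas sketch with made-up data:

```python
import numpy as np
import pandas as pd

# Hypothetical daily series with occasional spikes
rng = pd.date_range("2023-01-01", periods=365, freq="D")
daily = pd.Series(np.random.normal(0, 1, len(rng)), index=rng)

# Keep the mean AND the extremes for each month, rather than the mean alone
monthly = daily.resample("MS").agg(["mean", "min", "max", "std"])

# An alternative extreme-sensitive single summary: the value in each month
# with the largest absolute deviation from that month's mean
def most_extreme(x):
    return x.loc[(x - x.mean()).abs().idxmax()]

monthly["extreme"] = daily.resample("MS").apply(most_extreme)
print(monthly.head())
```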
Hello,
I would like to create Physically Based Rendering (PBR) textures from an image example, as automatically as possible. I'm interested in research papers, libraries, or software able to generate PBR textures approximating an image example as realistically as possible, without requiring 3D-artist expertise.
Just showing a textured region of the input image example should suffice.
Any references along these lines would be appreciated:
- "Deep Inverse Rendering for High-resolution SVBRDF Estimation from an Arbitrary Number of Images": https://gao-duan.github.io/publications/mvsvbrdf/mvsvbrdf_low_resolution.pdf
- "Match: differentiable material graphs for procedural material capture": https://dspace.mit.edu/handle/1721.1/134067
- Super Texture for Blender: https://blendermarket.com/products/super-texture
Regards,
Bruno
If a monoecious plant's tissue is exposed to chemical- or radiation-induced mutations, is the male or the female part more likely to be rendered sterile?
I'm seeking to update my laptop to a new one, or to a desktop. I have found that I can fiddle around with structures on this current 6th-gen i7, 8 GB Lenovo 700 with this integrated graphics chip: https://www.techpowerup.com/gpu-specs/hd-graphics-520.c2783. So rendering final >600 dpi publication images or heavy electron density maps is not happening.
Many people prefer Macs and Macbooks for such graphical tasks. I prefer staying on Linux. If someone would suggest a laptop, desktop, or graphics card model (Linux compatible), that makes molecular graphics an enjoyable work task, that would be much appreciated :)!
I'm working on confocal imaging of attine ant fungus gardens, which are sponge-like fungal structures containing fragmented plant material and an associated bacterial microbiota. To look into the fungus garden structure, we are attempting to embed the structure and use a microtome to get longitudinal sections.
Our first attempt using Leica Historesin did not render good results for FISH. Does anyone know a better method that could render good-quality FISH results?
Thanks
I am using Google Earth Pro to get some images for maps of my research area, and also to get coordinates of my study area, and I am required to cite it. But I am not sure what the citation should be.
Should I include this information? Thanks!
Google Earth Pro 7.3.6.9326 (64-bit)
Build Date: Tuesday, December 13, 2022 5:26:44 AM UTC
Renderer: DirectX
Operating System: Microsoft Windows (6.2.9200.0)
Graphics Driver: Google Inc. (00008.00017.00010.01404)
Maximum Texture Size: 16384×16384
Available Video Memory: 4336 MB
Server
In his 1907 article, "Man's Greatest Achievement", Tesla equates the imperceptible primary substance with the luminiferous aether. It is now suggested that the luminiferous medium, which is the medium for the propagation of light, is in fact perceptible matter, just like ponderable matter, arising when the primary substance is rendered into tiny whirlpools; and that the luminiferous medium differs in nature from ponderable matter only in scale, in that the former is comprised of leptons.
Speaking of TRICK,
I recommend reading the following attached paper
"EULER’S TRICK AND SECOND 2-DESCENT ", which demonstrates how it is possible to tackle higher-grade Diophantine problems with elementary techniques, such as those discovered by Leonhard Euler:
A method is based on an idea of Euler and seems to be related to unpublished work of Mordell.
In my two proofs of Fermat's Last Theorem I have done nothing but follow Euler's ideas and tricks.
The infinite descent is clearly a technique that renders Krasner's TRICK ineffective !!!
Enjoy the reading.
Andrea Ossicini, AL HASIB
I am conducting research in the area of heritage planning and conservation. A Heritage Impact Assessment (HIA) is necessary before any kind of change or development in the built environment around a heritage site, within a defined regulated area, to determine its impact on the heritage's potential. In India, it has now become mandatory, under the National Monument Authority (NMA), for any centrally protected monument. Visual Impact Assessment is a very important component of an HIA, used to assess any future impact on the overall landscape around the heritage site. To be precise, according to the NMA guidelines, it is required to check the skyline with respect to the heritage site, any visual obstruction of views of the heritage site, shadow cast on the heritage site by new development, and compliance with building design bye-laws.
Guideline for HIA by NMA can be found here: https://www.nma.gov.in/documents/20126/51838/HIA+Report.pdf
From the available examples of HIA reports, I understood that experts use 3D software, first to model the existing structures and then to add the proposed structure, to generate views in the form of images/renders to visualise the projected development. Sometimes it is done by only drawing a section and marking the human eye angle. I am not sure how they validate these views. From these images/renders alone, one cannot say very definitively whether they are accurate or not. I am also unsure about the view/camera point selection.
I have not been able to find any study on the assessment of the overall visual quality of the surrounding area due to new changes.
It would be great if you know of any studies or documents, or could shed some light on this.
Dear scholars, I hope all of you are doing well.
One of the reviewers commented on my article.
The presence of structural breaks may render unreliable findings retrieved from the ARDL test. The authors are suggested to test for the possible structural breaks and augment it into ARDL model.
How can I respond to them?
OR
In the presence of structural breaks, is there any alternative econometric technique, other than ARDL, that gives unbiased estimates?
If yes, please name it, along with the software package to run it.
Please help
Thanks
Ijaz Uddin
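As a possible starting point for the break-testing step the reviewer asks for: the Zivot-Andrews test (available in statsmodels) checks for a unit root while allowing one endogenous structural break, and augmenting the ARDL model with break dummies, or using cointegration tests that allow for breaks (e.g., Gregory-Hansen), are common follow-ups. A minimal sketch with placeholder data:

```python
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

# y: a stand-in for one of the series entering the ARDL model
y = np.cumsum(np.random.normal(size=200))  # placeholder data

# Zivot-Andrews unit-root test allowing one endogenous break in the intercept
zastat, pvalue, critical, baselag, break_idx = zivot_andrews(y, regression="c")
print(f"ZA stat = {zastat:.2f}, p = {pvalue:.3f}, break at observation {break_idx}")
```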
I would like to deflavinate my enzyme (MtCDH) to render it catalytically inactive. I have come across the publication from B. E. P. Swoboda, but I am not sure whether it works. Does anyone have experience with this?
Rendering CGVIEW image...
java -Xmx1500m -jar cgview\cgview.jar -f jpg -i C:\Users\hhh\Desktop\output\scratch\sequence.fasta.xml -o C:\Users\hhh\Desktop\output\sequence.fasta.jpg
Error occurred during initialization of VM
Could not reserve enough space for 1536000KB object heap
Done.
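This VM error usually means the JVM (typically a 32-bit one) cannot reserve the contiguous 1.5 GB heap requested by -Xmx1500m; it is not a CGView problem. Installing a 64-bit JVM, or simply lowering the heap request, normally resolves it, e.g.:
java -Xmx1200m -jar cgview\cgview.jar -f jpg -i C:\Users\hhh\Desktop\output\scratch\sequence.fasta.xml -o C:\Users\hhh\Desktop\output\sequence.fasta.jpg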
I recently found a dissertation about lightmap compression: https://www.diva-portal.org/smash/get/diva2:844146/FULLTEXT01.pdf
Are there other papers about this topic? Thanks.
I want the DEM rendering effect shown in the picture below, but the hillshade produced in ArcGIS is not like the one in the picture. If you have a detailed production process and parameter settings, please let me know. Thank you very much.
Google's Bilateral Guided Upsampling (2016) proposed an idea that upsampling could be guided by a reference image to maintain high-frequency information like edges. This reminds me of G-buffers in real-time rendering.
Do state-of-the-art super-resolution algorithms in real-time rendering, such as Nvidia's DLSS and AMD's FSR, use a similar idea? How do they exploit G-buffers to aid the upsampling?
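For context, a simpler ancestor of the 2016 paper is joint bilateral upsampling (Kopf et al., 2007): the low-resolution solution is upsampled with spatial weights, while a high-resolution guide supplies edge-aware range weights; in a renderer, G-buffer channels such as depth or normals can serve as the guide. A deliberately unoptimized grayscale sketch in Python (illustrative only; the actual DLSS/FSR internals are not identical to this):

```python
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Upsample `low` to `guide`'s resolution, using the high-res guide
    for edge-aware range weights (grayscale float images in [0, 1])."""
    H, W = guide.shape
    h, w = low.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = y / sy, x / sx          # position in low-res coordinates
            y0, x0 = int(ly), int(lx)
            num = den = 0.0
            for j in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
                for i in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
                    # spatial weight, measured in low-res pixels
                    ws = np.exp(-((j - ly) ** 2 + (i - lx) ** 2) / (2 * sigma_s ** 2))
                    # range weight: compare the guide at this output pixel with
                    # the guide pixel under the low-res sample (j, i)
                    gy = min(H - 1, int(j * sy))
                    gx = min(W - 1, int(i * sx))
                    wr = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    num += ws * wr * low[j, i]
                    den += ws * wr
            out[y, x] = num / max(den, 1e-12)
    return out
```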
I would like to meet researchers and developers who are currently working, or have previously worked, on foveated, a.k.a. gaze-contingent, rendering. Although foveated rendering could be the next big thing in the rendering community, an active community is still missing in this domain. Perhaps we could jointly build a developer community on Reddit/Slack/Discord.
We, as authors, are expected to follow certain ethical codes laid down by journals. For instance, authors can not submit the same article in more than one journal.
On the other hand, there are hardly any ethics for journals and editors. Journals rarely make the first decision within the 'average' first-decision period mentioned in the journal's guidelines. Similarly, some manuscripts remain under review for more than a year at times, and journals reject an article after keeping it under review for such a long time. By the time such a decision is made, the article has already lost its relevance.
I want to stress that a line of ethics shall be drawn for journals and editors as well.
1. There must be a maximum time limit for making the first decision, and also for review. Two weeks are enough for the first decision; the editor must go through the article and decide within this period (if an article has some worth, send it out to review; else, desk reject it). For peer review, I understand that obtaining reviews is a time-consuming process, but there must still be an upper limit.
2. I have experienced that revisions are often sent to new reviewers, who suggest additional changes and sometimes even recommend rejection. Revisions should be sent to the original reviewers; if the original reviewers are not available, the editor should decide on the basis of the revisions recommended by the original reviewers and the changes made by the authors.
There may be other points that fellow ResearchGate members may highlight.
In my opinion, as long as journals do not follow such ethics, I see no harm in sending the same manuscript for consideration to multiple journals. A very delayed rejection decision renders the manuscript useless. Why should authors be hard done by? Journals and editors have ethics to follow too.
We are trying to run experiments with the Biolog EcoPlates to characterize microbial communities from nose swabs and other body sites. After incubation (both with and without shaking) we notice color formation in some of the wells, however the color is concentrated on the side of the well, rendering the measurement inaccurate. We have also noticed before the incubation that the substrate seems to adhere to the side of the wells in the plate, and pipetting does not help. Support from Biolog did not notice anything strange with this particular batch of plates we are using, so I am wondering if I am missing some basic technique here? Has anyone had similar issues?
Dear All,
Could anyone help me with 3D analysis in the software Icy? I cannot render all the cells in my z-stack in 2 channels: it shows only part of the blue channel, and the green channel, in which the cells of interest are, cannot be fully rendered from the TIFF z-stack. Could anyone guide me on how to perform a full 3D render and carry out the analysis?
How can one work properly through the development of an integral like the Abel-Plana formula defined in this image:
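For reference, the standard statement of the Abel-Plana formula (as also given at the Wikipedia link below) is

$$\sum_{n=0}^{\infty} f(n)=\frac{f(0)}{2}+\int_0^{\infty} f(x)\,dx + i\int_0^{\infty}\frac{f(ix)-f(-ix)}{e^{2\pi x}-1}\,dx,$$

valid for functions f analytic in the right half-plane under suitable growth restrictions.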
I am interested in having a set of steps for attacking the problem of developing the integral, and in determining a criterion of convergence for any complex value s; I mean, the integral could have some specific behaviour at, for example, s = 1/2 + it, where I am interested in studying it.
I am interested in the proper evaluation of that integral using only formal steps in complex analysis.
The Abel-Plana formula also appears at https://en.wikipedia.org/wiki/Abel%E2%80%93Plana_formula
Best regards
Carlos López
Adequate and effective social infrastructure is very much necessary for the economic growth of a country. According to Sullivan:
“Social infrastructure refers to those factors which render the human resources of a nation suitable for productive work.”
A developing country is drastically different in terms of how its labour laws are regulated, how its citizens are educated, and how their health is handled. What are the unique ways to create, develop, and sustain social infrastructure in a country (particularly a developing one)?
Hello!
I am currently analyzing data regarding the validation of a Greek translation of the Anxiety Sensitivity Index-3. I have a sample of around 200. Can I transform my variables (e.g., log or Z-scores) to approach normality, or does this render the convergent and discriminant evidence meaningless?
Thank you!
Hello,
Nowadays, more and more foveated rendering is based on eye-tracker-directed gaze points, known as dynamic foveated rendering. Fixed foveated rendering no longer catches scholars' attention, but I need some previous work to see how, and with what methods, this display-centered fixed foveated rendering was done.
I could not find enough research papers on Google Scholar; I would appreciate it if you could suggest some works on fixed foveated rendering.
- The non-Fourier heat conduction equation, which is also known as the hyperbolic heat equation, has the form:
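Presumably the Cattaneo-Vernotte form is meant:

$$\tau\,\frac{\partial^2 T}{\partial t^2}+\frac{\partial T}{\partial t}=\alpha\,\nabla^2 T,$$

where τ is the thermal relaxation time and α the thermal diffusivity; as τ → 0 it reduces to the classical (parabolic) Fourier heat equation.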
I have created an NDVI from a Sentinel-2A image, and it renders correctly using the plot() command in R. However, when I try to export it using the writeRaster() command, it just saves a black-and-white image. Why?
Note that this black-and-white image, if I load it using the raster() function, gets rendered accurately in RStudio.
Can anyone help me out?
Thanks in advance.
Over the past decades, ecosystem and climate change modeling have made great advances. However, as indicated by the uncomfortably large deviations between predicted climate change and today's observed effects, there is obviously a lot of room for improvement. Today it is accepted that global warming, and a series of derived effects, are occurring at much faster rates than predicted even with the most sophisticated models just 10 years ago. Similar discrepancies exist for ecosystem models which try to predict production or the development of pest and disease populations. In the end, the complexities of non-linear behavior and multiple synergistic effects may render such complex systems impossible to model within the limits of acceptable accuracy. If models only hold under extensive lists of unrealistic assumptions (such as linear and additive effects vs. non-linear synergistic effects), then their value for deriving practical recommendations must be questioned. So: what are the limits to (meaningful) modeling? I would warmly welcome pointers towards a readable account of this issue.
I am working on tumor induction in experimental rats and need 1,2-dimethylhydrazine to induce tumors in the lab animals. Please, I need help from anyone who can render assistance.
I am trying to build an ephys rig with multiple amplifiers and use the WinWCP Whole Cell Electrophysiology Analysis Program to acquire the signals. In order to do so, I was planning to use a National Instruments PCIe-6353 analog to digital converter rendering 16 channels. I have previously tried the PCIe-6351 board with this software and it worked perfectly but I was wondering if there could be any compatibility issues with the PCIe-6353. Any suggestions will be greatly welcomed!!!
We were recently informed that the STEMdiff™ Astrocyte maturation and differentiation kits are discontinued.
We have multiple projects running in parallel relying on these kits.
Are there any similar kits out there?
I am worried that switching will completely skew the results and render our previous data useless for publication, as different kits have different components.
Thanks,
Oded
What is commonly done to render the enzyme inactive, i.e., a mutation in the ATP-binding domain, etc.?
I have a licensed copy of PyMOL (ver. 2.3.1) and I am looking to 'replicate' a protein/bilayer; i.e., in VMD you select "Periodic" and choose from a number of x, y, and z options, which copies the unit cell along that vector.
Is there a similar feature in PyMOL? I'm struggling to find one.
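PyMOL has no one-click periodic-image toggle like VMD's, but since PyMOL is scriptable in Python, copies can be generated and translated along the cell vectors by hand. A minimal sketch, assuming an orthorhombic cell so the cell lengths from cmd.get_symmetry() can be used directly as Cartesian offsets (object names are illustrative):

```python
# Run inside PyMOL (File > Run Script), with a loaded object named "system"
from pymol import cmd

obj = "system"
# Returns [a, b, c, alpha, beta, gamma, spacegroup]; None if no cell is stored
a, b, c, alpha, beta, gamma, spacegroup = cmd.get_symmetry(obj)

# Replicate once along +x and +y; extend the loops for more images.
# NOTE: treating a and b as Cartesian offsets assumes a rectangular box.
for ix in range(2):
    for iy in range(2):
        if ix == 0 and iy == 0:
            continue
        copy = f"{obj}_img_{ix}{iy}"
        cmd.create(copy, obj)                              # duplicate the object
        cmd.translate([ix * a, iy * b, 0.0], copy, camera=0)  # model-space shift
```

For general (triclinic) cells, the community-maintained supercell.py script on the PyMOLWiki does the full lattice arithmetic.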
Thanks
Hi,
I am fascinated by this video made by James Stains from USC.
https://www.youtube.com/watch?v=Eql5c4m_N68 It is a brain tractography from diffusion data, but he is able to show the tracts stemming, which is impossible with all the software I know (TrackVis, MRtrix, DSI Studio...). It is clearly an animation made with a modeling tool like Blender or 3D Studio.
What do you think the steps are?
Do you generate VTK files and then open them in Blender? Or what?
I want to know whether mesh data structures like the half-edge, winged-edge, and octree are only suitable for rendering, or whether they can also be suitable for analysis.
Actually, for every text to be translated there is a kind of translation that is most suitable for that text. For example, scientific texts should be translated semantically, because the accuracy of meaning in such a case is of top priority. Literary texts, on the other hand, can be handled communicatively. In other words, the meaning in this case is not of top priority, but goes hand in hand with the form, which is very important too.
It could be added that political discourse is characterized by a lot of playing on words' meanings. In other words, politicians, in most situations, try to use certain words and expressions with opposite meanings. Such being the case, the pragmatic approach should be relied on when it comes to rendering political discourse.
Hi,
I am interested in rendering 2 volume files on one brain, similar to the way you can add a volume file in BrainNet. However, BrainNet doesn't allow you to input 2 volume files. Do you know of any other software or toolbox (preferably Matlab based) that allows one to input 2 volume files?
I have MRIcron, but the resulting images are not as nice as BrainNet's. Are there any imaging resources that can take 2 volume files on a "pretty" brain?
Thank you.
In this 21st century, all kinds of skills, simple or sophisticated, need to be continuously updated and developed, or else they will become obsolete, and that may render many people jobless, discouraged, and poor.
I am conducting a study to explore the quality of mental health care services rendered by public health facilities, with the aim of developing a progress-monitoring tool to improve the services offered.
I have a Phantom Omni haptic device and want to render a deformable shape. For this, I define a plane of vertices, and when the cursor collides with the shape, vertices near the collision point are moved down, so it more or less deforms. The problem is that this method is too slow, and the force-feedback and rendering frequencies drop. What method would you suggest to solve this problem?
Given the longer and longer wait times for the first decision on whether or not a paper is accepted (by the editors), I would like to ask you:
As a reviewer, how long do you usually take to examine a paper? And what is your acceptance or refusal rate for reviewing a paper (not the rate at which you recommend rejecting or accepting its publication)? Thank you.
I need detailed information about the ACQUIRE algorithm used for graphical target detection, which is a solution to the line-of-sight problem.
Here is the basic algorithm.
FBA (Framebuffer Based ACQUIRE) Algorithm
1. Move camera to agent's eye point
2. Render frame
3. Set target agent to false color (e.g. pure red)
4. Render frame again
5. Segment natural color frame into the figure and its surrounding pixels, the ground
6. Compute figure and ground brightness values
7. Compute detection probability via ACQUIRE
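To make steps 5-7 concrete, here is a rough Python sketch of the framebuffer-based part of the loop, taking the two rendered frames as inputs; the acquire_model callable is a placeholder for the actual ACQUIRE sensor model, which maps contrast and apparent target size to a detection probability:

```python
import numpy as np

def fba_detection_probability(natural, mask_frame, acquire_model,
                              false_color=(1.0, 0.0, 0.0)):
    """Steps 5-7 of the FBA loop, given the two rendered frames.

    natural:    (H, W, 3) frame rendered from the agent's eye point (steps 1-2)
    mask_frame: same view re-rendered with the target in `false_color` (steps 3-4)
    acquire_model: callable (contrast, n_pixels) -> detection probability;
                   a stand-in for the real ACQUIRE model.
    """
    # 5. Segment: pixels that took on the false color are the figure (target);
    #    everything else is the ground.
    figure = np.all(np.isclose(mask_frame, false_color), axis=-1)
    if not figure.any():
        return 0.0  # target fully occluded: no line of sight

    # 6. Mean brightness (luma) of figure vs. ground in the natural frame
    luma = natural @ np.array([0.299, 0.587, 0.114])
    figure_brightness = luma[figure].mean()
    ground_brightness = luma[~figure].mean()

    # 7. A contrast statistic for the ACQUIRE model (placeholder formula)
    contrast = abs(figure_brightness - ground_brightness) / max(ground_brightness, 1e-6)
    return acquire_model(contrast, int(figure.sum()))
```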
For non-normally distributed data, I used the Kruskal-Wallis test to investigate statistical significance between different variables. I performed an exercise with three different intensities (different weights w1, w2, w3), then with one weight at three different speeds (s1, s2, s3). The readings were observed from 3 different points (p1, p2, p3).
I tested for statistical significance among p1, p2, p3 at each of w1, w2, w3, and at each of s1, s2, s3. I then tested among w1, w2, w3 for each of p1, p2, p3, and among s1, s2, s3 for each of p1, p2, p3. So, there are 36 independent Kruskal-Wallis tests.
After that, Mann-Whitney tests were performed for pairs (post-hoc analysis).
I got comment on the test that, "the piecemeal statistical approach, consisting of a very large number of comparisons made between dependent variables during different conditions, without corrections for multiple testing, renders it probable that “significant” results may well be due to chance."
Can someone please suggest where I went wrong, and how and where to adjust the p-values?
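One concrete way to address the reviewer's point is to collect all p-values from the family of tests and apply a correction such as Holm's (less conservative than Bonferroni at the same family-wise error rate). A minimal sketch in Python, with made-up data standing in for the real readings:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Made-up readings at three points for one (weight, speed) condition
p1, p2, p3 = (rng.normal(loc, 1.0, 30) for loc in (0.0, 0.3, 0.8))

# Omnibus Kruskal-Wallis; in the real analysis this is one of the 36 tests
H, p_omnibus = kruskal(p1, p2, p3)

# Post-hoc pairwise Mann-Whitney tests
pairs = [(p1, p2), (p1, p3), (p2, p3)]
pvals = [mannwhitneyu(a, b).pvalue for a, b in pairs]

# Holm correction across the whole family of post-hoc comparisons
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(p_omnibus, pvals_adj, reject)
```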
I am currently working on a research proposal about ancient auralizations, and I am surprised to find very few actual audio renderings. While I am sure many researchers have them, it seems they very rarely get published on websites and other research platforms. Can anybody point me to either their private websites/collections or public online platforms where I might find this data? Thank you in advance.
I am currently analyzing the following panel data set with STATA on a daily base:
- N = 324 companies
- T = 252 trading days
- 6 social media variables from Facebook and Twitter data (e.g., answer times, number of posts and replies)
- 2 financial performance variables from CRSP data (abnormal return, idiosyncratic risk)
I tried both fixed-effects estimation (xtreg, fe) and panel vector autoregression (pvar), but neither approach yields satisfying results. I also tried the Arellano-Bond approach (xtabond) but was not quite sure about the endogenous and predetermined regressors; varying these yielded very few significant results. I also varied the operationalization of the social media and financial variables and looked at sub-samples (e.g., single industries, particular time frames, Facebook vs. Twitter sample), etc.
Apparently, for large-T panels, the bias of the fixed-effects estimator - the rationale for dynamic panel analysis - declines with T and eventually becomes negligible, thus rendering the fixed-effects estimator consistent. In comparison with other studies, I would assume that my T is rather large, so fixed-effects estimation might be more sensible than a panel vector autoregression.
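For reference, the bias in question is Nickell's (1981): in a dynamic panel with AR(1) coefficient $\rho$, the within (fixed-effects) estimator has asymptotic bias

$$\operatorname{plim}_{N\to\infty}\left(\hat{\rho}_{FE}-\rho\right)\approx-\frac{1+\rho}{T-1},$$

so with T = 252 trading days the distortion is below one percentage point for any plausible $\rho$, which supports the large-T argument above.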
Any thoughts on this topic?
Thanks,
Sarah
I am planning to measure pressure and temperature variations in and around a metal vapour deposition chamber. The substrate is a steel strip, and I want to experimentally measure pressure and temperature variations as the metal vapour is introduced into the chamber. I would aim to put these gauges in different locations of the vacuum chamber. I have been advised that, because of the metallic vapour being introduced into the system, the gauges get contaminated by the vapour, which renders them useless. Does anyone have any suggestions for pressure gauges that may work?
This is being undertaken as a form of validation for a computational simulation of a similar scenario.
Is it possible for yeast to express a KanR bacterial selectable marker from a plasmid? I am trying to knock out a gene in S. cerevisiae using a geneticin/G418 resistance cassette. The deletion renders the cells very sick so I first transformed them with a backup URA plasmid. The backup plasmid has a KanR cassette for selection in bacteria. However, I am getting a lawn of growth on my G418 plates after the knockout transformation (yeast + backup plasmid + KO cassette). Is it possible that my yeast are expressing the KanR cassette from the bacterial promoter somehow (maybe some kind of crossover with my KO cassette)?
Dear all,
I'm looking for a good 3D render that includes the cerebellum to display clusters (NIfTI files). Any recommendations?
Thank you for your help,
Cheers,
Dan
Local government in South Africa is entrenched in the Constitution as a sphere of government, as opposed to a tier. However, the financial and poor-leadership-related challenges encountered by municipalities render them ineffective in the delivery of basic services to residents.
The inability to render services necessitates administrative and financial intervention by National Government on a wider scale, and not merely as an exception for a few municipalities.
In this context, can municipalities be considered a sphere of government with executive authority, and how can they ensure that services are rendered in an inclusive and sustainable manner that promotes the human rights of local residents?
In the process of generating computer images, the input is a 2D/3D model and the final output is a 2D digital image.
Does rendering come before illumination, or illumination before rendering, or does the order not matter?
I am interested in calculating the change in environmental impacts when switching from open dumping of slaughterhouse waste to a rendering process.
Hi,
I'm looking for a nitrate test that can reliably and accurately quantify nitrate in media solutions. It is also important that it is robust and does not suffer interference from other media components (especially chloride, as the chloride concentration changes significantly during the experiment due to pH regulation, which renders my current test kit suboptimal!).
If anyone has experience with nitrate tests during algae cultivation and can recommend a method, I would be grateful!
Best wishes,
Matthias Koch
Professor Reuven Tsur proposes a cognitive-fossils approach in his 2017 book. It offers convincing explanations of some poetic conventions for which the influence-hunting approach fails to render a satisfactory one. That's amazing! But does this approach have the power to help readers realize their unique meaning construction while reading literary works?
(I am writing an article on how to render intellectual capital economically viable)
I am about to launch some immunofluorescent staining experiments on cells that were cultured on transwell membranes, and I have seen different protocols for the staining:
1. membranes are released from the transwell apparatus before fixation and antibody incubation;
2. fixation and antibody incubation protocols are done in intact transwell systems, and membrane is released just before mounting.
Which protocol renders better results in your experience? The second one seems easier to perform, with a lower rate of cell loss during staining.
Thanks in advance.
A Digitally Reconstructed Radiograph (DRR) is an image created from a computed tomography (CT) data set. It contains the same treatment-plan information, but the patient image is reconstructed from the CT image data using a physics model.
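In essence, each DRR pixel is a line integral of attenuation through the CT volume (Beer-Lambert law). A toy numpy sketch for an axis-aligned parallel projection, assuming the HU values have already been converted to linear attenuation coefficients mu (a real planning system instead traces divergent rays from the source through the volume):

```python
import numpy as np

def simple_drr(mu, spacing_mm, axis=0):
    """Parallel-beam DRR: integrate attenuation along one volume axis.

    mu:         3D array of linear attenuation coefficients (1/mm)
    spacing_mm: voxel size along the projection axis
    """
    path_integral = mu.sum(axis=axis) * spacing_mm   # line integral of mu
    transmission = np.exp(-path_integral)            # Beer-Lambert law
    return 1.0 - transmission                        # bright = more attenuation

# Toy volume: uniform "tissue" with a denser "bone" block inside
mu = np.full((64, 64, 64), 0.002)
mu[20:40, 25:45, 25:45] = 0.02
drr = simple_drr(mu, spacing_mm=1.0)
```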
There is a lot of hue and cry that rice-wheat cultivation in North India over the last 60 years has already depleted rechargeable groundwater. Since canal water is not ample, and is decreasing due to lower rainfall over the last couple of years, can we imagine that this practice will render the land barren?
Is there any specific correlation between these two?
I am pursuing my PhD and preparing a material for a white-light source. I am getting the PL results required for white light, but I am not able to calculate the CRI (Color Rendering Index).
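If the photoluminescence spectrum is available as wavelength/intensity pairs, the open-source colour-science Python package can compute the CIE Ra directly. A minimal sketch (the spectral values here are placeholders for the measured PL data):

```python
import colour

# Hypothetical PL spectrum: {wavelength_nm: relative intensity}
data = {450: 0.35, 500: 0.60, 550: 1.00, 600: 0.80, 650: 0.45}
sd = colour.SpectralDistribution(data, name="White LED PL")

# General colour rendering index (CIE Ra) of the test source
cri = colour.colour_rendering_index(sd)
print(f"CRI (Ra) = {cri:.1f}")
```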
I just use the standard SPSS function to render cluster analyses, and I think/hope there are better ways to graph cluster analysis results.
With LEDs having high CRI values like 90+, is it possible for the human eye to detect the change?
What are the criteria for this change?
What are the variations of the CRI vs other lighting parameters?
I want to use cell lysate for a geranylgeranylation assay, with the endogenous GGTase I enzyme as the source of enzyme and with added dansyl peptide and GGPP.
I've tried sonication alone with just 100 mM HEPES, pH 7.5 and 150 mM salt with phosphatase and protease added.
I've also tried using a lysis using RIPA buffer and phosphatase and protease added as well.
The total protein in my assay is around 2 mg/mL. Could it be that my enzyme level is too low to detect activity, or is my method of lysis rendering my protein inactive?
I need to render a few ITO glass slides hydrophobic, and I was wondering about the best way to proceed. I have done some research online; most sources suggest silanization, but I am worried it might affect the transparency of the slide, and most of them didn't give a precise protocol. Would anyone know where I can find a protocol to make ITO hydrophobic? Thank you in advance!
I am rendering a panoramic image using an omni-directional stereo system, e.g. Facebook Surround 360, with optical flow.
When I stack the optical flow fields horizontally and visualize them with, e.g., the Middlebury color coding or normalized vertical disparity, it is clearly visible that the image is synthesized from 14 vertical stripes - 14 because that is the number of cameras arranged equatorially.
Should the visualization of the optical flow field look like one consistent image, or is it correct that it looks like, for example, the data I have uploaded?