Science topic

Rendering - Science topic

Explore the latest questions and answers in Rendering, and find Rendering experts.
Questions related to Rendering
  • asked a question related to Rendering
Question
3 answers
I've been using LUHMES cells to differentiate into dopaminergic neurons for over 6 months now, but I always have issues when it comes to plating for differentiation. It's always an on/off kind of situation: sometimes they adhere and then begin to differentiate (ideal); other times they don't adhere at all, or they adhere but maintain a round shape rather than the typical shape of LUHMES, which prevents them from differentiating (the scenario that happens more often), which is quite frustrating. I follow the recommended method for plating:
1- Begin culture in T75 - Day 0
2- Begin differentiation - Day 1
3- Lift cells and plate them on a 96/384-well plate (Day 2) and continue differentiation.
I hemi-feed them every other day once plated.
With 96-well plates, I tried different cell densities when plating (20, 25, 30, 35k/well). When they were able to differentiate, I noticed that at anything above 35k they wouldn't adhere or would cluster quite a lot, which is not ideal as I'm using them for imaging.
Has anyone ever had these recurring issues with their LUHMES cells and could shed some light on my problems? I'm running out of ideas about what could possibly be happening. The cells are fine and happy while in T75/T175 flasks. Could it be the plate? I use PerkinElmer PhenoPlates. Also, for imaging, what cell density does everyone go for when using 96/384-well plates? I have also plated the LUHMES on day 3, which worked before, but I still had the same issue as with day 2 plating.
Thanks in advance!
Relevant answer
Answer
Hi, when I started working with LUHMES cells, I figured out that they don't like 96-well plates very much, and if you have to do imaging, 96-well plates are not ideal: the cells get stressed and detach. I am using 35 mm ibidi plates for imaging. I proliferate 0.7x10^6 cells in a T25 flask, and after 2 days I change the proliferation medium to differentiation medium; after 2 more days I split the cells into ibidi plates. I keep the seeding density at 0.6-0.7x10^6 per ibidi plate. First add 2 ml of DM to the ibidi plates and then spread the cells by pipetting. You don't need to change the medium until day 6, but on the day of imaging replace the medium; there is no need to rinse the cells.
  • asked a question related to Rendering
Question
2 answers
Please help me understand the aims and scope of these two different techniques, along with the computational cost that each entails. Thanks.
Relevant answer
Answer
Hello
MMGBSA (Molecular Mechanics Generalized Born Surface Area) and MMPBSA (Molecular Mechanics Poisson-Boltzmann Surface Area) are both computational methods used to estimate the free binding energy of protein-ligand complexes in drug design studies. While they share some similarities, there are key differences between these two approaches:
Solvation model: MMGBSA uses the Generalized Born (GB) model to calculate the polar solvation energy. MMPBSA uses the Poisson-Boltzmann (PB) equation to calculate the polar solvation energy.
Computational efficiency: MMGBSA is generally faster and computationally less expensive. MMPBSA is more computationally intensive due to the complexity of solving the PB equation.
Accuracy: MMPBSA is often considered more accurate, especially for highly charged systems or those with significant electrostatic contributions. MMGBSA can provide reasonably accurate results for many systems and is often sufficient for relative binding free energy calculations.
Handling of explicit water molecules: MMGBSA typically does not include explicit water molecules in the calculation. MMPBSA can incorporate a limited number of explicit water molecules, which can be important for some systems.
Parameterization: MMGBSA requires fewer parameters and is generally easier to set up. MMPBSA may require more careful parameterization, especially for the dielectric constants and grid spacing.
Sensitivity to conformational changes: MMGBSA is often less sensitive to small conformational changes. MMPBSA can be more sensitive to structural variations, potentially providing a more detailed energy landscape.
Treatment of long-range electrostatics: MMGBSA may not capture long-range electrostatic interactions as accurately as MMPBSA. MMPBSA generally provides a more rigorous treatment of long-range electrostatic effects.
Applicability to different types of complexes: MMGBSA is often preferred for protein-ligand complexes and small to medium-sized systems. MMPBSA may be more suitable for larger systems, protein-protein interactions, or highly charged complexes.
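To make the shared quantity behind both methods explicit, here is the standard MM/PB(GB)SA decomposition (averages are taken over MD snapshots); the two approaches differ only in how the polar solvation term is computed:

```latex
\Delta G_{\mathrm{bind}} \approx \langle G_{\mathrm{complex}}\rangle
  - \langle G_{\mathrm{receptor}}\rangle - \langle G_{\mathrm{ligand}}\rangle,
\qquad
G = E_{\mathrm{MM}} + G_{\mathrm{solv}} - T\,S_{\mathrm{conf}},
\qquad
E_{\mathrm{MM}} = E_{\mathrm{int}} + E_{\mathrm{ele}} + E_{\mathrm{vdW}},
\qquad
G_{\mathrm{solv}} = G_{\mathrm{polar}} + \gamma\,\mathrm{SASA} + b
```

Here G_polar comes from the Generalized Born model (MMGBSA) or from solving the Poisson-Boltzmann equation (MMPBSA), while the nonpolar term is estimated from the solvent-accessible surface area in both cases; the conformational entropy term is often omitted for relative comparisons.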
  • asked a question related to Rendering
Question
2 answers
The primary challenge in analyzing the shadow blister arises when the transverse distance between the two edges along the X-axis is either large or when the slit width reaches zero and the secondary barrier overlaps the primary barrier, rendering the "Fresnel Integral" valid. In such scenarios, this phenomenon can also be interpreted using traditional ray theory. As the transverse distance decreases to approximately a millimeter, the validity of the "Fresnel Integral" diminishes. Regrettably, this narrow transverse distance has often been overlooked. This article explores various scenarios where the transverse distance is either large or less than a millimeter, and where the secondary barrier overlaps the primary barrier. Notably, complexity arises when the transverse distance is very small. In such conditions, the Fourier transform is valid only if we consider a complex refractive index, indicating an inhomogeneous fractal space with a variable refractive index near the surface of the obstacles. This variable refractive index introduces a time delay in the temporal domain, resulting in a specific dispersion region underlying the diffraction phenomenon. Refer to: http://www.ej-physics.org/index.php/ejphysics/article/view/304
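For reference (this is standard straight-edge diffraction theory, not specific to the linked article), the "Fresnel Integral" treatment expresses the field behind a single edge through the pair

```latex
C(v) = \int_0^{v}\cos\!\left(\frac{\pi t^{2}}{2}\right)dt,
\qquad
S(v) = \int_0^{v}\sin\!\left(\frac{\pi t^{2}}{2}\right)dt,
\qquad
\frac{I(v)}{I_0} = \frac{1}{2}\left[\left(\tfrac{1}{2}+C(v)\right)^{2}
  + \left(\tfrac{1}{2}+S(v)\right)^{2}\right],
```

where v is the dimensionless Fresnel variable of the edge; this gives I/I_0 = 1/4 exactly at the geometric shadow boundary (v = 0) and tends to 1 deep in the illuminated region.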
Relevant answer
Answer
Your interpretation demonstrates that you have explained your imagination rather than a proper experimental consideration. Please take the time to see:
or at ResearchGate:
Best regards,
  • asked a question related to Rendering
Question
5 answers
Hello,
I have master's-level training in logic and meta-logic (high marks, A's), but I am not a practicing logician at all. I am a neo-empiricist pluralist (epistemic and ontological) who does not think any single CTM-based model of the mind can be reduced to logic, precisely because of a categorical ambiguity that gets contingently invoked.
Neurons appear, and so must first be modelled, as objects with endogenous functions rendered over many internal sub-relations describing their emergent dynamics (behaviour). BUT, when we switch to building a bit model of human cognition in terms of neural patterns and inter-dynamics, especially when thinking of the brain as composed of neural bit maps made morphic to structures in reality "outside", we then need to categorize neurons in more purely exogenous and relational terms too.
I do not believe this is allowed (it does not lead to wffs) in any model built over a single, strictly reduced, monist, classical-logic-based model, but I am not versed enough in these technical terms to be certain about the argumentative or procedural complexities involved, and I would welcome any logician's insights here (though I am a "fan" of semi-classical model building!).
I am looking for a practiced logician, with expertise in Model Theory (building structures of interpretation) who might have a peripheral interest in philosophy of mind, but who is willing to consider non-classical approaches.
Quid pro quo!
If you want a primer on my overall concern with the limits of finalizing totalized logical models (TOE's), please give this a go:
Thanks, Brian
Relevant answer
Answer
I’m not much of a logician either, but it seems to me that given standard first-order logic, one can introduce an operator λx which, when prefixed to an open sentence (predicate), creates a singular term for a property or relation. So, for example, given the open sentence Fx one can create λxFx as a singular term naming the property of being F (or F-ness), and with the open sentence Hxy one can create λxyHxy as a name for the H-relation.
Then it’s business as usual:
Kermit likes the property of being green: LkλxGx
Kermit likes the relation of being to the right of: LkλxyRxy
The property of being tall, blonde, and not overweight is desirable. Dλx(Tx & Bx & ~Ox)
Parenthood is statutory: SλxyPxy
And the λ-terms can be quantified over just like the regular singular terms:
e.g. if SλxyPxy then ∃xSx
And you could quantify into the scope of the λ-operator too.
So basically you'd have a way of using singular terms as either unstructured objects or as displaying some internal relational features.
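The idea of treating predicates as singular terms can be mimicked in any language with first-class functions. A toy Python sketch (not a logic implementation; all names and extensions here are my own invention for illustration): a property is a function object, so a second-order predicate can take it as an argument, and we can quantify over a collection of such property terms.

```python
# Properties as first-class objects: the open sentence Gx becomes a callable,
# and the lambda-term "λxGx" is simply the function object itself, which
# other predicates can accept as an argument.
def green(x):
    return x == "kermit"          # toy extension of the property "green"

def likes(agent, prop):
    """Second-order predicate: an agent 'likes' a property object (Lk λxGx)."""
    return agent == "kermit" and prop is green

# Kermit likes the property of being green: Lk λxGx
assert likes("kermit", green)

# Quantifying over property terms, like quantifying over λ-terms: ∃P Lk P
properties = [green]
assert any(likes("kermit", p) for p in properties)
```

This captures the dual use the question asks about: `green` can be applied to objects (its internal, relational role) or passed around as an unstructured singular term.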
  • asked a question related to Rendering
Question
4 answers
  • Contrary to "DOI:10.1007/978-3-319-77167-0" claim (@John David Orme), human nature is not the cause of wars or conflicts. But I argue that ...
  1. Acknowledgment and Agreement: “I concur that political leaders who seek dominance or control may resort to military force and wars.”
  2. Addressing the Challenge: “In the face of politicians who consistently prioritize their desires and engage in conflicts, what strategies can humanity employ to mitigate these tendencies?”
  3. Overcoming Greed and Conflict: “How can we effectively reduce politicians’ greediness and prevent them from perpetuating wars and conflicts?”
  4. Proposed Solutions: “What alternative approaches or substitutes exist for such politicians, ensuring a more peaceful and equitable governance?”
Indeed, the root cause of human conflicts and wars is often intertwined with politics. Political decisions, power struggles, and ideological differences can escalate tensions and lead to armed conflicts. While other factors also play a role, understanding the political dynamics is crucial for preventing and resolving conflicts.
Human conflicts and wars have been a recurring theme throughout history, shaped by complex factors. Let’s delve into this topic:
  1. Definition of War:
    • In the popular sense, war refers to a conflict between political groups involving hostilities of considerable duration and magnitude.
    • Sociologists typically apply the term “war” only if it is initiated and conducted following socially recognized forms.
    • Military writers often confine the term to hostilities where the contending groups are sufficiently equal in power to render the outcome uncertain for a time.
  2. Causes of War:
    • Politics plays a central role in the genesis of conflicts and wars. Here are some key factors:
      • Territorial Disputes: Political disagreements over land, resources, or boundaries often escalate into armed conflicts.
      • Ideological Differences: Clashes between opposing political ideologies (e.g., democracy vs. authoritarianism) can lead to war.
      • Power Struggles: Political leaders seeking dominance or control may resort to military force.
      • Nationalism: Intense patriotism and national identity can fuel aggression.
      • Economic Interests: Politics influences economic policies, trade, and resource allocation, which can trigger conflicts.
      • Historical Grievances: Past political injustices and unresolved issues contribute to tensions.
      • Alliances and Treaties: Political alliances can drag nations into wars.
      • Leadership Decisions: Political leaders’ choices impact whether conflicts escalate or de-escalate.
  3. Theoretical Perspectives on War:
    • Realism: Emphasizes power, self-interest, and the anarchic nature of international relations. Realists argue that war is inevitable due to the pursuit of national interests.
    • Liberalism: Advocates for cooperation, institutions, and diplomacy. Liberals believe that democracies are less likely to go to war with each other.
    • Constructivism: Focuses on ideas, norms, and identity. Constructivists argue that war is shaped by social constructs and perceptions.
    • Critical Theory: Examines power structures, inequality, and historical context. Critical theorists critique war as a product of systemic flaws.
  4. Preventing and Resolving Conflicts:
    • Diplomacy, negotiation, and dialogue are essential tools for conflict resolution.
    • International organizations (e.g., the United Nations) play a role in promoting peace.
    • Addressing root causes (poverty, inequality, and injustice) can reduce the likelihood of conflicts.
    • Promoting education, tolerance, and understanding can foster peaceful coexistence.
In summary, while politics and politicians are central drivers of conflicts and wars, it is essential to recognize that multiple factors intersect to create complex situations. Understanding these dynamics and working toward peaceful solutions remain critical for a better world.
Indeed, what can we do when we are facing politicians who prioritize personal gain over peace and well-being? Here are some considerations:
  1. Active Citizenship and Accountability:
    • Stay Informed: Educate yourself about political issues, policies, and the track records of politicians. Be aware of their actions and decisions.
    • Vote Wisely: Participate in elections and vote for leaders who prioritize peace, diplomacy, and the welfare of their constituents.
    • Hold Politicians Accountable: Engage in peaceful protests, write letters, and use social media to express your concerns. Demand transparency and accountability from elected officials.
  2. Promote Alternatives to War:
    • Diplomacy: Encourage dialogue and negotiation as alternatives to military conflict. Diplomacy can prevent wars and resolve disputes.
    • International Cooperation: Support organizations like the United Nations that work toward global peace and cooperation.
    • Conflict Resolution: Advocate for conflict resolution mechanisms that address root causes and promote reconciliation.
  3. Promote Ethical Leadership:
    • Character Matters: Look for leaders with integrity, empathy, and a commitment to the common good.
    • Reject Fear-Based Politics: Beware of politicians who exploit fear and division for personal gain. Seek leaders who unite rather than divide.
  4. Limit Political Power Concentration:
    • Checks and Balances: Support systems that prevent excessive concentration of power. Independent judiciaries, free press, and legislative oversight are crucial.
    • Term Limits: Encourage term limits for politicians to prevent entrenched power.
  5. Encourage Grassroots Movements:
    • Community Activism: Engage in local initiatives that promote peace, justice, and equality. Grassroots movements can drive change.
    • Youth Participation: Empower young people to be politically active and advocate for a better future.
  6. Promote Education and Critical Thinking:
    • Education: Invest in education that fosters critical thinking, empathy, and understanding. Informed citizens make better choices.
    • Media Literacy: Teach media literacy to discern reliable information from propaganda.
  7. Explore New Political Models:
    • Participatory Democracy: Explore models where citizens actively participate in decision-making beyond voting.
    • Technological Innovations: Leverage technology for transparent governance and citizen engagement.
Remember that change often starts at the grassroots level.
How can we create a world where politicians do not govern human societies?
Creating a world where politicians do not govern human societies requires bold shifts and collective efforts. Here are some steps we can take:
  1. Empower Local Communities: Decentralization: Shift power away from centralized governments. Empower local communities to make decisions that directly impact their lives.
  2. Participatory Democracy: Involve citizens in decision-making processes. Encourage town halls, community forums, and participatory budgeting.
  3. Transparency and Accountability: Open Government: Demand transparency in governance. Ensure politicians disclose their actions, finances, and affiliations.
  4. Independent Oversight: Strengthen independent bodies (e.g., ombudsmen, auditors) to monitor politicians’ conduct.
  5. Term Limits and Rotation: Limit Tenure: Implement term limits for politicians. Prevent long-lasting power accumulation. Rotation of Leaders: Encourage regular rotation of leadership positions to prevent entrenched interests.
  6. Merit-Based Selection: Qualifications: Select leaders based on merit, expertise, and commitment to public service.
  7. Competency: Assess candidates’ abilities to address societal challenges effectively.
  8. Education and Civic Literacy: Critical Thinking: Promote education that fosters critical thinking, ethics, and civic responsibility.
  9. Media Literacy: Equip citizens to discern reliable information from propaganda.
  10. Alternative Models: Technocracy: Explore governance by experts in relevant fields.
  11. Direct Democracy: Use technology for direct citizen participation in decision-making.
  12. Global Cooperation: International Institutions: Strengthen global organizations to address transnational issues.
  13. Shared Goals: Promote cooperation over competition among nations.
Remember, change begins with individual actions and collective advocacy. Let’s envision a world where governance serves the common good, not personal or special-group interests.
  1. How can we establish a world where governance transcends political structures?
  2. Do humans, in nature, need politicians?
  3. What do you think?
Relevant answer
Answer
António José Rodrigues Rebelo
The root cause of human conflicts and wars primarily lies in political factors and the actions of politicians.
Dear Colleague, António José Rodrigues Rebelo, while your analysis emphasizes psychological factors like envy and jealousy, it’s essential to recognize that human conflicts are multifaceted. Here’s a counterpoint:
  1. Political Nature of Conflicts: Power Struggles: Politics often involves the struggle for power, influence, and control. Conflicts arise when distinct groups or nations vie for dominance. Resource Allocation: Political decisions impact resource distribution. Disputes over land, water, minerals, and other assets can escalate into conflicts. Ideological Clashes: Political ideologies—whether democratic, authoritarian, or revolutionary—shape how societies function. These differing worldviews can lead to clashes.
  2. Historical Examples: World Wars: Both World War I and World War II had deep political roots. Nationalism, territorial ambitions, and alliances among nations fueled these devastating conflicts. Cold War: The ideological struggle between the United States and the Soviet Union during the Cold War was fundamentally political. It shaped global dynamics for decades.
  3. Leadership Decisions: Politicians and Statesmen: Leaders’ decisions—whether wise or misguided—directly impact conflict outcomes. Declarations of war, peace negotiations, and alliances fall within their purview. Hitler’s Role: Returning to your example of Adolf Hitler, his political ideology, expansionist ambitions, and anti-Semitic policies were pivotal in World War II.
  4. Structural Factors: Institutions and Systems: Political structures, such as governments, international organizations, and treaties, influence conflict prevention and resolution. Economic Policies: Political choices regarding trade, sanctions, and economic cooperation affect relations between nations.
In summary, while psychological factors play a role, politics and the decisions made by politicians significantly shape the course of human conflicts and wars. Understanding this interplay allows us to address conflicts more effectively.
  • asked a question related to Rendering
Question
6 answers
Dear colleagues,
I have encountered a statistical issue involving the analysis of Likert scale data, specifically using the POSAS scar scale, in a paired situation. The scale comprises numerous questions, and I am considering two approaches for comparison: either evaluating each item separately between two groups or aggregating the values to obtain a final score for subsequent comparison.
However, a challenge arises when I choose to add up the scores. In such cases, obtaining significant results does not provide clarity on where the differences originated. Conversely, if I opt to compare each item individually, the number of tests would escalate to nearly 80, making it sensible to consider Bonferroni correction. Nevertheless, dividing the significance threshold (0.05) by 80 renders many scores non-significant.
I am contemplating whether it is reasonable to adjust with Bonferroni correction by dividing by the total number of tests. Alternatively, I am considering adjusting for each Likert item separately. For instance, in a two-time point comparison, dividing by 2 for each Likert item seems more appropriate than aggregating all Likert scale comparisons, which would result in a division by 70-80 tests.
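The per-family correction weighed above can be sketched in a few lines of Python (illustrative only, not a recommendation for this specific dataset; function names are my own). Plain Bonferroni divides alpha by the number of tests m; the Holm step-down procedure controls the same family-wise error rate but is uniformly more powerful:

```python
def bonferroni(p_values, alpha=0.05):
    """Plain Bonferroni: a test is significant only if p < alpha / m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: compare sorted p-values to increasing thresholds.

    The smallest p is tested against alpha/m, the next against alpha/(m-1),
    and so on; once one test fails, all larger p-values fail too.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] < alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject
```

For example, with p-values [0.001, 0.01, 0.04] and 80 tests in the family, plain Bonferroni would demand p < 0.000625, which illustrates why so many items become non-significant; Holm softens this only partially, so restricting the family (e.g., per item across time points) is indeed the more consequential choice.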
I would greatly appreciate your assistance and insights on this matter.
Relevant answer
Answer
Thanks a million for your help.
  • asked a question related to Rendering
Question
3 answers
In the era of big data and artificial intelligence (AI), where aggregated data is used to learn about patterns and for decision-making, quality of input data seems to be of paramount importance. Poor data quality may lead not only to wrong outcomes, which will simply render the application useless, but more importantly to fundamental rights breaches and undermined trust in the public authorities using such applications. In law enforcement as in other sectors the question of how to ensure that data used for the development of big data and AI applications meet quality standards remains.
In law enforcement, as in other sectors, the key element of ensuring quality and reliability of big data and AI apps is the quality of raw material. However, the negative effects of flawed data quality in this context extend far beyond the typical ramifications, since they may lead to wrong and biased decisions producing adverse legal or factual consequences for individuals, such as detention, being a target of infiltration or a subject of investigation or other intrusive measures (e.g., a computer search).
source:
Relevant answer
Answer
EDUARD
I would also strongly suggest looking at the nature of "outliers." In my experience, they may point to:
1) Enhanced data collection methods and/or metrics (respectively, improving future efforts, but sometimes yielding remarkable improvements in model validities)
2) Breakthroughs in understanding (pointing to new research directions, e.g., an important genetic polymorphism, an unanticipated mechanism for reducing disease transmission, or an immediate product improvement opportunity)
ALVAH
 Alvah C. Bittner, PhD, CPE
  • asked a question related to Rendering
Question
6 answers
Preferably SVG or EPS.
Relevant answer
Answer
I have a good experience with using Inkscape for creating SVGs out of the PNGs from Pymol. I usually use the Multicolor/color mode and 8-10 scans (colors) are usually sufficient, otherwise default settings can be kept. More help for Inkscape tracing bitmaps is here: https://inkscape.org/doc/tutorials/tracing/tutorial-tracing.html
  • asked a question related to Rendering
Question
4 answers
My research study is about financial price modeling. I have several time series in which the frequency of some series is monthly, while the frequency of others is annual. My objective is to convert the annual series to monthly data, but what is the relevant statistical method in this case?
Relevant answer
Answer
@Davood-Omidian Yes, I understand your answer; it is very important. I will try it, thank you very much.
Best regards
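One simple option for the annual-to-monthly conversion asked about above is linear interpolation between consecutive annual anchors; a minimal sketch (function name is mine; note that dedicated temporal-disaggregation methods such as Denton or Chow-Lin additionally preserve annual totals and are usually preferable for flow variables):

```python
def annual_to_monthly(annual):
    """Naive temporal disaggregation by linear interpolation.

    annual: list of year-end values [v0, v1, ...]. Returns monthly values
    interpolated in 12 equal steps between consecutive year-end anchors,
    including both endpoints. No extrapolation beyond the last anchor.
    """
    monthly = []
    for a, b in zip(annual, annual[1:]):
        step = (b - a) / 12.0
        monthly.extend(a + step * k for k in range(12))
    monthly.append(annual[-1])
    return monthly
```

For two annual anchors this yields 13 monthly points rising smoothly from the first value to the second; interpolation smooths away within-year variation by construction, so it is only appropriate when the annual series genuinely evolves slowly.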
  • asked a question related to Rendering
Question
6 answers
In my research study, I'm trying to model financial series using two series as data sources: the first series is monthly, and the second series is daily.
I've tried to render the second series monthly as well, by taking the average within each month, but I've noticed that the wide fluctuations that make my research so interesting have disappeared.
How can we render a daily time series into a monthly series while retaining the effect of extreme values in the series?
What is the appropriate measure or statistical technique in this case?
Relevant answer
Answer
There are several ways to convert a daily series into a monthly one. Popular methods include
  1. Taking the average (arithmetic or geometric).
  2. Taking the value at the end of the month
  3. Taking the value at the middle of the month.
  4. Taking the value at the beginning of the month.
  5. Taking the maximum of the series in a month.
  6. Taking the minimum of the series in a month.
  7. Taking the difference between that maximum and minimum.
  8. Getting an estimate of the monthly variance of the series.
What you do now depends on the nature of your monthly variable and the process you are trying to capture. Look at the economics or finance of the process. For example, if you think that your monthly variable is dependent on the volatility and level of your daily series, you might consider using one of 1 to 4 for the level and 7 or 8 for the volatility. There are many combinations that might be useful but that is up to you. It may also be necessary to log transform your variables before analysis.
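The options listed above can be computed in one pass over the daily data; a self-contained sketch (function and key names are illustrative, not from the thread):

```python
from collections import defaultdict
from datetime import date
from statistics import mean, pvariance

def monthly_aggregates(series):
    """Collapse a daily series [(date, value), ...] into per-month summaries.

    Returns {(year, month): {...}} covering the aggregations listed above:
    mean, first/last value, max, min, max-min range, and variance.
    """
    by_month = defaultdict(list)
    for d, v in sorted(series):
        by_month[(d.year, d.month)].append(v)
    out = {}
    for ym, vals in by_month.items():
        out[ym] = {
            "mean": mean(vals),
            "first": vals[0],                # value at the start of the month
            "last": vals[-1],                # value at the end of the month
            "max": max(vals),
            "min": min(vals),
            "range": max(vals) - min(vals),  # preserves extreme moves
            "variance": pvariance(vals),     # within-month volatility proxy
        }
    return out
```

Keeping the range or variance alongside the level (mean or last value) is one way to carry the extreme daily fluctuations into the monthly model rather than averaging them away.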
  • asked a question related to Rendering
Question
2 answers
Hello,
I would like to create Physically Based Rendering (PBR) textures from an image example as automatically as possible. I'm interested in research papers, libraries, or software able to generate PBR textures approximating an image example as realistically as possible, without requiring 3D-artist expertise.
Just showing a textured region of the input image example should suffice.
Any reference along these would be appreciated:
Regards,
Bruno
Relevant answer
Answer
Nuwan Madusanka,
I see that PyVista simplifies the use of VTK to load and display 3D models, but I don't see anything in your answer about the creation of PBR textures from a picture.
As shown here:
it should help generate different maps like:
  • Diffuse
  • Specular
  • Normal
  • Roughness
  • Etc.
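Not a full material-capture pipeline, but one common building block for the normal map in that list is deriving it from a grayscale height estimate of the photo via finite-difference gradients; a minimal sketch (function name and representation are my own, assuming the height field is a list of rows of floats in [0, 1]):

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D height field into per-pixel tangent-space normals.

    Uses central differences with clamped borders; 'strength' scales how
    strongly height variation tilts the normals. Returns unit (nx, ny, nz)
    tuples, with (0, 0, 1) on perfectly flat regions.
    """
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            n = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx / n, ny / n, nz / n)
    return normals
```

The other maps in the list (roughness, specular) need additional assumptions about lighting in the photo, which is exactly what the research-grade material-capture methods the question asks about try to estimate.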
  • asked a question related to Rendering
Question
2 answers
If a monoecious plant's tissue is exposed to chemical or radiation-induced mutations, is the male or female part more likely to be rendered sterile?
Relevant answer
Answer
The pistils contain much more "liquid" volume than the stamens, that is, greater absorption capacity, so radiation-induced mutations will be greater in the female part. By the way, 30 years ago I conducted a short study using a strong magnetic field as an impact factor, with a mitosis-defects test. I believe that the volume of liquid solutions is the main component of any physical impact, following Giorgio Piccardi.
  • asked a question related to Rendering
Question
3 answers
I'm seeking to upgrade my laptop to a new one, or to a desktop. I have found that I can fiddle around with structures on this current 6th-gen i7, 8 GB Lenovo 700 with this integrated graphics chipset: https://www.techpowerup.com/gpu-specs/hd-graphics-520.c2783. So rendering final >600 dpi publication images or heavy electron density maps is not happening.
Many people prefer Macs and Macbooks for such graphical tasks. I prefer staying on Linux. If someone would suggest a laptop, desktop, or graphics card model (Linux compatible), that makes molecular graphics an enjoyable work task, that would be much appreciated :)!
Relevant answer
Answer
I recommend an HP laptop with a Core i7, 32 GB RAM, and a 500 GB SSD,
including the Linux operating system.
See this link:
  • asked a question related to Rendering
Question
2 answers
I'm working on confocal imaging of attine ant fungus gardens, which are sponge-like fungal structures containing fragmented plant material and an associated bacterial microbiota. To look into the fungus garden structure, we are attempting to embed the structure and use a microtome to get longitudinal sections.
Our first attempt using Leica Historesin did not render good results for FISH. Does anyone know a better method that could render good-quality FISH results?
Thanks
Relevant answer
Answer
There are several methods that can be used for solid embedding of fungal specimens to allow fluorescence in situ hybridization (FISH) and confocal imaging. Some of the most common methods include:
  1. Cryosectioning: Cryosectioning involves freezing the fungal specimens and then cutting them into thin slices using a cryostat microtome. The slices are then mounted onto glass slides and can be used for FISH and confocal imaging.
  2. Epoxy resin embedding: Epoxy resin embedding involves infiltrating the fungal specimens with a liquid epoxy resin, which is then cured to form a solid block. The blocks can then be sectioned and mounted onto glass slides for FISH and confocal imaging.
  3. Acrylamide embedding: Acrylamide embedding involves infiltrating the fungal specimens with a liquid acrylamide solution, which is then polymerized to form a solid block. The blocks can then be sectioned and mounted onto glass slides for FISH and confocal imaging.
  4. Agar embedding: Agar embedding involves embedding the fungal specimens in a solid agar block, which can then be sectioned and mounted onto glass slides for FISH and confocal imaging.
Each of these methods has its own advantages and disadvantages, and the best method for solid embedding of fungal specimens will depend on the specific requirements of the experiment and the characteristics of the specimens. It is important to carefully consider the specific needs of the experiment and the properties of the specimens when selecting a method for solid embedding.
  • asked a question related to Rendering
Question
2 answers
I am using Google Earth Pro to get some images for maps of my research area, and also to get some coordinates of my study area, and I am required to cite it. But I am not sure what the citation should look like.
Do I need to include this information? Thanks!
Google Earth Pro
7.3.6.9326 (64-bit)
Build Date
Tuesday, December 13, 2022 5:26:44 AM UTC
Renderer
DirectX
Operating System
Microsoft Windows (6.2.9200.0)
Graphics Driver
Google Inc. (00008.00017.00010.01404)
Maximum Texture Size
16384×16384
Available Video Memory
4336 MB
Server
Relevant answer
Answer
This url may help and has an example:
The following is just a suggestion: include any KML or KMZ files you used with included "Snapshot View" to replicate the view you made and provide the files from a git repository.
  • asked a question related to Rendering
Question
17 answers
In his 1907 article, "Man's Greatest Achievement", Tesla equates the imperceptible primary substance with the luminiferous aether. It is now suggested that the luminiferous medium, which is the medium for the propagation of light, is in fact perceptible matter, just like ponderable matter, arising when the primary substance is rendered into tiny whirlpools, and that the luminiferous medium differs from ponderable matter only in scale, in that the former is comprised of leptons.
Relevant answer
Answer
Frederick David Tombe, then basically we agree on the model of the aether, although my interpretation of vortices is a bit different: I see them as a discrete system of particles, probably in a rotation state, rather than a set of vortices in a continuous medium.
Best Regards
  • asked a question related to Rendering
Question
4 answers
Speaking of TRICK,
I recommend reading the following attached paper
"EULER’S TRICK AND SECOND 2-DESCENT ", which demonstrates how it is possible to tackle higher-grade Diophantine problems with elementary techniques, such as those discovered by Leonhard Euler:
The method is based on an idea of Euler's and seems to be related to unpublished work of Mordell.
In my two proofs of Fermat's Last Theorem I have done nothing but follow Euler's ideas and tricks.
The infinite descent is clearly a technique that renders Krasner's TRICK ineffective !!!
Enjoy the reading.
Andrea Ossicini, AL HASIB
Relevant answer
Answer
You are right to question the journal. The publisher behind “Journal of Research in Applied Mathematics” is Quest Journals Inc., a publisher mentioned in the Beall’s list (https://beallslist.net ). This is a red flag that this might be a dubious journal/publisher. There are more:
-The prominently mentioned impact factor https://www.questjournals.org/jram/impact.html is fake. The mentioned SJIF is a notorious misleading metric (https://beallslist.net/misleading-metrics/ ) often used by predatory journals
-In the above link it is suggested that there is a link with IOSR (another dubious publisher mentioned in the Beall’s list). Indeed, the lay-out of the papers suggests a link
-Contact info Mayur Vihar, Phase 1, New Delhi, India is vague
-They make use of misleading info in their mails by referring to a real and legit journal with the title Quest (https://hrcak.srce.hr/file/387134 )
-Had a quick look at some of the papers in “Journal of Research in Applied Mathematics” and, looking at for example the difference in editing (if any), I suspect numerous cases of plagiarised content. Take for example https://www.questjournals.org/jram/papers/v8-i5/A08050103.pdf (I suspect the content was taken from the internet and that the authors are not aware of this one)
Together with the fact that none of their journals has a serious indexing (Scopus, ESCI/SCIE etc.) I would say avoid this one.
Best regards.
  • asked a question related to Rendering
Question
2 answers
I am conducting research in the area of heritage planning and conservation. A Heritage Impact Assessment (HIA) is necessary before any kind of change or development in the built environment around a heritage site, within a defined regulated area, to determine its impacts on the potential of the heritage. In India, it has now become mandatory by the National Monument Authority (NMA) in the case of any centrally protected monument. Visual Impact Assessment is a very important component of an HIA to assess any future impact on the overall landscape around the heritage site. To be precise, according to NMA guidelines, it is required to check the skyline with respect to the heritage site, any visual obstruction of views of the heritage site, shadow cast on the heritage site by new development, and compliance with building design bye-laws.
Guideline for HIA by NMA can be found here: https://www.nma.gov.in/documents/20126/51838/HIA+Report.pdf
From the available examples of HIA reports, I understood that experts are using 3D software, first to model the existing structures and then to add the proposed structure to generate views in the form of images/renders to visualise the projected development. Sometimes it is done by only drawing a section and marking the human eye angle. I am not sure how they validate these views. From these images/renders alone, one cannot say very definitively whether they are accurate or not. I am also unsure about the view/camera point selection.
I have not been able to find any study on the assessment of the overall visual quality of the surrounding area due to new changes.
It would be great if you know of any study or documents or share some light on this.
Relevant answer
Answer
Check the PDF below.
  • asked a question related to Rendering
Question
4 answers
Dear scholar, I hope all of you are doing well
One of the reviewers commented on my article.
The presence of structural breaks may render unreliable findings retrieved from the ARDL test. The authors are suggested to test for the possible structural breaks and augment it into ARDL model.
How can I respond to them?
OR
In the presence of structural breaks, is there any alternative econometric technique that gives unbiased estimates, other than ARDL?
If yes, please name it and the software package to run it.
Please help
Thanks
Ijaz Uddin
Relevant answer
Answer
Ijaz Uddin When a time series abruptly changes at a point in time, this is referred to as a structural break. This change might include a shift in the mean or a shift in the other parameters of the process that generates the series.
A structural break in economics can occur as a result of a war, a significant shift in government policy, or another similarly abrupt event.
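To make the reviewer's point concrete: a classical way to test for a break at a known date is the Chow test, which compares the pooled regression fit against separate fits before and after the candidate break. Below is a minimal numpy sketch (the function name `chow_f` is illustrative, not from any particular package); for unknown break dates, tests such as Zivot-Andrews or Bai-Perron are commonly used instead.

```python
import numpy as np

def chow_f(y, x, break_idx):
    """F statistic of a Chow test for a structural break at break_idx
    in the simple regression y = a + b*x + e (illustrative helper)."""
    def ssr(yy, xx):
        # OLS residual sum of squares for one segment
        X = np.column_stack([np.ones_like(xx), xx])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        r = yy - X @ beta
        return float(r @ r)
    n, k = len(y), 2                       # k = number of parameters
    pooled = ssr(y, x)                     # one regression over the full sample
    split = ssr(y[:break_idx], x[:break_idx]) + ssr(y[break_idx:], x[break_idx:])
    return ((pooled - split) / k) / (split / (n - 2 * k))
```

If a break is confirmed, one common route is to augment the ARDL specification with a break dummy at the detected date.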
  • asked a question related to Rendering
Question
3 answers
I would like to deflavinate my enzyme (MtCDH) to render it catalytically inactive. I have come across the publication from B. E. P. SWOBODA but I am not sure if it is working. Does anyone have experience with this?
Relevant answer
Answer
Hi, I have in mind that you may try incubation with high concentrations of potassium bromide. Good luck. joao
  • asked a question related to Rendering
Question
4 answers
Rendering CGVIEW image...
java -Xmx1500m -jar cgview\cgview.jar -f jpg -i C:\Users\hhh\Desktop\output\scratch\sequence.fasta.xml -o C:\Users\hhh\Desktop\output\sequence.fasta.jpg
Error occurred during initialization of VM
Could not reserve enough space for 1536000KB object heap
Done.
Relevant answer
  • asked a question related to Rendering
Question
3 answers
I recently found a dissertation about lightmap compression: https://www.diva-portal.org/smash/get/diva2:844146/FULLTEXT01.pdf
Are there other papers about this topic? Thanks.
Relevant answer
Answer
Saba A. Tuama Thanks for replying. I noticed that the lightmap solution in this article has a temporal axis. Does this mean their lightmaps aren't static over time, i.e., some sort of animation?
  • asked a question related to Rendering
Question
4 answers
I want the DEM rendering effect shown in the picture below, but the hillshade made in ArcGIS does not look like the one shown in the picture. If you have a detailed production process and parameter settings, please let me know. Thank you very much.
Relevant answer
Answer
Hey!
Sometimes very good results come from applying not only a transparent hillshade but additionally a transparent slope map. If you make such a combination, the effects may surprise you, see below :-)
  • asked a question related to Rendering
Question
1 answer
Google's Bilateral Guided Upsampling (2016) proposed an idea that upsampling could be guided by a reference image to maintain high-frequency information like edges. This reminds me of G-buffers in real-time rendering.
Do state-of-the-art super-resolution algorithms in real-time rendering, such as Nvidia's DLSS and AMD's FSR, use a similar idea? How do they exploit G-buffers to aid the upsampling?
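For intuition, the core of guided upsampling can be sketched in a few lines: each high-resolution output pixel blends nearby low-resolution samples, weighted both by spatial distance and by similarity in a full-resolution guide image (in a renderer, a G-buffer channel such as albedo or depth could serve as the guide). This is a minimal, unoptimized numpy sketch of the general idea, not the actual DLSS or FSR pipelines:

```python
import numpy as np

def joint_bilateral_upsample(low, guide, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample `low` (h, w) to the resolution of `guide` (H, W), weighting
    each low-res sample by spatial distance AND similarity in the guide,
    so edges present in the guide are preserved in the result."""
    H, W = guide.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = y / scale, x / scale          # position in low-res space
            y0, x0 = int(ly), int(lx)
            num = den = 0.0
            for dy in (0, 1):                      # 2x2 low-res neighbourhood
                for dx in (0, 1):
                    sy = min(y0 + dy, low.shape[0] - 1)
                    sx = min(x0 + dx, low.shape[1] - 1)
                    ds2 = (ly - sy) ** 2 + (lx - sx) ** 2
                    # range term: compare guide value at output pixel vs sample
                    gy = min(int(sy * scale), H - 1)
                    gx = min(int(sx * scale), W - 1)
                    dr = guide[y, x] - guide[gy, gx]
                    w = np.exp(-ds2 / (2 * sigma_s ** 2)) * \
                        np.exp(-dr ** 2 / (2 * sigma_r ** 2))
                    num += w * low[sy, sx]
                    den += w
            out[y, x] = num / max(den, 1e-12)
    return out
```

Because the range term suppresses samples whose guide values differ, edges in the guide survive the upsampling, whereas a plain bilinear filter would blur across them.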
  • asked a question related to Rendering
Question
1 answer
I would like to meet researchers and developers who are currently working, or have previously worked, on foveated, a.k.a. gaze-contingent, rendering. Although foveated rendering could be the next big thing in the rendering community, an active community is still missing in this domain. Perhaps we can jointly build a developer community on Reddit/Slack/Discord.
Relevant answer
Answer
I have created a reddit community on foveated rendering. please join: (1) foveated_rendering (reddit.com)
  • asked a question related to Rendering
Question
7 answers
We, as authors, are expected to follow certain ethical codes laid down by journals. For instance, authors can not submit the same article in more than one journal.
On the other hand, there are hardly any ethics for journals and editors. Journals rarely make the first decision within the ‘average’ first decision period mentioned in the journal's guidelines. Similarly, some manuscripts remain under review for more than a year at times, and journals reject an article after keeping it under review for such long times. By the time such decision is made, the article already loses its relevance.
I want to stress that a line of ethics shall be drawn for journals and editors as well.
1. There must be a maximum time limit for making the first decision and also for the review. Two weeks are enough for the first decision; the editor must go through the article and make the first decision in this period (if an article has some worth, send it to review; else desk-reject it). For peer review, I understand that getting reviews is a time-consuming process, but there must still be an upper limit.
2. I have experienced that revisions are often sent to new reviewers who suggest additional new changes and sometimes recommend rejection also. Revisions should be sent to original reviewers, and in case original reviewers are not available, then the editor must make the decision on the basis of revisions recommended by the original reviewers and the changes made by the authors.
There may be other points also that fellow Research Gate members may highlight.
In my opinion, until journals follow such ethics, I do not see any harm in sending the same manuscript for consideration to multiple journals. A very delayed rejection decision renders the manuscript useless. Why should authors be hard done by? Journals and editors have ethics to follow too.
Relevant answer
Answer
Yes, that is standard practice. All references should be cited properly as well.
For a good article, the key attributes of the research question are:
(i) being specific,
(ii) being original, and
(iii) having general relevance to the wider scientific community.
  • asked a question related to Rendering
Question
1 answer
We are trying to run experiments with the Biolog EcoPlates to characterize microbial communities from nose swabs and other body sites. After incubation (both with and without shaking) we notice color formation in some of the wells, however the color is concentrated on the side of the well, rendering the measurement inaccurate. We have also noticed before the incubation that the substrate seems to adhere to the side of the wells in the plate, and pipetting does not help. Support from Biolog did not notice anything strange with this particular batch of plates we are using, so I am wondering if I am missing some basic technique here? Has anyone had similar issues?
  • asked a question related to Rendering
Question
3 answers
Dear All,
Could anyone help me with 3D analysis in the software Icy? I cannot render all cells in my z-stack in 2 channels: it shows only part of the blue channel, and the green channel, which contains the cells of interest, cannot be fully rendered from the TIFF z-stack. Could anyone guide me on how to perform a full 3D render and carry out the analysis?
Relevant answer
Answer
It may not work on the first attempt; try a few more times, and I think it will work, as it does for most people. If not, let me know.
Kind Regards
Qamar Ul Islam
  • asked a question related to Rendering
Question
4 answers
How to work properly in the development of an integral like the Abel Plana defined on this image:
I am interested in having a set of steps for attacking the problem of developing the integral, and in determining a criterion of convergence for any complex value s; I mean, whether the integral could have some specific behavior at, for example, s = 1/2 + i t, where I am interested in studying it.
I am interested in the proper evaluation of that integral using only formal steps in complex analysis.
The Abel PLana formula appears also on https://en.wikipedia.org/wiki/Abel%E2%80%93Plana_formula
Best regards
Carlos López
Relevant answer
Truman Prevatt thanks a lot, there is a lot of literature I need to read.
The expansion of the Hadamard product of Xi(s), and the Taylor series of Xi(s) around 1/2, has let me obtain at least two valuable expressions that relate the coefficients a_2n of that Taylor series with the real part of the non-trivial zeros.
Abel-Plana is fascinating because it lets one evaluate any s in that expression, finding a value of Zeta(s), especially in cases where s is simple, like s = 0. The formula contains the attractive 1/2 in one part of the definition. I want to revise all the details in the documents you have provided me. Thanks, and I will be asking later about curious results.
Have a nice Sunday
  • asked a question related to Rendering
Question
5 answers
Adequate and effective social infrastructure is very much necessary for the economic growth of a country. According to Sullivan:
“Social infrastructure refers to those factors which render the human resources of a nation suitable for productive work.”
A developing country is drastically different in terms of how its labour laws are regulated, how its citizens are educated, and how their health is handled. What are the unique ways to create, develop, and sustain social infrastructure in a country (particularly a developing one)?
Relevant answer
Answer
How can that be done considering that unemployment is usually very high in developing counties and taxation is usually weak?
  • asked a question related to Rendering
Question
12 answers
Hello!
I am currently analyzing data regarding the validation of a Greek translation of the Anxiety Sensitivity Index-3. I have a sample of around 200. Can I transform my variables (e.g., log or Z-scores) to approach normality, or does this render the convergent and discriminant evidence meaningless?
Thank you!
Relevant answer
Answer
Rhianon Allen wrote: "...most parametric analyses assume normality."
Normality of what, though? There is much confusion about this!
Better textbooks clarify that for OLS models (including t-tests and ANOVA), it is the errors (i.e., deviation of actual values from fitted values using the true, population regression expression) that are assumed to be normally distributed. But sadly, not many books point out that normality of the errors is a sufficient condition, but not a necessary condition.
The necessary condition is that the sampling distributions of the parameter estimates (i.e., the constant & the regression coefficients) be approximately* normal. As was implied above, if the errors are normally distributed, the sampling distributions of the parameter estimates will also be normal. But the latter sampling distributions approach normality as n increases, even if the error distribution is not normal. If you need published references to support what I've said here, see the textbooks by Wooldridge (2021) and Vittinghoff et al (2012) that are mentioned in the slides I have attached.
* Approximate normality is the best one can hope for when working with real world data. See the famous statement by George Box that I have attached below. The implication is that the t- and F-tests for OLS models are really approximate tests when one is using real world data, and the question is not whether we have exact normality or not, but whether the approximation is good enough to make the model "useful" (to borrow a word from another famous statement by Box!).
HTH.
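A quick numeric illustration of the distinction the question raises: z-scoring is a linear transform, so it cannot change any Pearson correlation (and hence leaves convergent/discriminant coefficients intact), whereas a log transform is nonlinear and can change Pearson correlations (though not rank-based ones, since it is monotone). A small numpy sketch; all data here are simulated, not from the Greek ASI-3 sample:

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.lognormal(mean=0.0, sigma=0.7, size=200)   # right-skewed scores, n = 200

log_t = np.log(raw)                                  # log transform reduces skew
z = (log_t - log_t.mean()) / log_t.std()             # z-scores: mean 0, sd 1

# z-scoring is linear, so it leaves every Pearson correlation
# (and hence convergent/discriminant coefficients) exactly unchanged:
other = 0.5 * raw + rng.normal(0, 0.5, 200)          # a correlated "other scale"
r_raw = np.corrcoef(raw, other)[0, 1]
r_std = np.corrcoef((raw - raw.mean()) / raw.std(), other)[0, 1]
```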
  • asked a question related to Rendering
Question
1 answer
Hello,
nowadays, more and more foveated rendering is based on eye-tracker-directed gaze points, known as dynamic foveated rendering. Although fixed foveated rendering no longer catches scholars' interest, I need some previous work to see how, and with what methods, this display-centered fixed foveated rendering was done.
I could not find enough research papers on Google Scholar; I would appreciate it if you could suggest some works on fixed foveated rendering.
Relevant answer
Answer
Without a gaze point, any kind of foveated rendering is static foveated rendering. Some past work, e.g. from Oculus developers, implicitly subdivided the image space into different regions with different sampling rates. But such literature is really rare nowadays. More and more, techniques use the gaze point as the main parameter for good foveated rendering.
  • asked a question related to Rendering
Question
1 answer
- The non-Fourier heat conduction equation, which is also known as the hyperbolic heat equation, has the form:
Relevant answer
Answer
You should use a general-form PDE interface to define the governing equation.
  • asked a question related to Rendering
Question
2 answers
I created an NDVI from a Sentinel-2A image, and it renders correctly using the plot() command in R. However, when I export it using the writeRaster() command, it saves a black-and-white image. Why?
Note that if I load this black-and-white image using the raster() function, it renders accurately in RStudio.
Can anyone help me out?
Thanks in advance.
Relevant answer
Answer
Paul Griesberger Thank you so much for your reply! In QGIS it was also the same as in R. However, I have fixed the problem now: I loaded the raster output in QGIS and applied a pseudocolor renderer. It works perfectly now. (writeRaster() saves the underlying single-band values, while plot() applies a colour ramp on the fly, which is why the exported file appears black and white.)
  • asked a question related to Rendering
Question
3 answers
Over the past decades, ecosystem and climate change modeling have made great advances. However, as indicated by the uncomfortably large deviations between predicted climate change and today's observed effects, there is obviously a lot of room for improvement. Today it is accepted that global warming, and a series of derived effects, are occurring at much faster rates than predicted even with the most sophisticated models just 10 years ago. Similar discrepancies exist for ecosystem models which try to predict production or the development of pest and disease populations. In the end, the complexities of non-linear behavior and multiple synergistic effects may render such complex systems impossible to model within the limits of acceptable accuracy. If models only hold under extensive lists of unrealistic assumptions (such as linear and additive effects vs non-linear synergistic effects), then their value for deriving practical recommendations must be questioned. So: what are the limits to (meaningful) modeling? I would warmly welcome pointers towards a readable account of this issue.
Relevant answer
The limits are insufficient data to parameterise models, and the fact that we cannot predict the future :)
  • asked a question related to Rendering
Question
2 answers
I am working on tumor induction in experimental rats and need 1,2-dimethylhydrazine to induce tumors in the lab animals. I would be grateful to anyone who can render help.
Relevant answer
Answer
Chen Jingwen ,Thank you so much
  • asked a question related to Rendering
Question
4 answers
I am trying to build an ephys rig with multiple amplifiers and use the WinWCP Whole Cell Electrophysiology Analysis Program to acquire the signals. In order to do so, I was planning to use a National Instruments PCIe-6353 analog to digital converter rendering 16 channels. I have previously tried the PCIe-6351 board with this software and it worked perfectly but I was wondering if there could be any compatibility issues with the PCIe-6353. Any suggestions will be greatly welcomed!!!
Relevant answer
Answer
Hi everyone, thanks for your feedback! As Javier Zorrilla de San Martin suggested, I have contacted John Dempster. He was extremely kind and offered different solutions. Basically, he said I should be able to fully operate up to 4 amplifiers due to the limit of 4 analog outputs in these NI cards. In order to connect more amplifiers, one of the 4 outputs should be routed to the extra amplifier stimulus inputs. He also offered to upgrade WinWCP to enable support for more than 4 amplifiers, which is the current default.
  • asked a question related to Rendering
Question
1 answer
We were recently informed that the STEMdiff™ Astrocyte maturation and differentiation kits are discontinued.
We have multiple projects running in parallel relying on these kits.
Are there any similar kits out there?
I am worried that switching will completely skew the results and will render our previous data useless for publication as different kits have different components.
Thanks,
Oded
Relevant answer
Answer
Hi, we are facing the same problems :-(
so far I found nothing similar
  • asked a question related to Rendering
Question
4 answers
What is commonly done to render the enzyme inactive, i.e., a mutation in the ATP-binding domain, etc.?
Relevant answer
Answer
If you just want to inhibit the activity of the enzyme, the easiest way would be to add EDTA in excess of the Mg2+ concentration in the reaction.
  • asked a question related to Rendering
Question
7 answers
I have a licensed copy of PyMol (ver 2.3.1) and I am looking to 'replicate' a protein/bilayer i.e., in VMD you select "Periodic" and select from a number of x, y and z options. This, in turn, copies the unit cell along that vector.
Is there a feature similar to this in PyMol? I'm struggling to find one.
Thanks
Relevant answer
Answer
Hi Anthony,
You can do:
show cell
  • asked a question related to Rendering
Question
3 answers
HI,
I am fascinated by this video made by Jim Stanis from USC.
https://www.youtube.com/watch?v=Eql5c4m_N68 It is a brain tractography from diffusion data, but he is able to show the tracts stemming, which is impossible with all the software I know (TrackVis, MRtrix, DSI Studio...). It is clearly an animation made with some modeling tool like Blender or 3D Studio.
What do you think the steps are?
Do you generate VTK files and then open them in Blender? Or what?
Relevant answer
Answer
The video quality is not sufficient to say much about the rendering technique. Based on the video description, it was "created by scientific animator Jim Stanis" and "shows selected pathways in the atlas they created". It could be based on VTK or on their own rendering framework. To me, it looks like simple line primitives were used for track visualization, going from fully transparent to opaque as a new ROI appears, with some transparency for old ones. Surely, using VTK could simplify this workflow.
  • asked a question related to Rendering
Question
1 answer
I want to know whether mesh data structures like half-edge, winged-edge, and octree are only suitable for rendering, or whether they can also be suitable for analysis.
Relevant answer
Answer
You can analyze it in MATLAB, and you can also write C++ code to measure and perform all the analysis.
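To the original question: half-edge structures are useful for analysis as well as rendering, precisely because adjacency queries become constant-time pointer hops. A minimal Python sketch (class and function names are illustrative) of a half-edge mesh with two triangles sharing an edge:

```python
class HalfEdge:
    """Minimal half-edge record: enough to traverse faces and cross edges."""
    __slots__ = ("origin", "twin", "next")
    def __init__(self, origin):
        self.origin = origin      # index of the start vertex
        self.twin = None          # opposite half-edge on the adjacent face
        self.next = None          # next half-edge around the same face

def add_face(vids):
    """Create the half-edge loop of one face from its vertex indices (CCW)."""
    hes = [HalfEdge(v) for v in vids]
    for i, he in enumerate(hes):
        he.next = hes[(i + 1) % len(hes)]
    return hes

# Two triangles sharing edge (1, 2): the query "which face is across this
# edge?" becomes a single pointer hop via `twin`.
f0 = add_face([0, 1, 2])
f1 = add_face([2, 1, 3])
f0[1].twin, f1[0].twin = f1[0], f0[1]   # half-edge 1->2 twinned with 2->1
```

From any half-edge, `next` walks around its face and `twin` crosses to the neighbouring face, which is the basis of analysis operations such as one-ring traversal and boundary detection (a boundary half-edge is one whose `twin` is None).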
  • asked a question related to Rendering
Question
13 answers
Actually, for every text to be translated there is a kind of translation that is very suitable for that text. For example, scientific texts should be semantically translated because the accuracy of meaning in such a case is of top priority. Literary texts, on the other hand, can be communicatively managed. In other words, the meaning in this case is not of top priority but goes hand in hand with the form, which is very important too.
It could be added that political discourse is characterized by a lot of playing on words' meanings. In other words, politicians, in most situations, try to use certain words and expressions with opposite meanings. Such being the case, the pragmatic approach should be relied on when it comes to rendering political discourse.
Relevant answer
Answer
Translated into what?
And how?
  • asked a question related to Rendering
Question
5 answers
Hi,
I am interested in rendering 2 volume files on one brain, similar to the way you can add a volume file in BrainNet. However, BrainNet doesn't allow you to input 2 volume files. Do you know of any other software or toolbox (preferably Matlab based) that allows one to input 2 volume files?
I have MRIcron, but the resulting images are not as nice as BrainNet's. Are there any imaging resources that can take 2 volume files on a "pretty" brain?
Thank you.
Relevant answer
Answer
The Lead-DBS 3D viewer can render any number of volumes (and it’s in Matlab). Menu “Add Objects” > “Add ROI”
  • asked a question related to Rendering
Question
5 answers
In this 21st century, all kinds of skills, simple or sophisticated, need to be continuously updated and developed, or else they will become obsolete, and that may render many people jobless, discouraged and poor.
Relevant answer
Answer
Thank you Dr. Vasan, for your detailed answer, that is based on vast and invaluable experience.
  • asked a question related to Rendering
Question
3 answers
I am conducting a study to explore the quality of mental health care services rendered by public health facilities, with the aim of developing a progress monitoring tool to improve the services offered.
Relevant answer
A useful theory is death anxiety, which is related to many mental disorders. There are lots of scales measuring this already. Contact the authors of this paper. They may be on RG:
Clin Psychol Rev. 2014 Nov;34(7):580-93. doi: 10.1016/j.cpr.2014.09.002. Epub 2014 Sep 22.
Death anxiety and its role in psychopathology: reviewing the status of a transdiagnostic construct.
Iverach L1, Menzies RG2, Menzies RE3.
Author information
Abstract
Death anxiety is considered to be a basic fear underlying the development and maintenance of numerous psychological conditions. Treatment of transdiagnostic constructs, such as death anxiety, may increase treatment efficacy across a range of disorders. Therefore, the purpose of the present review is to: (1) examine the role of Terror Management Theory (TMT) and Experimental Existential Psychology in understanding death anxiety as a transdiagnostic construct, (2) outline inventories used to evaluate the presence and severity of death anxiety, (3) review research evidence pertaining to the assessment and treatment of death anxiety in both non-clinical and clinical populations, and (4) discuss clinical implications and future research directions. Numerous inventories have been developed to evaluate the presence and severity of death anxiety, and research has provided compelling evidence that death anxiety is a significant issue, both theoretically and clinically. In particular, death anxiety appears to be a basic fear at the core of a range of mental disorders, including hypochondriasis, panic disorder, and anxiety and depressive disorders. Large-scale, controlled studies to determine the efficacy of well-established psychological therapies in the treatment of death anxiety as a transdiagnostic construct are warranted.
  • asked a question related to Rendering
Question
3 answers
I have a Phantom Omni haptic device and want to render a deformable shape. For this, I define a plane of vertices; when the cursor collides with the shape, vertices near the collision point move down, and the shape more or less deforms. The problem is that this method is too slow, and the force-feedback and rendering frequency drop. What method do you suggest to solve this problem?
Relevant answer
Answer
Use some graphics tricks to improve the graphics and haptics rendering rate. For example, you can use a dense mesh near the tool, and a sparse mesh will do elsewhere. The trick depends on your application.
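Another way to keep the haptics loop fast is to replace a per-vertex Python-level collision test with a single vectorized update using a smooth falloff around the contact point. A minimal numpy sketch (the function name and the Gaussian falloff are illustrative assumptions, not part of the Phantom Omni API):

```python
import numpy as np

def deform_plane(vertices, contact, radius, depth):
    """Displace plane vertices near `contact` along -z with a smooth
    Gaussian falloff; fully vectorized, so one collision update is a
    single pass over the array instead of a per-vertex loop."""
    d = np.linalg.norm(vertices[:, :2] - np.asarray(contact)[:2], axis=1)
    w = np.exp(-(d / radius) ** 2)        # 1 at the contact point, ~0 far away
    out = vertices.astype(float).copy()
    out[:, 2] -= depth * w                # push down proportionally to weight
    return out
```

Restricting the update to vertices within a few falloff radii of the contact point (e.g. via a precomputed spatial grid) reduces the work further when the mesh is large.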
  • asked a question related to Rendering
Question
4 answers
Given the longer and longer wait times for the first decision on a paper (by the editors), I would like to ask you:
As a reviewer, how long do you usually take to examine a paper? And what is your acceptance or refusal rate when asked to review a paper (not your rate of recommending rejection or acceptance)? Thank you.
Relevant answer
Answer
I'm frequently asked by the editor to complete my reviews in 4 or 6 weeks. However, I don't take that long because the actual reviewing process doesn't take anywhere near that long. I like to clear my desk of all tasks as soon as they come in, so I'm usually done with my reviews in about a week. But, I don't think this speeds up the process much because the other 2-3 reviewers take much longer, and then the editor takes some time to make a decision.
  • asked a question related to Rendering
Question
1 answer
I need detailed information about the ACQUIRE algorithm used for graphical target detection, which is a solution to the line-of-sight problem.
Here is the basic algorithm.
FBA (Framebuffer Based ACQUIRE) Algorithm
1. Move camera to agent's eye point
2. Render frame
3. Set target agent to false color (e.g. pure red)
4. Render frame again
5. Segment natural color frame into the figure and its surrounding pixels, the ground
6. Compute figure and ground brightness values
7. Compute detection probability via ACQUIRE
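Steps 3-6 above can be sketched with the two rendered frames as arrays: the false-color frame yields the figure mask, and the natural-color frame yields the brightness statistics. A minimal numpy sketch (the helper name `fba_brightness` is hypothetical, and the mean-of-RGB luminance is a simplifying assumption):

```python
import numpy as np

def fba_brightness(natural, false_color, target_rgb=(255, 0, 0)):
    """Steps 3-6 of the FBA sketch: segment the target ('figure') by its
    false color in the second render, then compute mean brightness of
    figure and ground pixels in the natural-color render.
    natural, false_color: (H, W, 3) uint8 arrays from the two renders."""
    mask = np.all(false_color == np.array(target_rgb), axis=-1)
    gray = natural.mean(axis=-1)          # simple luminance proxy
    figure = gray[mask].mean()            # mean brightness of target pixels
    ground = gray[~mask].mean()           # mean brightness of surround
    return figure, ground
```

The figure/ground contrast computed this way would then feed the ACQUIRE detection-probability model in step 7.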
Relevant answer
Answer
You can use various MATLAB image-processing algorithms to do this work. For graphical target detection, you can simulate your problem in MATLAB and design your algorithm accordingly.
  • asked a question related to Rendering
Question
3 answers
For not normally distributed data, I used Kruskal-Wallis test to investigate the statistical significance between different variables. I performed an exercise with three different intensities (different weights w1, w2, w3), then with one weight at three different speeds (s1, s2, s3). The readings were observed from 3 different points(p1, p2, p3).
I tested for statistical significance among p1, p2, p3 at (w1, w2, w3) and at (s1, s2, s3); then among w1, w2, w3 for each of p1, p2, p3; and then among s1, s2, s3 for each of p1, p2, p3. So there are 36 independent Kruskal-Wallis tests.
After that, the Mann-Whitney test was performed for pairs (post hoc analysis).
I got comment on the test that, "the piecemeal statistical approach, consisting of a very large number of comparisons made between dependent variables during different conditions, without corrections for multiple testing, renders it probable that “significant” results may well be due to chance."
Can someone please suggest where I am wrong, and how and where to adjust the p value?
Relevant answer
Answer
You need information on multiple-comparison methods. See the link:
Best, David Booth
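To address the reviewer's comment concretely: with 36 omnibus tests plus pairwise Mann-Whitney tests, the raw p-values need a family-wise correction such as Bonferroni or Holm. Holm's step-down procedure is uniformly more powerful than Bonferroni and is easy to apply by hand; a minimal numpy sketch (equivalent corrections are available in standard packages, e.g. statsmodels' `multipletests`):

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjustment of a family of p-values: sort ascending,
    multiply the i-th smallest by (m - i), enforce monotonicity, cap at 1."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        val = min(1.0, (m - rank) * p[idx])
        running_max = max(running_max, val)   # adjusted p-values must not decrease
        adj[idx] = running_max
    return adj
```

A corrected p-value is then compared against the usual alpha (e.g. 0.05); the same correction should cover the whole family of pairwise comparisons, not each Kruskal-Wallis test in isolation.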
  • asked a question related to Rendering
Question
6 answers
I am currently working on a research proposal about ancient auralizations and I am surprised to find very little actual audio renderings. Whilst I am sure many researchers have them, it seems that it very rarely gets published onto websites and other research platforms. Can anybody point me to either their private websites/collections or public online platforms where I might find this data? Thank you in advance.
Relevant answer
Answer
Also many thanks for the link, Guilherme Campos!
  • asked a question related to Rendering
Question
1 answer
I am currently analyzing the following panel data set with STATA on a daily base:
  • N = 324 companies
  • T = 252 trading days
  • 6 social media variables from Facebook and Twitter data (e.g., answer times, number of posts and replies)
  • 2 financial performance variables from CRSP data (abnormal return, idiosyncratic risk)
I tried both fixed effects estimation (xtreg fe) and panel vector autoregression (pvar), but neither approach yields satisfying results. I also tried the Arellano-Bond approach (xtabond) but was not quite sure about the endogenous and predetermined regressors; varying these yielded very few significant results. I also varied the operationalization of the social media and financial variables and looked at sub-samples (e.g., single industries, particular time frames, Facebook vs. Twitter sample), etc.
Apparently, for large-T panels, the bias present in fixed effects estimation - the rationale for dynamic panel analysis - declines with T and eventually becomes negligible, thus rendering the fixed effects estimator consistent. In comparison with other studies, I would assume that my T is rather large, so a fixed effects estimation might be more sensible than a panel vector autoregression.
Any thoughts on this topic?
Thanks,
Sarah
Relevant answer
Answer
Interesting project! A few thoughts:
  • 1. did you test for the level of bias in the lagged dependent variable in fixed effects? You can estimate the dynamic panel with xtivreg (simpler than xtabond for this purpose), using the second lag of the dependent variable to instrument for the first lag. (tutorial if helpful: https://youtu.be/wqwDcY9pq5I ). You are correct about the consistency of the estimates at large samples, so T=250 should yield consistent results)
  • 2. Did you consider the Hausman-Taylor estimator, wherein the fixed-effects transformation is applied only to those variables most likely to be highly correlated with the time-invariant error? This can significantly improve efficiency (again, tutorial if helpful: https://youtu.be/Rp4HhDIcMZo).
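To make the estimator under discussion concrete, here is a minimal pure-Python sketch of the within (fixed-effects) transformation that xtreg, fe applies, with a single regressor; the firm names and numbers are invented for illustration:

```python
# Hypothetical two-firm panel: the within transformation removes each firm's
# fixed effect by demeaning, then pooled OLS on the demeaned data gives the
# fixed-effects slope.

def within_transform(panel):
    """Demean each entity's series (the 'within' transformation)."""
    return {e: [v - sum(s) / len(s) for v in s] for e, s in panel.items()}

def fe_slope(x_panel, y_panel):
    """Fixed-effects estimator: OLS slope on the demeaned x and y."""
    x_dm, y_dm = within_transform(x_panel), within_transform(y_panel)
    num = sum(xt * yt for e in x_dm for xt, yt in zip(x_dm[e], y_dm[e]))
    den = sum(xt * xt for e in x_dm for xt in x_dm[e])
    return num / den

# y = 2*x plus a firm-specific intercept (10 for firm_A, 100 for firm_B):
x = {"firm_A": [1.0, 2.0, 3.0], "firm_B": [2.0, 4.0, 6.0]}
y = {"firm_A": [12.0, 14.0, 16.0], "firm_B": [104.0, 108.0, 112.0]}
print(fe_slope(x, y))  # 2.0 - the firm intercepts drop out
```

In Stata terms this is what xtreg y x, fe does internally; the Nickell bias that arises when a lagged dependent variable is added to such a regression shrinks at rate 1/T, which is why a large T like 252 makes the fixed-effects estimates approximately consistent.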
  • asked a question related to Rendering
Question
1 answer
I am planning to measure pressure and temperature variations in and around a metal vapour deposition chamber. The substrate is a steel strip, and I want to experimentally measure pressure and temperature variations as the metal vapour is introduced into the chamber. I would aim to put these gauges at different locations in the vacuum chamber. I have been advised that, because metallic vapour is introduced into the system, the gauges get contaminated by the vapour and are rendered useless. Does anyone have suggestions for pressure gauges that may work?
This is being undertaken as a form of validation for a computational simulation of a similar scenario.
Relevant answer
Answer
If you simply put your gauge head behind a screen, to prevent it from being directly hit by metal vapour from the source, you should be able to obtain reasonable pressure readings.
  • asked a question related to Rendering
Question
1 answer
Is it possible for yeast to express a KanR bacterial selectable marker from a plasmid? I am trying to knock out a gene in S. cerevisiae using a geneticin/G418 resistance cassette. The deletion renders the cells very sick so I first transformed them with a backup URA plasmid. The backup plasmid has a KanR cassette for selection in bacteria. However, I am getting a lawn of growth on my G418 plates after the knockout transformation (yeast + backup plasmid + KO cassette). Is it possible that my yeast are expressing the KanR cassette from the bacterial promoter somehow (maybe some kind of crossover with my KO cassette)?
Relevant answer
Answer
Hi there,
The bacterial KanR cassette is functional in yeast and therefore confers resistance to geneticin. You should avoid this selection marker on your plasmid.
  • asked a question related to Rendering
Question
5 answers
...
Relevant answer
Answer
Thank you Martin, Florent and especially Reuben for these very helpful suggestions.
  • asked a question related to Rendering
Question
10 answers
Dear all,
I'm looking for a good 3D renderer which includes the cerebellum, to display clusters (NIFTI files). Any recommendation?
Thank you for your help,
Cheers,
Dan
Relevant answer
Answer
The render is super nice!
  • asked a question related to Rendering
Question
6 answers
Local government in South Africa is entrenched in the Constitution as a sphere of government, as opposed to a tier. However, the financial and poor-leadership challenges encountered by municipalities render them ineffective in delivering basic services to residents.
The inability to render services necessitates administrative and financial intervention by National Government on a wide scale, and not as an exception for a few municipalities.
In this context, can municipalities be considered a sphere of government with executive authority, and how can they ensure that services are rendered in an inclusive and sustainable manner that promotes the human rights of local residents?
Relevant answer
Answer
I agree with Cam that the people always make the difference and that electing and even appointing good leaders is a challenge. I think the law can help by improving transparency for example requiring open meetings, and access to government records and documents. Laws can also minimize the degree to which nepotism and cronyism are used to filled government positions. Finally the citizens must be actively involved in the functions of government and this may be best accomplished through educating the members of society to become actively engaged and responsible civic citizens.
  • asked a question related to Rendering
Question
2 answers
In the process of generating computer images, the input is a 2D/3D model and the final output is a 2D digital image.
Is rendering before illumination, or illumination before rendering, or does the order not matter?
Relevant answer
Answer
You will find your answer and much more in any basic source about the graphic pipeline:
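For a concrete picture of where illumination sits: in the standard graphics pipeline it is not a stage before or after rendering, but the shading computation evaluated per vertex or per fragment while the image is produced. A minimal sketch of diffuse (Lambertian) shading, with made-up values (not any real API):

```python
# Illumination as evaluated during rendering: for each shaded point the
# renderer combines the surface normal, the light direction, and material
# properties. All names and values here are illustrative.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir, light_intensity, albedo):
    """Diffuse (Lambertian) illumination: I = albedo * L * max(0, n . l)."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * ndotl

# Surface facing straight up, light directly above:
print(lambert((0, 0, 1), (0, 0, 1), 1.0, 0.8))  # 0.8
# Light 60 degrees off the normal: cos(60 deg) = 0.5, so albedo * 0.5
print(lambert((0, 0, 1), (0, math.sqrt(3), 1), 1.0, 0.8))
```

So the answer to "which comes first" is that illumination is a sub-step of rendering; global-illumination renderers interleave the two even more tightly, simulating light transport as part of producing each pixel.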
  • asked a question related to Rendering
Question
5 answers
I am interested to calculate the change in environmental impacts when changing from open dumping of slaughterhouse waste to a rendering process.
Relevant answer
Answer
The IPCC protocol for estimating waste emission inventories is probably the most straightforward tool. Note, the IPCC protocol only inventories methane as GHG emissions, not carbon dioxide. It does this because the carbon is biogenic and is not thought to contribute to global climate change. Fossil-carbon sources do contribute to global climate change.
The process identifies the amount of carbon that will turn to methane in an open dumpsite. Instead of a first-order decay model, we will simply use a mass balance, meaning all of the GHG emissions occur in the year the waste is placed. Slaughterhouse waste may degrade very quickly, so this could be a reasonable assumption.
It uses a few parameters.
DOC - degradable organic carbon (the amount of carbon that will biodegrade).
DOCf - the fraction of carbon that will anaerobically degrade (that will form the methane).
F - the fraction of methane in anaerobic gas
MCF - the methane correction factor, to account for some aerobic decomposition that will occur
Borjesson et al. (2009) actually did most of the work for you here. They estimated slaughterhouse waste as part of their emissions inventory.
DOC of industrial waste (mostly slaughterhouse) = 0.12 tonnes C/tonne waste
DOCf = 0.7 (higher than the IPCC default, to account for the fact that slaughterhouse waste is likely protein- and fat-rich, meaning very biodegradable, and has a higher methane yield than cellulosic wastes such as paper).
F = 0.5
MCF = 0.5 (This number I am assuming because open dumping means there is quite a bit of aerobic degradation.) Look at the IPCC guidelines in the link above. You could assume a slightly higher number, such as 0.6.
DOC * DOCf *F * MCF *16/12 (CH4/C ratio) = 0.12 * 0.7 * 0.5 * 0.5 *16/12 = 0.028 t CH4/t waste disposed.
Divide that number by 0.714 and multiply by 1000 (unit conversion) to get 39.2 m3 CH4/t waste.
Alternately, take the 0.028 t CH4/t waste and multiply by the CO2 factor that you would like to use (21, 28, etc. depending on the timescale) to determine tonnes of CO2 generated per tonnes of waste disposed.
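As a sanity check, the arithmetic above can be reproduced in a few lines; the parameter values are the ones quoted in this answer, and the GWP of 28 in the last line is the 100-year value mentioned as one of the options:

```python
# Mass-balance methane estimate for open dumping, using the values above.

DOC = 0.12    # degradable organic carbon, t C / t waste
DOC_F = 0.7   # fraction of carbon degrading anaerobically
F = 0.5       # methane fraction of the anaerobic gas
MCF = 0.5     # methane correction factor for open dumping

ch4_t_per_t = DOC * DOC_F * F * MCF * 16 / 12   # t CH4 / t waste (16/12 = CH4/C)
ch4_m3_per_t = ch4_t_per_t / 0.714 * 1000       # m3 CH4 / t waste (0.714 kg/m3)

print(round(ch4_t_per_t, 3))   # 0.028
print(round(ch4_m3_per_t, 1))  # 39.2
# CO2-equivalent per tonne of waste, using a 100-year GWP of 28:
print(round(ch4_t_per_t * 28, 2))  # 0.78 t CO2e / t waste
```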
  • asked a question related to Rendering
Question
2 answers
Hi,
I'm looking for a nitrate test that can reliably and accurately quantify nitrate in media solutions. It is also important that it is robust and does not suffer interference from other media components (especially chloride, as the chloride concentration changes significantly during the experiment due to pH regulation, which renders my current test kit suboptimal!).
If anyone has experience with nitrate tests during algae cultivation and can recommend a method, I would be grateful!
Best wishes,
Matthias Koch
Relevant answer
Answer
Thank you very much for your detailed and helpful answer! This does indeed give some more insight and might just be what I need. Concentrations below 0.05 mg/L are not of too much concern, as you correctly pointed out...!
Thanks again!
  • asked a question related to Rendering
Question
1 answer
Professor Reuven Tsur proposes a cognitive-fossils approach in his 2017 book. It offers convincing explanations of some poetic conventions for which the influence-hunting approach fails to provide a satisfactory account. That's amazing! But does this approach have the power to help readers realize their unique meaning construction while reading literary works?
Relevant answer
  • asked a question related to Rendering
Question
3 answers
(I am writing an article on how to render intellectual capital economically viable)
Relevant answer
Answer
Here you have it. Good luck!
  • asked a question related to Rendering
Question
7 answers
I am about to launch some immunofluorescent staining experiments on cells that were cultured on transwell membranes, and I have seen different protocols for the staining:
1. membranes are released from the transwell apparatus before fixation and antibody incubation;
2. fixation and antibody incubation protocols are done in intact transwell systems, and membrane is released just before mounting.
Which protocol renders better results in your experience? The second one seems easier to perform, with a lower rate of cell loss during staining.
Thanks in advance.
Relevant answer
Answer
We normally use 3% PFA followed by 0.1% Triton X-100 as the first approach. Then, if we see no antibody binding, we try methanol or similar.
  • asked a question related to Rendering
Question
2 answers
A digitally reconstructed radiograph (DRR) is created from a computed tomography (CT) data set. This image would contain the same treatment plan information, but the patient image is reconstructed from the CT image data using a physics model.
Relevant answer
Answer
Part of the issue is framing the problem in a way that lends itself to parallelism, and that lends itself to attack by other types of hardware besides the CPU, like the integer and short float cores in a GPU.
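To illustrate why DRR generation invites parallel hardware, here is a toy sketch with a parallel-beam geometry: every output pixel is an independent sum along its own ray, exactly the per-thread structure a GPU exploits. Real DRR code uses a divergent-beam source, attenuation physics, and interpolated sampling; this shows only the structural idea.

```python
# Toy "DRR": project a volume[z][y][x] of attenuation values along z.
# Each (y, x) ray is independent of every other, so the inner work
# maps naturally onto one GPU thread per output pixel.

def drr_parallel_beam(volume):
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):  # embarrassingly parallel loop
            image[y][x] = sum(volume[z][y][x] for z in range(nz))
    return image

# 2x2x2 toy "CT" volume:
vol = [[[1.0, 2.0],
        [3.0, 4.0]],
       [[5.0, 6.0],
        [7.0, 8.0]]]
print(drr_parallel_beam(vol))  # [[6.0, 8.0], [10.0, 12.0]]
```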
  • asked a question related to Rendering
Question
1 answer
There is a lot of hue and cry that rice-wheat cultivation in North India over the last 60 years has already depleted rechargeable groundwater. Since canal water is not ample and is decreasing due to lower rainfall over the last couple of years, can we imagine that this practice will render the land barren?
Relevant answer
Answer
Still expecting some good replies.
  • asked a question related to Rendering
Question
2 answers
Is there any specific Correlation between these two?
Relevant answer
Answer
The intrinsic spectrally resolved sensitivity (ISRS) by itself is not a metric like CRI. It is an analysis method developed by Lin et al. to identify wavelength regions that lead to large changes in CRI when spectral energy in those regions is removed from the spectral distribution of the light source being evaluated. They found that removing energy near 444, 480, 564 and 622 nm seemed to result in the largest fluctuations in CRI. More info in the paper by Lin et al. at:
  • asked a question related to Rendering
Question
4 answers
I am pursuing my PhD and preparing a material for a white light source. I am getting the PL results required for white light, but I am not able to calculate the CRI (Color Rendering Index).
Relevant answer
Answer
There are several ways of calculating CRI Ra. You can perform the calculations by hand, using spreadsheets, or with computer programs (e.g. Matlab, Python).
If you have already measured the spectral power distribution of the light source and have it in tabular form, you can simply insert it into a spreadsheet which performs CRI calculations. For example, there is one available on the net: bramley.auld.me.uk/406/Calculating%20CRI-CAM02UCS-v2.xls You can plug in your light source spectrum and it gives you the CRI Ra, as well as the CRI special colour rendering indices for 14 samples, the light source CCT and CRI-CAM02UCS.
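For orientation, the final step such a spreadsheet performs is simple once the hard part is done: after the colour differences dE_i of the eight CIE test samples between the test and reference illuminants have been computed, each special index is R_i = 100 - 4.6 * dE_i, and Ra is their mean. A sketch with invented dE values (the spectral and chromatic-adaptation steps that produce the dE_i are omitted here):

```python
# Final step of the CIE CRI calculation: special indices and their mean.
# The delta-E values below are made up purely for illustration.

def cri_ra(delta_e_samples):
    r_i = [100.0 - 4.6 * de for de in delta_e_samples]
    return sum(r_i) / len(r_i)

delta_e = [2.0, 3.0, 1.5, 4.0, 2.5, 3.5, 1.0, 2.0]  # hypothetical dE_i
print(round(cri_ra(delta_e), 1))  # 88.8
```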
  • asked a question related to Rendering
Question
4 answers
I just use the standard SPSS function to run cluster analysis, and think/hope there are better ways to graph cluster analysis results.
Relevant answer
Answer
A different approach is to begin by running Multi-Dimensional Scaling on the same data, and determining whether the fit (stress) will accept a 2 dimensional solution. If so, you can map the clusters onto the MDS.
  • asked a question related to Rendering
Question
13 answers
With LEDs having high CRI values like 90+, is it possible for the human eye to detect the change?
What are the criteria for this change?
What are the variations of CRI vs. other lighting parameters?
Relevant answer
Answer
To answer the question, "With LEDs having high CRI values like 90+, is it possible for the human eye to detect the change?", my preferred reference is:
van Trigt, C. 1997. "Color Rendering, a Reassessment," Color Research & Application 24(3):197-206
wherein the author states, "... when the CIE method was recommended, a number of weaknesses were known to the principal author. Accordingly, only a difference of some five points in the index was considered meaningful. Unfortunately, this proviso was often not kept in mind in practice."
Remember that the CIE metric was developed two decades before the development of tricolor phosphor fluorescent lamps in the early 1980s, when even the best fluorescent lamps had CRI Ra values in the low 70s. With the introduction of white light LEDs in the early 2000s, it became painfully obvious that the CRI metric could not predict color rendering or color preference. We spent almost a decade with CIE Technical Committees TC-162 (which identified the problem) and TC-169 (which failed to agree on a better metric). The IES Color Committee introduced its TM-30 metric a few years ago, and CIE TC-229 recently introduced an almost identical metric. The IES Color Committee is preparing to release a slightly modified TM-30 color rendering metric that will make it compatible with the CIE metric for a truly international standard.
Regardless, the CIE General Colour Rendering Index Ra (to give it its proper name) will undoubtedly survive in the trade literature for decades to come. Given that any light source with a CRI of less than 80 is unacceptable and that increments of less than 5 points are meaningless, we have:
< 80 Unacceptable
80 - 85 Bad
86 - 90 Acceptable
91 - 95 Good
96 - 100 Excellent
unless, of course, the color rendering index is high but the color rendering is still visually terrible.
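Those bands can be encoded directly; note the labels are this answer's informal ratings, not an official CIE classification:

```python
# Informal CRI Ra rating bands as given above.

def cri_band(ra):
    if ra < 80:
        return "Unacceptable"
    if ra <= 85:
        return "Bad"
    if ra <= 90:
        return "Acceptable"
    if ra <= 95:
        return "Good"
    return "Excellent"

print(cri_band(82))  # Bad
print(cri_band(93))  # Good
```

The 5-point band width encodes the point made above: differences smaller than about five CRI points are not meaningful.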
  • asked a question related to Rendering
Question
1 answer
I want to use cell lysate for a geranylgeranylation assay, with the endogenous GGTase I enzyme as the enzyme source, plus added dansyl peptide and GGPP.
I've tried sonication alone in just 100 mM HEPES, pH 7.5, and 150 mM salt, with phosphatase and protease inhibitors added.
I've also tried lysis in RIPA buffer, with phosphatase and protease inhibitors added as well.
The total protein in my assay is around 2 mg/mL. Could it be that my enzyme level is too low to detect activity, or is my method of lysis rendering my protein inactive?
Relevant answer
Answer
Is your protein from a prokaryotic or eukaryotic expression system? 2 mg/mL is the total protein. Did you detect your target protein by SDS-PAGE in the total cell lysate and in the soluble fraction of the cell lysate? How does the thickness of the band corresponding to the size of your target protein compare with the bands of the other, contaminant proteins?
If you have a good expression level for your protein, you can use any lysis buffer that:
1. does not contain buffer additives inhibiting the activity of your enzyme;
2. has a pH range and ionic strength that do not affect the stability and activity of your target enzyme;
3. does not contain detergents that denature the enzyme or affect its activity.
  • asked a question related to Rendering
Question
3 answers
I need to render a few ITO glass slides hydrophobic and I was wondering what is the best way to proceed? I have done some research online and most sources suggest silanization, but I am worried it might affect the transparency of the slide, and most of them didn't give a precise protocol. Would anyone know where I can find a protocol to make ITO hydrophobic? Thank you in advance!
Relevant answer
Answer
Hi Wang,
Please read the article below. You may try a conjugated polyelectrolyte as well.
Polyelectrolytes: Multi-Charged Conjugated Polyelectrolytes as a Versatile Work Function Modifier for Organic Electronic Devices (Adv. Funct. Mater. 8/2014)
  • asked a question related to Rendering
Question
3 answers
I am rendering a panoramic image using an omni-directional stereo system (e.g., Facebook Surround 360) with optical flow.
When I stack the optical flow fields horizontally and visualize them (e.g., with Middlebury color-coding or normalized vertical disparity), it is clearly visible that the image is synthesized from 14 vertical stripes - 14 because that is the number of cameras arranged equatorially.
Should the visualization of the optical flow field look like a consistent image, or is it correct that it looks like, for example, the data I have uploaded?
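For reference, the Middlebury-style coding mentioned above maps each flow vector's direction to hue and its magnitude to color saturation, which is why per-camera discontinuities show up as abrupt color stripes. A rough stdlib-only sketch (an approximation for illustration, not the exact Middlebury color wheel):

```python
# Map an optical-flow vector (u, v) to an RGB color: direction -> hue,
# magnitude -> saturation (clamped at max_mag). Zero flow maps to white.
import colorsys
import math

def flow_to_rgb(u, v, max_mag):
    angle = math.atan2(v, u)
    hue = (angle / (2 * math.pi)) % 1.0
    sat = min(math.hypot(u, v) / max_mag, 1.0)
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

print(flow_to_rgb(0.0, 0.0, 10.0))   # (1.0, 1.0, 1.0): white for zero flow
print(flow_to_rgb(10.0, 0.0, 10.0))  # (1.0, 0.0, 0.0): saturated red
```

Under this mapping a smooth flow field produces smoothly varying colors, so hard vertical stripe boundaries suggest discontinuities in the stitched flow itself rather than an artifact of the visualization.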
Relevant answer
Answer
"Why is there purple and bluish-green patches non-overlapping !!!!" - can you specify exactly where you are referring to?