Visual - Science topic
Explore the latest questions and answers in Visual, and find Visual experts.
Questions related to Visual
Humor in Journalism and Reporting
Journalists often allude to children’s literature, because in our diverse culture, memories of classic children’s books are what we all have in common.
Nursery rhymes and folk tales are a rich resource because they present a full array of personalities:
• Chicken Little to represent alarmists
• Pinocchio to represent liars
• The Big Bad Wolf to warn us of danger
• The Frog Prince to give hope to discouraged women
• Humpty Dumpty to point out how easy it is to fall from grace
In the Old Days, when Humpty Dumpty fell, sympathetic bystanders rushed to put him back together, but in a recent cartoon he was shunned by creatures shouting “Salmonella!”
• Cultural Icons can be either
– recognized visual symbols
– or familiar words that can be parodied.
• Editorial cartoonists first have to help viewers into the mindset of the original, then take them in a new direction.
For efficiency, cartoonists make use of common visual symbols
• Pointing fingers or arrows
• The Trojan Horse
• Tombstones and the initials R.I.P.
• Skulls/The Grim Reaper
• The three monkeys (See no evil. Hear no evil. Speak no evil.)
• The Ghost of Christmas Past
• Superman
• Railroad tracks not matching up
What are the cultural icons used by American journalists and editorial cartoonists? What about the cultural icons of other countries?
Don and Alleen Nilsen “Humor Across the Academic Disciplines” PowerPoints: https://www.public.asu.edu/~dnilsen/
I'm interested in conducting TLC analysis for pheromones, but the quantity available is quite small. Does anyone have experience analyzing such small quantities of pheromones using TLC? Any advice or guidance would be greatly appreciated. What is the minimum quantity of a compound that can be detected using Thin Layer Chromatography (TLC)? Can compounds be visualized in quantities as minuscule as nanograms (ng) or picograms (pg) when employing TLC?
Recently I have been trying to understand the effect of silica gel pore size on a mechanochemical reaction. It involves intense milling of spherical silica gel of similar particle size distribution but different pore sizes (6 nm, 10 nm, 30 nm and 50 nm, as indicated by the manufacturer).
I was told that silica with larger pores would break more easily than silica with smaller pores when milled, which makes sense because larger pores mean more empty space inside the structure, making it more fragile.
Upon milling, the silica broke into different sizes; some are large, and some are extremely small and become a cluster of fine particles. However, it seems that not even TEM is able to visualize the pores this small. Is there any method that can actually visualize the structure of pores?
I am currently writing up a GT study for my PhD and have been writing memos throughout the study. They are in various forms (written, visual, recorded, sticky notes etc.). My question is that they are obviously part of the analytical process and I have included this in my methods discussion, but can they also be part of my findings section as an alternative to verbatim quotations? I'm not sure if I'm missing the point of memoing!
What specific changes in set design have digital technologies brought about?
How does visual presentation in cinema influence literary narrative and theatrical atmosphere?
Has the perception of classical literature changed in the context of modern technologies? Has it given new aesthetic meanings to works?
Does multimedia make the traditional stage more attractive? How do classical works (for example, Chekhov or Tolstoy) combine with digital technologies on the modern stage?
The provisional title of my PhD research proposal:
Artistic Exploration and Visual Communication of Megalithic Statues of Lore Lindu National Park
Background:
Lore Lindu National Park in Central Sulawesi is one of the important sites that holds a unique collection of megalithic statues that are rich in historical, cultural, and aesthetic values. However, the artistic potential and visual communication value of these statues have not been explored in depth in a contemporary design context. This research aims to integrate the visual and artistic elements of megalithic statues into a visual communication design that can connect this cultural heritage with modern society.
Research Objectives:
Assess the artistic, symbolic, and historical values of megalithic statues in Lore Lindu National Park.
Explore the potential of transforming the visual elements of megalithic statues into visual communication design.
Develop a design model or visual work based on megalithic statues to promote and preserve the cultural heritage of Lore Lindu.
Research Methods:
Artistic and Explorative Approach: Visual ethnography is used to document and analyze the artistic elements of the statues.
Design Experimentation: Adapting the visual elements of the statues into modern design media such as illustration, branding, and digital media.
Local Collaboration: Involving local communities in understanding the cultural meaning behind the statues as a basis for artistic exploration.
Expected Outcome:
A portfolio of visual design works that interpret megalithic statues as part of cultural heritage.
A scientific publication on the relevance of visual communication design in the preservation of megalithic culture.
Recommendations for design-based cultural promotion strategies in the Lore Lindu region.
Dear researchers,
Recently, reviewers have frequently been asking authors to provide graphical abstracts. Researchers are not professional designers, and creating a graphical abstract demands design skill and a significant time investment. Requiring one seems excessive to me; if researchers choose to provide one, that's fine, but it should be up to them.
However, is it truly essential to represent every study visually? I believe reviewers shouldn't be overly strict about this requirement. I'm curious to hear other researchers' perspectives, reasoning, and justifications.
Regards
Has the World Health Organization defined foundations or criteria for determining visual pollution, or are there other international standards?
Dear experts to whom it may concern,
We want to use a stereotaxic injection technique to deliver AAV for up-regulating a protein in the whole cerebellum of P0 mice. First, we have to try different injection locations and volumes so that the whole cerebellum is infected. Normally, we would wait 4 weeks to see virus expression. Does anyone know a way to visualize the AAV spread area within 12 or 24 hours, to save time and reduce unnecessary harm to mice?
Thank you!
A visual artist is someone who practices arts such as drawing, painting, textile design, ceramics, graphic design, sculpture, fashion design, and the like.
Hello
I created two spectral density plots in EEGLAB from the same participant's data. One corresponds to a resting-state control condition, and the other to a meditation condition. Both recordings were 5 minutes long with eyes closed, and they underwent identical pre-processing steps, which included:
1. Manual rejection of bad channels and major artifacts (e.g., movements).
2. Filtering (0.5–75 Hz).
3. Re-referencing to the averaged signal.
4. Independent Component Analysis (ICA) decomposition.
5. Rejection of artifact components using EEGLAB’s default flagging function, followed by visual inspection.
The control spectrum was processed on Computer A, which has the Signal Processing and Statistics and Machine Learning toolboxes installed, while the meditation spectrum was processed on Computer B, which does not have these toolboxes.
As expected, the two spectra are similar in shape, but their power density scales differ drastically. Specifically:
- The control condition processed on Computer A shows a shift down in power density, with some values even being negative (which is physically implausible).
- This discrepancy is not due to the different conditions, as it is not observed in other participants.
The issue seems to arise from differences between the two computers or the toolboxes available. Has anyone encountered a similar issue or can provide insight into the potential cause of this problem?
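A quick way to sanity-check the scale difference outside MATLAB is to recompute one channel's spectrum with an independent Welch implementation. The sketch below uses SciPy on synthetic data standing in for a single EEG channel (the sampling rate, amplitudes, and µV units are illustrative assumptions). Note that on a dB scale, values below zero simply mean absolute power under 1 µV²/Hz, so a uniform downward shift between computers points to a scaling or estimator difference rather than to the data itself.

```python
# Cross-check a PSD outside EEGLAB: Welch estimate converted to dB.
# Synthetic data stands in for one EEG channel; fs and amplitudes are assumptions.
import numpy as np
from scipy.signal import welch

fs = 250.0                      # sampling rate, Hz (assumption)
rng = np.random.default_rng(0)
t = np.arange(0, 300, 1 / fs)   # 5 minutes of data
# 10 Hz "alpha" sinusoid plus broadband noise
x = 20 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, t.size)

f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))  # 4 s windows
pxx_db = 10 * np.log10(pxx)     # dB: negative wherever power < 1 µV^2/Hz

print(f"peak frequency: {f[np.argmax(pxx)]:.2f} Hz")
```

Comparing such an independent estimate against both EEGLAB outputs should reveal which computer's spectrum carries the offset.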


Dear All,
I hope that you are fine.
Please, who could help us? We need an open-source tool for Galois lattice visualization: for example, one that lets us navigate by query to find and visualize a given formal concept in the lattice, along with the hierarchical levels related to that concept.
Best regards.
Hello,
I have several .xyz files (each composed of several molecules) and I'd like to find an automated way to visualize and save them.
I'd appreciate any guidance.
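A minimal batch-rendering sketch, assuming plain-format .xyz files and matplotlib; the simple 2D projection and output naming are illustrative (for publication-quality renders, tools such as ASE, VMD, or Jmol scripting are common choices):

```python
# Sketch: batch-render every .xyz file in the working directory to a PNG.
# Assumes standard .xyz layout: atom count, comment line, then "sym x y z" rows.
import glob
import matplotlib
matplotlib.use("Agg")  # no display needed for batch runs
import matplotlib.pyplot as plt

def read_xyz(path):
    """Parse a simple .xyz file into (symbol, x, y, z) tuples."""
    with open(path) as fh:
        lines = fh.read().splitlines()
    n = int(lines[0])
    atoms = []
    for line in lines[2:2 + n]:
        sym, x, y, z = line.split()[:4]
        atoms.append((sym, float(x), float(y), float(z)))
    return atoms

def render(path, out_png):
    """Draw a crude 2D (x, y) projection of the molecule and save it."""
    atoms = read_xyz(path)
    fig, ax = plt.subplots()
    ax.scatter([a[1] for a in atoms], [a[2] for a in atoms], s=200)
    for sym, x, y, _ in atoms:
        ax.annotate(sym, (x, y), ha="center", va="center")
    ax.set_aspect("equal")
    fig.savefig(out_png, dpi=150)
    plt.close(fig)

for path in glob.glob("*.xyz"):
    render(path, path.replace(".xyz", ".png"))
```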
Hi,
I am a beginner in bioinformatics. I have more than 500 genomes to analyze and I already have the results from Roary. I want to visualize these results by showing the tree next to a matrix of the presence and absence of core and accessory genes. Roary provides roary_plots.py, which can achieve this. However, with hundreds of genomes the plot becomes a mess and not very clear.
If there are any tips to make the tree look more clear in the cases of having hundreds of genomes, I'd be grateful to hear them.
Best,
Lingyu
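One way to keep hundreds of genomes readable is to drop the invariant core genes and plot only a subset of the accessory matrix. A sketch, assuming Roary's standard gene_presence_absence.Rtab tab-separated 0/1 output; the filtering choices are illustrative:

```python
# Sketch: plot a filtered subset of a Roary presence/absence matrix as a heatmap.
# Assumes Roary's gene_presence_absence.Rtab format (rows = genes, cols = genomes).
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pandas as pd

def plot_presence_absence(rtab_path, out_png, max_genes=200):
    df = pd.read_csv(rtab_path, sep="\t", index_col=0)
    # Drop core genes present in every genome -- they carry no visual contrast
    variable = df[df.sum(axis=1) < df.shape[1]]
    # Cap the number of genes so hundreds of genomes stay legible
    variable = variable.iloc[:max_genes]
    fig, ax = plt.subplots(figsize=(10, 6))
    ax.imshow(variable.values, aspect="auto", interpolation="nearest", cmap="Greys")
    ax.set_xlabel("genomes")
    ax.set_ylabel("accessory genes")
    fig.savefig(out_png, dpi=150)
    plt.close(fig)
```

Ordering the genome columns to match the leaf order of your tree (e.g., as roary_plots.py does) keeps the two panels aligned.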

Dear colleagues,
I’ve created a video on YouTube simulating diffraction phenomena and illustrating how it differs from wave interference.
I hope this visual approach offers a clear perspective on the distinctions between these effects.
Of course, we need a large, complete overall map.
I am interested in the study of visual subcompetence in education, specifically how visual tools and technologies can be integrated into the educational process to enhance the development of professional competencies in future teachers, particularly in mathematics education.
I am looking for research and definitions that highlight and specify the concept of visual subcompetence in education. Specifically, I am interested in how visual subcompetence is distinguished as part of the broader professional competence, particularly in the context of mathematics teacher education.
How do we typically choose between Convolutional Networks and Visual Language Models when it comes to Supervised Learning tasks for Images and Videos ?
What are the design considerations we need to make?
I would like to know how grain growth occurs: a technical, detailed explanation with a visual.
Hello. I am currently working on lactide modification and would like to take this article as a reference: A Bifunctional Monomer Derived from Lactide for Toughening Polylactide | Journal of the American Chemical Society (acs.org)
The author uses TLC to monitor the reaction (lactide + NBS, substitution of H by Br) but skips the details. I wonder how to visualize the lactide/product after TLC, since the compound is not UV-active and has no functional group that reacts well with a stain.
Also, I am not sure which solvent system I should use.
Appreciate the help!

In some industries, the authority provides lunch for staff and workers. The kitchen stores of these industries regularly receive supplies of frozen chicken meat from meat-processing companies and have to make QC reports by visual checking. How can they detect bacterial contamination visually while receiving frozen chicken?
I want to visualize bacterial colonies grown inside a closed system made of a transparent polymer. For this, I am planning to use an inverted fluorescence microscope. Please suggest which magnification would be best: 40X or 100X? Also, kindly share which fluorescent dyes can be used to stain bacterial membranes.
So I need to represent a lot of overlapping regions (TFBS) in many promoter sequences (e.g., at some positions 5 at most might overlap). I have each region's start and end points in an Excel file, along with the total length of the sequence.
Doing it by hand is a lot of work. I tried CIIIDER, but I had a problem with the scale because my promoters are small (<300 bp) and CIIIDER does not allow me to increase the size of the representation as I need.
Does anyone recommend a software that can do that?
Thank you in advance.
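If no dedicated tool fits, a small script gives full control over the scale. A sketch using matplotlib's broken_barh, stacking overlapping intervals into lanes; the column names ("start", "end", "tf") and the greedy stacking rule are assumptions, and the Excel sheet can be loaded with pandas.read_excel:

```python
# Sketch: draw overlapping TFBS intervals along one promoter, stacking
# overlaps into separate lanes. Column names are illustrative assumptions.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pandas as pd

def plot_tfbs(df, seq_len, out_png):
    # Greedy lane assignment: place each interval in the first lane it fits in
    lanes = []   # rightmost occupied position per lane
    placed = []  # (lane index, row) pairs
    for _, r in df.sort_values("start").iterrows():
        for i, right in enumerate(lanes):
            if r["start"] >= right:
                lanes[i] = r["end"]
                placed.append((i, r))
                break
        else:
            lanes.append(r["end"])
            placed.append((len(lanes) - 1, r))
    fig, ax = plt.subplots(figsize=(8, 1 + 0.4 * len(lanes)))
    for lane, r in placed:
        ax.broken_barh([(r["start"], r["end"] - r["start"])], (lane, 0.8))
        ax.text(r["start"], lane + 0.4, r["tf"], fontsize=7, va="center")
    ax.set_xlim(0, seq_len)
    ax.set_xlabel("position (bp)")
    ax.set_yticks([])
    fig.savefig(out_png, dpi=150)
    plt.close(fig)
```

Because you set figsize and xlim yourself, short (<300 bp) promoters can be drawn as wide as needed.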
Dear friends, I need to transform my qPCR data into a heatmap to present it more visually. I have seen this form in several articles, but I'm not sure about some details:
1. Should the P value be presented in the heatmap? If so, how?
2. Are there any requirements on sample size for making a heatmap?
Thank you for any advice!
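On point 1, a common convention is to encode the fold changes as the heatmap colors and overlay asterisks where p < 0.05. A minimal matplotlib sketch with made-up gene names and values:

```python
# Sketch: log2 fold-change qPCR heatmap with significance stars (p < 0.05).
# Gene/group names and all values below are illustrative placeholders.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

genes = ["GeneA", "GeneB", "GeneC"]
groups = ["Ctrl", "Treat1", "Treat2"]
log2fc = np.array([[0.0, 1.8, 2.5],
                   [0.0, -0.4, -1.2],
                   [0.0, 0.1, 0.9]])
pvals = np.array([[1.0, 0.01, 0.003],
                  [1.0, 0.40, 0.04],
                  [1.0, 0.70, 0.06]])

fig, ax = plt.subplots()
im = ax.imshow(log2fc, cmap="RdBu_r", vmin=-3, vmax=3)
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups)
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes)
for i in range(len(genes)):
    for j in range(len(groups)):
        if pvals[i, j] < 0.05:      # one convention: star = significant cell
            ax.text(j, i, "*", ha="center", va="center")
fig.colorbar(im, label="log2 fold change")
fig.savefig("qpcr_heatmap.png", dpi=150)
plt.close(fig)
```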
How is it possible to visualize the dispersion of nanoparticles (3-7 nm) in a liquid crystal? Is it possible by scanning electron microscopy?
And what are the potential applications of these technologies in areas such as smart agriculture or autonomous exploration?
Pathfinding refers to the process by which a robot navigates from one point to another. This process involves the use of sensors, software, and sometimes visual cues.
Hi,
I'm currently working on a project where I need to plot the atom-projected band structure using GPAW.
I've been able to calculate the band structure for my material, but I'm having trouble figuring out how to separate and visualize the contributions from different atomic species.
I need something like https://vaspkit.com/_images/hse-band3.png

Hi all,
My lab has Thermo Scientific™ Invitrogen™ EVOS™ FL Auto 2 Imaging System, and I was wondering if I will be able to use it with whole blood, whilst focusing on platelets?
The idea would be to activate platelets with agonists and potentially be able to see the formation of thrombi.
Does anyone know if this is possible? Any SOPs, tips and tricks?
I used a LI-COR flux system to measure respiration in different soil types. The question is, my data are mostly positive values on a small scale (0 < x < 1). However, there are a couple of days where I got negative values... VERY negative (-43, -32...). They are not outliers, since I ran triplicates per day.
I don't want to delete this data, I think maybe something was happening in the microbial communities those days. But the difference in scale doesn't allow me to visualize the data in scatter plots.
I was thinking about some type of standardization? But I don't want to alter the dC/dt values.
Thank you!
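One way to keep the very negative days on the same scatter plot without transforming the dC/dt values is a symmetric-log axis, which is linear near zero and logarithmic beyond a threshold. A sketch with made-up data:

```python
# Sketch: plot flux values spanning (0, 1) and large negatives on one axis
# without altering the data, via a symmetric-log y scale. Data are made up.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

days = list(range(1, 11))
flux = [0.2, 0.5, 0.8, 0.3, -43.0, 0.6, 0.1, -32.0, 0.4, 0.7]

fig, ax = plt.subplots()
ax.scatter(days, flux)
ax.set_yscale("symlog", linthresh=1.0)  # linear within |1|, log beyond
ax.axhline(0, linewidth=0.5)
ax.set_xlabel("day")
ax.set_ylabel("dC/dt")
fig.savefig("flux_symlog.png", dpi=150)
plt.close(fig)
```

The linthresh value controls where the linear region ends; choosing it near the magnitude of your typical positive values keeps the small-scale detail readable.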
In basin delineation and hydrological modeling, selecting the exact value in Flow Accumulation to identify streams is a critical step. The Flow Accumulation tool calculates the accumulated flow as it traverses down a landscape, which can be visualized as the number of cells contributing to flow into each cell. This accumulated flow can then be used to define stream networks by setting a threshold value.
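In code, the thresholding step itself is a single comparison over the accumulation grid; a sketch with a toy grid (the threshold value is an assumption to tune against your DEM resolution and basin):

```python
# Sketch: derive a stream raster from a flow-accumulation grid by thresholding.
# The grid and threshold below are toy values for illustration.
import numpy as np

flow_acc = np.array([[1, 2, 1],
                     [3, 120, 4],
                     [1, 250, 2]])   # contributing cells per cell
threshold = 100                      # cells; tune per DEM resolution and basin
streams = (flow_acc >= threshold).astype(np.uint8)
print(streams)
```

This is the same operation the Raster Calculator / Con tool performs in GIS packages; the science lies in picking the threshold, not in the comparison.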
Dear community,
I want to perform an analysis of resilin localization within the legs of myriapods and would like to visualize the localization of resilin components in different taxa.
Do you have any recommendations for a commercial antibody (it doesn't matter whether it's mono- or polyclonal)?
Thank you very much.
Best,
Benjamin
CONTEXT: Achieving the 2030 UN agenda for SDGs requires integrated, citizen-centric approaches and holistic interventions for delivering transformative results on the social, economic, and environmental dimensions. Current initiatives in many emerging markets are slow and face adoption and scalability challenges at a local and systemic level due to lack of in-depth understanding and prioritization of complex issues, many of which relate to each other, like the SDGs. A good starting point is to take a human-centric approach starting with developing deeper empathy with citizens to visualize and design a future for the citizens of the country. But who better to share authentic insights and see a better future than those who will live it – CHILDREN.
Within many cultural contexts it is recognized that drawing techniques can provide a relatively easy way to gather personal and socio-cultural information, both from and about children, as well as offer valuable insights into children's experiences, ideas, feelings, and environmental perceptions. Childhood and children are now seen as worthy of investigation in their own right. Many recent studies have emphasized the importance of listening to children's perspectives on issues that are important and relevant to them. An advantage of using drawing is that it yields self-reported data.
These drawings can be used to explore the world they live in, and therefore understand the social, economic and environmental issues at the local level. Art activities provide a psychologically safe and creative way for children to express their strongest desires in a visual form without relying on words or the need to know a language for expression.
ASK: I am looking to conduct a literature review on visualization, image interpretation, and content analysis techniques for issue identification in the drawings and artworks of children. In addition, I am seeking projects worthy of mention based on their quality of work and potential to scale in the aforementioned areas. A good example is Room 13, which started in Scotland, and Project Dream On India, which captured 10,000+ artworks by children from across India, including Jammu and Kashmir.
Would appreciate your references, thoughts, ideas et al.
Thanking you in anticipation.
I made a material which looks like a liquid, but when I do the rheology test, G' > G''.
In my opinion, this means it is a solid or a gel.
The test parameters are like this
and the result is figure 2
Would anyone be able to advise why this may be? I'm not really sure how to further optimize my parameters as I've tried several different ones already.
Thanks!


Hi
I am new to 3D culture, live-cell imaging, and confocal microscopy. Is it possible to visualize epithelial and endothelial cells by live-cell imaging over a period of time using confocal? In addition, I would like to visualize the bacterial cells and the biofilm produced on the 3D culture over a course of time.
I do have information on the antibodies to be used for labelling to visualize epithelial and endothelial cells, and biofilm. However, the protocol requires the cells to be fixed. I could also find information on nucleic acid stains which can be used on dead cells to visualize bacteria and eukaryotic cells.
Any thoughts on this?
Thanks in advance
Warm regards
Bindu
I seeded Caco-2 cells on a 0.4 µm PET transwell insert, and I want to see my cell layer in transverse view in order to monitor polarization and the formation of the brush border. To do this, I fixed with 4% PFA in PBS for 30 minutes, then placed the sample in 70% alcohol until processing.
I continued the dehydration with 95% alcohol for 1 h (twice), 100% alcohol for 1 h (three times), and xylene for 1 h (twice), then paraffin overnight with one change of paraffin for one hour the next day, and embedded in paraffin.
The problem appeared during cutting: the insert started curling up, and it seems the sample detached from the surrounding paraffin.
How can I solve this problem? What would you suggest I change in the protocol?
Understanding a student's learning style is crucial for effective education. The VARK model, one of the most common frameworks, categorizes learning styles into four types: visual, auditory, reading/writing, and kinesthetic. Each style has distinct characteristics; for example, visual learners benefit from diagrams and charts, while auditory learners excel with spoken information. Recognizing these preferences allows educators to tailor their teaching methods, creating a more inclusive and productive learning environment for students.
My concern is to collect information for my academic research in photojournalism and visual journalism.
What are the Theories, Models and Methods to analyze, interpret, decode, evaluate and explain visual language in academic research concerning Photojournalism and Visual Journalism?
is there any helping material available?
Thanks in Advance
Rizwan Ali
I have atomic coordinates, and the structure is a cubic cell with lattice parameter 10.001 Å. How can I prepare the cell vectors to visualize the structure in VESTA? Thanks.
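One convenient route is to write the coordinates as a VASP POSCAR file, which VESTA opens directly; for a cubic cell the lattice vectors are simply the diagonal matrix a·I, i.e. (a,0,0), (0,a,0), (0,0,a). A sketch (the single Si atom is a placeholder for your actual species and coordinates):

```python
# Sketch: write a minimal VASP POSCAR (readable by VESTA) for a cubic cell
# with a = 10.001 Angstrom. The Si atom at the origin is a placeholder.
a = 10.001
lines = [
    "cubic cell",            # comment line
    "1.0",                   # universal scaling factor
    f"{a:.3f} 0.000 0.000",  # lattice vector a1
    f"0.000 {a:.3f} 0.000",  # lattice vector a2
    f"0.000 0.000 {a:.3f}",  # lattice vector a3
    "Si",                    # element symbols
    "1",                     # atoms per element
    "Direct",                # fractional coordinates follow
    "0.0 0.0 0.0",
]
with open("POSCAR", "w") as fh:
    fh.write("\n".join(lines) + "\n")
```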
I have data consisting of two trends over time.
For example, the GDP and net immigration figures of the UK, from 1950-2020 (not the real data).
I can plot the 2 trend lines on the same graph and visually examine if they look like they correlate over time. However, is there any way to examine if they statistically correlate over time?
Note, this needs to be a simple analysis that one can do in MS Excel or other free software (e.g., JASP), as it is for a student.
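Because two series that both trend upward will correlate strongly even when unrelated, one common simple check is to correlate their year-over-year changes (first differences) instead of their levels. A sketch with made-up numbers; in Excel the same test is CORREL over two columns of differences, and JASP reports a p-value alongside Pearson's r:

```python
# Sketch: compare correlation of levels vs. first differences for two
# trending series. All numbers below are made up for illustration.
import numpy as np

gdp = np.array([100, 104, 109, 112, 118, 121, 127, 130, 136, 140], float)
mig = np.array([10, 11, 13, 12, 15, 14, 17, 16, 19, 20], float)

r_levels = np.corrcoef(gdp, mig)[0, 1]                   # inflated by shared trend
r_diff = np.corrcoef(np.diff(gdp), np.diff(mig))[0, 1]   # year-to-year co-movement

print(f"levels r = {r_levels:.2f}, differences r = {r_diff:.2f}")
```

If the differenced correlation is also high, the series genuinely co-move; if only the levels correlate, the apparent relationship is likely just the shared trend.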
Would it be possible to visualize vortices with lengths on the Kolmogorov scale using visualization techniques such as Schlieren or Shadowgraphy?
I would like to create visually appealing diagrams for my publications. For instance, to create graphical abstracts for articles, I want to know which computer tool can assist me in this regard.
Thank you in advance for your responses.
I found I had used the wrong secondary antibody for my WB when I visualized it with ECL. Can I wash with TBST several times and then incubate with the right secondary antibody? Another question: why are my bands not flat? What is wrong with my electrophoresis? Thanks!
I need to visualize specific regions of the mouse brain (e.g., substantia nigra). However, cryosectioning the entire brain is resource-intensive and not feasible. Is there a relatively quick and easy way to identify the anatomical location of previously cut brain sections without the use of staining or immunofluorescence?
PARADOX
This PowerPoint begins with the Ambiguity Paradox: "Everything is ambiguous; however, nothing is ambiguous." Perhaps all words and sentences are ambiguous if they are not seen or heard in a larger context. However, the larger context (both linguistic and non-linguistic) resolves almost all of the ambiguities, except when the speaker is intentionally trying to be ambiguous, as with linguists and politicians.
Then we go on to discuss science fiction and the "grandfather paradox," the building which is larger on the inside than the outside, some "Catch-22 paradoxes," some visual paradoxes, and many other paradoxes from the 16th through the 21st centuries as expressed by Montaigne, Beaumarchais, Josh Billings (Henry Wheeler Shaw), Oscar Wilde, Marshall McLuhan, Joseph Heller, Gilbert and Sullivan, and others.
Gilbert and Sullivan often relied on paradox for comic effect. In The Pirates of Penzance, they composed a song about paradoxes:
How quaint the ways of paradox!
At common sense she gaily mocks!
A paradox, a paradox,
A most ingenious paradox!
Ha! Ha! Ha! Ha! Ha! Ha! Ha!
Ho! Ho! Ho! Ho! Ho! Ho!
Poor Frederic was to be apprenticed to a ship's pilot until he was 21. But, by mistake, he became an apprentice on a pirate ship until he was 21.
But he was born on February 29th in a leap year, so he was only five birthdays old. For some ridiculous reason, to which, however, I’ve no desire to be disloyal,
Some person in authority, I don’t know who, very likely the Astronomer Royal, Has decided that, although for such a beastly month as February, Twenty-eight days as a rule are plenty, One year in every four his days shall be reckoned as nine and twenty.
Through some singular coincidence—I shouldn’t be surprised if it were owing to the agency of an ill-natured fairy—You are the victim of this clumsy arrangement, having been born in leap-year, On the twenty-ninth of February.’ And so, by a simple arithmetical process, you’ll easily discover, That though you’ve lived twenty-one years, yet, if we go by birthdays, You’re only five and a little bit over!
Ha! Ha! Ha! Ha! Ha! Ha! Ha!
Ho! Ho! Ho! Ho! Ho! Ho!
I am not able to view the model downloaded from ClusPro using PDBsum. Since both docked proteins contain a single A chain, PDBsum shows it as a single image. Please give suggestions. I also tried saving the chains separately using PyMOL, but it still didn't work.
I'm a secondary school teacher who is starting a new research in the field of Visual Thinking Strategies (VTS) in Ibi, Spain.
We are a group of 4 teachers that have been working VTS in our school with students between 12 and 14 years old during the last 5 years.
This year we have started new research on a two-stage VTS activity. The first stage is carried out by one group of students and consists of describing an image. The second stage is carried out by a different group of students and consists of drawing a picture after having listened to the description given by the first group.
The results of this research could be applied to teaching people with visual disabilities (blind people).
Have any of you been working in this field?
ALTERNATE VIEW: What is big-data-driven imagery besides something threatening? How can we harness this new tool for unselfish projects to help all of us and the planet?
I have conducted a detached leaflet inoculation assay to measure/compare disease severity. I used image analysis software to get the % diseased area. While using these actual numbers is the usual practice, is it acceptable to convert those percentages into a scale, such as 0-15, or 1-12 like the H-B scale used for visual estimation?
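Mechanically, the conversion is just binning the measured percentages into the scale's class limits. A sketch with np.digitize; the bin edges below follow the shape of the Horsfall-Barratt intervals but are illustrative, so substitute the published limits of whichever scale you report:

```python
# Sketch: bin measured % diseased leaflet area into an ordinal severity score.
# Edges are illustrative; a published scale defines its own (unequal) intervals.
import numpy as np

edges = np.array([0, 3, 6, 12, 25, 50, 75, 88, 94, 97, 100])  # upper bounds, %
severity_pct = np.array([0.0, 2.1, 10.5, 33.0, 96.0, 100.0])  # measured values
scores = np.digitize(severity_pct, edges, right=True)          # ordinal classes
print(scores)
```

Note that binning discards information, so analyses on the ordinal scores should use rank-based or ordinal methods rather than those assuming continuous data.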
Hello ResearchGate Members,
I hope this message finds you well. I am currently exploring different tools for visualizing frequency collocations extracted from the AntConc program using network graphs. While I have tried VOSviewer and KH Coder, I've encountered challenges as they don't seem to generate graphs based on simple frequency.
I would greatly appreciate any recommendations or insights from the community on alternative tools or methods that effectively visualize frequency-based collocations through network graphs. Your expertise and suggestions will be invaluable in enhancing my research visualization.
Thank you in advance for your assistance, and I look forward to learning from your experiences.
Hi.
I want to perform a docking on a double-helix DNA, about 20 bp long.
I am using the free version of Discovery Studio Visualizer.
I want to energy-minimize my DNA before the docking, but DS has many minimization options and I'm not sure which is the most appropriate to choose.
Can anyone help?
Can artificial intelligence read minds? If so, how does that impact automated language translation? My answer: Artificial intelligence either can already read minds or probably will be able to. Assuming the information in thoughts is translatable regardless of whatever language the person thinks in, translation between humans may come faster. Humans are very visual creatures, and language is fluid (always representing imagery in thoughts, including extreme abstractions); thus, artificial intelligence translating one language to another could end the need for human language altogether, or at least greatly facilitate translation.
I'm trying to set a new subject
I am docking small peptides up to 6 AA long. After many months of trying to find a successful peptide-protein docking tool, I finally found SwissDock, but now the challenge is how to analyse the results. I cannot open the results in ChimeraX or PyMOL, which are the tools most of us are familiar with. In UCSF Chimera I can open the results, but it is difficult to create images for publication. Can I please get a step-by-step guide on how to view SwissDock results in Discovery Studio Visualizer and analyse the interactions?
I will be grateful for, and highly appreciate, any help rendered.
Is it necessary to have a special microscope filter to visualize transiently transfected mKeima cells? The microscope I have been using has the typical GFP and mCherry filters, but not a special filter to see mKeima fluorescence.
I have some single-channel EEG data in .csv format. How do I import this .csv file into EEGLAB and analyze it? Alternatively, is there a way to convert these data to .edf or .bdf (BioSemi Data Format)?
With increasing office hours, the human eye develops visual fatigue, and the fatigue gradually deepens as time goes on. What is this change process? When working on different media, the time course of eye fatigue differs. What is the difference?
In the realm of data visualization in Python, which library stands out as the most versatile and effective tool, accommodating diverse data types and producing impactful visual representations?
Hi everyone.
We are doing IF to check the translocation of a protein between the cytoplasm and nucleus in SH-SY5Y cells, and I need a marker better than NeuN to visualize the whole cell and make it easy to distinguish between cytoplasm and nuclei.
Does anyone have any suggestions for a marker other than NeuN for SH-SY5Y cells?
Is it possible for failed amplification products to be visualized by electrophoresis? Despite the failed amplification results, my electrophoresis shows quite clear bands. My amplification curve clearly shows amplification failure, but when I check it by electrophoresis there are obvious bands. How is that possible?


Automotive manufacturers are increasingly integrating augmented reality (AR) and virtual reality (VR) technologies to revolutionize various aspects of vehicle design, prototyping, manufacturing processes, and customer experiences. These immersive technologies offer significant benefits in the era of advanced automotive engineering and smart mobility solutions. Here's how AR and VR are being utilized in the automotive industry:
- Vehicle Design and Styling: Designers and engineers use VR to visualize and interact with 3D models of vehicles, enabling them to explore different design iterations and assess aesthetics, ergonomics, and functionality. VR design reviews facilitate efficient collaboration and decision-making among cross-functional teams.
- Virtual Prototyping and Simulation: VR allows automotive manufacturers to create virtual prototypes of vehicles and conduct realistic simulations of various scenarios, such as crash testing, aerodynamics, and thermal analysis. This streamlines the development process, reduces physical prototyping costs, and enhances safety assessments.
- Manufacturing and Assembly Processes: AR is applied on the factory floor to guide assembly line workers with real-time instructions and visual overlays. AR-assisted assembly and maintenance improves productivity, reduces errors, and enhances worker training and skill development.
- Quality Control and Inspection: AR and VR enable technicians to perform detailed quality control inspections using digital overlays, highlighting potential defects or deviations during manufacturing processes. This enhances product quality and reduces defects.
- Customer Experience and Marketing: Automotive manufacturers leverage AR and VR in showrooms and marketing campaigns to offer immersive and interactive experiences to customers. VR-based test drives and AR-enabled product presentations allow customers to explore vehicle features and configurations.
- Virtual Showrooms and Configurators: AR and VR technologies power virtual showrooms and vehicle configurators, enabling customers to customize vehicles, explore different options, and visualize the final product before purchase.
- Service and Maintenance: AR is utilized to provide service technicians with real-time diagnostic information, step-by-step repair instructions, and overlay information on the vehicle, simplifying maintenance tasks and reducing service time.
- Training and Skill Development: AR and VR-based training programs are used to educate service technicians, assembly line workers, and sales personnel. These interactive training modules enhance skills and knowledge retention.
- Design Validation and Customer Feedback: VR allows automotive manufacturers to conduct virtual focus groups and user studies to gather customer feedback on vehicle designs, features, and usability.
- Autonomous Vehicle Development: AR and VR technologies are utilized to simulate real-world driving scenarios for testing and validation of autonomous vehicle systems, reducing the reliance on costly physical road testing.
The integration of AR and VR technologies in the automotive industry is transforming the entire product lifecycle, from design and manufacturing to sales and customer support. These immersive technologies not only improve efficiency and reduce costs but also enhance the overall customer experience and drive innovation in advanced automotive engineering and smart mobility solutions.
I am doing research on ERP signals using visual stimuli. I am using the BioRadio device SDK to acquire the ERP signals, and I can obtain the timestamps and values of the electrodes being used.
While going through the ERPLAB tutorial, I see that, given an EEG.set file, it generates an EVENTLIST using a set of 'event codes'. Could someone explain how to obtain such event codes? Are they responses to a go/no-go task during a visual stimulus? Do I need to set flags as event codes for such responses to tasks?
I am working with an optic nerve crush model and a neuroprotective/regenerative treatment. Among the planned assessments, visual acuity tests are intended to be conducted. However, optic nerve crush is only performed on one eye, while the other eye remains intact (for bioethical reasons and to prevent the rat from becoming nearly blind). In the visual acuity test, we obviously want to evaluate only the damaged eye and determine if the treatment improves visual capacity. However, we are unsure how to close the intact eye in a way that prevents the animals from having visual input from that healthy eye, without distracting them or causing them to touch their eye, so that their attention remains focused on the visual task. I would appreciate any comments or experiences from anyone who has conducted visual tests in this single-eye model. Thank you.
1. For visual observation of LAMP results under UV light, can we use a gel doc, or is there another tool that can be used to view the LAMP results?
2. How do I dilute the LAMP primer from a 100 mM concentration to 5 mM?
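For the dilution, the usual C1·V1 = C2·V2 bookkeeping applies. A small Python sketch (the 100 µL final volume is an assumed example value, not from the question):

```python
# Hedged sketch: simple C1*V1 = C2*V2 dilution calculation.
# The concentrations mirror the question (100 mM stock -> 5 mM working);
# the 100 uL final volume is an assumed example.

def stock_volume(c_stock, c_final, v_final):
    """Return (volume of stock, volume of diluent) needed for the dilution."""
    v_stock = c_final * v_final / c_stock
    return v_stock, v_final - v_stock

v_stock, v_diluent = stock_volume(c_stock=100, c_final=5, v_final=100)
print(f"Mix {v_stock} uL stock with {v_diluent} uL water")  # 5.0 uL stock + 95.0 uL water
```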
I have two questions, and before asking them I would like to outline the ways and cues that the human brain uses to build up the perception of time.
1. We use images, and our brain in some part actually holds these images of the past. We differentiate between the past and the present, at least through visual stimuli, by referring to these images and to the difference between past and present.
2. This includes all sorts of perceivable change.
Now I would like you to visualize a space where every observable attains a constant value, and the images in your memory, which aid in conceiving the notion of time, are of this one state.
Note: even the state of the observer's body is constant. There is no change in any possible stimulus.
Question: Will time still exist?
I was doing some research, and I read somewhere on the internet that it is not recommended to use a homogeneous range of values for the color progression between classes (e.g., 20%, 40%, 60%, 80%, 100%) when using "value" as a visual variable in cartography.
So I assumed there might be some "rules" for defining a good color progression!
For example: "Black 0, Black 10, Black 30, Black 60 and Black 100", which corresponds to a variation of 10% between the first and second classes, 20% between the second and third, 30% between the third and fourth, and 40% between the fourth and fifth.
I would like to kindly receive your opinion/feedback about this topic.
Thank you in advance.
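One commonly cited rule of thumb is to let the increments between classes grow rather than stay constant. A small Python sketch of the arithmetic (it simply reproduces the Black 0/10/30/60/100 progression; this is an illustration, not an established cartographic standard):

```python
# Hedged sketch: a non-uniform "value" progression for choropleth classes.
# The increments grow arithmetically (10, 20, 30, 40), reproducing the
# Black 0/10/30/60/100 example above.

def value_progression(n_classes, step=10):
    values = [0]
    for i in range(1, n_classes):
        values.append(values[-1] + i * step)  # each gap is one 'step' wider
    return values

print(value_progression(5))  # [0, 10, 30, 60, 100]
```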
Hello
I need to convert ligand-receptor interactions from 3D to 2D, but my Discovery Studio Visualizer software does not have this capability. How can I find the download link for version 4.5?
When I visit the relevant site, I encounter errors when filling in the registration fields.
Thanks in advance for all the help
I am trying to model log reaction times (logRT) as a function of movement direction (categorical with two levels: 'right' and 'left') and reward (categorical with two levels: 'reward' and 'control'). As fixed effects, I enter direction and reward (with interaction term) into the model. As random effects, I have random intercepts and slopes by session number for the effects of both direction and reward. Additionally, given that the variance across levels of reward is clearly different, I allow the model to assume different variances for different levels of reward. Visual inspection of residuals plotted against predicted logRT, as well as residuals plotted against predictors (see the figure below), reveals obvious deviations from homoscedasticity.
var_reward <- var(df$log[df$reward == 'reward'])  # for reference only; see note below

LMM1 <- lme(logRT ~ direction * reward,
            random = ~ reward + direction | session_no,
            data = df,
            method = "ML",
            # one residual variance per level of 'reward'. Note that varIdent()
            # starting values are relative ratios (per level, against the
            # reference level), not raw variances, so it is safest to let
            # nlme estimate them:
            weights = varIdent(form = ~ 1 | reward))
My questions are:
1. Do I need to quantify the deviation from homoscedasticity by running tests like Breusch-Pagan to show that the residuals are heteroscedastic, or is visual inspection of such plots sufficient evidence of heteroscedasticity?
2. Are you aware of any R packages that serve the purpose of the functions below for linear mixed models?
- bptest{lmtest}
- white-test{whitestrap}
- bgtest{lmtest}
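If a formal statistic is wanted beyond visual inspection, the Breusch-Pagan statistic is simple to compute by hand on the residuals. Below is a hedged, language-agnostic Python sketch of the idea behind bptest{lmtest} (the LM statistic n·R² from regressing squared residuals on the predictors); for mixed models, the same idea is usually applied to the conditional (within-group) residuals:

```python
import numpy as np

# Hedged sketch (not an R package): the Breusch-Pagan LM statistic computed
# by hand, to show what bptest{lmtest} does under the hood.

def breusch_pagan(residuals, X):
    """LM statistic: n * R^2 from regressing squared residuals on X (plus intercept)."""
    e2 = np.asarray(residuals, float) ** 2
    Xc = np.column_stack([np.ones(len(e2)), X])
    beta, *_ = np.linalg.lstsq(Xc, e2, rcond=None)
    fitted = Xc @ beta
    ss_res = np.sum((e2 - fitted) ** 2)
    ss_tot = np.sum((e2 - e2.mean()) ** 2)
    if ss_tot == 0:
        return 0.0                      # constant squared residuals: no evidence
    r2 = 1 - ss_res / ss_tot
    return len(e2) * r2                 # compare against chi-squared (df = #predictors)

# demo: residual spread grows with x -> clearly heteroscedastic
rng = np.random.default_rng(0)
x = rng.normal(size=500)
resid = rng.normal(scale=np.exp(0.5 * x))
print(breusch_pagan(resid, x))
```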
Information-carrying light energy enters an interior space of its own accord, bringing with it pertinent visual information about the nearby environment external to that interior space, as demonstrated by the camera obscura.
However, this is deemed not to be interactive, and no intelligent entity is considered to be involved in that process.
If that is the case, then surely it is the case for all visual media, because light is visual media.
Interested in any thoughts on this.
Hi everyone,
for a paper I am preparing a plot that links source, method of preparation, and application in a circle-like structure. To discuss it here, I prepared a sketch that should be easy to follow without prior knowledge of my specific field of research (see the attached graphic).
The three circle segments represent vegetables, things you can make from them, and places where this food might be offered. Lines then connect veggies, preparations, and types of restaurant that typically go together.
Examples: sweet potato chips and potato chips are a thing, hence a connecting line (zucchini chips and pumpkin chips, not so much). Vodka is made from potatoes, but not sweet potatoes, pumpkin, or zucchini, and it is served in pubs and maybe fine-dining restaurants, but rather not in Italian restaurants or fast-food parlors.
Things I like about this plot-type are:
-I find it aesthetically pleasing.
-You can pretty easily identify niche and common entities from the number of lines originating there (like potatoes and soup going with kinda everything, and Zucchini and Vodka being rather picky).
-Following the lines from one entity, you get to all associated entities, making it quite easy to reference neighbors in the network.
-Unlike in a network graph, entities are not spread about the whole network, but neatly organized on the outside for reference and added text.
-Entities are easy to group into categories. More or fewer categories than the three in the example would be possible.
In the example, the categories are all underlaid by a primary color (yellow, blue, red) and each line has the secondary color (orange, green, purple) of the categories it connects, to make it easier to follow the lines. However, the information density could be increased further by formatting the color and strength / thickness of lines. For example, you could code a probability or strength of association by a line color or by line thickness, like: How probable is it to find vodka at a fine dining restaurant; coded as color of the connecting line on a red-blue gradient like in a heat-map, or by line strength.
My questions are:
-Is this plot a new thing, or is there a plot type that's basically the same that I just don't know about?
-Does it make sense to display information in this way, or is there a more approachable or more aesthetically pleasing way of conveying the same information?
-What should this kind of plot be called? My initial thought was cobweb plot, but that's already a thing. I then thought dreamcatcher, doily, or weaving-frame diagram, which don't seem to be taken already. The design is visually inspired by a circos plot but otherwise doesn't share many similarities, so I am a bit hesitant to call it "narrative circos plot" or "network circos plot".
If you want to use this kind of plot, I'd be super happy to see your work! It would be cool if you'd reference or link this post.
Also, a shout-out to BioRender, where I threw this plot together. However, I am sure you can also make a plot like this with less manual moving of lines and icons if you know your R (or GIMP or Inkscape, for that matter).
PS: Please spare me the comments that it actually is possible to make vodka from zucchini ;-D It's about the graph style, not the specifics of this one example. The example graph is not super polished, but you get the picture. Also, I think it's clear that I am not researching the culinary scene, but I found this an easy and accessible example.

I used EGFP as the protein fusion tag and CMV as the promoter of the plasmid. I am having problems assessing the expression level of CagA in AGS cells. The EGFP protein is hard to visualize under the microscope (almost no signal). We next used western blot (anti-GFP antibody) and RT-PCR followed by agarose gel electrophoresis. The RT-PCR results are positive, while the WB is negative.

I tried to visualize the dendritic spines of dopaminergic neurons in the ventrolateral periaqueductal gray (vlPAG) by injecting the retrograde virus AAV.rTH.PI.Cre.SV40 into the rostral ventromedial medulla (RVM) and AAV5-CAG-FLEX-EGFP-WPRE into the vlPAG (dilution ratio 1:2000) in C57BL/6 mice, then sacrificing them at 3 weeks post-injection. However, I can only see the soma and cannot see dendritic spines. I wonder why I can't see the dendritic spines?
Compare and contrast the different methods for assessing soil compaction, such as bulk density measurements, penetrometer tests, and visual assessments. Evaluate the strengths and limitations of each method, and their suitability for different soil types and land uses.
Hi everybody,
on a couple of occasions I used Phylogeoviz (phylogeoviz.org) to visualize haplotype distributions on a map. The last time I used it was about two years ago, and it worked like a charm (thanks, Dr. Tsai!). Now I wanted to use it again for a fast and easy display. The page is still there, but it seems it is not working anymore. The input is (mostly) accepted, but upon calculation, no map is displayed. I tried different browsers with the same result. Is it just (dumb) me, or is anybody else having the same problem?
Is there another easy way to visualize such data? I am sure there are some GIS solutions, but as I said, I just need a quick display every few years and don't want to invest too much time.
Any suggestions?
TIA,
Klaus
I am studying the visual perception of visually impaired users and designers. To share the experience of the act of designing, I am looking for low-vision designers, if there are any.
Why do we see the "down below" side of the physical world (e.g., the ground or the floor) at the bottom of our visual mental image (our visual perceptions)? Why not on the top of our mental imagery, for example?
But are we even sure that the floor is at the bottom of our visual perception? Do we even have any reference that assures us that the floor is at the bottom? Does our visual perception have a bottom to begin with? I guess NOT.
So what makes us think that the ground is at the bottom of our mental imagery?
I think this "top or bottom" thing of the visual perception is yet another Quale. So I guess we are back at the hard problem of consciousness again.
Note: I am not talking about cognition and evolutionary mechanisms that formed our cognitive 3D navigation mechanisms to register the ground as down below in order to be able to function consistently under gravity. I am talking about phenomenal consciousness.
I am doing a repeated measures ANCOVA.
My independent variable is visual clutter measured as a between-subjects 2 levels categorical variable (Higher visual clutter versus lower visual clutter). For either the Higher visual clutter or lower visual clutter conditions, the participants in those respective groups either see the SPORT TYPE soccer first or basketball first (Note that everything is randomized so there are no order effects).
The dependent variable is consumer satisfaction and I have measured this separately for basketball and soccer. so two continuous dependent variables satisfaction (SATS) soccer and SATS basketball.
In the covariates box, I added the moderator (the continuous variable escapism), the interaction effect of visual clutter and escapism, and gender and age.
My supervisor who is now unavailable told me in her feedback that if the difference in the means of sport type is significant I should include it as a covariate in the ANCOVA model.
But either I completely misunderstood her feedback or I am correct that it is impossible to create a dummy variable for a within-subjects continuous variable (sport type satisfaction).
And even so, in the repeated measures ANCOVA, the within-subjects test shows no significant difference in means between basketball and soccer. But a simple paired t-test shows that in the higher visual clutter condition there is a significant difference in mean satisfaction between basketball and soccer, while in the lower visual clutter condition there is not.
Can I drop sports-type based on the results of the repeated measures ANCOVA, the within-subjects effect being insignificant and conduct a regular ANCOVA afterwards?
My thesis is due in five days and I would deeply appreciate the help.
Sebastian Brom
Dear all,
I am going to conduct online studies about how sounds are represented visually by sketches.
Do you have an opinion on what to consider when studying sketch-based associations of sound? (I do not want to go into the depth of my own conclusions, as I do not want to bias answers, or in case someone wants to take part in the online study.)
What influence might sketching types or listening modes have?
If you have taken part: what is your impression; is something "missing" that should be taken care of?
Many Thanks - cheers, lars
Do you recognize this planet? Is this a photo, or a fantasy image of one of the recently discovered exoplanets entirely covered with liquid oceans?
This and other similar images will be used here to test our human ability to recognize visual patterns, without (best as a first guess) or with (as a final resort) the help of AI. The conclusions may serve to enhance AI tools for recognizing imaged objects.
(To enhance such algorithms, we may later discuss the way we found the solution without them.)
The first attached photo was already solved a long time ago in one of the RG precursors of this thread; nevertheless, I am recalling it in a slightly harder form (without the description it contained at its bottom, which Aleš Kralj used to recognize the image).
The rule is: whoever is first to recognize the planet (or other proposed image) earns the right to present her/his own example of a visual pattern to be solved.
Bjørn Petter Jelle commented once:
The planet thread is back with a lot of new questions and answers! :-) Unfortunately, all the old ones have gone somewhere into oblivion in the digital world of ResearchGate...?
And I'd answered: All is well what ends well. Isn't it?
Anyway, the current format of RG questions is such that after a few dozen answers, all the previous ones also vanish into "eternal" oblivion, like light in a black hole :-)
Sometimes some information surfaces again, like the quanta of Hawking radiation. This new question is another attempt at returning to our previous common experiments on the human brain's ability to recognize images, or to search for them, without and with recourse to AI.

I am a beginner with Quantum ESPRESSO. I want to visualize the output files from Quantum ESPRESSO. Primarily, I want to see atom configurations and trajectories from .xyz files and .path files (these are output by the NEB code). However, these files cannot be read by OVITO.
Hello! I am doing research to identify the visual effects of marketing on people with color blindness.
The colors red and yellow are known for stimulating appetite. Depending on the type of color blindness, we could see different effects on the affected people. How do I determine the effect?
Thank you.
The idea is to share experiences with the Visual Management research approach. I am available to discuss the topic and look forward to your doubts or questions. Regards.
Hello to everyone!
I am trying to understand an electrostatics problem. The scenario is that we have a parallel-plate capacitor filled with vacuum (d = 10 cm), where a thin dielectric sample is placed on the bottom plate (ε = 10, 1 x 1 cm2, 0.5 mm thick). The applied voltages are -400 V at the bottom plate and 0 V at the top plate.
I simulated the problem with ANSYS Maxwell. First, I visualized the E-Field around the sample (1), and second, the E-Field on the dielectric surface itself (2).
(1) As you can see in the images, the E-Field shows edge effects leading to a reduced E-Field magnitude around the edges of the sample (green-blue), while above the center of the sample it is increased (yellow-orange).
(2) Conversely, looking at the sample surface itself, we can see that the E-Field in the center is weaker (dark blue) than at the edges (light blue-green). So it is the other way around...
I understand that the dielectric weakens the initial E-Field in general due to induced polarization. But what could be the reason for the lateral differences in E-Field magnitude between the edge areas (higher magnitude) and the center (lower magnitude)?
Does an increased E-Field in the edge areas of the dielectric surface mean that there are more surface charges there?
Thank you in advance!
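As a back-of-the-envelope check of the field increase above the sample, here is a hedged 1-D series-layer estimate in Python (it deliberately ignores the edge/fringing effects asked about; the numbers are taken from the setup described above):

```python
# Hedged 1-D sketch: treat the gap above the sample and the sample itself as
# two layers in series. Continuity of D gives E_diel = E_vac / eps_r, and the
# layer voltages must add up to the applied V.

V = 400.0        # applied voltage magnitude [V]
d = 0.10         # plate separation [m]
t = 0.5e-3       # sample thickness [m]
eps_r = 10.0     # relative permittivity of the sample

E_vac_no_sample = V / d                    # uniform field without the sample
E_vac = V / ((d - t) + t / eps_r)          # vacuum field directly above the sample
E_diel = E_vac / eps_r                     # field inside the dielectric

print(E_vac_no_sample, E_vac, E_diel)      # the vacuum field above the sample
                                           # is slightly larger than V/d
```

This reproduces the simulated trend above the center (a slightly increased field over the sample), but the lateral edge-versus-center differences on the surface are genuinely 2-D/3-D fringing effects that this model cannot capture.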

Is there any way to better visualize that correlation with different colors?
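If the correlation in question is a matrix of pairwise correlations, one standard option is a color-coded heatmap. A hedged Python/matplotlib sketch with synthetic data (the variable names and data are placeholders, not from the question):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend
import matplotlib.pyplot as plt

# Hedged sketch: a color-coded correlation matrix ("heatmap").

def correlation_heatmap(data, labels):
    corr = np.corrcoef(data, rowvar=False)     # columns = variables
    fig, ax = plt.subplots()
    im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
    ax.set_xticks(range(len(labels)))
    ax.set_xticklabels(labels)
    ax.set_yticks(range(len(labels)))
    ax.set_yticklabels(labels)
    fig.colorbar(im, ax=ax, label="Pearson r")
    return corr, fig

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 3))               # placeholder dataset
corr, fig = correlation_heatmap(data, ["A", "B", "C"])
fig.savefig("correlation_heatmap.png")
```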
- How does the usability of multimodal technology affect visitors' experience in heritage museums?
- What are the implications of the use of multimodal technology for visitors' experience in heritage museums?
- How might organising by types of functions, rather than specific features, be key to separating visual patterns from algorithms?
Hello! I am currently writing my first paper to be published, and I would love some advice on how to explain the statistical analysis of my experiment. For my study, I grew bacteria in two cell lines, with and without cycloheximide (4 treatment groups total). I then harvested the flasks over 14 days and quantified the bacteria in my samples. I grew each flask in duplicate and tested them in triplicate, so I ended up with 6 data points per treatment group per day.
I averaged these data points to create a line graph visualizing growth, and I used Excel to perform an ANOVA on 2 treatment groups and 1 harvest day at a time, to see whether they differed significantly on each day. I need to describe this briefly in my methods section, but I am having trouble wording it in a professional way, as it was a fairly simple process.
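For the methods text, it may help to note that a one-way ANOVA with only two groups is equivalent to a two-sample t-test (F = t²). A hedged Python sketch of the per-day comparison described above (the counts are made-up examples, not your data):

```python
import numpy as np

# Hedged sketch of the per-day comparison: a one-way ANOVA with two groups
# (n = 6 replicates each, as in the question) reduces to a single F test.

def one_way_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA (equals t^2 of a pooled t-test)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    grand = np.concatenate([a, b]).mean()
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)

day3_treated = [5.1, 5.3, 5.0, 5.2, 5.4, 5.1]   # hypothetical log CFU/mL values
day3_control = [4.2, 4.4, 4.1, 4.3, 4.2, 4.5]
print(one_way_f(day3_treated, day3_control))
```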
Any advice would be appreciated! Thanks!
Alex H
CDC ORISE Fellow
Hello everyone
I've been having trouble getting results for a particular set of primers: I can't get any bands. So I decided to run a touchdown PCR, but the result was the same, no visible bands at all.
I used 25 cycles with a starting annealing temperature of 67 °C down to 55 °C, reducing 0.5 °C per cycle, and different samples from different strains of the organism my primers target.
I will appreciate your help :)
Watching movies and interacting with visual media are engaging ways to help students learn.
So, I would like to stream a movie, TV show, or documentary as part of a lesson for undergrad students in a supply chain course.
Do you suggest any media that could be useful?
I created this R package to allow easy visual analysis of VCF files: investigate mutation rates per chromosome, per gene, and much more: https://github.com/cccnrc/plot-VCF
The package is divided into 3 main sections, based on analysis target:
- variant Manhattan-style plots: visualize all/specific variants in your VCF file. You can plot subgroups based on position, sample, gene and/or exon
- chromosome summary plots: visualize plot of variants distribution across (selectable) chromosomes in your VCF file
- gene summary plots: visualize plot of variants distribution across (selectable) genes in your VCF file
Take a look at how many different things you can achieve in just one line of code!
It is extremely easy to install and use, and it is well documented on the GitHub page: https://github.com/cccnrc/plot-VCF
I'd love to hear your opinions, any bugs you might find, etc.

I'm trying to plot solar irradiance (GHI) from WRF output. Is there any Python script to visualize this?
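A minimal Python sketch, assuming the variable of interest is SWDOWN (WRF's surface downward shortwave flux, i.e. GHI); the file name, time index, and domain extents are placeholders, and the synthetic field below merely stands in for real data:

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

# Hedged sketch: map a 2-D GHI field from a wrfout file. SWDOWN, XLONG and
# XLAT are the standard WRF variable names; everything else is illustrative.

def plot_ghi(lons, lats, swdown, title="GHI (SWDOWN) [W m-2]"):
    fig, ax = plt.subplots()
    pcm = ax.pcolormesh(lons, lats, swdown, shading="auto", cmap="inferno")
    fig.colorbar(pcm, ax=ax, label="W m-2")
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    ax.set_title(title)
    return fig

if __name__ == "__main__":
    # with netCDF4 installed, you would read the real fields like this:
    # from netCDF4 import Dataset
    # nc = Dataset("wrfout_d01_2023-01-01_00:00:00")   # placeholder file name
    # lons, lats = nc["XLONG"][0], nc["XLAT"][0]
    # swdown = nc["SWDOWN"][0]                         # first output time
    lons, lats = np.meshgrid(np.linspace(70, 90, 50), np.linspace(8, 28, 50))
    swdown = 800 * np.cos(np.radians(lats))            # synthetic stand-in field
    plot_ghi(lons, lats, swdown).savefig("ghi.png")
```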
I work with iPSC-CMs, and we are trying to measure the action potential of the cells with FluoVolt. After preparation and incubation as described by the manufacturer, we were able to visualize the signal peak before the contraction. But after recording cells for a couple of minutes (1-2 min), the frequency decreases along with a reduction in fluorescence, until the cells stop completely and we can no longer see any signal (baseline fluorescence continues). We observed that this problem starts right after FluoVolt excitation, as it does not occur in bright field (end of contraction).
A few notes: we add blebbistatin (10 µM) to the incubation solution, but we have already tested without it. We also tested 3 different dilutions of FluoVolt and different light intensities, and the problem remains.
From what I have read in surveys, papers, and the documentation of some libraries, it seems that each pixel in the pixel-data array of a DICOM slice (each .dcm file) is simply an HU value (in radiology terminology).
When I open pixel_array with pydicom or MATLAB, the values are integers with a really wide range, like [0, 2000] or more.
I'm not sure I am correct about this, because I'm still wondering how each slice is visualized (a normal image is on a [0, 255] scale). Do viewers scale down from the HU range to [0, 255]?
Or am I wrong, and does the CT scanner output slices as normal [0, 255] images after already processing the HU values?
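A minimal Python sketch of what a viewer typically does: first apply RescaleSlope/RescaleIntercept (real DICOM attributes) to turn the stored integers into HU, then map a window/level range onto [0, 255]. The 40/400 soft-tissue window is a common convention, and the raw values below are made-up examples:

```python
import numpy as np

# Hedged sketch: stored values -> HU -> displayed [0, 255] image.
# Slope and intercept come from the DICOM header (RescaleSlope /
# RescaleIntercept); -1024 is a typical CT intercept.

def window_to_uint8(hu, center=40.0, width=400.0):
    """Map HU through a window/level transform onto the [0, 255] display range."""
    lo, hi = center - width / 2, center + width / 2
    clipped = np.clip(hu, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

raw = np.array([0, 1000, 2000], dtype=np.int16)   # made-up stored pixel values
slope, intercept = 1.0, -1024.0                   # typical CT rescale values
hu = raw * slope + intercept                      # -> [-1024, -24, 976] HU
print(window_to_uint8(hu))                        # -> 0, 87, 255 (air / soft tissue / bone)
```

So the scanner stores (rescalable) HU-like integers, and the [0, 255] image only exists after the viewer applies a window of your choosing.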
I am studying the visual analysis of diagrams. In that context, I need to find the relation of attention to sensation and perception. I think attention is related to both sensation and perception, but I would like to find articles dealing with this subject that could clarify my doubts.
In theory classes, we all learned about drainage density (DD), which is the ratio of the 'total length of the stream network in a basin (L)' to the 'basin area (A)'. However, when we try to determine the DD of some Indian river basins using ArcGIS, we run into some confusion. For instance,
1. We have to define a particular break value in ArcGIS to get streams of different orders. The lower the break value, the higher the stream order, and therefore the higher the value of L, and vice versa. For example, when we set the break value to 500 for a particular basin, we obtain stream orders up to 7, but if we increase the break value to 1500, the maximum stream order reduces to 5, and the L value also reduces. Thus, the same basin may yield two different DD values under the two aforesaid break values.
2. We also fixed the theoretically lowest possible break value, i.e., >0, and obtained an extremely dense stream network with a high DD value.
So my question is: what should the threshold break value be for a particular basin in order to get the DD?
3. From the literature, we found that there are five classes of DD with the following value ranges (km/km2): very coarse (<2), coarse (2-4), moderate (4-6), fine (6-8), and very fine (>8). However, for 20 river basins across different parts of India, we obtained DD values ranging between 1.03 and 1.29, which makes all those basins fall in the very coarse category. But from our visual inspection (one sample basin attached below), these values seem too low to us.
We want some justification/ clarification/ comment on it.
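As a quick sanity check on the bookkeeping, here is a hedged Python sketch of DD = L/A with the class bounds quoted above (the stream lengths and basin area are made-up illustrative numbers); it does not resolve the break-value question, but it makes explicit how directly DD, and hence the class, depends on the extracted total stream length L:

```python
# Hedged sketch: drainage density and the five-class scheme from the question.
# All numeric inputs below are illustrative, not real basin measurements.

def drainage_density(total_stream_length_km, basin_area_km2):
    return total_stream_length_km / basin_area_km2  # km / km2

def dd_class(dd):
    for bound, label in [(2, "very coarse"), (4, "coarse"),
                         (6, "moderate"), (8, "fine")]:
        if dd < bound:
            return label
    return "very fine"

area = 5000.0                           # km2 (illustrative)
for L in (5150.0, 6450.0):              # km: e.g. a higher vs lower break value
    dd = drainage_density(L, area)
    print(f"L = {L} km -> DD = {dd:.2f} km/km2 ({dd_class(dd)})")
```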
Hi everyone. I'm planning to determine MP presence, size, color, shape, etc., in other words, to do a visual sorting/characterization of MPs accumulated in penaeid shrimp abdominal muscle. Nevertheless, visual sorting becomes more difficult as particle size gets smaller; it is time-consuming and more prone to misidentification errors. Generally, it is recommended to do visual sorting with plastics no smaller than 500 microns, but I anticipate that any plastic embedded in the abdomen is much smaller than that. I was planning to try alkali tissue digestion with KOH and glass-fiber microfilters with a 2-micron pore size, and my intention was to observe the filters under a stereoscopic microscope at a minimum of 45X magnification. But I am still going to obtain small plastic particles, if any (spoiler: there will be). So my question is whether you have any recommendation or alternative method... observe the filters under a fluorescence microscope using Nile red to facilitate MP discrimination? Analyze another tissue? Use a larger pore-size filter? Change the organism... or maybe it is possible to do the job as planned. Spectroscopy methods are not an option, since they are part of another stage of the project; I just want to perform visual sorting/characterization.
Thank you very much for your attention.
Best regards
In R programming, how can I visualize time data, as in this example: https://github.com/vasturiano/timelines-chart
thanks in advance
I want to visualize the temporal relationship of patient records over time. Assuming I have an EHR dataset, what techniques are used to plot or visualize this kind of dataset, where each patient has different features that are updated over time?
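One common starting point is a per-patient "swimlane" timeline: one row per patient, with markers placed at record times and colored by feature. A hedged Python/matplotlib sketch with made-up records (all field names and values are placeholders, not real EHR data):

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend
import matplotlib.pyplot as plt

# Hedged sketch: per-patient event timelines from (patient, day, feature, value)
# tuples. The records below are made-up stand-ins for EHR entries.

records = [
    ("patient_1",   0, "HbA1c",  7.9), ("patient_1",  90, "HbA1c", 7.1),
    ("patient_1",  30, "weight", 92),  ("patient_2",  10, "HbA1c", 6.4),
    ("patient_2",  45, "weight", 78),  ("patient_2", 120, "HbA1c", 6.0),
]

def plot_patient_timelines(records):
    patients = sorted({r[0] for r in records})
    features = sorted({r[2] for r in records})
    colors = {f: c for f, c in zip(features, plt.cm.tab10.colors)}
    fig, ax = plt.subplots()
    for pid, day, feat, _val in records:
        ax.scatter(day, patients.index(pid), color=colors[feat], label=feat)
    ax.set_yticks(range(len(patients)))
    ax.set_yticklabels(patients)
    ax.set_xlabel("days since first record")
    handles, labels = ax.get_legend_handles_labels()
    uniq = dict(zip(labels, handles))       # de-duplicate legend entries
    ax.legend(uniq.values(), uniq.keys())
    return fig

plot_patient_timelines(records).savefig("ehr_timelines.png")
```

Marker size or shape could additionally encode the feature value; for long records, a heatmap (patients × time, colored by value) is the usual alternative.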