Questions related to Information Visualization
I am interested in digging deeper into the connection between data analysis methods and the information visualizations that can be generated from their results. For example, data clustering (in data mining) produces a certain kind of information. Which visualization method would best visualize this information, and why?
I have found this: http://www.visual-literacy.org/periodic_table/periodic_table.html. It is very good at depicting the different visualization methods, but it lacks an explanation of which data analysis method each one is connected to.
Any recommended good source?
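As one concrete illustration of such a pairing, clustering results are commonly shown as a 2-D scatter plot after dimensionality reduction, with points colored by cluster label. A minimal sketch in Python (numpy only; the k-means and PCA below are deliberately simplistic stand-ins for library implementations, and the two-blob data is made up):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Naive Lloyd's k-means: random init, then alternate assign/update.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pca_2d(X):
    # Project onto the first two principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

# Cluster in the full 5-D space, then inspect the 2-D projection colored by
# label (e.g., with matplotlib: plt.scatter(P[:, 0], P[:, 1], c=labels)).
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 5)) for m in (0, 3)])
labels = kmeans(X, 2)
P = pca_2d(X)
```

The reason this pairing works is that clustering produces a partition of high-dimensional points, and a projected scatter plot lets a reader judge cluster separation at a glance.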
In some cases I prefer to change the way I visualize a project's information (in terms of graphics or layout). To be exact, the figures differ when inserted in a paper compared to when I include them in my design portfolio. Is this ethically and scientifically acceptable?
Actually, I am looking for any app used by film editors at the narrative stage to visualize narrative, rhythm, and pacing structure: structural-diagram apps used by film editors, or other apps that film editors could use even if they have not yet adopted them.
I'm drawing a drapery plot using the meta package in R.
I would like to add a vertical line showing the point estimate of the odds ratio. How can I do this with the drapery function?
Has Jock D. Mackinlay's "ranking of perceptual tasks" (Fig. 15, p. 125) been empirically verified by now? At the time of writing (1986) it had not been, as stated by the author. Kamps also states in 2012 that Mackinlay "never evaluated his task ranking empirically", but possibly/probably he or someone else has done so by now. Is there a very similar ranking that has been evaluated? I'm aware that many other related empirical studies have been done in this field since then, but could someone recommend a study that I should by no means miss in this specific context?
1. Mackinlay, Jock. “Automating the Design of Graphical Presentations of Relational Information.” ACM Trans. Graph. 5, no. 2 (1986): 110–41. doi:10.1145/22949.22950.
2. Kamps, Thomas. Diagram Design: A Constructive Theory. Springer Science & Business Media, 2012.
I have a CT scan of a permafrost core. I assume that this core consists of air, ice, organic and mineral parts. I have threshold values for all constituents.
I need to build a 3D model, and I want to define the threshold range for each constituent (e.g., air from 0 to 80, ice from 81 to 150, organic from 151 to 190, and mineral from 191 to 255). I have several cores, and the threshold values differ between them.
On the screenshot, you can see the desired result.
It is possible to do it with Avizo but at the moment I don't have a license and it is quite expensive.
Is there any free alternative that I may use?
I already tried the ITK Python package and Huygens Professional and also several other programs but unfortunately, I did not reach my goal.
Update: another satisfactory result would be four separate images, one for each constituent.
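For the simple per-voxel threshold classification described above, no specialized package is strictly needed; numpy alone can split the volume into four masks (the threshold values below are the hypothetical ones from the question and will differ per core):

```python
import numpy as np

# Hypothetical thresholds from the question: (name, low, high) inclusive ranges.
RANGES = [("air", 0, 80), ("ice", 81, 150), ("organic", 151, 190), ("mineral", 191, 255)]

def segment(volume, ranges=RANGES):
    """Return one boolean mask per constituent for an 8-bit CT volume."""
    return {name: (volume >= lo) & (volume <= hi) for name, lo, hi in ranges}

# Each mask can be saved as a separate image stack (the "four images" variant),
# or combined into a label volume for 3-D rendering in a free tool such as
# ParaView or 3D Slicer.
vol = np.random.default_rng(0).integers(0, 256, (16, 64, 64), dtype=np.uint8)
masks = segment(vol)
labels = np.zeros(vol.shape, dtype=np.uint8)
for i, (name, _, _) in enumerate(RANGES, start=1):
    labels[masks[name]] = i
```

ParaView and 3D Slicer are both free and can volume-render such a label field, which may cover the Avizo use case here.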
The use of interactive data visualization tools is becoming very popular for letting users explore data from different points of view: customizing filters, grouping data, finding the main influencers, and so on. This interactivity is common in business dashboards and in the statistical analysis of large datasets.
However, when we observe how scientific data is presented to the community, it is usually restricted to the authors' point of view. The presentations do not allow data exploration by other users, who could find new insights in the data.
So, what do you think about it?
I'd like to see the output of every layer of a DNN using OpenCV in Python, so I am asking for help here. If anyone knows a blog or other resource explaining how to display the outputs of the layers, please post the link in an answer.
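I don't have a tutorial link, but OpenCV's dnn module lets you request intermediate outputs by layer name (roughly `net.forward(net.getLayerNames())` after loading the model), and each activation map can then be turned into a viewable image. A sketch of the display side, which is plain numpy and independent of any particular model:

```python
import numpy as np

def activation_to_image(act):
    """Normalize one activation map of shape (C, H, W) to a uint8 tile grid.

    With OpenCV's dnn module the input would come from something like
        outs = net.forward(net.getLayerNames())   # one array per layer
    (model loading omitted; adapt the layer selection to your network).
    """
    c, h, w = act.shape
    cols = int(np.ceil(np.sqrt(c)))
    rows = int(np.ceil(c / cols))
    grid = np.zeros((rows * h, cols * w), dtype=np.uint8)
    lo, hi = act.min(), act.max()
    scaled = np.zeros_like(act) if hi == lo else (act - lo) / (hi - lo)
    for i in range(c):
        r, col = divmod(i, cols)
        grid[r*h:(r+1)*h, col*w:(col+1)*w] = (scaled[i] * 255).astype(np.uint8)
    return grid  # display with cv2.imshow("layer", grid)
```

The normalization to [0, 255] is needed because raw activations can be negative or arbitrarily scaled and would otherwise not render as an image.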
When I use HistCite to analyze the retrieved results, it shows that the 356 items I've searched for are totally irrelevant, which seems quite unlikely. I noticed that the LCS column for these 356 items is all "0" (Capture 1), and the export button for "including the references cited" is missing (Capture 2). So I was wondering: has the HistCite service been discontinued, or is there a solution in this case?
For large social networks, with tens of millions of nodes and hundreds of millions of edges, traditional visualization techniques obviously cannot cope; even with mitigation techniques (such as glyph substitution or edge bundling), the amount of computation is enormous.
Assume a scenario: you have 50 gigabytes of graph data (probably in an edge-table format). Can we design a system to explore the content of these 50 gigabytes (its structure, aggregations, and so on)?
Obviously, even if no analysis is performed, such a system should at least display the data, yet even simple display is very difficult at this scale. So perhaps we need big data tools such as Spark; in particular, I think GraphX might be used.
Given the above, I often wonder whether the visualization of large-scale data still belongs purely to the visualization field. Should we focus on data processing? On building distributed computing models? On optimizing the computing framework?
If so, then for research that does produce good results, how do we highlight the original visual contribution? And for research without good results, how do we determine whether the bad results are due to problems in the visual pipeline or in the distributed framework?
Introducing a framework like Spark means more variables need to be tuned; it may actually make the process more complex, since we must consider many details of the distributed framework, which is, I think, a disaster for the original visualization process!
What do you think of the above?
I would be very grateful for your opinions.
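One pragmatic middle ground, before reaching for Spark/GraphX, is to never draw the raw graph at all but to stream over the edge list once and visualize aggregates such as the degree distribution. A sketch (the file format is assumed to be one "src dst" pair per line; adapt the parser to your data):

```python
from collections import Counter

def degree_histogram(edge_lines):
    """Stream edges once, O(#nodes) memory, and return {degree: node_count}."""
    deg = Counter()
    for line in edge_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        src, dst = parts[0], parts[1]
        deg[src] += 1
        deg[dst] += 1
    return Counter(deg.values())

# The histogram stays tiny regardless of graph size and can be plotted
# directly (e.g., as a log-log scatter), a standard first look at structure.
hist = degree_histogram(["a b", "a c", "b c", "a d"])
```

In practice `edge_lines` would be a file handle iterated lazily, so the 50 GB never needs to fit in memory; only per-node counters do.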
I am working with dementia patients' data, which includes several psychological tests and their scores.
Each test itself has subcategories. For example, one test has 15 questions, and each question has a score such as 0, 1, or 2 (neutral, bad, good).
I want to perform a similarity/cohort analysis among groups of patients.
I thought about using a time-wheel visualization, but in that case I could only use the sum per test (e.g., the sum of the 15 question scores), which is not the preferred approach here.
What is the best way to visualize the subcategory scores and then identify groups of similar patients?
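One way to keep the question-level detail rather than summing per test is to treat each patient as a vector of individual question scores, cluster on that, and then inspect the groups with a heatmap whose rows are reordered by cluster. A sketch using scipy's hierarchical clustering (the matrix shape and group count below are hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_patients(scores, n_groups=3):
    """scores: (n_patients, n_questions) array of per-question values (0/1/2)."""
    dist = pdist(scores, metric="cityblock")   # L1 suits small ordinal scores
    tree = linkage(dist, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

# A heatmap of `scores` with rows sorted by cluster label then shows which
# subcategory patterns are shared within each patient group.
rng = np.random.default_rng(0)
scores = rng.integers(0, 3, (30, 15))          # 30 patients, 15 questions
groups = cluster_patients(scores)
```

The cityblock (L1) metric is a judgment call for ordinal 0/1/2 scores; Gower or Hamming distances are reasonable alternatives if the categories are treated as nominal.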
Currently, I am trying to visualize knowledge about sustainable behaviour. That is, I would like to construct diagrams showing (primarily causal) argumentations about why certain behaviours are (un)sustainable. One aim would be to visualize the consequences of our actions and their environmental and societal impacts, and to encourage discussions about the underlying argumentations based on these visual representations. Ideally, such diagrams could be developed cooperatively.
Does anyone know about related work?
Does anyone even know about the existence of such "sustainability knowledge modelling languages"?
(Or do you think the idea of visualizing such sustainability knowledge would be unproductive?)
I am looking for introductory and advanced theory on these two subjects, ideally something recent that can get me up to speed so I can dig further into what might fit me.
Also, are there any recommended readings on real-time data visualization?
I have implemented a probabilistic anomaly detection model based on a multivariate Gaussian fitted to several variables derived from the raw data. I'm looking for suggestions on the most effective alternative approaches, including the key steps, assumptions, and models that could be explored to improve the accuracy of the solution.
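For comparison with alternatives, here is a compact baseline of the multivariate-Gaussian approach itself (numpy only; the threshold epsilon is a tunable assumption, usually chosen on a labelled validation set, and the data below is synthetic):

```python
import numpy as np

def fit_gaussian(X):
    """Fit mean and covariance on training data assumed to be mostly normal."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
    return mu, cov

def log_pdf(X, mu, cov):
    """Log density of each row of X under N(mu, cov)."""
    d = X.shape[1]
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    diff = X - mu
    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)  # Mahalanobis distances
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

# Points whose log density falls below log(epsilon) are flagged as anomalies.
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 3))
mu, cov = fit_gaussian(X)
scores = log_pdf(np.vstack([X[:5], [[8.0, 8.0, 8.0]]]), mu, cov)
flags = scores < np.log(1e-6)
```

Common alternatives to benchmark against this baseline include Gaussian mixtures (for multi-modal normal behaviour), isolation forests, and one-class SVMs.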
Visual thinking helps engineers reason about technical problems as well as ideate design solutions. Diagrams and sketches are used extensively by engineers in their work; in addition, visual thinking is linked to creativity.
Current engineering education includes CAD geometry in engineering graphics subjects, but traditional sketching is being displaced, leading to a decrease in spatial abilities and thus in visual thinking skills. Although creative aspects are considered in engineering education, visual thinking is not. What do you think? What is your experience as educators/students?
Maps and other kinds of geovisualizations, such as globes and interactive 3-D worlds, seem to be very popular and fascinate a lot of people. But what is the reason for this (just aesthetics, potential immersion/fantasy, exploratory spirit, ...)?
Of course, there are a lot of threads on the Web that deal with this question, but the answers I found there are not really satisfying (often fuzzy, not pointing out the core issues). Who knows of research works, scientific papers, or useful Web links addressing this topic?
If we knew what makes geovisualization so interesting, maybe we could export these ideas and concepts to other domains, e.g., to build interesting user interfaces for (non-geospatial) applications, or to design innovative experiential spaces.
I'm curious about your thoughts and ideas!
Hi all, how can I use a SOM (self-organizing map) for multi-dimensional images if I don't have a ground-truth table? What should be fed into the SOM: the whole image or a random subset? And how do I evaluate the result?
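On the input question: a common choice is to feed the SOM a random subset of pixel vectors (each pixel's band values) rather than the whole image, and to evaluate with quantization error since no ground truth exists. A minimal numpy SOM sketch under those assumptions (grid size, learning-rate schedule, and band count are all illustrative choices):

```python
import numpy as np

def train_som(X, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a grid[0] x grid[1] SOM on pixel vectors X of shape (n, bands)."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    W = rng.random((gy * gx, X.shape[1]))
    coords = np.array([(r, c) for r in range(gy) for c in range(gx)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))       # best matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                        # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5            # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        h = np.exp(-d2 / (2 * sigma ** 2))           # neighborhood kernel
        W += lr * h[:, None] * (x - W)
    return W

def quantization_error(X, W):
    """Mean distance from each sample to its nearest SOM unit (lower = better)."""
    return np.mean(np.min(((X[:, None] - W) ** 2).sum(-1), axis=1) ** 0.5)

# "Image" here = a random subset of pixel vectors from a multi-band image.
rng = np.random.default_rng(1)
pixels = rng.random((1000, 4))        # e.g., 4 spectral bands, sampled pixels
W = train_som(pixels)
qe = quantization_error(pixels, W)
```

Besides quantization error, topographic error (how often the two nearest units are not grid neighbors) is another ground-truth-free evaluation measure.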
I have built a tool that generates sequence diagrams from Java source code. The resulting sequence diagrams are represented in the standard XMI format for UML sequence diagrams.
To check whether my tool produces correct representations of programs, I need to visualize its output in an existing visualization tool.
I tried ArgoUML, Enterprise Architect, Visual Paradigm, Trace Modeler, and Altova UModel, but unfortunately none of them could do the job.
Any advice about a suitable tool?
I am looking for a quantitative estimate of the gain that better data visualization can bring to business decision-making in the "Big Data" world.
Recent studies have shown that between 30% and 40% of business decisions are solely driven by data (the remainder is coming from decision maker's own knowledge and gut feeling plus advice from collaborators). We also know that the way data is visualized affects the viewer's ability to make good decisions (extensive literature on information visualization).
I'm looking for a general estimate of how much decisions can improve solely due to better data visualization. Do you know about any specific study on this?
We know that hot spot mapping is a leading visualization technique, widely used by criminologists to analyze and make predictions about crime. What other visualization techniques see significant use among them? Thanks in advance.
I plan to evaluate a visualization prototype regarding its visualization quality. Approaches I already found:
- Heuristic Evaluation: First applied to information visualization by Nielsen (1994). Further evaluation heuristics: Shneiderman (1996), Scapin and Bastien (1997), Freitas et al. (2002), Amar and Stasko (2004), and Zuk and Carpendale (2006). In 2010, Forsell and Johansson published a meta-study resulting in a heuristic set of 10 criteria, which explains 86.7% of a predefined list of visualization problems. Of these heuristics, Forsell and Johansson's seem to have the strongest empirical backing. A drawback of all heuristic evaluation approaches seems to be that only visualization experts, not the real users, can evaluate the prototype. Furthermore, only a few studies seem to have applied Forsell's criteria so far.
- Abstract Task Evaluation: Introduced by Ardito, Buono, Costabile, and Lanzilotti (2006). Allows evaluation by real users in their own context, as the tasks are predefined. Although combining tasks with the heuristics stated above seems a very promising approach, I could not find information about a 'standard library of visualization evaluation tasks'. Is a predefined task set available? Edit (2015-05-04): The term 'abstract task' has also been used more recently by Brehmer and Munzner (2013, 'A Multi-Level Typology of Abstract Visualization Tasks'). As Ardito et al. are not cited there and the goal seems slightly different, I assume the two papers refer to different task definitions.
- Grounded Evaluation: Introduced to information visualization by Isenberg, Zuk, Collins, and Carpendale (2008). As in grounded theory, the focus seems to lie on evaluating in the real context. I would expect that using Forsell's heuristics here is not how grounded evaluation is intended?
- Scenario Evaluation: Edit (2015-05-04): Structuring evaluations of information visualization by scenarios rather than by methods was proposed by Lam, Bertini, Isenberg, Plaisant, and Carpendale (2012). This broad review of the existing information visualization evaluation literature (over 800 publications from IEEE InfoVis, Palgrave's Journal of Information Visualization, IEEE VAST, and EuroVis) aims to encourage the information visualization community to reflect on evaluation goals and questions before choosing methods. The seven scenarios are: evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating collaborative data analysis, and evaluating visualization algorithms. This study and its classification codeset were later followed up by Isenberg, Isenberg, Chen, Sedlmair, and Möller (2013, 'A Systematic Review on the Practice of Evaluating Visualization') to compare Lam et al.'s findings against the publication corpus of IEEE Visualization (581 papers). As Forsell's proposal was not in either corpus, her study is not part of either review (it is only mentioned by Isenberg et al. in the related work). It seems that visualization quality plays only a minor role in the scenario evaluation approach as classified in both studies. Although the scenario 'evaluating visualization algorithms' does focus in part on 'visualization quality assessment', this seems to be rather technically oriented (automatic computation of metrics like 'usage of display space' or 'average aspect ratio of the layout') compared to the more complex and qualitative criteria of Forsell.
Am I missing some important approaches? Is there a repository for information visualization questionnaires?
Amar, R., & Stasko, J. (2004). A knowledge task-based framework for design and evaluation of information visualizations. In Information Visualization, 2004. INFOVIS 2004. IEEE Symposium on (pp. 143–150). IEEE.
Ardito, C., Buono, P., Costabile, M. F., & Lanzilotti, R. (2006). Systematic inspection of information visualization systems. In Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization (pp. 1–4). ACM.
Brehmer, M., & Munzner, T. (2013). A multi-level typology of abstract visualization tasks. IEEE Transactions on Visualization and Computer Graphics, 19, 2376–2385.
Forsell, C., & Johansson, J. (2010). An heuristic set for evaluation in information visualization. In Proceedings of the International Conference on Advanced Visual Interfaces (pp. 199–206). ACM.
Freitas, C. M., Luzzardi, P. R., Cava, R. A., Winckler, M., Pimenta, M. S., & Nedel, L. P. (2002). On evaluating information visualization techniques. In Proceedings of the working conference on Advanced Visual Interfaces (pp. 373–374). ACM.
Isenberg, T., Isenberg, P., Chen, J., et al. (2013). A systematic review on the practice of evaluating visualization. IEEE Transactions on Visualization and Computer Graphics, 19, 2818–2827.
Lam, H., Bertini, E., Isenberg, P., et al. (2012). Empirical studies in information visualization: Seven scenarios. IEEE Transactions on Visualization and Computer Graphics, 18, 1520–1536.
Scapin, D. L., & Bastien, J. C. (1997). Ergonomic criteria for evaluating the ergonomic quality of interactive systems. Behaviour & Information Technology, 16(4–5), 220–231.
Shneiderman, B. (1996). The eyes have it: A task by data type taxonomy for information visualizations. In Visual Languages, 1996. Proceedings., IEEE Symposium on (pp. 336–343). IEEE.
Zuk, T., & Carpendale, S. (2006). Theoretical analysis of uncertainty visualizations. In Electronic Imaging 2006 (pp. 606007–606007). International Society for Optics and Photonics. Retrieved from http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=728263
In the paper by Le Moan et al., "Saliency for Spectral Image Analysis", the term "mutual saliency" is used. I am unsure how to write complete MATLAB code for it, and what exactly the results represent.
Currently I am pursuing research on visualization techniques or methods suited to interactive dialogs, especially when troubleshooting issues in software application user interfaces. Can someone point me to existing research papers in this domain?
Dear vegetation community and multivariate users,
I have enjoyed the discussions here on selecting among different ordination methods. In this connection, I have a query about using NMDS scores as environmental variables and regressing them against a response variable. Both NMDS axes were taken as latent environmental variables and regressed against the response separately. I have a strong reason for doing so: in the sampled vegetation it is hard to distinguish a clear controlling gradient.
I would be grateful for some feedback and discussion on this topic.
I am using all of these analyses in R and vegan.
Thank you all in advance.
I need a measure that yields 0 or 100 when the two RGB images being tested are the same, and a different value when there is some dissimilarity between them. The number should represent mutual information.
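A normalized mutual information score does roughly this: it reaches 100 when the two images carry identical intensity information and drops with dissimilarity. One caveat worth noting: mutual information is also maximal for any one-to-one remapping of intensities, not only exact pixel equality. A numpy sketch for 8-bit images, comparing the flattened intensity samples:

```python
import numpy as np

def nmi_score(img_a, img_b, bins=256):
    """Normalized mutual information of two equal-shape uint8 images, in [0, 100]."""
    a = np.asarray(img_a).ravel()
    b = np.asarray(img_b).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)                       # marginal of image A
    py = pxy.sum(axis=0)                       # marginal of image B
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    if hx + hy == 0:
        return 100.0                           # both images constant
    return float(100.0 * 2.0 * mi / (hx + hy))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
```

Note that the histogram estimator is biased upward for small images, so unrelated small images will score somewhat above 0; larger samples or coarser bins reduce this bias.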
I am working on a layout algorithm. For this I need a data structure that can store and access information about nodes, edges, and clusters very efficiently.
The structure should be able to handle data on at least 10,000 entities. Please help me if anyone has worked on such a problem.
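At this scale a dictionary-based adjacency structure is usually sufficient; 10,000 entities is small. A sketch with hypothetical field names (adapt the node attributes to whatever the layout algorithm needs):

```python
class LayoutGraph:
    """Nodes, edges, and clusters with O(1) average lookup via dicts and sets."""

    def __init__(self):
        self.nodes = {}         # node_id -> attribute dict (e.g., position)
        self.adj = {}           # node_id -> set of neighbour ids
        self.clusters = {}      # cluster_id -> set of node ids
        self.node_cluster = {}  # node_id -> cluster_id (reverse index)

    def add_node(self, nid, **attrs):
        self.nodes[nid] = attrs
        self.adj.setdefault(nid, set())

    def add_edge(self, a, b):
        # Undirected edge stored in both adjacency sets.
        self.adj.setdefault(a, set()).add(b)
        self.adj.setdefault(b, set()).add(a)

    def assign_cluster(self, nid, cid):
        old = self.node_cluster.get(nid)
        if old is not None:
            self.clusters[old].discard(nid)
        self.clusters.setdefault(cid, set()).add(nid)
        self.node_cluster[nid] = cid

    def neighbours(self, nid):
        return self.adj.get(nid, set())

g = LayoutGraph()
for i in range(10000):
    g.add_node(i, x=0.0, y=0.0)
    g.assign_cluster(i, i % 50)
for i in range(9999):
    g.add_edge(i, i + 1)
```

The reverse index (`node_cluster`) is the design choice that keeps cluster reassignment O(1); without it, moving a node would require scanning every cluster set.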
The learning analytics process is based on the analysis of a great volume of historical data. How can I best present a synthesis of this analysis to the user?
I'd like to build a graphical user interface that adapts according to eye-pupil diameter. I would then like to try to detect certain human vision problems and provide a graphical user interface matched to the severity of the vision problem.
What is the accuracy of such reconstruction methods with regard to the vibration of flying drones and the quality and resolution of the camera? Is it possible to improve the results by organizing multiple flights and overlaying/accumulating the data in the point cloud? Is there any free software available?
I tried to load your visualization ontology, http://code.know-center.tugraz.at/static/ontology/visual-analytics.owl, in Protégé 3.5, but I encountered an error. Did you test such an import?
Conference paper: "Suggesting Visualisations for Published Data"
I am looking for studies or papers dealing with visualization criteria. How many colors/forms/objects should be displayed at a time? Are there display rules that ensure a quick understanding of complex graphs or drawings?
Digital repositories face the challenge of offering better ways to access large collections of digital resources. Could information visualization be a solution to improve search and access to digital resources in repositories? What strategies or challenges should information visualization address in the area of digital repositories?