- Rozália Klára Bakó added an answer: How can I measure the complexity of an information flow?
Once an information flow (IF) can be represented as a process, can we measure its complexity as a network?
According to Brandes, Robins, McCranie & Wasserman (2013), thinking in network terms first requires the elements (E) and processes (P) of a network model: a phenomenon (E) passes through an abstraction (P), yielding a network concept (E), which may then be represented (P) as network data (E).
Taking the IF as a broad phenomenon, can we introduce the concept of network processes? These can be viewed as a web of interdependent tasks or activities whose products are more than the sum of their components, while the participating organizations can be understood as a network of people and communications.
Which measure would be most appropriate in this case?
- Ranga Dabarera added an answer: In an agent network (or graph), how can I find the most influential node to drive the whole network toward a certain direction, opinion, or idea?
This can be seen in many disciplines under different terminology. I need to gather as many ideas as possible. Feel free to let me know how it is handled in fields closer to your research. For instance:
- Sociophysics: Opinion formation on social networks
- Marketing: viral marketing
- Microfinance: diffusion models to identify influential individuals, etc.
In brief, the idea is: given a network, if we need to drive an idea, which node (or agent) should we select? All your comments are highly appreciated. Thank you in advance.
Professor Bin Jiang: Thank you very much for the publication, professor. I really appreciate your insight on this.
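Across the fields listed above, a common first-pass heuristic is to rank nodes by a centrality measure and seed the highest-ranked one. A minimal sketch with NetworkX, using PageRank as the influence proxy (the choice of measure and the karate-club example graph are my assumptions, not from this thread):

```python
import networkx as nx

# Example graph: Zachary's karate club, a standard small social network
G = nx.karate_club_graph()

# Rank nodes by PageRank as a simple proxy for influence
pr = nx.pagerank(G, alpha=0.85)
seed = max(pr, key=pr.get)  # single most "influential" node under this proxy

# For several seeds, a ranked shortlist is the usual starting point
top5 = sorted(pr, key=pr.get, reverse=True)[:5]
```

Degree, eigenvector, or betweenness centrality are drop-in alternatives; note that picking an *optimal* seed set under diffusion models (independent cascade, linear threshold) is NP-hard, which is why greedy approximations are standard in the viral-marketing literature.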
- George Stoica added an answer: Social media: Downvoting and degrading in social networks – What are the reasons? What are the reactions?
We have recently seen a lot of down-voting in some threads. We have also seen different kinds of reactions to this phenomenon. I would like to ask a question about these reactions.
First, I assume there are several possible reasons for down-voting: dissent – misunderstandings – accidental misuse of buttons – a social scientist writing a paper about the reactions – a test carried out by RG – some technical problem, etc.
I would like to have a discussion about the reactions on the part of researchers. We have seen calls to ban down-voters or to remove their anonymity. Elections in democracies are anonymous, and there are good reasons for this. I think most of us agree on some basic traits of democracy, and the right to remain anonymous is among them. What about these basics in social media?
The main reason is that the down-voters did not like my answers and explanations. What is strange is that I have been downvoted for mathematical solutions to my own problems (which are correct, because I posed those problems), or even for the problems themselves! For these, I have no explanation.
- Can anybody please name some common applications of complex networks (aka network science), preferably related (but not limited) to computer science? I'm trying to write a report and I need some hot topics/applications to emphasize. Does anybody also know about particularly active areas of research in complex networks?
Take a look at the blog -- http://thenetworkthinkers.com
Many applications of network analysis are illustrated there, most from real client engagements.
- Social network question: Is there a technical term for an ego network in which the central node is removed?
I've seen networks of this type used to highlight communities within your list of friends on Facebook. Such a visualization removes you from the network, because your node is by default connected to everyone and clutters the view. I am looking to see whether there is a standard technical term for such a network.
"Ego's alter network" works in academia. With business clients I use ego's "network neighborhood" – a more common term, and apt because there are both direct and indirect alters. Here is a picture of Erdős' famous network neighborhood... with Paul Erdős removed.
- Carl J. Mueller added an answer: Have you come across any papers or material on identifying path complexity in an ER diagram?
I am trying to understand how a (quantified) complexity can be attributed to any virtual path from an attribute to another attribute of a different entity.
Formal models are good for estimating development effort, but the model is completed late in the process (waterfall, Scrum). Generally, management wants an effort estimate that includes the modeling or design process. In essence, the question is what it will cost to provide the required facilities. This is why point techniques are favored by many developers.
- Mohammad zakaria Masoud added an answer: What is the best position for the base station in LEACH, LEACH-C and PEGASIS?
From your experience, what is the best position for the base station in a 100 x 100 m area: the center, the edge, or outside the area, given the node density (number of nodes)?
Most WSN papers that deal with clustering locate the sink in the center. The locations of the other nodes are random, with a normal distribution; this makes the center closest to most of the nodes. If you use the k-means algorithm to find four different clusters, you will find that they are located around the center.
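The geometric intuition can be checked directly: for nodes scattered over a 100 x 100 m field, the center minimizes the average node-to-sink distance. A quick sketch (the uniform placement and the candidate sink positions are my illustrative assumptions):

```python
import math
import random

rng = random.Random(0)
# 500 sensor nodes placed uniformly at random in a 100 x 100 m field
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(500)]

def mean_dist(sink):
    """Average Euclidean distance from all nodes to a candidate sink position."""
    return sum(math.dist(n, sink) for n in nodes) / len(nodes)

center = mean_dist((50, 50))    # sink at the center
corner = mean_dist((0, 0))      # sink at a corner
outside = mean_dist((150, 50))  # sink outside the field
# center < corner < outside: the central sink is nearest to most nodes,
# which is why clustering papers place it there by default
```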
- Gino Tesei added an answer: How can I change standardized predicted values of a neural network back into non-standard values?
I used the nnet package in R to train the neural network and make predictions. Because the output values were large, I first used the formula (x - xmin)/(xmax - xmin) to standardize them to the range 0 to 1. After training the network, I predicted the output values; the results lie in the range 0 to 1. How can I un-standardize the predicted values to recover the original units?
Here are some suggestions based on the information you provided.
As the input variables are a mixture of binary values (hence in the range [0,1]) and quantitative currency amounts (hence in a range like [$1,000, $1,000,000]), I suggest scaling the input variables so that they all lie on a comparable scale; this means applying the transformation (X - mean(X))/sd(X) to all input variables.
If you use caret, you can specify this transformation in the pre-processing options. For example, if XX is your training set (not scaled) of 1980 observations, yy the related response variable (not scaled), and ZZ the test set (not scaled) of 540 observations:
fit <- train(y = yy, x = XX,
             method = "avNNet",
             preProc = c("center", "scale"),
             tuneGrid = expand.grid(.decay = c(0.001, .01, .1), .size = seq(1, 27, by = 2), .bag = FALSE),
             linout = TRUE, trace = FALSE, maxit = 1000,
             trControl = trainControl(method = "repeatedcv", repeats = 5, number = 10))
When you make the prediction - myPred = predict(fit,ZZ) - you don't need to scale the test set, as caret scales it for you.
If nnet still converges poorly with this code, I would suggest three transformations of the train and test sets in cascade, i.e., applying transformation i+1 only if the problem is still not fixed by transformation i. These transformations are very easy to do with caret.
1) Removing near-zero-variance predictors:
data = rbind(XX, ZZ)
PredToDel = nearZeroVar(data)
if (length(PredToDel) > 0) data = data[, -PredToDel]
XX = data[1:1980, ]
ZZ = data[1981:nrow(data), ]
2) Removing predictors that make the covariance matrix ill-conditioned (trim.matrix is from the subselect package; the indices of the discarded columns are in $numbers.discarded):
data = rbind(XX, ZZ)
PredToDel = trim.matrix(cov(data))$numbers.discarded
if (length(PredToDel) > 0) data = data[, -PredToDel]
XX = data[1:1980, ]
ZZ = data[1981:nrow(data), ]
3) Removing highly correlated predictors:
data = rbind(XX, ZZ)
PredToDel = findCorrelation(cor(data))
if (length(PredToDel) > 0) data = data[, -PredToDel]
XX = data[1:1980, ]
ZZ = data[1981:nrow(data), ]
Hope this helps.
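To answer the original question directly: invert the min-max formula using the minimum and maximum saved from the *original* training outputs. Shown here in Python for clarity, but the algebra is identical in R:

```python
def minmax_scale(x, xmin, xmax):
    """Forward transform used before training: maps [xmin, xmax] -> [0, 1]."""
    return (x - xmin) / (xmax - xmin)

def minmax_unscale(x_scaled, xmin, xmax):
    """Inverse transform: x = x_scaled * (xmax - xmin) + xmin."""
    return x_scaled * (xmax - xmin) + xmin

# xmin and xmax must be the values saved from the ORIGINAL outputs,
# not recomputed from the predictions.
outputs = [1000.0, 4000.0, 9000.0]
xmin, xmax = min(outputs), max(outputs)
scaled = [minmax_scale(v, xmin, xmax) for v in outputs]
restored = [minmax_unscale(s, xmin, xmax) for s in scaled]
# restored equals outputs, since the two maps are exact inverses
```

Apply minmax_unscale to the network's predictions with the training-set xmin and xmax to get predictions in the original units.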
- Sashank Dara added an answer: Do you have references on smart AV networks?
I am doing research on user experience with smart AV networks. If anyone has papers, case studies or documents about smart AV Networks, please do share.
What is a smart AV network?
- Dr. M. Anand Kumar added an answer: How do vampire attacks work?
I would like to know the different parameters of a vampire attack on nodes.
Vampire attack parameters: a stretch attack creates a long (false) route from source to destination, which consumes a lot of energy. For example, a valid route from source A to destination C might be A --> B --> C, but the false route will be something like A --> B --> D --> E --> F --> C. Parameters include source, destination, packet ID, time-to-live, next hop, routers, etc. The parameters mainly depend on the specific network; those mentioned above are common to all.
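The energy drain from a stretch attack can be illustrated with a simple hop-count model (the unit-energy-per-hop assumption is mine; the routes match the example above):

```python
# Assume each hop costs one unit of transmit energy per packet (simplified model).
honest_route = ["A", "B", "C"]                       # valid shortest route
stretched_route = ["A", "B", "D", "E", "F", "C"]     # false route from a stretch attack

def route_energy(route, per_hop=1.0):
    """Total energy spent forwarding one packet along the route (hops * cost)."""
    return (len(route) - 1) * per_hop

honest = route_energy(honest_route)       # 2 hops
attacked = route_energy(stretched_route)  # 5 hops
stretch_factor = attacked / honest        # energy drained per packet vs. honest routing
```

Even this toy model shows the attack's leverage: every packet sent along the false route drains 2.5x the energy, and the attacker pays nothing extra per packet.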
- Christian L. Staudt added an answer: Is general sparsification of complex networks possible?
Nowadays complex network data sets are reaching enormous sizes, so that their analysis challenges algorithms, software and hardware. One possible approach to this is to reduce the amount of data while preserving important information. I'm specifically interested in methods that, given a complex network as input, filter out a significant fraction of the edges while preserving structural properties such as degree distribution, components, clustering coefficients, community structure and centrality. Another term used in this context is "backbone", a subset of important edges that represent the network structure.
There are methods to sparsify/filter/sample edges and preserve a specific property. But are there any methods that aim to preserve a large set of diverse properties?
This recent paper of ours is an attempt at an answer to the above question.
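As one concrete (and deliberately simple) illustration of edge filtering, the sketch below scores each edge by the Jaccard similarity of its endpoints' neighborhoods and keeps the top fraction. This is a generic local-similarity backbone of my own construction, not the specific method of the paper:

```python
import networkx as nx

def jaccard_backbone(G, keep_fraction=0.5):
    """Keep the top `keep_fraction` of edges, ranked by neighborhood overlap."""
    def score(u, v):
        nu, nv = set(G[u]), set(G[v])
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0
    ranked = sorted(G.edges(), key=lambda e: score(*e), reverse=True)
    k = max(1, int(keep_fraction * G.number_of_edges()))
    H = nx.Graph()
    H.add_nodes_from(G.nodes())  # keep all nodes so degree statistics stay comparable
    H.add_edges_from(ranked[:k])
    return H

G = nx.erdos_renyi_graph(100, 0.1, seed=3)
H = jaccard_backbone(G, keep_fraction=0.5)
```

Local-similarity scores tend to keep intra-community edges, so community structure and clustering survive better than under uniform random edge sampling; whether one score can preserve *many* properties at once is exactly the open question posed above.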
- Bin Jiang added an answer: What are the applications of network analysis?
There is basically no limit to the phenomena that can be modeled and analyzed in terms of complex networks: entities and the relationships between them can be represented as the nodes and edges of a graph, which forms a non-trivial pattern. So let's make this a small survey:
- Where in your research do you employ complex networks and network analysis methods?
- What are your data sources? How big are they?
- Which tools do you use for the network analysis process?
- What did you learn from the network analysis?
I would point you to the very dynamic field of complex networks, involving large numbers of nodes and edges; community detection is a particularly active subject. The following papers present a new way of thinking about community detection, or classification in general:
Jiang B. and Ma D. (2015), Defining least community as a homogeneous group in complex networks, Physica A: Statistical Mechanics and its Applications, 428, 154-160.
Jiang B., Duan Y., Lu F., Yang T. and Zhao J. (2014), Topological structure of urban street networks from the perspective of degree correlations, Environment and Planning B: Planning and Design, 41(5), 813-828.
- Jarret Cassaniti added an answer: Is anyone working on Ebola contact-tracing with mobile phones?
Is anyone working on contact tracing in the current Ebola outbreaks using non-interview (non-face-to-face) techniques, such as tracking mobile phones, to gather information on the movements/interactions of infectious individuals?
Hi Valdis, you can check the Ebola Communication Network: http://ebolacommunicationnetwork.org/latest-materials/. You can also post your question on the Springboard for Health Communication Professionals: http://www.healthcomspringboard.org/groups/ebola-communication/
- Ra'ed (Moh'd Taisir) Masa'deh added an answer: Where can I find datasets for queue optimization for scheduling tasks in cloud computing?
My work requires real-world data sets for task arrivals and task service times.
Could you please see our paper on job scheduling for cloud computing using neural networks? It might help. Yours,
- Wu-Chen Su added an answer: What are the current active research areas in social network analysis?
I'm trying to choose my MSc research area in the field of social network analysis, but I'm new to this subject and am not aware of the currently active research areas or trends. I would appreciate any suggestions you might give, so I can choose the one closest to my interests.
You may consider my paper on current issues and trends in the study of multiple online social networks to find suitable topics for your research.
- Gandhi Kishan Bipinchandra added an answer: Are there any good visualizations showing the turbulence of real-life scale-free networks?
I am looking for good dynamic visualizations (i.e., animated GIFs or videos) that show how turbulent real-life international scale-free networks can be – for instance, the global propagation of computer viruses, of 'viral' memes, etc. Can anybody help me?
Have a look at this.
- Maurizio Campolo added an answer: Which software are you using for complex network analysis?
There is a variety of software packages which provide graph algorithms and network analysis capabilities. As a developer of network analysis algorithms and software, I wonder which tools are most popular with researchers working on real-world data. What are your requirements with respect to usability, scalability etc.? What are the desired features? Are there analysis tasks which you would like to do but are beyond the capabilities of your current tools?
I use the Brain Connectivity Toolbox (BCT, http://www.brain-connectivity-toolbox.net/; source in C++, MATLAB and Octave), by Sporns/Rubinov, which contains a large selection of complex network measures, statistics and comparisons.
I also use another source: http://strategic.mit.edu/downloads.php?page=matlab_networks .
- Abdulmunem Khudhair added an answer: Book recommendations: Can anybody recommend a book about networks in general?
Thanks a lot
Go for Cisco v5.
- Ritesh Kumar added an answer: Is there any method for finding the optimal number of communities using network-based community detection methods such as the Louvain method?
Graph-based community detection methods are very effective at explaining the underlying structure of a graph, but I have not come across any method for finding the optimal number of communities, analogous to what exists for clustering methods.
I am sorry, I should have made it a bit clearer.
Say I am trying to identify the communities in an unsupervised manner, and for that I am trying to maximize the modularity. Now, I get a different number of communities, with different nodes, even at a single resolution parameter. The question is which of these partitions is the best, i.e., is there any statistical criterion that can lead me to that number?
Moreover, the choice of the resolution parameter is itself an open question.
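In practice, modularity itself is the usual selection criterion: among candidate partitions, prefer the one with the highest modularity Q (bearing in mind its known resolution limit). A sketch with NetworkX, where greedy modularity maximization chooses the number of communities automatically (the karate-club graph is just a stand-in example):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()

# Greedy modularity maximization picks the number of communities itself,
# by merging communities as long as Q increases
parts = greedy_modularity_communities(G)
Q = modularity(G, parts)
n_communities = len(parts)

# To compare partitions from several runs or resolutions, keep the best Q, e.g.:
# best = max(candidate_partitions, key=lambda p: modularity(G, p))
```

For statistical significance beyond raw Q, a common approach is to compare Q against its distribution on degree-preserving randomized versions of the graph.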
- What are some of the best models describing epidemic spread over a network? (Epidemics in networks)
I am looking for some of the best papers or theses to go through.
The spread of TB/HIV in human networks...
Follow the links to the original papers and the CDC at the bottom of the article.
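The classic starting point is the SIR model run directly on the contact graph. A minimal discrete-time simulation sketch (the example graph and the beta/gamma values are illustrative assumptions):

```python
import random
import networkx as nx

def sir_on_network(G, beta=0.3, gamma=0.1, seed_node=0, rng=None):
    """Discrete-time SIR on a graph: each step, infected nodes infect each
    susceptible neighbor with prob. beta and recover with prob. gamma.
    Returns the final outbreak size (number of recovered nodes)."""
    rng = rng or random.Random(42)
    state = {n: "S" for n in G}
    state[seed_node] = "I"
    while "I" in state.values():
        nxt = dict(state)
        for n, s in state.items():
            if s == "I":
                for nb in G[n]:
                    if state[nb] == "S" and rng.random() < beta:
                        nxt[nb] = "I"
                if rng.random() < gamma:
                    nxt[n] = "R"
        state = nxt
    return sum(1 for s in state.values() if s == "R")

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
outbreak = sir_on_network(G)
```

Swapping the recovery rule (R -> back to S) turns this into SIS; on heterogeneous (e.g., scale-free) graphs, the final size depends strongly on the degree distribution, which is the central theme of the network-epidemics literature.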
- James R Knaub added an answer: In the cluster sampling method, on what basis do we calculate the number of clusters to be selected?
Please help me
The number of clusters, I would think, has to be a compromise between the difficulty of traveling to or otherwise reaching the clusters in the first stage, and the number of smallest units you can handle as a sample size. Cluster sampling, unlike stratification, actually increases the overall sample size needed, but may lower your cost. Also, in general, the larger your sample size, the greater your nonsampling error (like measurement error) might be, but the convenience of cluster sampling (which is a randomization/design-based method) may ameliorate this to a degree.
So the number of clusters depends upon the convenience (NOT "convenience sampling") of this design, the sample size you can handle, and the accuracy you can attain, considering both sampling error and nonsampling error. It is rather customized.
You could look into this in a book such as Cochran, W.G.(1977), Sampling Techniques, 3rd ed., John Wiley & Sons.
Cheers - Jim
- Ingo Vogt added an answer: Tools for analyzing large-scale networks
What kind of tool would you suggest for the analysis of large-scale biological networks? The tools I have used so far are Cytoscape and Network Workbench. Both have some really nice features but also disadvantages. I would like to discuss this topic with others: are there specific types of networks for which one tool is advantageous compared to others?
- Abhishek Dwaraki added an answer: Is an Erdős–Rényi communication network possible?
We know very well that a communication network is usually assumed to exhibit the nature of a scale-free network. But is it possible for a communication network to be framed as an ER (random) network?
Links to any papers or theses that model communication networks as ER or random models would help too.
I think this might shed some useful light on whether networks can be modeled on the Erdős–Rényi model and be scale-free or not. It is basically a review paper, but it has some interesting points that can be made use of.
- Henning Meyerhenke added an answer: Datasets of networks for benchmarking community detection algorithms
Does anyone know where to find datasets of networks with known communities (that's the important point), in order to have reference clusters to validate/invalidate community detection algorithms?
- Manikant Prasad added an answer: SIR or SIS model – which one is more accurate for explaining the spread of a computer virus in a network?
In the SIS/SIRS model, infected network components are assumed to recover and return to the susceptible state, either immediately or after some time, depending on immunity to the virus.
In the SIR model, by contrast, once recovered, the nodes are assumed to have become immune to the same disease and no longer participate in the spread of the epidemic.
I was just wondering which model describes a computer virus epidemic more accurately.
Thanks, sir, for your answer :)
- Peteris Daugulis added an answer: What are current algorithmic challenges in connectome analysis?
The human connectome is a comprehensive map of the neural connections in the brain - in other words, a graph. Coming from a background in graph algorithm development and network analysis, the field of connectome analysis seems to me a very interesting application domain. However, it is a domain I am just beginning to understand. I therefore hope to get feedback from both neuro- and computer scientists, starting with the following questions:
- It is my understanding that at the neural scale, the connectome is a graph of more than 10^10 nodes and 10^14 edges. If it could be comprehensively mapped at this scale - which I believe it cannot be at this point, due to a lack of imaging technology - would it be within the range of current computing capabilities to analyze such a network?
- Has the connectome been mapped at coarser scales? If yes, what graph sizes are we talking about?
- Are standard measures from network analysis (such as degree distribution, diameter, clustering coefficients, centrality, communities) relevant for connectome analysis? What are interpretations of such measures?
- What are other structures of interest in the connectome that could be revealed by graph algorithms? Is there a need for domain-specific algorithms to discover brain-specific graph structures?
- Are there publicly available datasets that represent the connectome as a graph?
About the need for prior expert knowledge - not necessarily so. For example, you don't need expert knowledge to find a new motif; just search for motifs.
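On the question of standard measures: they apply to any coarse-grained connectome graph exactly as to other networks. A sketch with NetworkX on a stand-in graph (the Les Misérables co-occurrence network, chosen only because it is small and bundled, not because it resembles a brain):

```python
import networkx as nx
from collections import Counter

# Stand-in for a coarse, region-level connectome graph
G = nx.les_miserables_graph()

degree_hist = Counter(d for _, d in G.degree())  # degree distribution
avg_clustering = nx.average_clustering(G)        # local redundancy of wiring
n_components = nx.number_connected_components(G)

# Diameter is only defined per connected component
giant = G.subgraph(max(nx.connected_components(G), key=len))
diam = nx.diameter(giant)                        # longest shortest path
```

In connectome terms, high clustering plus small diameter is the oft-cited "small-world" signature, and hub nodes in the degree distribution correspond to highly connected brain regions ("rich club" candidates).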
- Ergys Rexhepi added an answer: Is there any simulator for home networks?
I know NS and GloMoSim for ad-hoc networks. Is there a simulator used for home networks?
Try GNS3 if you have a powerful computer (except for Catalyst switches), or you can do everything you want with Packet Tracer, a free Cisco product.
- Deepankar Mitra added an answer: Why not take advantage of electrical signals to charge mobile devices from the open air?
As long as the air carries electrical signals, why not take advantage of them to charge mobile devices – "charging in case of emergency," for example?
I guess you must have heard about "WiTricity", or wireless electricity. The problem with it is that it can't be deployed everywhere; it needs a proper setup. Check out this Wiki link-
and this one also:
- Mehdi Hedayatpour added an answer: What is the importance of ecological memory to anthropogenic disturbance?
Ecological memory (EM) is an important and relatively new concept in ecology. How can we apply our understanding of EM to anthropogenic disturbances? Do anthropogenic disturbances alter EM in some systems? Is EM erased by some types of anthropogenic disturbance? Can we design anthropogenic disturbances to optimize EM?
Precipitation, soil water, and other factors affect plant and ecosystem processes at multiple time scales. A common assumption is that water availability at a given time directly affects processes at that time. Recent work, especially in pulse-driven, semiarid systems, shows that antecedent water availability, averaged over several days to a couple weeks, can be just as or more important than current water status. Precipitation patterns of previous seasons or past years can also impact plant and ecosystem functioning in many systems. However, we lack an analytical framework for quantifying the importance of and time-scale over which past conditions affect current processes. This study explores the ecological memory of a variety of plant and ecosystem processes. We use memory as a metaphor to describe the time-scale over which antecedent conditions affect the current process. Existing approaches for incorporating antecedent effects arbitrarily select the antecedent integration period (e.g., the past 2 weeks) and the relative importance of past conditions (e.g., assign equal or linearly decreasing weights to past events). In contrast, we utilize a hierarchical Bayesian approach to integrate field data with process-based models, yielding posterior distributions for model parameters, including the duration of the ecological memory (integration period) and the relative importance of past events (weights) to this memory.
We apply our approach to data spanning diverse temporal scales and four semiarid sites in the western US: leaf-level stomatal conductance (gs, sub-hourly scale), soil respiration (Rs, hourly to daily scale), and net primary productivity (NPP) and tree-ring widths (annual scale). For gs, antecedent factors (daily rainfall and temperature, hourly vapor pressure deficit) and current soil water explained up to 72% of the variation in gs in the Chihuahuan Desert, with a memory of 10 hours for a grass and 4 days for a shrub. Antecedent factors (past soil water, temperature, photosynthesis rates) explained 73-80% of the variation in sub-daily and daily Rs. Rs beneath shrubs had a moisture and temperature memory of a few weeks, while Rs in open space and beneath grasses had a memory of 6 weeks. For pinyon pine ring widths, the current and previous year accounted for 85% of the precipitation memory; for the current year, precipitation received between February and June was most important. A similar result emerged for NPP in the short grass steppe. In both sites, tree growth and NPP had a memory of 3 years such that precipitation received >3 years ago had little influence. Understanding ecosystem dynamics requires knowledge of the temporal scales over which environmental factors influence ecological processes, and our approach to quantifying ecological memory provides a means to identify underlying mechanisms.
- Moon 14 added an answer: Evaluation metrics for a new MAC protocol in wireless sensor networks
What performance metrics should be measured for a new MAC protocol for wireless sensor networks with multi-rate sensor nodes?
I focus especially on proportional fairness between nodes. Thanks a lot.
What about throughput?
About Network Science
Physical, engineered, information, biological, cognitive, semantic and social network research.