Network Science

  • Rozália Klára Bakó added an answer:
    How can I measure the complexity of an information flow?

    Once an information flow (IF) can be represented as a process, can we measure its complexity as a network?
    According to Brandes, Robins, McCranie & Wasserman (2013), thinking in network terms first requires the elements (E) and processes (P) of a network model: a phenomenon (E) passes through an abstraction (P) into a network concept (E), which in turn can be represented (P) as network data (E).
    Taking the IF as a broad phenomenon, can we introduce the concept of network processes, viewed as a web of interdependent tasks or activities whose products are more than the sum of their components, and understand the participating organizations as a network of people and communications?
    Which measure would be most appropriate in this case?

  • Ranga Dabarera added an answer:
    In an agent network (or graph) how can I find the most influential node to drive the whole network to a certain direction/opinion or idea?

    This can be seen in many disciplines under different terminology. I need to gather as many ideas as possible. Feel free to let me know how it is handled in fields closer to your research. For instance:

    • Sociophysics: Opinion formation on social networks 
    • Marketing: viral marketing 
    • Microfinance: diffusion models to study influential individuals, etc.

    In brief, the idea is: given a network, if we need to drive an idea, which node (or agent) should we select? All your comments are highly appreciated. Thank you in advance.

    Ranga Dabarera

    Professor Bin Jiang: Thank you very much for the publication, professor. I really appreciate your insight on this.
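    A minimal way to operationalize "most influential node", assuming a simple static graph, is to rank candidates by a centrality score; degree centrality is the crudest such heuristic. The toy graph below is invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Fraction of the other nodes that each node touches directly."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical toy network: "a" sits at the hub.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]
scores = degree_centrality(edges)
seed = max(scores, key=scores.get)  # candidate node to seed the idea with
```

    In practice, eigenvector centrality, betweenness, or explicit diffusion models (independent cascade, linear threshold) usually pick better seeds than raw degree, but the rank-and-select pattern is the same.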

  • George Stoica added an answer:
    Social media: Downvoting and degrading in social networks – What are the reasons? - What are the reactions?
    We have recently seen a lot of down-voting in some threads. We have also seen different kinds of reactions to this phenomenon. I would like to ask a question about these reactions.

    First, I assume that there are several possible reasons for down-voting: dissent, misunderstandings, accidental misuse of buttons, a social scientist writing a paper about the reactions, a test carried out by RG, some technical problem, etc.

    I would like to have a discussion about the reactions on the part of researchers. We have seen calls to ban down-voters or to cancel their anonymity. Elections in democracies are anonymous and there are good reasons for this. I think most of us agree about some basic traits of democracy, the right to stay anonymous is among them. What about these basics in social media?
    George Stoica

    Dear Ljubomir,

    The main reason is that the down-voters did not like my answers and explanations. What is strange is that I have been down-voted for mathematical solutions to my own problems (which are correct, because I posed those problems), or even for the problems themselves! For these, I have no explanation.

    Sincerely, George

  • Valdis E Krebs added an answer:
    Can anybody please name some common applications of complex networks (a.k.a. network science), preferably related (but not limited) to computer science?
    I'm trying to write a report and I need some hot topics/applications to emphasize. Does anybody also know which areas of research in complex networks are particularly active?
    Valdis E Krebs

    Take a look at the blog --

    Many applications of network analysis are illustrated there. Most are from real client engagements.

  • Valdis E Krebs added an answer:
    Social network question: Is there a technical term for an ego network in which the central node is removed?

    I've seen networks of this type used to highlight communities within your list of friends on Facebook. It removes you from the network, because your node is by default connected to everyone and clutters the visualization. I am looking to see if there is a standard technical term for such a network.

    Valdis E Krebs

    "Ego's alter network" works in academia. With business clients I use ego's "network neighborhood", a more common term, and apt because there are both direct and indirect alters. Here is a picture of Erdős's famous network neighborhood... with Paul Erdős removed.
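    For readers who want to build such a view programmatically, here is a sketch (the names are invented) that keeps only the ties among ego's alters and drops ego itself:

```python
def alter_network(edges, ego):
    """Return the edges among ego's direct neighbors, with ego removed."""
    alters = {v for u, v in edges if u == ego} | {u for u, v in edges if v == ego}
    return [(u, v) for u, v in edges
            if ego not in (u, v) and u in alters and v in alters]

edges = [("me", "ann"), ("me", "bob"), ("me", "cem"), ("ann", "bob")]
ties = alter_network(edges, "me")  # only the ann-bob tie remains
```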

  • Carl J. Mueller added an answer:
    Have you come across any papers or material on identifying the path complexity in an ER diagram?

    I am trying to understand how a (quantified) complexity can be attributed to any virtual path from an attribute to another attribute of a different entity.

    Carl J. Mueller

    Formal models are good for estimating development effort, but the model is completed late in the process (waterfall, scrum). Generally, management wants an effort estimate that includes the modeling or design process. In essence, the question is what it will cost to provide the required facilities. This is why point techniques are favored by many developers.

  • Mohammad zakaria Masoud added an answer:
    What is the best position for the base station in LEACH, LEACH-C and PEGASIS?

    Hi friends,

    In your experience, what is the best position for the base station in a 100 x 100 m area: the center, the edge, or outside the area, for a given density (number of nodes)?

    Thank you.

    Best regards

    Mohammad zakaria Masoud

    Most WSN papers that deal with clustering locate the sink in the center. The locations of the other nodes are random, typically normally distributed, which puts the center nearer to most of the nodes. If you use the k-means algorithm to find four clusters, you will find that they are located around the center.
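    A quick numerical check of the "center" intuition, assuming nodes scattered uniformly at random over the 100 x 100 m field (seed and node count are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(2000)]
cx = sum(x for x, _ in nodes) / len(nodes)
cy = sum(y for _, y in nodes) / len(nodes)
# The centroid lands close to (50, 50), so a sink at the center sits
# near the bulk of the nodes regardless of the exact density.
```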

  • Gino Tesei added an answer:
    How can I change standardized predicted values of neural network into non-standard values?

    I used the nnet package in R to train a neural network and make predictions. Because the output values were large, I first standardized them to the range 0 to 1 using the formula (x - xmin)/(xmax - xmin). After training the network, I predicted the output values; the results lie in the range 0 to 1. How can I un-standardize the predicted values to recover values in the original units?
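    The min-max scaling in the question is directly invertible, provided xmin and xmax were saved from the original (training) outputs. The bounds below are illustrative, not from the question:

```python
def unscale(y, xmin, xmax):
    """Invert y = (x - xmin) / (xmax - xmin) back to the original units."""
    return y * (xmax - xmin) + xmin

xmin, xmax = 200.0, 1000.0           # assumed bounds recorded before scaling
original = unscale(0.5, xmin, xmax)  # midpoint of the original range
```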


    Gino Tesei

    Dear Ghazaleh,

    here are some suggestions based on the information you provided.

    As the input variables are a mixture of binary values (range [0, 1]) and quantitative currency values (a range like [$1,000, $1,000,000]), I suggest scaling the input variables so that they all lie on a comparable scale; that means applying the transformation (X - mean(X))/sd(X) to all input variables.

    If you use caret you can specify this transformation in the pre-processing step. For example, if XX is your training set (not scaled) of 1980 observations, yy the related response variable (not scaled), and ZZ the test set (not scaled) of 540 observations:

    fit <- train(y = yy, x = XX,
                 method = "avNNet",
                 preProc = c("center", "scale"),
                 tuneGrid = expand.grid(.decay = c(0.001, .01, .1),
                                        .size = seq(1, 27, by = 2),
                                        .bag = FALSE),
                 linout = TRUE, trace = FALSE, maxit = 1000,
                 trControl = trainControl(method = "repeatedcv",
                                          repeats = 5, number = 10))

    When you make the prediction, myPred = predict(fit, ZZ), you don't need to scale the test set, as caret scales it for you.

    If nnet still fails to converge quickly with this code, I would suggest three transformations of the train and test sets, applied in cascade, i.e. applying transformation i+1 only if the problem is not fixed by transformation i. These transformations are very easy to do with caret.

    1) removing near zero var predictors

    data = rbind(XX,ZZ)

    PredToDel = nearZeroVar(data)

    data = data[, -PredToDel]

    XX = data[1:1980,]

    ZZ = data[1981:nrow(data),]

    2) removing predictors that make ill-conditioned square matrix

    data = rbind(XX,ZZ)

    PredToDel = trim.matrix(cov(data))
    data = data[, -PredToDel$numbers.discarded]

    XX = data[1:1980,]

    ZZ = data[1981:nrow(data),]

    3) removing highly correlated predictors

    data = rbind(XX,ZZ)

    PredToDel = findCorrelation(cor(data))
    data = data[, -PredToDel]

    XX = data[1:1980,]

    ZZ = data[1981:nrow(data),]

    Hope this can help you  

  • Sashank Dara added an answer:
    Do you have references on Smart AV Networks?

    I am doing research on user experience with smart AV networks. If anyone has papers, case studies or documents about smart AV Networks, please do share.

    Sashank Dara

    What is a smart AV network?

  • Dr. M. Anand Kumar added an answer:
    How do vampire attacks work?

    I would like to know the different parameters of a vampire attack on nodes.

    Dr. M. Anand Kumar

    Vampire attack parameters: a stretch attack creates a long (false) route from source to destination, which consumes a lot of energy. For example, a valid route from source to destination is A --> C === A --> B --> C, but the false route will be something like A --> C === A --> B --> D --> E --> F --> C. The parameters are source, destination, packet ID, time to live, next hop, routers, etc. The parameters mainly depend on the specific network; those mentioned above are common to all.
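    A back-of-the-envelope sketch of why the stretched route matters. The per-hop energy figure is invented for illustration; real values depend on the radio hardware:

```python
E_PER_BIT_PER_HOP = 50e-9  # assumed joules per bit per hop, illustrative only

def route_energy(route, bits):
    """Total transmit energy for forwarding `bits` along the route."""
    hops = len(route) - 1
    return hops * E_PER_BIT_PER_HOP * bits

valid = ["A", "B", "C"]
stretched = ["A", "B", "D", "E", "F", "C"]
# 5 hops instead of 2: the network spends 2.5x the energy per packet.
overhead = route_energy(stretched, 1024) / route_energy(valid, 1024)
```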

  • Christian L. Staudt added an answer:
    Is general sparsification of complex networks possible?

    Nowadays complex network data sets are reaching enormous sizes, so that their analysis challenges algorithms, software and hardware. One possible approach to this is to reduce the amount of data while preserving important information. I'm specifically interested in methods that, given a complex network as input, filter out a significant fraction of the edges while preserving structural properties such as degree distribution, components, clustering coefficients, community structure and centrality. Another term used in this context is "backbone", a subset of important edges that represent the network structure.

    There are methods to sparsify/filter/sample edges and preserve a specific property. But are there any methods that aim to preserve a large set of diverse properties?

    Christian L. Staudt

    This recent paper of ours is an attempt at an answer to the question above.

    • Source
      ABSTRACT: Sparsification reduces the size of networks while preserving structural and statistical properties of interest. Various sparsifying algorithms have been proposed in different contexts. We contribute the first systematic conceptual and experimental comparison of edge sparsification methods on a diverse set of network properties. It is shown that they can be understood as methods for rating edges by importance and then filtering globally by these scores. In addition, we propose a new sparsification method (Local Degree) which preserves edges leading to local hub nodes. All methods are evaluated on a set of 100 Facebook social networks with respect to network properties including diameter, connected components, community structure, and multiple node centrality measures. Experiments with our implementations of the sparsification methods (using the open-source network analysis tool suite NetworKit) show that many network properties can be preserved down to about 20% of the original set of edges. Furthermore, the experimental results allow us to differentiate the behavior of different methods and show which method is suitable with respect to which property. Our Local Degree method is fast enough for large-scale networks and performs well across a wider range of properties than previously proposed methods.
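    As a toy illustration of the "rate edges, then filter globally" framing described in the abstract (this is not the paper's implementation), one can score each edge by its larger endpoint degree and keep the top fraction, so that ties into hubs survive:

```python
from collections import defaultdict

def sparsify(edges, keep_fraction):
    """Keep the top keep_fraction of edges, rated by max endpoint degree."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Rate edges so that ties into high-degree (hub) nodes score highest.
    ranked = sorted(edges,
                    key=lambda e: max(len(adj[e[0]]), len(adj[e[1]])),
                    reverse=True)
    keep = max(1, int(len(edges) * keep_fraction))
    return ranked[:keep]

# Toy graph: a hub "h" plus one peripheral edge; hub edges survive filtering.
edges = [("h", "a"), ("h", "b"), ("h", "c"), ("h", "d"), ("x", "y")]
kept = sparsify(edges, 0.4)
```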
  • Bin Jiang added an answer:
    What are the applications of network analysis?

    There is basically no limit to the phenomena that can be modeled and analyzed in terms of complex networks: entities and the relationships between them can be represented as the nodes and edges of a graph, forming a non-trivial pattern. So let's make this a small survey:

    • Where in your research do you employ complex networks and network analysis methods?
    • What are your data sources? How big are they?
    • Which tools do you use for the network analysis process?
    • What did you learn from the network analysis?
    Bin Jiang

    I would point you to the very dynamic field of complex networks, involving large numbers of nodes and edges, where community detection is a particularly active subject. The following papers present a new way of thinking about community detection, or classification in general:

    Jiang B. and Ma D. (2015), Defining least community as a homogeneous group in complex networks, Physica A: Statistical Mechanics and its Applications, 428, 154-160.

    Jiang B., Duan Y., Lu F., Yang T. and Zhao J. (2014), Topological structure of urban street networks from the perspective of degree correlations, Environment and Planning B: Planning and Design, 41(5), 813-828.

  • Jarret Cassaniti added an answer:
    Is anyone working on Ebola contact-tracing with mobile phones?

    Is anyone working on Contact Tracing, in the current Ebola outbreaks, using non-interview (non-F2F) techniques (such as tracking mobile phones) to gather information on movements/interactions of infectious individuals? 

    Jarret Cassaniti

    Hi Valdis, you can check the Ebola Communication Network: You can also post your question on the Springboard for Health Communication Professionals:

  • Ra'ed (Moh'd Taisir) Masa'deh added an answer:
    Where can I find a dataset for optimizing queues for scheduling tasks in cloud computing?

    My work requires real-world data sets for task arrivals and task service times.

    Ra'ed (Moh'd Taisir) Masa'deh

    Could you please see our paper on job scheduling for cloud computing using neural networks? It might help. Yours,

  • Wu-Chen Su added an answer:
    What are the current active research areas in social network analysis?
    I'm trying to choose my MSc research area in the field of social network analysis but I'm new to this subject and am not aware of currently active research areas or the trends. I would appreciate any suggestion you might give, so I can choose the one closest to my interests.
    Wu-Chen Su

    You may consider my paper, on current issues and trends in the study of multiple online social networks, to find suitable topics for your study.

    • Source
      ABSTRACT: Nowadays, virtual communities gather and organize via different means on the Internet. For example, Online Social Networks (OSNs) are providing a new way of doing business for enterprises with their customers and business partners. They can interact with each other on many OSNs building closer relationships than before. In addition, organizations do have more choices to use and infer useful knowledge from these valuable data sources favoring organizational growth. In this regard, management challenges among multiple online social networks must be revealed to provide insights of strategic thinking for organizations. This study shows that current research of multiple online social networks are shifting from Homogeneous OSNs to Heterogeneous OSNs and Social Internetworking Scenarios (SISs) and producing data interoperability, data privacy and security issues among these virtual organizations. An exploration of current applications and unresolved challenges is helpful for the future organizational development of online societies.
      The Fourth International Conference on Digital Information and Communication Technology and its Applications; 05/2014
  • Gandhi Kishan Bipinchandra added an answer:
    Are there any good visualizations showing the turbulence of real-life scale free networks?

    I am looking for good (dynamic - i.e. dynamic gif or video) visualizations that show how turbulent real-life international scale-free networks can be. For instance, the global propagation of computer viruses; of 'viral' memes, etc.  Can anybody help me? 

    Gandhi Kishan Bipinchandra

    Have a look at it:

    • Source
      ABSTRACT: The specialty of desktop-as-a-service cloud computing is that user can access their desktop and can execute applications in virtual desktops on remote servers. Resource management and resource utilization are most significant in the area of desktop-as-a-service, cloud computing; however, handling a large amount of clients in the most efficient manner poses important challenges. Especially deciding how many clients to handle on one server, and where to execute the user applications at each time is important. This is because we have to ensure maximum resource utilization along with user data confidentiality, customer satisfaction, scalability, minimum Service level agreement (SLA) violation etc. Assigning too many users to one server leads to customer dissatisfaction, while assigning too little leads to higher investments costs. So we have taken into consideration these two situations also. We study different aspects to optimize the resource usage and customer satisfaction. Here in this paper We proposed Intelligent Resource Allocation (IRA) Technique which assures the above mentioned parameters like minimum SLA violation. For this, priorities are assigned to user requests based on their SLA Factors in order to maintain their confidentiality. The results of the paper indicate that by applying IRA Technique to the already existing overbooking mechanism will improve the performance of the system with significant reduction in SLA violation.
      International Journal of Computer Applications 04/2014; 96(3). DOI:10.5120/16778-6357


  • Maurizio Campolo added an answer:
    Which software are you using for complex network analysis?
    There is a variety of software packages which provide graph algorithms and network analysis capabilities. As a developer of network analysis algorithms and software, I wonder which tools are most popular with researchers working on real-world data. What are your requirements with respect to usability, scalability etc.? What are the desired features? Are there analysis tasks which you would like to do but are beyond the capabilities of your current tools?
    • Source
      ABSTRACT: We introduce NetworKit, an open-source software package for high-performance analysis of large complex networks. Complex networks are equally attractive and challenging targets for data mining, and novel algorithmic solutions, including parallelization, are required to handle data sets containing billions of connections. Our goal for NetworKit is to package results of our algorithm engineering efforts and put them into the hands of domain experts. NetworKit is a hybrid combining the performance of kernels written in C++ with a convenient Python frontend. The package targets shared-memory platforms with OpenMP support. The current feature set includes various analytics kernels such as connected components, diameter, clustering coefficients, community detection, k-core decomposition, degree assortativity and multiple centrality indices, as well as a collection of graph generators. Scaling to massive networks is enabled by techniques such as parallel and sampling-based approximation algorithms. In comparison with related software, we propose NetworKit as a package geared towards large networks and satisfying three important criteria: High performance, interactive workflows and integration into an ecosystem of tested tools for data analysis and scientific computation.
    Maurizio Campolo

    I use the Brain Connectivity Toolbox (BCT; C++, MATLAB and Octave sources) by Sporns and Rubinov, which contains a large selection of complex network measures, statistics and comparisons.

    I use another source: .

  • Abdulmunem Khudhair added an answer:
    Book Recommendations
    Can anybody recommend a book about networks in general?
    Thanks a lot
    Abdulmunem Khudhair

    Go for Cisco v5  

  • Ritesh Kumar added an answer:
    Is there any method for finding the optimal number of communities using network-based community detection methods such as the Louvain method?

    Graph-based community detection methods are very effective at explaining the underlying structure of a graph, but I have not come across any method to find the optimal number of communities, similar to what exists for clustering methods.

    Ritesh Kumar

    I am sorry, I should have made it a bit clearer.

    Say I am trying to identify the communities in an unsupervised manner, and for that I am trying to maximize the modularity. Now I get a different number of communities, with different node memberships, even at a single resolution parameter. The question arises: which of the partitions is the best, i.e. is there any statistical criterion that can lead me to that number?

    Moreover, the choice of resolution parameter itself is a question mark.
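    One common statistical criterion for comparing partitions with different community counts is Newman's modularity Q itself, computed on each candidate partition. A minimal sketch on a toy undirected graph with no self-loops:

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman modularity Q of a partition (dict: node -> community label)."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Observed fraction of edges falling inside communities...
    intra = sum(1 for u, v in edges if communities[u] == communities[v])
    # ...minus the fraction expected under the configuration model.
    expected = 0.0
    for c in set(communities.values()):
        d = sum(deg[n] for n in communities if communities[n] == c)
        expected += (d / (2 * m)) ** 2
    return intra / m - expected

# Two triangles joined by a bridge: the natural 2-community split scores
# higher than lumping everything together.
edges = [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (3, 4)]
split = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
lumped = {n: 0 for n in range(1, 7)}
```

    Maximizing Q over candidate partitions gives one defensible way to pick the community count, though modularity has a known resolution limit, which is why the resolution parameter question remains open.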

  • Valdis E Krebs added an answer:
    What are some of the best models describing the epidemic spread over a network?
    Epidemic in Networks
    I am looking for some of the best papers or theses to read.
    Valdis E Krebs

    The spread of TB/HIV in human networks...

    Follow links to original papers with the CDC at bottom of article.

  • James R Knaub added an answer:
    In the cluster sampling method, on what basis do we calculate the number of clusters to be selected?

    Please help me

    James R Knaub

    Zed -

    The number of clusters, I would think, would have to be a compromise between the difficulty in traveling to or otherwise reaching the clusters in the first stage, and the number of smallest units you can handle for a sample size. Cluster sampling, unlike stratification, actually increases the overall sample size needed, but may lower your cost. Also, in general, the larger your sample size, the greater might be your nonsampling error, like measurement error, but the convenience of cluster sampling (which is a randomization/design-based method) may ameliorate this to a degree.

    So the number of clusters depends upon the convenience (NOT "convenience sampling") of this design, the sample size you can handle, and the accuracy you can attain considering both sampling error and nonsampling error. So this is rather customized.

    You could look into this in a book such as Cochran, W.G.(1977), Sampling Techniques, 3rd ed., John Wiley & Sons.

    Cheers - Jim

  • Ingo Vogt added an answer:
    Tools for analyzing large scale networks
    What kind of tool would you suggest for the analysis of large scale biological networks? The tools I have used so far are Cytoscape and Network Workbench. Both have some really nice features but also disadvantages. I would like to discuss with others about this topic and if there are recommendations for specific types of networks making a tool advantageous compared to others?
  • Abhishek Dwaraki added an answer:
    Is an Erdos-Renyi Communication network possible?
    We know very well that a communication network is usually assumed to exhibit the nature of a scale-free network. But is it possible for a communication network to be framed as an ER network, i.e. a random network?
    Links to any papers or thesis that model communication networks as ER or Random model would help too.
    Abhishek Dwaraki
    I think this might shed some useful light on whether networks can be modeled on the Erdos-Renyi model and be scale-free or not. It is basically a review paper, but it has some interesting points that can be made use of.
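    A quick simulation makes the contrast concrete: in a G(n, p) Erdos-Renyi graph the degrees bunch tightly around n*p (binomial distribution), with no heavy-tailed hubs, which is exactly what scale-free communication networks violate. Parameters below are arbitrary:

```python
import random
from itertools import combinations

random.seed(1)  # fixed seed for reproducibility
n, p = 500, 0.02
degree = {i: 0 for i in range(n)}
for u, v in combinations(range(n), 2):
    if random.random() < p:   # each pair linked independently with prob p
        degree[u] += 1
        degree[v] += 1

mean_deg = sum(degree.values()) / n   # concentrates near (n - 1) * p ~ 10
max_deg = max(degree.values())        # stays close to the mean: no hubs
```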
  • Henning Meyerhenke added an answer:
    Datasets of networks for benchmarking community detection algorithms
    Does anyone know where to find datasets of networks with known communities (that's the important point), in order to have reference clusters with which to validate or invalidate community detection algorithms?
  • Manikant Prasad added an answer:
    SIR or SIS model, which one is more accurate for explaining the spread of computer virus in a network?
    In the SIS/SIRS models, infected network components are assumed to recover and return to the susceptible state, either immediately or after some time, depending on immunity to the virus.
    In the SIR model, once recovered, nodes are assumed to have become immune to the same disease and no longer participate in the spread of the epidemic.
    I was just wondering which model describes a computer virus epidemic more accurately.
    Manikant Prasad
    Thanks sir for your answer :)
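    A minimal deterministic sketch of the two models (well-mixed approximation, Euler integration; beta and gamma values are illustrative) shows the qualitative difference the question is after: SIR outbreaks burn out, while SIS settles at an endemic level of 1 - gamma/beta:

```python
def infected_fraction(model, beta=0.3, gamma=0.1, days=400, dt=0.1):
    """Fraction infected after `days`, starting from 1% infected."""
    s, i = 0.99, 0.01
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt      # I -> R (SIR) or I -> S (SIS)
        s += -new_inf + (new_rec if model == "SIS" else 0.0)
        i += new_inf - new_rec
    return i

# SIR drives the infection to ~0; SIS persists near 1 - gamma/beta = 2/3.
```

    For worms that can reinfect a cleaned machine, SIS-style dynamics are often the better fit; patched-and-immune hosts are closer to the SIR picture.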
  • Peteris Daugulis added an answer:
    What are current algorithmic challenges in connectome analysis?
    The human connectome is a comprehensive map of the neural connections in the brain - in other words, a graph. Coming from a background in graph algorithm development and network analysis, the field of connectome analysis seems to me a very interesting application domain. However, it is a domain I am just beginning to understand. Therefore I hope to get some feedback from both neuro- and computer scientists, starting with the following questions:

    - It is my understanding that at the neural scale, the connectome is a graph of more than 10^10 nodes and 10^14 edges. If it could be comprehensively mapped at this scale, which I believe it cannot at this point due to a lack of imaging technology, would it be in the range of current computing capabilities to analyse such a network?

    - Has the connectome been mapped at coarser scales? If yes, what graph sizes are we talking about?

    - Are standard measures from network analysis (such as degree distribution, diameter, clustering coefficients, centrality, communities) relevant for connectome analysis? What are interpretations of such measures?

    - What are other structures of interest in the connectome that could be revealed by graph algorithms? Is there a need for domain-specific algorithms to discover brain-specific graph structures?

    - Are there publicly available datasets that represent the connectome as a graph?
    Peteris Daugulis
    About the need for previous expert knowledge - not necessarily so. For example, you don't need expert knowledge to find a new motif, just search for motifs.
  • Ergys Rexhepi added an answer:
    Is there any simulator for home networks?
    I know NS and GloMoSim for Ad-hoc networks. Is there a simulator used for home networks?
    Ergys Rexhepi
    Try GNS3 if you have a powerful computer (except for Catalyst switches), or you can do everything you want with Packet Tracer, a free Cisco product.
  • Deepankar Mitra added an answer:
    Why not take advantage of electrical signals to charge mobile devices from the open air?
    As long as the air carries electrical signals, why not take advantage of these electrical signals to charge mobile devices, "charging in case of emergency," for example.
    Deepankar Mitra
    I guess you must have heard about "witricity", or wireless electricity. But the problem with it is that it can't be deployed everywhere; it needs a proper setup. Check out this Wiki link:
    and this one also:
  • Mehdi Hedayatpour added an answer:
    What is the importance of ecological memory to anthropogenic disturbance?
    Ecological memory (EM) is an important and relatively new concept in ecology. How can we apply our understanding of EM to anthropogenic disturbances? Do anthropogenic disturbances alter EM in some systems? Is EM erased by some types of anthropogenic disturbance? Can we design anthropogenic disturbances to optimize EM?
    Mehdi Hedayatpour
    Precipitation, soil water, and other factors affect plant and ecosystem processes at multiple time scales. A common assumption is that water availability at a given time directly affects processes at that time. Recent work, especially in pulse-driven, semiarid systems, shows that antecedent water availability, averaged over several days to a couple weeks, can be just as or more important than current water status. Precipitation patterns of previous seasons or past years can also impact plant and ecosystem functioning in many systems. However, we lack an analytical framework for quantifying the importance of and time-scale over which past conditions affect current processes. This study explores the ecological memory of a variety of plant and ecosystem processes. We use memory as a metaphor to describe the time-scale over which antecedent conditions affect the current process. Existing approaches for incorporating antecedent effects arbitrarily select the antecedent integration period (e.g., the past 2 weeks) and the relative importance of past conditions (e.g., assign equal or linearly decreasing weights to past events). In contrast, we utilize a hierarchical Bayesian approach to integrate field data with process-based models, yielding posterior distributions for model parameters, including the duration of the ecological memory (integration period) and the relative importance of past events (weights) to this memory. We apply our approach to data spanning diverse temporal scales and four semiarid sites in the western US: leaf-level stomatal conductance (gs, sub-hourly scale), soil respiration (Rs, hourly to daily scale), and net primary productivity (NPP) and tree-ring widths (annual scale). For gs, antecedent factors (daily rainfall and temperature, hourly vapor pressure deficit) and current soil water explained up to 72% of the variation in gs in the Chihuahuan Desert, with a memory of 10 hours for a grass and 4 days for a shrub. 
Antecedent factors (past soil water, temperature, photosynthesis rates) explained 73-80% of the variation in sub-daily and daily Rs. Rs beneath shrubs had a moisture and temperature memory of a few weeks, while Rs in open space and beneath grasses had a memory of 6 weeks. For pinyon pine ring widths, the current and previous year accounted for 85% of the precipitation memory; for the current year, precipitation received between February and June was most important. A similar result emerged for NPP in the short grass steppe. In both sites, tree growth and NPP had a memory of 3 years such that precipitation received >3 years ago had little influence. Understanding ecosystem dynamics requires knowledge of the temporal scales over which environmental factors influence ecological processes, and our approach to quantifying ecological memory provides a means to identify underlying mechanisms.
  • Moon 14 added an answer:
    Evaluation matrix for new MAC protocol in wireless sensor networks
    What performance metrics are to be measured for the new MAC protocol for wireless sensor networks with multi-rate sensor nodes?

    In particular, I focus on proportional fairness between nodes.
    Moon 14
    Thanks a lot
    What about throughput?
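    One widely used metric for the fairness goal mentioned above is Jain's fairness index, J = (sum x)^2 / (n * sum x^2). The throughput figures below are made up for illustration:

```python
def jain_index(throughputs):
    """1.0 means perfectly fair; 1/n means one node takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

fair = jain_index([5.0, 5.0, 5.0, 5.0])     # all nodes equal
unfair = jain_index([10.0, 0.0, 0.0, 0.0])  # one node hogs the channel
```

    Reporting J alongside aggregate throughput and delay gives a rounded picture of a MAC protocol's behavior with multi-rate nodes.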

About Network Science

Physical, engineered, information, biological, cognitive, semantic and social network research.

Topic followers (7,458)