
We consider a graph with n vertices, all pairs of which are connected by an edge; each edge is of given positive length. The following two basic problems are solved. Problem 1: construct the tree of minimal total length between the n vertices. (A tree is a graph with one and only one path between any two vertices.) Problem 2: find the path of minimal total length between two given vertices.
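Problem 2 admits the now-classic greedy solution bearing the author's name. A minimal Python sketch (modern notation, not from the 1959 paper itself) with path reconstruction:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path between two given vertices (Problem 2).

    graph: dict mapping a vertex to a dict {neighbour: positive edge length}.
    Returns (total length, list of vertices on the path).
    """
    dist = {source: 0}
    prev = {}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by stepping backwards from the target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]

g = {"a": {"b": 2, "c": 5}, "b": {"c": 1, "d": 4}, "c": {"d": 1}, "d": {}}
print(dijkstra(g, "a", "d"))  # (4, ['a', 'b', 'c', 'd'])
```

The binary-heap priority queue used here is a later refinement; the original formulation runs in O(n²) time over the n vertices.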

... Given two vertices s, t ∈ V, the SPP asks for a path of minimum weight from s to t. Many algorithms exist for solving the SPP, for example Dijkstra's algorithm [12] or the Bellman–Ford algorithm [6]. ...

... SPPs can also be formulated as binary integer programs [20,34]. Note that this approach is less efficient than the ones described in [12] or [6]. However, it gives a good intuition for an analogous approach to compute minimum-weight surfaces which we describe in Sect. ...
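The binary-integer view mentioned in the excerpt can be sketched as follows (a generic textbook formulation, not quoted from [20] or [34]): binary variables $x_{uv}$ select edges so that one unit of flow travels from $s$ to $t$:

```latex
\min_{x}\ \sum_{(u,v)\in E} w_{uv}\,x_{uv}
\quad\text{s.t.}\quad
\sum_{u:(v,u)\in E} x_{vu}\;-\;\sum_{u:(u,v)\in E} x_{uv}
=\begin{cases}
 \;\;\,1 & v = s,\\
 -1 & v = t,\\
 \;\;\,0 & \text{otherwise,}
\end{cases}
\qquad x_{uv}\in\{0,1\}\ \ \forall (u,v)\in E.
```

The flow-conservation constraint matrix is totally unimodular, so the linear relaxation already has an integral optimum; even so, dedicated algorithms such as Dijkstra's are far faster in practice, as the excerpt notes.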

As the number one building material, concrete is of fundamental importance in civil engineering. Understanding its failure mechanisms is essential for designing sustainable buildings and infrastructure. Micro-computed tomography (μCT) is a well-established tool for virtually assessing crack initiation and propagation in concrete. The reconstructed 3d images can be examined via techniques from the fields of classical image processing and machine learning. Ground truths are a prerequisite for an objective evaluation of crack segmentation methods. Furthermore, they are necessary for training machine learning models. However, manual annotation of large 3d concrete images is not feasible. To tackle the problem of data scarcity, the image pairs of cracked concrete and corresponding ground truth can be synthesized. In this work we propose a novel approach to stochastically model crack structures via Voronoi diagrams. The method is based on minimum-weight surfaces, an extension of shortest paths to 3d. Within a dedicated image processing pipeline, the surfaces are then discretized and embedded into real μCT images of concrete. The method is flexible and fast, such that a variety of different crack structures can be generated in a short amount of time.

... Shortest path analysis is the simplest application of route optimization theory and is frequently operationalized using Dijkstra's shortest path algorithm (Golledge, 1995; Law & Traunmueller, 2018), which calculates the shortest distance between any two nodes (origin-destination pairs) in a network using street length as edge weights (Dijkstra, 1959). ...

Pedestrian activity is often measured in the formal parts of cities, yet it has rarely been studied in informal settlements, although they are typically adjacent to formal areas and residents participate in formal urban life. Route optimization and space syntax are two pedestrian activity theories that can be applied to predict path usage in urban areas. These theories have been tested in formal cities, but are they applicable in understudied informal settings? Using motion sensors, we measure pedestrian activity in a Cape Town informal settlement in the early morning and evening hours and test which theory best explains the sensor measurements. Route optimization is weakly correlated with average pedestrian activity, while space syntax performs even more poorly in predicting pedestrian activity. The predictive power of both theoretical calculations further varies by time of day. We find that both theories perform worst at the entrances/exits of the informal settlement—that is, the border between informal and formal. These results indicate that daily movement patterns in informal settlements may differ from formal areas and that the connection between the formal and informal city requires further study to better understand how pedestrian activity links these two types of areas. A new theory of route selection based on such an understanding, which also better incorporates the specific characteristics of informal urban settlements—such as high density, narrow, and constantly changing streets primarily used by residents—may be necessary to understand the needs of pedestrians within informal settlements as compared to formal areas.

... The idea of Dijkstra's algorithm is to keep track of the locally best solution found so far and to visit that node next, since neither the shortest distance from the source to each node nor the shortest path is known in advance. Dijkstra's algorithm only works with graphs that have positive weights [26]. ...
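The positive-weight restriction noted in the excerpt can be seen on a tiny example. The sketch below (illustrative only, not from the cited work) contrasts a lazy Dijkstra implementation with the Bellman–Ford relaxation scheme on a graph with one negative edge:

```python
import heapq

def dijkstra_dist(graph, s):
    """Greedy settling: once a node is popped it is never re-expanded."""
    dist = {s: 0}
    visited = set()
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def bellman_ford_dist(graph, s):
    """|V|-1 rounds of edge relaxation; tolerates negative weights."""
    dist = {u: float("inf") for u in graph}
    dist[s] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

# One negative edge (b -> a) makes Dijkstra's greedy settling unsound.
g = {"s": {"a": 2, "b": 5}, "a": {"t": 1}, "b": {"a": -4}, "t": {}}
print(dijkstra_dist(g, "s")["t"])      # 3 -- wrong: a was settled too early
print(bellman_ford_dist(g, "s")["t"])  # 2 -- correct: s -> b -> a -> t
```

Dijkstra settles `a` at distance 2 before discovering the cheaper route through `b`, so the improvement never propagates to `t`; Bellman–Ford keeps relaxing and finds the true distance.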

To avoid destroying the natural environment, tourist paths can be created without disrupting ecological systems or rare places, such as rainforests that contain endangered species. Likewise, in sustainable tourism, visits to national parks or national museums should be considered as a way to understand the core values and meaning of a culture and environment more clearly. In this paper, we consider which points tourists need to avoid or visit for sustainable tourism. We designed an algorithm that can produce a path avoiding certain points or passing through a preferred point. If no extra weights are assigned, the algorithm returns the shortest path from start to end, and it can decide which vertices to avoid or to travel through. Moreover, the weights can be varied with different positive or negative values to obtain a path that avoids a point or reaches a point. In contrast to Dijkstra's algorithm, a negative weight can be added to the graph while still finding the shortest path. In application, the algorithm can be used for path-scheduling decisions, and it does not waste large computational resources calculating walk lengths. In the usage scenario, users only need to provide the starting node, end node, avoidance point, and preferred point to calculate the best path. Users can also apply the algorithm to sustainable travel-route planning, such as visiting museums or avoiding fragile environments, so it provides a new way to decide the best path. Finally, the experimental results show that the classic algorithms cannot avoid points. In real tourism, tourists can use this algorithm for travel planning to achieve sustainable tourism.

... A well-known and widely used basis for many static algorithms is Dijkstra's [44], which finds the shortest path by iterating over the edges reachable from the start node and computing, at each node, the distance from the start. This approach spreads out in all directions until the target node is reached, and the path is determined by stepping backwards from the target. ...

The benefits of multi-robot systems are substantial, bringing gains in efficiency, quality, and cost, and they are useful in a wide range of environments from warehouse automation, to agriculture, and even, in part, to entertainment. In multi-robot system research, the main focus is on ensuring efficient coordination in the operation of the robots, both in task allocation and navigation. However, much of this research seldom strays from the theoretical bounds; there are many reasons for this, with the most prominent and impactful being resource limitations. This is especially true for research in areas such as multi-robot path planning (MRPP) and navigation coordination. This is a large issue in practice, as many approaches are not designed with meaningful real-world implications in mind and are not scalable to large multi-robot systems. This survey aimed to look into the coordination and path-planning issues and challenges faced when working with multi-robot systems, especially those using a prioritised planning approach, and to identify key areas that are not well-explored and the scope for applying existing MRPP approaches to real-world settings.

... Route planning is more complex than identifying a path or route in a graph, since it considers journeys, directions, and intermediate transfers between bus stops or stations. Finding the shortest path in a graph is a frequent problem, and various algorithms have been developed to solve it [11]-[14]. Depending on the factors used to calculate the weight of graph edges, an algorithm may be bi- or multi-criteria. ...

... Pathfinding methods in navigation networks are usually based on the standard A* algorithm (Hart et al., 1968), which is a heuristic extension of Dijkstra's (1959) classic procedure for finding the shortest path between graph nodes. Adaptation of such an approach to the situation of simultaneous operation of many agents with specific kinematic constraints was described, for example, by Hönig et al. (2016). ...
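As a minimal illustration of the relationship described in the excerpt, the sketch below (hypothetical grid, not from Hart et al.) runs A* with a Manhattan-distance heuristic on a 4-connected grid; replacing the heuristic with zero would recover Dijkstra's procedure:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cell value 1 marks an obstacle.

    The Manhattan distance is admissible here (it never overestimates),
    so A* returns the same length Dijkstra's algorithm would.
    """
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    heap = [(h(start), 0, start)]   # (f = g + h, g, position)
    best = {start: 0}
    while heap:
        f, g, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: around the blocked middle row
```

The heuristic biases the expansion toward the goal, which is what makes A* faster than Dijkstra's uniform outward spread while preserving optimality.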

The subject of this paper is the AvatarTraffic simulator – a computer system capable of modelling, in real time, environments such as subway stations or airport halls populated with tens or hundreds of moving figures which, in addition to the pedestrian traffic typical of such facilities, can perform predefined sequences of events and actions formulated as a simulation scenario. Thanks to the integration with a real monitoring system, the simulator, in addition to providing data streams (including video) generated by the virtual scene, is also able to dynamically respond to actions taken by the system’s staff. Using the Unity simulation engine as the implementation platform, a number of practical problems had to be solved during development, two of which are the subject of this article: a) supervising and correcting the work of the AI algorithms used in Unity to simulate the pedestrian movement of avatars, and b) a textual description of the scenario of events taking place in the scene in a form editable by experts planning tests of the monitoring system. Some of the more challenging cases of people movement are discussed (including creating queues and passing through doors), and the paper presents original algorithms correcting the work of Unity’s built-in methods in situations where the coordinated behaviour of groups of people is required. Because of the specifics of the simulator environment, the scenario needed to be expressed in a JSON text file; the article presents the implemented mechanisms of its compilation directly to the C# runtime environment and discusses the original command language which was created to model the sequences of events and actions making up the scenario.

... Finally, we retrieve the optimal coupling P* by solving the linear program (6), subject to the constraints (5). Figure 4b shows the resulting travel times. Notably, our results are equivalent to those derived from the Dijkstra algorithm [21], given the same parameterization of the undirected graph. (We omitted the image from Dijkstra's algorithm as it is identical to ours within floating-point precision, making its inclusion redundant.) ...

We present a theoretical framework that links Fermat’s principle of least time to optimal transport theory via a cost function that enforces local transport. The proposed cost function captures the physical constraints inherent in wave propagation; when paired with specific mass distributions, it yields shortest paths in the considered media through the optimal transport plans. In the discrete setting, our formulation results in physically significant optimal couplings, whose off-diagonal entries identify shortest paths in both directed and undirected graphs. For undirected graphs with positive edge weights, commonly used to parameterize seismic media, our method provides solutions to the Eikonal equation consistent with those from the Dijkstra algorithm. For directed negative-weight graphs, corresponding to transportation cost matrices with negative entries, our approach aligns with the Bellman–Ford algorithm but offers considerable computational advantages. We also highlight potential research directions. These include the use of sparse cost matrices to reduce the number of unknowns and constraints in the considered transportation problem, and solving specific classes of optimal transport problems through the Dijkstra algorithm to enhance computational efficiency.

... Therefore, we define the starting point and the endpoint with the lowest cost value, located in the first scanline and last scanline, respectively. The Dijkstra minimum path algorithm is ultimately used to determine the vessel lumen boundaries [8,9]. ...

... Regions with RIOs greater than or equal to 0 were considered navigable; higher RIO values indicate safer navigational conditions in a given region, whereas values less than 0 indicate that a region is impassable. The RIO is used to plan the least-cost path accumulating the lowest total distance [29], which is also referred to as the shortest route (Route S). The starting and ending points of the route in this study are the Bering Strait and Rotterdam, respectively. ...

Under the background of climate change, the Northeast Passage’s navigability is on the rise. Arctic sea fog significantly influences navigational efficiency in this region. Existing research primarily focuses on routes accumulating the lowest distance, neglecting routes with the lowest time and sea fog’s influence on route planning and navigational efficiency. This study compares the fastest and shortest routes and analyzes Arctic sea fog’s impact on the Northeast Passage from June to September (2001–2020). The results show that coastal areas are covered with less sea ice, with notable monthly variations. Sea fog frequency is highest near coasts, declining with latitude. September offers optimal navigation conditions due to minimal ice and fog. When only sea ice is considered, the fastest route is approximately 4 days quicker than the shortest. The shortest route has migrated towards higher latitudes over the two decades, while the fastest route remains closer to the Russian coast. When the impact of sea fog on the fastest route is added, speed decreased by 30.2%, increasing sailing time by 45.1%. The new fastest route considering both sea ice and sea fog achieved a 13.9% increase in sailing speed and an 11.5% reduction in sailing time compared to the original fastest route.

... Path planning techniques commonly used for UAV applications are mostly focused on sampling-based methods (e.g., Rapidly-exploring Random Trees, RRT [5]) for unstructured configurations. On the other hand, Dijkstra-based solutions [6] have been used with increasing structuring of the environment. This allows schematizing the environment in connected segments, which is suitable for graph search algorithms. ...

Hydrologic modeling has been a useful approach for analyzing water partitioning in catchment systems. It will play an essential role in studying the responses of watersheds under projected climate changes. Numerous studies have shown it is critical to include subsurface heterogeneity in the hydrologic modeling to correctly simulate various water fluxes and processes in the hydrologic system. In this study, we test the idea of incorporating geophysics‐obtained subsurface critical zone (CZ) structures in the hydrologic modeling of a mountainous headwater catchment. The CZ structure is extracted from a three‐dimensional seismic velocity model developed from a series of two‐dimensional velocity sections inverted from seismic travel time measurements. Comparing different subsurface models shows that geophysics‐informed hydrologic modeling better fits the field observations, including streamflow discharge and soil moisture measurements. The results also show that this new hydrologic modeling approach could quantify many key hydrologic fluxes in the catchment, including streamflow, deep infiltration, and subsurface water storage. Estimations of these fluxes from numerical simulations generally have low uncertainties and are consistent with estimations from other methods. In particular, it is straightforward to calculate many hydraulic fluxes or states that may not be measured directly in the field or separated from field observations. Examples include quickflow/subsurface lateral flow, soil/rock moisture, and deep infiltration. Thus, this study provides a useful approach for studying the hydraulic fluxes and processes in the deep subsurface (e.g., weathered bedrock), which needs to be better represented in many earth system models.

This paper proposes a new fully automatic computational framework from continuum structural topology optimization to beam structure design. Firstly, continuum structural topology optimization is performed to find the optimal material distribution. The centers of the elements (i.e., vertices) in the final topology are considered as the original model for skeleton extraction. Secondly, the Floyd-Warshall algorithm is used to calculate the geodesic distances between vertices. By combining the geodesic distance-based mapping function and a coarse-to-fine partition scheme, the original model is partitioned into regular components. The skeleton can be extracted by using edges to link the barycenters of the components and decomposed into branches by identified joint vertices. Each branch is normalized into a straight line. After mesh generation, a beam finite element model is established. Compared to other methods in the literature, the beam structures reconstructed by the proposed method have a desirable centeredness and keep the homotopy properties of the original models. Finally, the cross-sectional areas of the members in the beam structure are considered as the design variables, and sizing optimization is performed. Four numerical examples, both 2D and 3D, are employed to demonstrate the validity of the automatic computational framework. The proposed method extracts a parameterized beam finite element model from the topology optimization result, bridging the gap between the topology optimization of continuum structures and the subsequent optimization or design, and enabling a fully automatic design of beam-like structures.
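The geodesic-distance step mentioned above rests on the standard Floyd-Warshall all-pairs recurrence; a minimal generic sketch (not the paper's implementation):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest (geodesic) distances on n vertices.

    edges: iterable of (u, v, w) for an undirected weighted graph.
    """
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):          # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = floyd_warshall(4, [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 5.0)])
print(d[0][3])  # 4.0 via 0-1-2-3
```

The O(n³) cost is acceptable here because the vertex set is the (moderately sized) element-center cloud of the final topology, and all pairwise geodesics are needed for the mapping function.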

Major Depressive Disorder (MDD) is a mental health disorder that affects millions of people worldwide. It is characterized by persistent feelings of sadness, hopelessness, and a loss of interest in activities that were once enjoyable. MDD is a major public health concern and is the leading cause of disability, morbidity, institutionalization, and excess mortality, conferring high suicide risk. Pharmacological treatment with Selective Serotonin Reuptake Inhibitors (SSRIs) and Serotonin Noradrenaline Reuptake Inhibitors (SNRIs) is often the first choice for their efficacy and tolerability profile. However, a significant percentage of depressive individuals do not achieve remission even after an adequate trial of pharmacotherapy, a condition known as treatment-resistant depression (TRD).
To better understand the complexity of clinical phenotypes in MDD we propose Network Intervention Analysis (NIA) that can help health psychology in the detection of risky behaviors, in the primary and/or secondary prevention, as well as to monitor the treatment and verify its effectiveness. The paper aims to identify the interaction and changes in network nodes and connections of 14 continuous variables with nodes identified as "Treatment" in a cohort of MDD patients recruited for their recent history of partial response to antidepressant drugs. The study analyzed the network of MDD patients at baseline and after 12 weeks of drug treatment.
At baseline, the network showed separate dimensions for cognitive and psychosocial-affective symptoms, with cognitive symptoms strongly affecting psychosocial functioning. The MoCA tool was identified as a potential psychometric tool for evaluating cognitive deficits and monitoring treatment response. After drug treatment, the network showed less interconnection between nodes, indicating greater stability, with antidepressants taking a central role in driving the network. Affective symptoms improved at follow-up, with the highest predictability for HDRS and BDI-II nodes being connected to the Antidepressants node.
NIA allows us to understand not only which symptoms improve after pharmacological treatment, but especially the role the treatment plays within the network and the nodes with which it has the strongest connections.

Mapping health facility catchment areas is important for estimating the population that uses a health facility, as a denominator for capturing spatial patterns of disease burden across space. Mapping activities to generate catchment areas are expensive exercises and are often not repeated on a regular basis. In this work, we demonstrated the generation of facility catchment areas in Blantyre, Malawi using crowdsourced road data and open-source mapping tools. We also observed the travel speeds associated with different means of transportation in five randomly selected residential communities within Blantyre city. AccessMod version 5.8 was used to process the generated data to quantify the travel times and catchment areas of health facilities in Blantyre city. When these catchments were compared with georeferenced patient origins, an average of 94.2 percent of the patients came from communities within the generated catchments. The study suggests that crowdsourced data resources can be used for the delineation of catchment areas, and that this information can confidently be used in efforts to stratify the burden of diseases such as malaria.

A main goal of probabilistic fault displacement hazard analysis (PFDHA) is to quantify displacement along and across an identified active fault that poses a hazard to nearby infrastructure such as roads, bridges, pipelines, and telecommunications. PFDHA relies on empirical models developed using data sets of displacement measurements and mapped surface rupture traces compiled from past global surface rupturing earthquakes by field surveys or remote sensing. However, current approaches to determine the location of the main rupture trace are subjective and lack repeatability due to different geological interpretations of the often complex network of mapped rupture traces. This subjectivity makes it difficult to compile and analyze displacement measurements and ruptures from multiple events in a consistent manner. This study provides an objective and repeatable approach to define a main rupture trace that can be applied to either field or remote sensing data. The new approach defined here can be used in developing rupture trace connectivity and geometry for use in displacement model developments and for use in objectively defining the input fault trace for assessing fault displacement hazard.

In the past years, many quantum algorithms have been proposed to tackle hard combinatorial problems. In particular, the maximum independent set (MIS) is a known NP-hard problem that can be naturally encoded in Rydberg atom arrays. By representing a graph with an ensemble of neutral atoms, one can leverage Rydberg dynamics to naturally encode the constraints and the solution to MIS. However, the classes of graphs that can be directly mapped “vertex-to-atom” on standard devices with two-dimensional capabilities are currently limited to Unit-Disk graphs. In this setting, the inherent spatial locality of the graphs can be leveraged by classical polynomial-time approximation schemes (PTAS) that guarantee an ε-approximate solution. In this work, we build upon recent progress made in using three-dimensional arrangements of atoms to embed more complex classes of graphs. We report experimental and theoretical results which represent important steps towards tackling combinatorial tasks on quantum computers for which no efficient classical ε-approximation scheme exists.

The computation of a group Steiner tree (GST) in various types of graph networks, such as social networks and transportation networks, is a fundamental graph problem with important applications. In these graphs, time is a common and necessary dimension; for example, time information in a social network can be the time when a user sends a message to another user. Graphs with time information are called temporal graphs. However, few studies have been conducted on GST for temporal graphs. This study analyzes the computation of GST for temporal graphs, i.e., the computation of the temporal GST (TGST), which is shown to be an NP-hard problem. We propose an efficient solution based on a dynamic programming algorithm. New optimization techniques, including graph simplification, state pruning, and A* search, are adopted to dramatically reduce the algorithm's search space. Moreover, we consider three extensions of our problem, namely the TGST with unspecified tree root, the progressive search of TGST, and the top-N search of TGST. Results of the experimental study performed on real temporal networks verify the efficiency and effectiveness of our algorithms.

Since 1920, almost all rail traffic crossing the Danube in Hungary does so in Budapest via the Southern Railway Bridge, which makes it heavily overloaded. This is a very disadvantageous situation not only for commercial shipping but also for military uses, as there is certain heavy military equipment that can only be transported via rail. In our two-part article, we examine the locations of new bridges that could serve as alternatives to bypass Budapest and thus reduce the traffic load on the railway lines of the capital. In this first part of our paper, we present the effect of a new Danube bridge as an alternative to the V0 railway line. We examine the possible sites of the bridge with several different route alternatives connecting it to the existing railway lines by using traffic simulation.

Since 1920, almost all rail traffic crossing the Danube in Hungary does so in Budapest via the Southern Railway Bridge, which makes it overloaded. This is a very disadvantageous situation not only for commercial shipping but also for military uses, as there is certain heavy military equipment that can only be transported via rail. In our two-part article, we examine the locations of new bridges that could serve as alternatives to bypass Budapest and thus reduce the traffic load on the railway lines of the capital. In this second part, we examine the situation on the river Tisza by simulating the existence of several bridge alternatives, both newly built and developed existing ones. We also suggest a combined way of development to treat the capacity changes in the context of the whole network by building two new bridges, one on each river.

In emergency management, the transportation scheduling of emergency supplies and relief personnel can be regarded as the multi-objective shortest path problem with mixed time windows (MOSPPMTW), which has high requirements for timeliness and effectiveness, but current solution algorithms cannot simultaneously provide solution accuracy and computational speed, which is very unfavorable for emergency path decision-making. In this paper, we establish a MOSPPMTW matching emergency rescue scenarios, which enables supplies and rescuers to arrive at the emergency scene in the shortest time and at the smallest cost. To compute the complete Pareto-optimal front, we present a ripple spreading algorithm (RSA), which determines the complete Pareto frontier by performing a ripple relay race to obtain the set of Pareto-optimal path solutions. The proposed RSA requires neither an initial solution nor repeated iterations; it only needs to be run once to obtain the solution set. Furthermore, we prove the optimality and time complexity of RSA and conduct multiple sets of simulation experiments. Compared with other algorithms, RSA performs better in terms of computational speed and solution quality, and its advantage is especially obvious on large-scale problems. It is applicable to various emergency disaster relief scenarios and can meet the requirements of fast response and timeliness.

As complex networks become ubiquitous in modern society, ensuring their reliability is crucial due to the potential consequences of network failures. However, the analysis and assessment of network reliability become computationally challenging as networks grow in size and complexity. This research proposes a novel graph-based neural network framework for accurately and efficiently estimating the survival signature and network reliability. The method incorporates a novel strategy to aggregate feature information from neighboring nodes, effectively capturing the response flow characteristics of networks. Additionally, the framework utilizes the higher-order graph neural networks to further aggregate feature information from neighboring nodes and the node itself, enhancing the understanding of network topology structure. An adaptive framework along with several efficient algorithms is further proposed to improve prediction accuracy. Compared to traditional machine learning-based approaches, the proposed graph-based neural network framework integrates response flow characteristics and network topology structure information, resulting in highly accurate network reliability estimates. Moreover, once the graph-based neural network is properly constructed based on the original network, it can be directly used to estimate network reliability of different network variants, i.e., sub-networks, which is not feasible with traditional non-machine learning methods. Several applications demonstrate the effectiveness of the proposed method in addressing network reliability analysis problems.

Human social interactions tend to vary in intensity over time, whether they are in person or online. Variable rates of interaction in structured populations can be described by networks with the time-varying activity of links and nodes. One of the key statistics to summarize temporal patterns is the inter-event time, namely the duration between successive pairwise interactions. Empirical studies have found inter-event time distributions that are heavy-tailed, for both physical and digital interactions. But it is difficult to construct theoretical models of time-varying activity on a network that reproduce the burstiness seen in empirical data. Here we develop a spanning-tree method to construct temporal networks and activity patterns with bursty behavior. Our method ensures any desired target inter-event time distributions for individual nodes and links, provided the distributions fulfill a consistency condition, regardless of whether the underlying topology is static or time-varying. We show that this model can reproduce burstiness found in empirical datasets, and so it may serve as a basis for studying dynamic processes in real-world bursty interactions.
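A common way to quantify the burstiness discussed above is the coefficient B = (σ − μ)/(σ + μ) of the inter-event times, where μ and σ are their mean and standard deviation (a standard summary statistic; the paper may use additional measures). A quick numerical sketch with synthetic inter-event times:

```python
import random
import statistics

def burstiness(times):
    """B = (sigma - mu) / (sigma + mu) of inter-event times.

    B is near 0 for a Poisson-like (exponential) process and approaches 1
    for highly bursty, heavy-tailed activity.
    """
    mu = statistics.mean(times)
    sigma = statistics.pstdev(times)
    return (sigma - mu) / (sigma + mu)

random.seed(1)
# Exponential inter-event times: memoryless, non-bursty baseline.
poisson_like = [random.expovariate(1.0) for _ in range(20000)]
# Pareto inter-event times: heavy-tailed, bursty activity.
heavy_tailed = [random.paretovariate(1.5) for _ in range(20000)]
print(round(burstiness(poisson_like), 2))  # close to 0
print(round(burstiness(heavy_tailed), 2))  # clearly positive
```

Any target inter-event time distribution assigned by the spanning-tree construction can be checked against its realized sequence with this statistic.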

This study focuses on the path planning problem for Unmanned Combat Vehicles (UCVs), where the goal is to find a viable path from the starting point to the destination while avoiding collisions with moving obstacles, such as enemy forces. The objective is to minimize the overall cost, which encompasses factors like travel distance, geographical difficulty, and the risk posed by enemy forces. To address this challenge, we have proposed a heuristic algorithm based on D* lite. This modified algorithm considers not only travel distance but also other military-relevant costs, such as travel difficulty and risk. It generates a path that navigates around both fixed unknown obstacles and dynamically moving obstacles (enemy forces) that change positions over time. To assess the effectiveness of our proposed algorithm, we conducted comprehensive experiments, comparing and analyzing its performance in terms of average pathfinding success rate, average number of turns, and average execution time. Notably, we examined how the algorithm performs under two UCV path search strategies and two obstacle movement strategies. Our findings shed light on the potential of our approach in real-world UCV path planning scenarios.
