Article

Operations Analysis During the Underwater Search for Scorpion

Authors: Richardson and Stone

Abstract

This paper discusses the operations analysis in the underwater search for the remains of the submarine Scorpion. The a priori target location probability distribution for the search was obtained by Monte Carlo procedures based upon nine different scenarios concerning the Scorpion loss and associated credibility weights. These scenarios and weights were postulated by others. Scorpion was found within 260 yards of the search grid cell having the largest a priori probability. Frequent computations of local effectiveness probabilities (LEPs) were carried out on scene during the search and were used to determine an updated (a posteriori) target location distribution. This distribution formed the basis for recommendation of the current high probability areas for search. The sum of LEPs weighted by the a priori target location probabilities is called search effectiveness probability (SEP) and was used as the overall measure of effectiveness for the operation. SEP and LEPs were used previously in the Mediterranean H-bomb search. On-scene and stateside operations analysis are discussed and the progress of the search is indicated by values of SEP for various periods during the operation.
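To make the quantities in the abstract concrete, the sketch below shows how SEP and the a posteriori map follow from a prior cell distribution and per-cell LEPs. The grid size, prior values, and LEP values are invented for illustration and are not data from the Scorpion operation.

    # Illustrative sketch (not operational data): Bayesian update of a gridded
    # prior target-location distribution using local effectiveness probabilities
    # (LEPs), and the search effectiveness probability (SEP) used as the overall
    # measure of effectiveness.
    import numpy as np

    prior = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # prior P(target in cell i)
    lep   = np.array([0.70, 0.30, 0.00, 0.50, 0.00])   # P(detect so far | target in cell i)

    sep = float(np.sum(prior * lep))                   # SEP = sum_i prior_i * LEP_i

    # Posterior after unsuccessful search: weight each cell by its miss probability.
    posterior = prior * (1.0 - lep)
    posterior /= posterior.sum()

    print(f"SEP = {sep:.3f}")
    print("posterior:", np.round(posterior, 3))

Repeating this update as new LEPs are computed is what keeps the recommended high-probability areas current as the search proceeds.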


... This search has a duration of 4-5 weeks after the aircraft accident. The sound from the pingers is likely to be audible at distances less than 1 to 1.5 ...
... Classic references on search theory applied in practice are [5] and [6]. Typically the process follows these steps: 1. Develop an initial, or prior, distribution for the search object. ...
Article
Search planning based on a Bayesian updating of the prior location distribution of the search object, to account for search effort expended, forms the basis of current approaches. This approach has been very successful in practice. Based on experiences of planning the deep ocean search for South African Airways Flight SA295 and search planning for Air France Flight AF447, a set of principles is proposed for guiding the gathering and relating of data on which to build the prior location distribution. These principles bring structure and rigour to a phase of the planning that is conceptually very difficult and unstructured. They guide one in what data to look for and how to look for such data, and help one improve planning scenarios by relating the data to each other on a common basis. These principles also apply in general contexts where evidence is sought to explain phenomena.
... Analysts were sent on-scene to update the distribution for search effort and to recommend the allocation of the continuing search effort. This work is documented in Richardson and Stone [1971]. Again this process was strikingly successful. ...
... In 1968, this search methodology was further developed by Richardson and Stone [1971] to produce probability maps for the successful search for the remains of the nuclear submarine, USS Scorpion. The technology reached a more advanced state of maturity in the CASP developed for the U.S. Coast Guard by Richardson and Discenza [1980] with assistance from Stone and others. ...
Article
Full-text available
Fundamental limitations inherent in manual search planning methods have severely limited the application of advances in several areas that could improve the efficiency and effectiveness of the U.S. Coast Guard's search and rescue mission. These areas include advances in search theory, environmental data products, knowledge of detection profiles for various sensors, and knowledge of leeway behavior. The U.S. Coast Guard's computerized search planning aids have not kept up with advances in these areas or with technology in general. This report reviews the history and recent advances of search theory and its application to a variety of search problems. It then reviews the history of the U.S. Coast Guard's search planning methods, showing where search theory was initially applied, albeit in a necessarily very limited way, and where later modifications departed from the theoretical basis of the original methodology. Several computerized search planning decision support tools are analyzed and compared, as are the differences between an analytic approach and a simulation approach. The results are summarized in a matrix. The U.S. Coast Guard needs a new search planning decision support tool for search and rescue and other missions. This tool should use the simulation approach due to its power and flexibility as compared to analytic techniques.
... Metron's previous work in search applications, detailed in references [1,2,3], includes searches for the U.S. nuclear submarine Scorpion, the SS Central America, and Steve Fossett's crash site. In addition, Metron played a key role in developing the U.S. Coast Guard's Search and Rescue Optimal Planning System (SAROPS) which has been successfully employed to plan and execute searches for ships and personnel lost at sea [4]. ...
... Using a Bayesian approach we organized this material into consistent scenarios, quantified the uncertainties with probability distributions, weighted the relative likelihood of each scenario, and performed a simulation to produce a prior PDF for the location of the wreck. This is the same methodology that was pioneered in [1] and incorporated into SAROPS. ...
Conference Paper
Full-text available
On 1 June 2009 Air France Flight 447, with 228 passengers and crew aboard, disappeared over the South Atlantic during a night flight from Rio de Janeiro to Paris. An international air and surface search effort located the first floating debris during the sixth day of search. Three phases of unsuccessful search for the underwater wreckage ensued. Phase I was a passive acoustic search for the aircraft's underwater locator beacons. Phases II and III were side-looking sonar searches scanning the ocean bottom for the wreckage field. In July of 2010 the French Bureau d'Enquêtes et d'Analyses tasked Metron to review the searches and produce posterior probability maps for the location of the wreckage. These maps were used to plan the next phase of search beginning in March 2011. On April 3, after one week of search, the wreckage was located in a high probability area of the map.
... Bayesian analysis is ideally suited to planning complicated and difficult searches involving uncertainties that are quantified by a combination of objective and subjective probabilities. This approach has been applied to a number of important and successful searches in the past, in particular, the searches for the USS Scorpion [7] and SS Central America [8]. This approach is the basis for the U.S. Coast Guard's Search and Rescue Optimal Planning System (SAROPS) that is used to plan Coast Guard maritime searches for people and ships missing at sea [5]. ...
... In the analysis that we performed for the BEA, we were not called upon to provide a recommended allocation of search effort but only to compute the posterior distribution for the location of the wreckage. The approach taken for this analysis follows the model described in [7] and [8]. The information about a complex search is often inconsistent and contradictory. ...
Article
Full-text available
In the early morning hours of June 1, 2009, during a flight from Rio de Janeiro to Paris, Air France Flight AF 447 disappeared during stormy weather over a remote part of the Atlantic, carrying 228 passengers and crew to their deaths. After two years of unsuccessful search, the authors were asked by the French Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation to develop a probability distribution for the location of the wreckage that accounted for all information about the crash location as well as for previous search efforts. We used a Bayesian procedure developed for search planning to produce the posterior target location distribution. This distribution was used to guide the search in the third year, and the wreckage was found within one week of undersea search. In this paper we discuss why Bayesian analysis is ideally suited to solving this problem, review previous non-Bayesian efforts, and describe the methodology used to produce the posterior probability distribution for the location of the wreck.
... The optimal search method used to find the Scorpion submarine (Richardson and Stone 1971) focusses on exploring a very large area to find a stationary target. This approach grids the area and then uses an a priori probability distribution to decide which cells have the highest probability of containing the target. ...
... In terms of path-planning, both approaches are very similar, but Ablavsky et al. could not manage resource constraints or non-search goals. It would be worth comparing our search pattern-based approach with the probabilistic methods of (Richardson and Stone 1971) and (Furukawa et al. 2012), on experiments to find a stationary or drifting target, to explore scaling. Our future work will explore this further. ...
Conference Paper
Full-text available
Micro Aerial Vehicles (MAVs) are increasingly regarded as a valid low-cost alternative to UAVs and ground robots in surveillance missions and a number of other civil and military applications. Research on autonomous MAVs is still in its infancy and has focused almost exclusively on integrating control and computer vision techniques to achieve reliable autonomous flight. In this paper, we describe our approach to using automated planning in order to elicit high-level intelligent behaviour from autonomous MAVs engaged in surveillance applications. Planning offers effective tools to handle the unique challenges faced by MAVs that relate to their fast and unstable dynamics as well as their low endurance and small payload capabilities. We demonstrate our approach by focusing on the “Parrot AR.Drone2.0” quadcopter and Search-and-Tracking missions, which involve searching for a mobile target and tracking it after it is found.
... In early July of 2009 the French Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, abbreviated as BEA, contacted Metron for assistance in the preparation of Phase II of the search, utilizing side-looking sonar to scan the ocean bottom for the wreckage field. Metron's previous work in search applications, detailed in references [1,2,3], included the search for the U.S. nuclear submarine Scorpion, the SS Central America, and the overland search for Steve Fossett's crash site. In addition, Metron played a key role in the development of the US Coast Guard's SAROPS software, which has been successfully employed to plan and execute searches for ships and personnel lost at sea [4]. ...
... Note that SAROPS accounts for the crosswind leeway as well as the downwind leeway in performing its reverse drift computations. It also accounts for the uncertainty in leeway predictions by assigning a statistical distribution to the leeway based on the standard error of the regression performed to generate the equations in (1). SAROPS samples the leeway for each particle undergoing reverse drift. ...
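The excerpt above describes per-particle leeway sampling during reverse drift. The sketch below illustrates that idea; it is not SAROPS code, and the leeway coefficients, wind, current, and error terms are invented placeholders.

    # Toy reverse-drift sketch: each particle gets its own leeway draw (downwind
    # and crosswind components with regression error) and is drifted backwards
    # in time from a debris sighting. All numbers are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps, dt = 1000, 48, 3600.0            # 48 hourly steps

    sighting = np.array([0.0, 0.0])                        # debris position (m, local frame)
    wind = np.tile(np.array([8.0, 2.0]), (n_steps, 1))     # wind vector per step (m/s)
    current = np.tile(np.array([0.2, -0.1]), (n_steps, 1)) # surface current per step (m/s)

    # Hypothetical leeway regression: downwind speed = a*|W| + error, plus a
    # small crosswind component drawn per particle.
    a, sigma = 0.03, 0.01
    downwind_coef = a + sigma * rng.standard_normal(n_particles)
    crosswind_coef = 0.01 * rng.standard_normal(n_particles)

    pos = np.tile(sighting, (n_particles, 1))
    for k in range(n_steps):
        w = wind[k]
        w_speed = np.linalg.norm(w)
        w_hat = w / w_speed
        c_hat = np.array([-w_hat[1], w_hat[0]])            # unit vector 90 deg left of the wind
        leeway = (downwind_coef[:, None] * w_speed * w_hat
                  + crosswind_coef[:, None] * w_speed * c_hat)
        # Reverse drift: subtract (current + leeway) * dt to step backwards in time.
        pos -= (current[k] + leeway) * dt

    print("mean origin estimate (m):", pos.mean(axis=0).round(0))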
... This approach could not be retained in our study because it does not take sufficient account of uncertainty. The second approach is ex post analysis, considering the structured approach to searching for and locating the aircraft debris (Richardson et al., 1971; Stone et al., 2011), the understanding of meteorological effects in explaining the accident (Kaplan et al., 2005; Bottyan et al., 2010; Cabrera, 2011) and crisis communication (Sevin, 2010). The complexity of these approaches leads us to address several research questions: H1: Is the regulation of pilot training in a complex system a risk-management tool or an illusion of control? ...
... This socio-economic approach (Plane, 2003; Savall et al., 2009) shows that Air France favours an ex ante accident strategy without proposing crisis-management scenarios beyond the applicable regulatory documents. According to the risk criteria drawn up by ICAO, the probability or occurrence of an accident risk remains very low, indeed improbable. ...
Chapter
Full-text available
The continuous growth of air traffic and the correlative increase in accidents lead air transport stakeholders to place greater value on knowledge management as part of risk management. It aims to develop the intangible capital held by the various actors in order, on the one hand, to allow greater efficiency in carrying out processes and, on the other, to facilitate the transmission of the actors' knowledge and expertise (Beler, 2008). Air travel has become a mass transport mode in which technological progress and the tiny probability of an accident foster a belief in zero risk. Commercial aviation has made the sky commonplace, and today we speak of airways (highways of the sky), where the safer the transport, the less any hazard is accepted (Cahier, 2009). When an accident occurs, that is, what should not have happened, we try to define causes, sometimes effects and above all responsibilities (Mont Sainte-Odile, 1992, and the Concorde at Gonesse, 2000). As part of a quality approach developed by air transport stakeholders to improve customer satisfaction, a problem-solving process has been initiated using tools such as PDCA (Plan-Do-Check-Act), TOPS (Team-Oriented Problem Solving) and Six Sigma or DMAICS (Define, Measure, Analyse, Improve, Control and Standardize). The problem with these approaches lies in their formalised treatment of uncertainty and their inability to address accident avoidance. ICAO (2007) specifies that safety aims to prevent accidents and security to prevent malicious acts. Anticipating and controlling hazards in the aviation sector takes the form of international standards contained in the Chicago Convention (1944) and the ICAO annexes relating to accidents, often reinforced by European directives and incorporated into French law (Code de l'Aviation Civile).
... Searching in a geometric space is an active area of research, predating computer technology. The applications are varied, ranging from robotics to search-and-rescue operations on the high seas [133,120], to avalanche rescue [33], to office discovery automation [90,60,91], to scheduling of heuristic algorithms for solvers searching an abstract solution space [104,105,119,13,116]. Within academia, the field has seen two marked boosts in activity. The first was motivated by the loss of weaponry off the coast of Spain in 1966 in what is known as the Palomares incident and of the USS Thresher and Scorpion submarines in 1963 and 1968 respectively [133,146]. A second renewed thrust took place in the late 1980s when the applications for autonomous robots became apparent. ...
... Some of the earliest documented searches divided the search region into smaller cells, and assigned probabilities to each of those cells based on a structured mix of subjective and objective information. For example, Figure 3 shows maps from the 1968 search for the USS Scorpion (Richardson and Stone 2006) and Figure 4 ...
... Composite probability map for the USS Scorpion. Used with permission from (Richardson and Stone 2006) ...
Article
US wilderness search and rescue consumes thousands of person-hours and millions of dollars annually. Timeliness is critical: the probability of success decreases substantially after 24 hours. Although over 90% of searches are quickly resolved by standard “reflex” tasks, the remainder require and reward intensive planning. Planning begins with a probability map showing where the lost person is likely to be found. The MapScore project described here provides a way to evaluate probability maps using actual historical searches. In this work we generated probability maps using the Euclidean distance tables in (Koester 2008) and using Doke's (2012) watershed model. Watershed boundaries follow high terrain and may better reflect actual barriers to travel. We also created a third model using the joint distribution of Euclidean and watershed features. On a metric where random maps score 0 and perfect maps score 1, the Euclidean distance model scored 0.78 (95% CI: 0.74–0.82, on 376 cases). The simple watershed model by itself was clearly inferior at 0.61, but the combined model was slightly better at 0.81 (95% CI: 0.77–0.84).
... Theoretical studies of search strategies can be traced back to World War II, during which the US Navy tried to most efficiently hunt for submarines and developed rationalized search procedures [Champagne et al., 2003]. Similar search algorithms have since been developed and utilized in the context of castaway rescue operations [Frost and Stone, 2001], or even for the recovery of Scorpion, a nuclear submarine lost near the Azores in 1968 [Richardson and Stone, 1971]. Another important and widely studied example of search processes at the macroscopic scale relates to animals searching for mates, food or a shelter [Charnov, 1976, O'Brien et al., 1990, Bell, 1991, Viswanathan et al., 1999, Shlesinger, 2006, Edwards et al., 2007], which we discuss in more detail in this thesis. ...
... Richardson (1967) described his successful application of search theory to find a hydrogen bomb on the ocean floor in 1966. The same principles were applied to the impressively successful search for the wreck of the submarine USS Scorpion, which was found within 260 yards of the highest probability cell in the distribution (Richardson & Stone, 1971). ...
... Among these researchers, Koopman's work left an important legacy that became the starting point for current research on search problems [2]. Post-war applications of search theory include the search operation for the U.S. nuclear submarine Scorpion, lost in 1968 [7], and the search for Soviet submarines using the CASP (Computer Assisted Search Planning) system [9]. ...
Article
It is not an easy job to find an underwater target using a sonar system in ASW operations. Many researchers have tried to solve the anti-submarine search problem, aiming to maximize the probability of detection under limited searching conditions. Classical search theory deals with the search allocation problem and the search path problem. In both problems, the main issue is to prioritize the searching cells in a searching area. The number of possible search paths, which are combinations of consecutive searching cells, grows exponentially as the number of searching cells or searchers increases. The more search paths we consider, the longer the computation takes. In this study, an effective algorithm that can maximize the probability of detection in shorter computation time is presented. We show that the presented algorithm solves the search problem more quickly than previous algorithms through a comparison of CPU computation times.
... It also requires an understanding of the probability of discovering the object within an area as a function of search time or effort applied there (since the ease and cost of detecting a lost object can potentially vary among locations). The theory has been applied in multiple real-life situations such as the search for missing aircraft and naval vessels (Richardson and Stone, 1971) and is even integrated into the United States coast-guard computer assisted search and rescue (Richardson and Discenza, 1980). Here we apply it in a somewhat unusual way to understand why some organisms are selected to give the illusion of being conspicuous, when they are cryptic at rest. ...
Article
Full-text available
Some cryptic animals have conspicuous color patches that are displayed when they move. This “flash behavior” may serve several functions, but perhaps the most widely invoked explanation is that the display makes it harder for the signaler to be found by predators once it has settled. There is now some experimental evidence that flash behavior while fleeing can enhance the survivorship of prey in the manner proposed. However, to date there has been no explicit mathematical model to help understand the way in which flash displays might interfere with the search process of predators. Here we apply Bayesian search theory to show that the higher the conspicuousness of a prey item, the sooner a predator should give up searching for it in an area where it appears to have settled, although the relationship is not always monotonically decreasing. Thus, fleeing prey that give the impression of being conspicuous will tend to survive at a higher rate than prey seen to flee in their cryptic state, since predators search for flashing prey for an inappropriately short period of time. The model is readily parameterized and makes several intuitive predictions including: (1) the more confident a predator is that a prey item has settled in a given area, the longer it will search there, (2) the more conspicuous the flash display, the greater its effect in reducing predation, (3) flash behavior will especially benefit those prey with an intermediate level of crypsis when at rest, and (4) the success of flash displays depends on the predator being uncertain of the prey’s resting appearance. We evaluate the empirical evidence for these predictions and discuss how the model might be further developed, including the incorporation of mimicry which would maintain the deception indefinitely.
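As one way to see the claimed effect (a toy illustration, not the authors' model), suppose the predator holds prior probability p0 that the prey settled in the patch and, if the prey is present, detects it at a rate that grows with conspicuousness; Bayesian updating after unsuccessful search then yields a giving-up time directly.

    # Toy Bayesian giving-up-time sketch. The predator believes with prior p0 that
    # the prey settled in the patch; while searching, detection occurs at rate lam
    # if the prey is present. Unsuccessful search lowers the posterior, and the
    # predator quits once the posterior drops below a threshold. Numbers are illustrative.
    import math

    def giving_up_time(p0, lam, threshold, dt=0.01, t_max=100.0):
        """Time at which P(prey present | no detection by t) first falls below threshold."""
        t = 0.0
        while t < t_max:
            stay = p0 * math.exp(-lam * t)              # prey present and still undetected
            posterior = stay / (stay + (1.0 - p0))
            if posterior < threshold:
                return t
            t += dt
        return t_max

    for lam in (0.5, 1.0, 2.0):                          # larger lam ~ more conspicuous prey
        print(f"detection rate {lam:.1f}: give up after t = {giving_up_time(0.7, lam, 0.2):.2f}")

With these made-up numbers, the higher the assumed detection rate (conspicuousness), the sooner the posterior collapses and the earlier the predator should give up, which matches the qualitative prediction above.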
... Stochastic searching [1] underlies a wide variety of processes in biology [2][3][4], animal foraging [5][6][7][8][9], chemical reactions [10,11], and search operations for missing people or lost items [12][13][14]. The basic goal is to minimize the time needed to successfully find a desired target. ...
Article
Full-text available
We investigate a stochastic search process in one dimension under the competing roles of mortality, redundancy, and diversity of the searchers. This picture represents a toy model for the fertilization of an oocyte by sperm. A population of $N$ independent and mortal diffusing searchers all start at $x=L$ and attempt to reach the target at $x=0$. When mortality is irrelevant, the search time scales as $\tau_D(\ln N)^{-5/4}$ for $N\gg 1$, where $\tau_D\sim L^2/D$ is the diffusive time scale. Conversely, when the mortality rate $\mu$ of the searchers is sufficiently large, the search time scales as $\sqrt{\tau_D/\mu}$. When the diffusivities of very short-lived searchers are distinct, the searchers with a non-trivial optimal diffusivity are most likely to reach the target. We also discuss the effect of chemotaxis on the search time and its fluctuations.
... The first theoretical studies on this subject date back to World War II, when the U.S. Navy was hunting submarines [56,193]. The same algorithms have since been rationalized for use in rescue contexts [92,171]: once a ship or a submarine is lost, it can be advantageous to scan the shores with an optimized strategy, in order to rescue the survivors as fast as possible. The same kind of search process applies to animals looking for food, mates or shelter, as mentioned above [17,22,90,120,130,153,218,219]. ...
Article
Full-text available
First-passage properties in general, and the mean first-passage time (MFPT) in particular, are widely used in the context of diffusion-limited processes. Real processes are not always purely Brownian: in the last few years, non-Brownian behaviors have been observed in an increasing number of systems. Especially single particle experiments in living cells provide striking examples for systems in which non-Brownian behavior of subdiffusive kind has been repeatedly observed experimentally. Here we present a method based on first-passage properties to gain more detailed insight into the actual physical processes underlying the anomalous diffusion behavior, and to probe the environment in which this diffusion process evolves. This method allows us to discriminate between three prominent models of subdiffusion: continuous time random walks, diffusion on fractals, and fractional Brownian motion. We also investigate the search efficiency of random walks on discrete networks for a specific target. We show how to compute first-passage properties on those networks in order to optimize the search process, as well as general bounds on the global mean first-passage time (GMFPT). Using those results, we estimate the impact on the search efficiency of several parameters, namely the target connectivity, the target motion, or the network topology.
... The theory of optimal search for a stationary target was first developed by Koopman (1946) and later refined by several authors such as Stone and Stanshine (1971) and Stone (1973, 1975, 1976). It has been successfully applied in both civil and military search missions; see, for instance, Richardson and Stone (1971), Richardson et al. (1980), Stone (1992), Kratzke et al. (2010), and Stone et al. (2014). While recent research mainly focuses on moving targets (e.g. ...
Preprint
Full-text available
The uniformly optimal search plan is a cornerstone of optimal search theory. It is well known that when the target distribution is circular normal and the detection function is exponential, the uniformly optimal search plan has several desirable properties. This article establishes that these properties hold for any continuous target distribution. The results provide useful information to the search team when they need to choose a non-circular-normal target distribution in a real-world search mission.
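For readers new to this literature, the classical Koopman-type allocation that underlies the exponential-detection assumption can be stated as follows (a standard textbook result, not a result specific to this preprint). Given a prior target density $p(x)$ and detection probability $1 - e^{-w\,z(x)}$ for effort density $z(x)$, the allocation maximizing the detection probability subject to $\int z(x)\,dx = K$ is

    z^{*}(x) \;=\; \max\Bigl\{0,\ \tfrac{1}{w}\,\ln\tfrac{w\,p(x)}{\lambda}\Bigr\},
    \qquad \text{with } \lambda \text{ chosen so that } \int z^{*}(x)\,dx = K.

Because the regions $\{z^{*} > 0\}$ are nested as $K$ grows, the plan is optimal at every effort level simultaneously, which is the "uniformly optimal" property referred to above.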
... Optimal search theory has been applied in many searches for missing ships, submarines, and planes (Stone, 1992; Richardson and Stone, 1971; Stone et al., 2011) and dates back to the efforts of the Anti-Submarine Warfare Operations Research Group during World War II (Koopman, 1956a, 1956b, 1957). Dobbie (1968), Stone (1975), and Benkoski et al. (1981) provide thorough surveys of early search theory results, while Washburn (2002) is a comprehensive reference on search models and searches for a moving target. ...
Article
Full-text available
We consider a search for an immobile object that can only be detected if the searcher is within a given range of the object during one of a finite number of instantaneous detection opportunities, i.e., “pings.” More specifically, motivated by naval searches for battery-powered flight data recorders of missing aircraft, we consider the trade-off between the frequency of pings for an underwater locator beacon and the duration of the search. First, assuming the search speed is known, we formulate a mathematical model to determine the pinging period that maximizes the probability that the searcher detects the beacon before it stops pinging. Next, we consider generalizations to discrete search speed distributions under a uniform beacon location distribution. Lastly, we present a case study based on the search for Malaysia Airlines Flight 370 that suggests the industry-standard beacon pinging period—roughly one second between pings—is too short.
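A toy calculation (not the model in the paper, and with made-up numbers) can show the shape of the trade-off: a longer pinging period T stretches the beacon's endurance but leaves fewer detection opportunities during the short interval when the searcher is within acoustic range.

    # Toy beacon-pinging trade-off. Illustrative assumptions: the beacon can emit a
    # fixed budget of pings; the searcher sweeps lanes at constant speed and is within
    # acoustic range of the beacon for a single pass of duration 2R/v; detection
    # requires at least one ping during that pass; and the pass occurs at a time
    # uniform over the total area-coverage time. All parameter values are invented.
    def detection_probability(T, n_pings=30_000, R=500.0, v=8.0,
                              area=1.0e10, sweep_width=1000.0):
        endurance = n_pings * T                        # beacon lifetime (s)
        coverage_time = area / (sweep_width * v)       # time to sweep the whole area (s)
        p_alive = min(1.0, endurance / coverage_time)  # pass happens before beacon dies
        in_range = 2.0 * R / v                         # time within range on the pass (s)
        p_ping = min(1.0, in_range / T)                # at least one ping during the pass
        return p_alive * p_ping

    for T in (1.0, 10.0, 60.0, 300.0, 1000.0):
        print(f"period {T:7.1f} s -> P(detect) ~ {detection_probability(T):.3f}")

Under these placeholder numbers the detection probability rises and then falls with T, which is the qualitative trade-off the paper formalizes.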
... Stochastic searching [1] underlies many biological processes [2][3][4], animal foraging [5][6][7][8][9], as well as operations to find missing persons or lost items [10][11][12]. In all these situations, a basic goal is to minimize the time and/or the cost required to find the target. ...
Article
We investigate a stochastic search process in one, two, and three dimensions in which $N$ diffusing searchers that all start at $x_0$ seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate $r$, or deterministically, with a reset time $T$. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large $N$, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for $N$ searchers, including the search time being independent of $T$ for $1/T\to 0$ and the search cost being independent of $N$ over a suitable range of $N$. Moreover, deterministic resetting typically leads to a lower search cost than stochastic resetting.
... The first practical issues relating to search for a lost target were posed by B. Koopman in the US Navy during World War II (Koopman 1946) and revolved around providing efficient methods of detecting submarines. Search theory techniques were then used by the US Navy to plan searches for objects such as the H-bomb lost in the ocean near Palomares, Spain, in 1966 and the submarine Scorpion lost in 1968 (Richardson and Stone 1971). In the same years, the theory of optimal search emerged as a branch of operations research and focused on stationary targets (Stone 1975). ...
Article
Full-text available
It is particularly challenging to devise techniques for underpinning the behaviour of autonomous vehicles in surveillance missions as these vehicles operate in uncertain and unpredictable environments where they must cope with little stability and tight deadlines in spite of their restricted resources. State-of-the-art techniques typically use probabilistic algorithms that suffer a high computational cost in complex real-world scenarios. To overcome these limitations, we propose a hybrid approach that combines the probabilistic reasoning based on the target motion model offered by Monte Carlo simulation with long-term strategic capabilities provided by automated task planning. We demonstrate our approach by focusing on one particular surveillance mission, search-and-tracking, and by using two different vehicles, a fixed-wing UAV deployed in simulation and the “Parrot AR.Drone2.0” quadcopter deployed in a physical environment. Our experimental results show that our unique way of integrating probabilistic and deterministic reasoning pays off when we tackle realistic missions.
... We abstract the problem as a global optimization problem with constraints on the number of searching ships and aircraft, and establish a three-dimensional maritime search global optimization model [19]. The model takes into account the search area, the maximum speed, the search capability of the ships and aircraft, the initial distance, the maximum endurance time and other factors, and comprehensively analyzes the relation between search coverage time and the quantity (cost) of search forces; from this we obtain an economical and feasible optimization.
Article
The issue of searching for missing aircraft has attracted renewed attention since the loss of MH370. This paper provides a global optimal model to improve the efficiency of maritime search. Firstly, the limited scope, a circle whose center is the last known position of the aircraft, is estimated based on the historical data recorded before the disappearance of the aircraft, and Bayes' theorem is applied to calculate the probability that a plane falling in the region can be found. Secondly, the drift of aircraft debris under the influence of wind and current is considered via the Finite Volume Community Ocean Model (FVCOM) and the Monte Carlo method (MC), which makes the approach more realistic. Finally, a global optimal model with vessel and aircraft quantity constraints is established, which fully considers factors including the area of the sea region to be searched, the maximum speed, search capabilities and the initial distance of the vessels, by introducing 0-1 decision variables.
... A search problem is specified by a prior probability distribution on target location, a function relating search effort and detection probability, and a constrained amount of search effort. The search for the Scorpion (Richardson and Stone, 1971) and for the H-bomb lost off the coast of Spain in 1966 (Richardson, 1967) are examples of searches for stationary targets. Other examples include searches for downed aircraft, hidden natural resources (gas, oil, minerals, etc.), searches for archeological sites and artifacts, and even searches for something as mundane as lost car keys. ...
... Bayesian search is based on the assumption that the search space can be divided into finite cells/graphs and that each cell represents individual probability of detection [10]. The goal is to determine the optimal path to find the target(s), according to the probability distribution function (PDF) [11] [12]. Although each model relies on different assumptions and uses different objectives, viewed from a computational standpoint, they are all NP-hard problems [8] [9] [10]. ...
Article
Full-text available
Search is an essential technology for rescue and other mobile robot applications. Many robotic search and rescue systems rely on teleoperation. One of the key problems in search tasks is how to cover the search space efficiently. Search is also central to humans’ daily activities. This paper analyzes and models human search behavior using data from actual teleoperation experiments. The analysis of the experimental data uses a novel technique to decompose search data, based on structure learning and K-means clustering. The analysis explores three hypotheses: (1) humans are able to solve a complex search task by breaking it up into smaller tasks, (2) humans consider both coverage and motion cost, and (3) robots can outperform humans in search problems. The enhanced understanding of human search strategies can then be applied to the design of human-robot interfaces and search algorithms. The paper describes a technique for augmenting human search. Since the objective functions in search problems are submodular, greedy algorithms can generate near-optimal subgoals. These subgoals then can be used to guide humans in searching. Experiments showed that the humans’ search performance is improved with the subgoals’ assistance.
... Metron brought to the table a wealth of experience in search planning operations [4] [5]. Most recently, this expertise had been formalized in the search planning and assessment algorithms contained in the Search and Rescue Optimal Planning System (SAROPS), a computer-based tool used by the U.S. Coast Guard (USCG) to plan searches for persons and vessels missing at sea [6]. ...
Conference Paper
Steve Fossett, a famous adventurer, disappeared on 3 September 2007 in a remote area of Nevada during a solo pleasure flight in a small aircraft. An intense search conducted by both private parties and multiple government agencies was eventually suspended with negative results. Fossett's wreck was discovered in California's eastern Sierra Nevada mountains by a hiker almost one year later. This paper describes an independent effort to direct search efforts using probabilistic search theory and optimization techniques. Using field data from various sources, several scenarios were constructed to establish a prior distribution of the crash site. Search efforts were assessed to generate a posterior of the crash location for future search efforts.
... The assumptions of Bayesian search are that the search area can be divided into finite cells/graphs and that each cell represents individual probability of detection (PD) (See Fig. 1c). The goal is to determine the optimal path to find the lost or moving target(s) according to the probability distribution function (PDF) (Richardson and Stone 1971;Stone et al. 2014). The three steps of Bayesian search for a target are as follows: The first is to compute prior PDF according to motion information (e.g. ...
Article
Full-text available
The goal of search is to maximize the probability of target detection while covering most of the environment in minimum time. Existing approaches only consider one of these objectives at a time and most optimal search problems are NP-hard. In this research, a novel approach for search problems is proposed that considers three objectives: (1) coverage using the fewest sensors; (2) probabilistic search with the maximal probability of detection rate (PDR); and (3) minimum-time trajectory planning. Since two of three objective functions are submodular, the search problem is reformulated to take advantage of this property. The proposed sparse cognitive-based adaptive optimization and PDR algorithms are within ((Formula presented.)) of the optimum with high probability. Experiments show that the proposed approach is able to search for targets faster than the existing approaches.
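As a generic illustration of the submodularity argument (not the authors' implementation), greedy selection of sensing subgoals for a coverage objective carries the classical (1 - 1/e) guarantee; the grid, candidate set, and sensing radius below are arbitrary.

    # Greedy subgoal selection for a submodular coverage objective: repeatedly pick
    # the candidate sensing location that covers the most not-yet-covered cells.
    # Coverage is monotone submodular, so greedy is within (1 - 1/e) of optimal.
    import itertools

    def covered(cell, site, radius=1):
        return abs(cell[0] - site[0]) <= radius and abs(cell[1] - site[1]) <= radius

    grid = set(itertools.product(range(6), range(6)))         # cells to cover
    candidates = set(itertools.product(range(6), range(6)))   # possible subgoal locations

    def greedy_subgoals(k):
        chosen, covered_cells = [], set()
        for _ in range(k):
            best = max(candidates - set(chosen),
                       key=lambda s: len({c for c in grid - covered_cells if covered(c, s)}))
            gain = {c for c in grid - covered_cells if covered(c, best)}
            if not gain:
                break
            chosen.append(best)
            covered_cells |= gain
        return chosen, len(covered_cells)

    subgoals, n = greedy_subgoals(k=4)
    print("subgoals:", subgoals, "cells covered:", n, "of", len(grid))

Subgoals chosen this way can then be handed to a human operator or a trajectory planner, which is the use made of them in the paper above.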
Chapter
This chapter discusses the state of the art of Minimum Time Search (MTS) problem, analysing with greater detail several works that have motivated this thesis. The chapter is divided into two sections. The first one discusses, from a general point of view, related probabilistic search problems such as coverage or the Travelling Salesman Problem (TSP), stressing their common characteristics and differences with MTS. The second section analyzes in more detail the state of the art of Probabilistic Search (PS), which aims to find the best Unmanned Vehicles (UV) search trajectories in uncertain environments and which encompasses the MTS problem.
Article
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches.
Conference Paper
Spatiotemporal reasoning is a basic form of human cognition for problem solving. To utilize this potential in the steadily increasing number of mobile and Web applications, significant amounts of spatiotemporal data need to be available. This paper advocates user-generated content and crowdsourcing techniques as a means to create spatiotemporal datasets that are rich in both quantity and quality.
Article
Full-text available
The Geosynchronous Satellite Launch Vehicle (GSLV) launched from SHAR, Sriharikota on 10 July 2006 fell into the Bay of Bengal due to failure of the first-stage L-40 engine. A search to recover the strap-on engine(s) for failure analysis was undertaken. Sagar Kanya, Sagar Purvi and Sagar Paschimi, vessels of the Ministry of Earth Sciences, Government of India, and Akademik Boris Petrov, a hired research vessel, were deployed in the search and recovery operations. The methods adopted for the search were swath bathymetric surveys, side scan sonar surveys, underwater videography and search by remotely operated vehicles. Divers guided by the survey results recovered three engines and parts of a fourth engine within 100 days of the mishap from 10 to 30 m depth in the Bay of Bengal.
Article
This paper provides an overview of the Computer-Assisted Search Planning (CASP) system developed for the United States Coast Guard. The CASP information processing methodology is based upon Monte Carlo simulation to obtain an initial probability distribution for target location and to update this distribution to account for drift due to currents and winds. A multiple scenario approach is employed to generate the initial probability distribution. Bayesian updating is used to reflect negative information obtained from unsuccessful search. The principal output of the CASP system is a sequence of probability “maps” which display the current target location probability distributions throughout the time period of interest. CASP also provides guidance for allocating search effort based upon optimal search theory.
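A minimal sketch of the multiple-scenario Monte Carlo idea and the negative-information update described above follows; the scenarios, weights, grid, and detection probability are invented for illustration and are not CASP parameters.

    # Multiple-scenario Monte Carlo prior and a negative-information update,
    # in the spirit of CASP-style processing. All scenarios and weights are made up.
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 100_000

    # Each scenario: (credibility weight, mean position, std dev), all illustrative.
    scenarios = [(0.5, ( 0.0,  0.0), 5.0),
                 (0.3, (12.0,  4.0), 3.0),
                 (0.2, (-8.0, 10.0), 8.0)]

    samples = []
    for w, mu, sd in scenarios:
        n = int(round(w * n_particles))
        samples.append(rng.normal(loc=mu, scale=sd, size=(n, 2)))
    samples = np.vstack(samples)

    # Histogram onto a grid to obtain the prior probability map.
    edges = np.linspace(-30, 30, 31)
    prior, _, _ = np.histogram2d(samples[:, 0], samples[:, 1], bins=[edges, edges])
    prior /= prior.sum()

    # Negative information: a block of cells was searched with detection probability 0.6.
    pod = np.zeros_like(prior)
    pod[12:18, 12:18] = 0.6
    posterior = prior * (1.0 - pod)
    posterior /= posterior.sum()

    print("prior mass in searched block:    ", prior[12:18, 12:18].sum().round(3))
    print("posterior mass in searched block:", posterior[12:18, 12:18].sum().round(3))

Drift due to currents and winds would be applied to the particles before histogramming; the Bayesian step for unsuccessful search is the same down-weighting by miss probability shown here.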
Article
This article provides a survey of published works in search theory.
Article
NRL's Deep Sea Floor Era began in 1963 when, in response to the loss of the nuclear submarine THRESHER, a small team began developing the instruments and methods to search the deep ocean floor. After photographing THRESHER in 1964, the team helped locate and recover an H-bomb in 1966. In 1968 the team located the sunken submarine SCORPION, in 1969 they found and helped recover the submersible ALVIN, and in 1970 they found the French submarine EURYDICE. Five times between 1970 and 1974 they photographed the nerve-gas-laden LE BARON RUSSELL BRIGGS and collected water samples above her deck at a depth of three miles. The converted cargo ship MIZAR (T-AGOR-11) was the key surface platform in these missions. Her NRL-designed center well permitted towing a suite of instruments from the center of pitch and roll. Three hull-mounted hydrophones, a transponder on the sea floor and a responder on the instrument suite were the elements of an underwater tracking system. Cameras, strobe lights, a magnetometer and side-looking sonar were the prime search instruments. A multiple water sampler and a biological trawl were added for environmental missions. A development of the search team called LIBEC, for Light Behind Camera, expanded the areal coverage of conventional underwater cameras by sixty times. The Era essentially ended when sponsorship for MIZAR was transferred to the Naval Sea Systems Command in early 1975.
Article
Since World War II, the principles of search theory have been applied successfully in numerous important operations. These include the 1966 search for a lost H-bomb in the Mediterranean near Palomares, Spain, the 1968 search for the lost nuclear submarine Scorpion near the Azores, and the 1974 underwater search for unexploded ordnance during clearance of the Suez Canal. The U.S. Coast Guard employs search theory in its open ocean search and rescue planning. Search theory is also used in astronomy, and in radar search for satellites. Numerous additional applications, including those to industry, medicine, and mineral exploration, are discussed in the proceedings of the 1979 NATO Advanced Research Institute on Search Theory and Applications. Applications to biology and to machine maintenance and inspection are described in the literature. Further references to the literature are provided in the first section of this document. This is followed in the second section by an illustration of how search theory can be used to solve an optimal search problem.
Article
Contents: Detection theory and detection models; Decision criteria; Two detection models; Detection model applications; Search theory and search models; The probability of detection during a search; Experimental validation of detection models; Sweep width determination for a random track angle; A parallel sweep search model; The optimal allocation of search effort problem; Target position probability distributions which change in time.
Article
Purpose: The purpose is to develop search and detection strategies that maximize the probability of detection of mine-like objects. Design/methodology/approach: The author has developed a methodology that incorporates variational calculus, number theory and algebra to derive a globally optimal strategy that maximizes the expected probability of detection. Findings: The author found a set of look angles that globally maximize the probability of detection for a general class of mirror-symmetric targets. Research limitations/implications: The optimal strategies only maximize the probability of detection and not the probability of identification. Practical implications: In the context of a search and detection operation, there is only a limited time to find the target before life is lost; hence, improving the chance of detection will in real terms translate into the difference between success or failure, life or death. This rich field of study can be applied to mine countermeasure operations to make sure that the areas of operations are free of mines so that naval operations can be conducted safely. Originality/value: There are two novel elements in this paper. First, the author determines the set of globally optimal look angles that maximize the probability of detection. Second, the author introduces the phenomenon of concordance between sensor images.
Article
Full-text available
For the first time in history, a scientifically sound yet practical method for objectively determining detection probabilities for objects of importance to search and rescue (SAR) in the land environment was successfully developed and field-tested. Data were collected using volunteer searchers and analyzed with simplified analysis techniques, all at very low cost. This work opens the door for resolving search planning and evaluation issues that have been vigorously debated within the land SAR community for nearly 30 years but never settled. Searching is by its very nature a probabilistic process. Planning a search consists of evaluating all the available information and then, since it is not generally possible to do a thorough search everywhere all at once, deciding how to best utilize the available, and often limited, search resources. A carefully planned search using the right tools and concepts is significantly more likely to succeed and, of equal importance when lives are at stake, succeed sooner. The simplest metric for quantifying "detectability" is a value called the "effective sweep (or search) width" (ESW). This concept reduces the combined effects of all the factors affecting detection (sensor, environment, search object) in a given search situation to a single number characterizing search object "detectability" for that situation. Effective sweep width can be considered a "detectability index" that takes everything into consideration. An experimental methodology to determine effective sweep width had already been piloted and discussed in A Method for Determining Effective Sweep Widths for Land Searches: Procedures for Conducting Detection Experiments. That report made several suggestions for enhancements and noted several difficulties that occurred during the pilot experiment.
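For readers outside the land SAR community, effective sweep width has a standard definition from search theory (not specific to this report): it is the area under the sensor's lateral range curve,

    W \;=\; \int_{-\infty}^{\infty} p(x)\,dx,

where $p(x)$ is the probability of detecting the search object on a single pass at lateral (closest-approach) distance $x$. An idealized definite-range sensor that detects everything within distance $R$ of the track and nothing beyond it has $W = 2R$, and the coverage achieved by parallel sweeps at track spacing $S$ is commonly summarized by the ratio $W/S$.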
Chapter
This paper provides an overview of computer assisted search (CAS) information processing. The objective is to give the reader an idea of how computers can be used in real time search planning to combine subjective assumptions with search results to provide a better answer to the question “Where is the target?”
Article
This paper concerns the approximation of optimal allocations by δ allocations. δ allocations are obtained by fixing an increment δ of effort and deciding at each step upon a single cell in which to allocate the entire increment. It is shown that δ allocations may be used as a simple method of approximating optimal allocations of effort resulting from constrained separable optimization problems involving a finite number of cells. The results are applied to find δ allocations (called δ plans) which approximate optimal search plans. δ plans have the property that as δ → 0, the mean time to find the target using a δ plan approaches the mean time when using the optimal plan. δ plans have the advantage that they are easily computed and more easily realized in practice than optimal plans, which tend to be difficult to calculate and to call for spreading impractically small amounts of effort over large areas.
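A minimal sketch of the δ-allocation idea follows, assuming an exponential detection function (an assumption of this sketch, not stated in the abstract): each increment δ goes to the single cell with the largest marginal gain in detection probability.

    # Greedy delta-allocation sketch under an exponential detection function:
    # P(detect | target in cell i, effort z_i) = 1 - exp(-alpha_i * z_i).
    # At each step, place the entire increment delta in the cell with the largest
    # marginal increase in overall detection probability. Values are illustrative.
    import numpy as np

    prior = np.array([0.35, 0.30, 0.20, 0.10, 0.05])   # target-location probabilities
    alpha = np.array([1.00, 0.80, 1.20, 0.60, 1.50])   # per-cell detectability
    delta, total_effort = 0.1, 5.0

    effort = np.zeros_like(prior)
    for _ in range(int(total_effort / delta)):
        # Marginal gain of adding delta to cell i, given effort already placed there.
        gain = prior * np.exp(-alpha * effort) * (1.0 - np.exp(-alpha * delta))
        i = int(np.argmax(gain))
        effort[i] += delta

    p_detect = float(np.sum(prior * (1.0 - np.exp(-alpha * effort))))
    print("effort by cell:", effort.round(1), " P(detect) =", round(p_detect, 3))

As δ shrinks, this incremental plan approaches the optimal allocation, which is the convergence property the paper establishes.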
Article
This paper describes a Bayesian method for conducting a rescue operation to search for a stationary or moving target. Using Monte Carlo simulation, the method is compared to a naïve search method that does not take into account the information obtained during the search effort. Since rescue operations often involve considerable expense, the Bayesian approach can significantly reduce the costs of the operation.
Article
A model for determining an optimal search pattern for a wind-driven ship lost at an uncertain position.
Conference Paper
This work presents a method for deriving efficient search strategies for an autonomous unmanned aerial system (UAS). First the method is used to optimize the flight path for an airplane with a simple downward-pointing fixed camera. The aircraft performs banking maneuvers in order to expand the search area. In that case, the optimizer solves for the best airplane speed, bank angle, maximum heading change, and average flight path separation subject to the constraints of the airplane, the camera, and the relative size of the target on the ground. Next, the method is used to maximize the amount of ground area viewed by a pan-tilt-zoom camera mounted to an airplane, subject to constraints of the camera, airplane performance, and the relative size of the target on the ground. An optimizer solves for the best camera angles, zoom setting, and the flight path separation between parallel searches. The input parameters used in the simulation results of this paper are specific to a low-cost / low-altitude UAS, but the method can be generalized for application with different aircraft and camera systems. As expected, the gimbaled camera configuration is capable of searching more area than the fixed downward-pointing camera configuration; however, the fixed camera is more competitive than expected. Finally, a flight test with the fixed camera is flown to compare with the simulation results.
Article
Full-text available
This paper provides a brief history of some operational particle filters that were used by the U.S. Coast Guard and U.S. Navy. Starting in 1974 the Coast Guard system provided search and rescue planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972. The systems operated in a sequential, Bayesian manner. A prior distribution for the target's location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target's location. This distribution was used as the prior for the next iteration of planning and search.
Article
Versions of the Neyman–Pearson lemma are given (Theorems 1 and 2) which provide sufficiency criteria for constrained extrema of nonlinear functionals with continuous or discrete variation. The use of ratio contours is described as a technique for producing a trial solution to be tested against Theorem 1 or 2. Examples of applications include improvements over previously published methods of solutions of certain problems. Relationship to previous versions of the lemma and to the method of Lagrange multipliers is discussed.
Article
This paper considers the problem of optimal search for a stationary target when the detection capability of the search sensor, as characterized by its sweep width, is fixed but not known in advance. Most of the results are confined to the assumption of a gamma prior sweep-width distribution and to the cases of normal and uniform prior target-location distributions. Explicit formulas are obtained for the optimal search plans, the probability of detection versus time, the expected time to detection, and the posterior marginal distributions for target location and sweep width. In the case of a normal prior target-location distribution, expected time to detection is compared for the optimal plan and search plans based on the assumption of a known value for sweep width; this includes a sensitivity analysis treating actual sweep width as a parameter. For a target of value, the search plan that maximizes expected net return is determined. This plan requires specification of a rule for terminating an unsuccessful...
Article
This paper finds the optimal search plan (in the sense of minimizing mean time to find the target) for a class of searches for a stationary target. In this class the target must be contacted by one sensor and identified by another. Complicating the search is the possibility of false targets which must be identified to be distinguished from the target. Search plans are described by a search effort density function and a policy for investigating contacts. The existence of false targets is determined by a known density function for the mean number of false targets in a region. Under the condition that contact investigation, once begun, must not be interrupted until the contact is identified, it is shown that the optimal plan, in a specified class, is to allocate search effort according to a Neyman–Pearson type of allocation and to investigate contacts immediately. It is also shown that if at any time an optimal search is stopped and it is decided to replan the search, the optimal plan is to continue the original plan. This last result makes use of the posterior target location distribution when there are unidentified contacts. This distribution is also found in this paper. Examples of optimal search plans are presented as well as an example to show that, in general, it is not possible to maximize the probability of finding the target at each instant of time during a search.
Optimal Search With Poisson Distributed False Targets
  • L D Stone
  • J A Stanshine
  • C A Persinger

Nonlinear Functional Versions of the Neyman-Pearson Lemma
  • D H Wagner