Benjamin Grocholsky’s research while affiliated with Carnegie Mellon University and other places


Publications (10)


Perception for Safe Autonomous Helicopter Flight and Landing
  • Conference Paper

May 2016 · 1 Read · 4 Citations · Adam Stambler · [...]

Rotorcraft operating in unprepared locations must be capable of safe flight in the presence of unmapped obstacles and in the absence of GPS, and must be able to quickly and reliably assess a potential landing zone's suitability for landing. This paper presents technologies that address these needs and the results of their application to an actual unmanned helicopter prototype.


Figure 1. Comparison of visible light (left) and LWIR (right) camera images of a small barge on Lake Erie seen from 1.2 nautical miles away; the cameras are mounted on a manned helicopter flying toward the barge.  
Figure 2. Pose estimation from deck pattern detection. The pose estimate is used to overlay a green rendering of the white pattern painted on the ¼-scale deck. The patterns align almost exactly. At close range (below 50 meters), the ship deck will fall out of the narrow field of view of the cameras used to detect the landing platform at long distances. Other sensors, though, will come into range at about 200 meters from the deck and remain effective until touchdown. Lidar will be able to provide wide field-of-view range and bearing measurements to reflective deck markers with centimeter-level accuracy. A sample lidar output is shown in Figure 3. Most lidars operate in the near-IR band, which gives them better performance than visible light sensors in DVE conditions.  
Figure 3. Example of scanning lidar data. The right image shows the data overlaid with the ship deck ground truth model.
Robust Autonomous Ship Deck Landing for Rotorcraft
  • Conference Paper
  • Full-text available

May 2016 · 1,371 Reads · 6 Citations

Landing rotorcraft on a ship deck is a difficult and dangerous task. The US Navy is interested in expanding landing capabilities in degraded visual environments, with impaired or no GPS signal, and in autonomous operations, while at the same time reducing the cost of guidance infrastructure on the ship deck. This paper describes how a suite of multi-modal sensors can provide a relative pose estimate from an aircraft to a ship deck in a wide range of conditions. The sensor suite enables robust performance while requiring minimal deck-side infrastructure. We describe a three-phase trajectory planner that allows for safe, autonomous landing on a ship deck based on the relative pose estimate from the sensor suite and knowledge of the aircraft dynamics. As the aircraft approaches the ship, the trajectory planner uses a ship deck motion model to time the landing for minimal touchdown impact.
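
As a rough illustration of the deck-motion timing idea in the last sentence, the sketch below assumes a sum-of-sinusoids heave model fitted to recent deck measurements and uses hypothetical function names; the paper does not specify its model form, so this is a toy, not the authors' planner. It simply picks the touchdown instant where the predicted deck vertical velocity is smallest in magnitude.

```python
import numpy as np

def predict_deck_heave(t, amplitudes, freqs, phases):
    """Predicted vertical deck displacement as a sum of fitted sinusoids (assumed model form)."""
    return sum(a * np.sin(2 * np.pi * f * t + p) for a, f, p in zip(amplitudes, freqs, phases))

def choose_touchdown_time(t_now, horizon, amplitudes, freqs, phases, dt=0.05):
    """Pick the time in [t_now, t_now + horizon] where the predicted deck vertical velocity
    is smallest in magnitude, i.e. near a quiescent instant of the heave cycle."""
    ts = np.arange(t_now, t_now + horizon, dt)
    heave = np.array([predict_deck_heave(t, amplitudes, freqs, phases) for t in ts])
    vel = np.gradient(heave, dt)                      # numerical deck vertical velocity
    return ts[np.argmin(np.abs(vel))]

# Usage: a deck oscillating at 0.10 Hz and 0.17 Hz; land within the next 10 s.
t_star = choose_touchdown_time(0.0, 10.0, [0.5, 0.2], [0.10, 0.17], [0.0, 1.2])
print(f"Commanded touchdown at t = {t_star:.2f} s")
```

A real system would refit the heave model continuously from deck-tracking measurements and couple the chosen instant to the aircraft's descent profile.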


Figure 1. Schematic of the SALRS Virtual Testbed architecture. The SALRS Phase 1 project, described in this paper, is developing and assessing sensor models (red rectangle) that will be used in subsequent phases for the development of ship-relative navigation algorithms.  
Figure 2. The three phases of the SALRS Phase 1 project.  
Table 2. Environmental conditions recorded with every dataset collected in SALRS Phase 1 and the specific device/sensor.
Figure 12. SALRS Phase 1 modeling and validation process. The inputs are the environmental measurements, sensor characteristics, and ground truth scene observed by the sensors (green boxes). The outputs are the simulated scenes and sensor data (orange box) that are compared to true data (purple box) for model validation.
Figure 13. Block diagram showing the Irma/Modtran interaction that produces synthetic images. Modtran blocks are colored blue, Irma blocks orange.
Sensor modeling for precision ship-relative navigation in degraded visual environment conditions

May 2015 · 399 Reads · 2 Citations

Proceedings of SPIE - The International Society for Optical Engineering

The Navy and Marine Corps will increasingly need to operate unmanned air vehicles from ships at sea. Fused multi-sensor systems are desirable to ensure these operations are highly reliable under the most demanding at-sea conditions, particularly in degraded visual environments. The US Navy Sea-Based Automated Launch & Recovery System (SALRS) program aims to enable automated/semi-automated launch and recovery of sea-based, manned and unmanned, fixed- and rotary-wing naval aircraft, and to utilize automated or pilot-augmented flight mechanics for carefree shipboard operations. This paper describes the goals and current results of SALRS Phase 1, which aims to understand the capabilities and limitations of various sensor types through sensor characterization, modeling, and simulation, and to assess how the sensor models can be used for aircraft navigation to provide sufficient accuracy, integrity, continuity, and availability across all anticipated maritime conditions.



Figure 11. Unmodified Collins tracker with a multimodal distribution. In 30-60 frames the tracker has abandoned the top color.
Air-Ground Collaborative Surveillance with Human-Portable Hardware

January 2011 · 127 Reads · 7 Citations

Coordination of unmanned aerial and ground vehicles (UAVs and UGVs) is immensely useful in a variety of surveillance and rescue applications, as the vehicles’ complementary strengths provide operating teams with enhanced mission capabilities. While many of today’s systems require independent control stations, necessitating arduous manual coordination between multiple operators, this paper presents a multi-robot collaboration system, jointly developed by iRobot Corporation and Carnegie Mellon University, which features a unified interface for controlling multiple unmanned vehicles. Semi-autonomous subtasks can be directly executed through this interface, including: single-click automatic visual target tracking, waypoint sequences, area search, and geo-location of tracked points of interest. Demonstrations of these capabilities on widely-deployed commercial unmanned vehicles are presented, including the use of UAVs as a communication relay for multi-kilometer, non-line-of-sight operation of UGVs.
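
The geo-location subtask mentioned above can be illustrated with a minimal flat-ground sketch; the frame conventions and function name below are hypothetical and not from the paper. The idea is to take the camera ray toward the tracked pixel, already rotated into a local world frame using the UAV's pose, and intersect it with the ground plane.

```python
import numpy as np

def geolocate_flat_ground(uav_pos_enu, ray_dir_enu):
    """Intersect a camera ray with a flat ground plane (z = 0) to geolocate a tracked point.
    uav_pos_enu: UAV position [east, north, up] in meters.
    ray_dir_enu: unit vector from the camera toward the target, expressed in the same
                 ENU frame (assumed already rotated out of the camera/gimbal frame)."""
    if ray_dir_enu[2] >= 0:
        raise ValueError("Ray does not point toward the ground")
    s = -uav_pos_enu[2] / ray_dir_enu[2]      # scale along the ray to reach z = 0
    return uav_pos_enu + s * ray_dir_enu       # [east, north, 0] of the target

# Usage: UAV at 120 m altitude looking 45 degrees down, due north.
target = geolocate_flat_ground(np.array([0.0, 0.0, 120.0]),
                               np.array([0.0, np.cos(np.pi / 4), -np.sin(np.pi / 4)]))
print(target)   # approximately [0, 120, 0]
```

In practice the accuracy of such an estimate is dominated by attitude and gimbal angle errors, which is why filtering over many frames (as in the tracking work below) matters.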


Comprehensive Automation for Specialty Crops: Year 1 results and lessons learned

October 2010 · 7,241 Reads · 43 Citations

Intelligent Service Robotics

Comprehensive Automation for Specialty Crops is a project addressing the needs of the specialty crops sector, with a focus on apples and nursery trees. The project’s main thrusts are the integration of robotics technology and plant science; understanding and overcoming socio-economic barriers to technology adoption; and making the results available to growers and stakeholders through a nationwide outreach program. In this article, we present the results obtained and lessons learned in the first year of the project, starting with a reconfigurable mobility infrastructure for autonomous farm driving. We then present sensor systems developed to enable three real-world agricultural applications—insect monitoring, crop load scouting, and caliper measurement—and discuss how they can be deployed autonomously to yield increased production efficiency and reduced labor costs.
Keywords: Specialty crops · Reconfigurable mobility · Crop intelligence · Insect monitoring · Crop load estimation · Caliper measurement


Fig. 1. Each row shows temporal progress of the estimate, using different motion models. (Row 1) Cartesian Motion Model, (Row 2) Polar Motion Model, and (Row 3) Hybrid Motion Model with an EKF. In the figure, the green lines are the estimated path of the robot, the blue lines are the true path of the robot, the red ellipses represent the uncertainty of the estimate, and the black dots are the particles within the particle filter. Each column presents snapshots of the filter at various times. (Col 1) shows the uncertainty ellipse of the robot’s position at times t = 150, 300, 400. (Col 2) corresponds to times t = 600, 900, 1600. (Col 3) shows the same timesteps as in (Col 2); however, the robot’s initial position was [x, y] = [0, 200]. It can be observed that while the Cartesian model (Row 1) does a reasonable job predicting the mean of the distribution, it fails to accurately capture the nonlinearities in the uncertainty distributions. Additionally, when the robot is initialized to the location [0, 200], the polar motion model is unable to correctly represent the uncertainty in the motion. This limitation of the polar motion model is due to its inability to move the origin of its coordinate frame. 
Modeling Mobile Robot Motion with Polar Representations

November 2009 · 74 Reads · 7 Citations

This article compares several parameterizations and motion models for improving the estimation of the nonlinear uncertainty distribution produced by robot motion. In previous work, we have shown that the use of a modified polar parameterization provides a way to represent nonlinear measurement distributions in the Cartesian space as linear distributions in polar space. Following the same reasoning, we present a motion model extension that utilizes the same polar parameterization to achieve improved modeling of mobile robot motion in between measurements, gaining robustness with no additional overhead. We present both simulated and experimental results to validate the effectiveness of our approach.
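
A toy numerical example of why a polar parameterization helps (a generic illustration, not the paper's modified polar parameterization): a range-only observation forms an annulus in Cartesian coordinates, which a single Gaussian describes poorly, while in (range, bearing) coordinates the same distribution is Gaussian in range and independent of bearing.

```python
import numpy as np

# A range measurement r ~ N(10 m, 0.2 m) with unknown bearing is an annulus in x-y,
# but in polar coordinates it is simply Gaussian in r and uniform in theta.
rng = np.random.default_rng(0)
r = rng.normal(10.0, 0.2, 5000)              # range samples
theta = rng.uniform(-np.pi, np.pi, 5000)     # unknown bearing

xy = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# Cartesian: the sample mean sits near the origin, far from any sample, and the
# spread is large, so a single Gaussian badly misrepresents the annulus.
print("Cartesian mean:", xy.mean(axis=0))
print("Cartesian std: ", xy.std(axis=0))

# Polar: the same distribution is compact and linear-Gaussian in the range coordinate.
print("Polar range mean/std:", r.mean(), r.std())
```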


Figure 10. Digital data fusion architecture currently under development.
Integrated Long-range UAV/UGV Collaborative Target Tracking

May 2009 · 1,257 Reads · 15 Citations

Proceedings of SPIE - The International Society for Optical Engineering

Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line-of-sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated, and then deployed on real tactical platforms, an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, integrating AeroVironment's Digital Data Link onto both air and ground platforms has extended the communications range at which the PackBot can be operated and increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
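
As a hedged sketch of what fusing two platforms' track estimates can look like, the snippet below uses covariance intersection, a common combination rule in decentralized data fusion when cross-correlations between platforms are unknown. The paper does not state which rule its Decentralized Data Fusion implementation uses, so the function and values here are illustrative only.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=0.5):
    """Fuse two track estimates with unknown cross-correlation.
    omega in [0, 1] weights the two sources; in practice it would be chosen to
    minimize, e.g., the trace of the fused covariance."""
    P_a_inv, P_b_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P_f = np.linalg.inv(omega * P_a_inv + (1 - omega) * P_b_inv)
    x_f = P_f @ (omega * P_a_inv @ x_a + (1 - omega) * P_b_inv @ x_b)
    return x_f, P_f

# Usage: fuse a UGV track (good east, poor north) with a UAV track (the reverse).
x_ugv, P_ugv = np.array([10.0, 20.0]), np.diag([1.0, 25.0])
x_uav, P_uav = np.array([11.0, 19.0]), np.diag([25.0, 1.0])
x_f, P_f = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
print(x_f, np.diag(P_f))
```

Covariance intersection never claims more confidence than either source alone justifies, which is why it is popular when tracks may already share information over the network.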


Fig. 2. Each row presents a different example network. The first two are simulation examples in an obstacle-free environment (with 7 nodes and 50 nodes) and the last is the real-world example in a segment of an office building (with 14 nodes). The map of the environment for the real-world example is overlaid on top of the true node locations and path. Left column: the true locations of the nodes (∗), all the inter-node measurements received (dashed black lines) and the true path the robot took (dotted green line). Right column: the error lines for the polar parameterization of the EKF, plotted along with the estimated path of the robot (solid gray line). The red crosses (×) mark the estimated locations of the nodes, and the error lines (red) connect the estimates to the true locations of the nodes (black dots). 
Fig. 4. Mean position error of all the nodes in the real-world experiment for both the centralized (dashed red line) and decentralized (solid blue line) implementations. In the decentralized approach, the estimates of the isolated nodes drift at the start in the absence of sufficient measurements, increasing the mean error.
Decentralized mapping of robot-aided sensor networks

June 2008 · 71 Reads · 21 Citations

Proceedings - IEEE International Conference on Robotics and Automation

A key problem in the deployment of sensor networks is that of determining the location of each sensor such that subsequently gathered data can be registered. We would also like the network to provide localization for mobile entities, allowing them to navigate and explore the environment. In this paper, we present a robust decentralized algorithm for mapping the nodes in a sparsely connected sensor network using range-only measurements and odometry from a mobile robot. Our approach utilizes an extended Kalman filter (EKF) in polar space, allowing us to model the nonlinearities within the range-only measurements using Gaussian distributions. We also extend this unimodal centralized EKF to a multi-modal decentralized framework, enabling us to accurately model the ambiguities in range-based position estimation. Each node within the network estimates its position along with its neighbors' positions and uses a message-passing algorithm to propagate its belief to its neighbors. Thus, the global network localization problem is solved in pieces, with each node independently estimating its local network, greatly reducing the computation done by each node. We demonstrate the effectiveness of our approach using simulated and real-world experiments with little to no prior information about the node locations.
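
For contrast with the polar-space formulation described above, here is a minimal Cartesian-linearized EKF update for a single range-only measurement; the function name and noise values are hypothetical and this is not the paper's algorithm. The strong nonlinearity of this measurement at short range is exactly what the polar parameterization is designed to handle.

```python
import numpy as np

def ekf_range_update(x, P, node_xy, z_range, sigma_r=0.3):
    """One EKF update of a 2D robot position estimate x with covariance P, given a
    range-only measurement z_range to a node at known node_xy. Linearizes
    h(x) = ||x - node|| about the current estimate (standard EKF step)."""
    diff = x - node_xy
    r_hat = np.linalg.norm(diff)
    H = (diff / r_hat).reshape(1, 2)          # Jacobian of the range w.r.t. x
    S = H @ P @ H.T + sigma_r**2              # innovation covariance (1x1)
    K = P @ H.T / S                           # Kalman gain (2x1)
    x_new = x + (K * (z_range - r_hat)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Usage: robot believes it is at (5, 5) with 2 m standard deviation per axis;
# a node at the origin reports a 7.5 m range.
x, P = ekf_range_update(np.array([5.0, 5.0]), 4.0 * np.eye(2),
                        np.array([0.0, 0.0]), 7.5)
print(x, np.diag(P))
```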


UAV-UGV collaboration with a PackBot UGV and Raven SUAV for pursuit and tracking of a dynamic target

April 2008 · 145 Reads · 15 Citations

Proceedings of SPIE - The International Society for Optical Engineering

Fielded military unmanned systems are currently extending the reach of U.S. forces in surveillance and reconnaissance missions. Providing long-range eyes on enemy operations, unmanned aerial vehicles (UAVs), such as the AeroVironment Raven, have proven themselves indispensable without risking soldiers' lives. Meanwhile, unmanned ground vehicles (UGVs), such as the iRobot PackBot, are quickly joining ranks in Explosive Ordnance Disposal (EOD) missions to identify and dispose of ordnance or to clear roads and buildings. UAV-UGV collaboration and the benefit of force multiplication are increasingly tangible. iRobot Corporation and the CMU Robotics Institute are developing the capability to simultaneously control the Raven small UAV (SUAV) and PackBot UGV from a single operator control unit (OCU) via waypoint navigation. Techniques to support autonomous collaboration for pursuing and tracking a dismounted soldier will be developed and integrated on a Raven-PackBot team. The Raven will survey an area and geolocate an operator-selected target. The Raven will share this target location with the PackBot, and together they will collaboratively pursue the target to maintain track on it. We will accomplish this goal by implementing a decentralized control and data fusion software architecture. The PackBot will be equipped with on-board waypoint navigation algorithms and a Navigator Payload containing a stereo-vision system, GPS, and a high-accuracy IMU. The Raven will have two on-board cameras, a side-looking and a forward-looking optical camera. The Supervisor OCU will act as the central mission planner, allowing the operator to monitor mission events and override vehicle tasks.

Citations (10)


... Further flight research to improve the autonomy algorithms was described in Refs. [14,15]. The latest reactive autonomy flight testing is being conducted in the Aircrew Labor In-Cockpit Automation System (ALIAS) program. ...

Reference:

Comparison of Autonomous Flight Control Performance Between Partial- and Full-Authority Helicopters
Perception for Safe Autonomous Helicopter Flight and Landing
  • Citing Conference Paper
  • May 2016

... The system state is directly observable by the global localization measurements provided by the GPS module. The congregated measurement model is defined in (15), with the measurement covariance given by (16). Hence, the global measurement likelihood is distributed according to (17). The measurement likelihood can be extended to more than two vehicles by appending their measurement models to the congregated measurement model and augmenting the measurement noise covariance. ...

Efficient target geolocation by highly uncertain small air vehicles
  • Citing Conference Paper
  • September 2011

... Helicopter maritime operations, especially deck landings differ from land-based ones (Horn and Bridges, 2007;Grocholsky et al., 2016;Frost et al., 2021) and are performed according to the preselected procedures (Arora et al., 2013). According to Anonymous (2003), six navy helicopter-ship operations can be distinguished: fore/aft procedure, relative wind or into wind procedure, cross-deck procedure, aft/fore or facing astern procedure, astern procedure and oblique procedure. ...

Robust Autonomous Ship Deck Landing for Rotorcraft

... In references [16] and [17], two other network infrastructures have also been proposed for urban environment and surveillance network, respectively. There are also works that consider air-ground collaboration for alpine communications as discussed in [18], [19]. Further collaborative wireless networks could be found in the literature encompassing a variety of applications such as file delay minimization for content uploading to media cloud [23], an eavesdropping attack [27], a near-optimal packet allocation algorithm for content uploading to media cloud [28], a store-and-delivery based media access control for precision agriculture [29] and a dynamic self-calibration [30]. ...

Air-Ground Collaborative Surveillance with Human-Portable Hardware

... The system described in [96] focuses on enhancing surveillance and targeting through the integration of PackBot UGVs and Raven UAVs. This study introduces a novel Decentralized Data Fusion technique that effectively merges data from both UAV and UGV platforms, improving the ability to track moving targets in open environments. ...

Integrated Long-range UAV/UGV Collaborative Target Tracking

Proceedings of SPIE - The International Society for Optical Engineering

... In this category, unmanned aerial vehicles (UAVs) are employed for EOD activities [18][19][20]. However, despite their flexible mobility and quick inspections [21], UAVs have limitations in real EOD operations, including limited payload capacity, short battery life, and vulnerability to weather conditions [22]. ...

UAV-UGV collaboration with a PackBot UGV and Raven SUAV for pursuit and tracking of a dynamic target
  • Citing Article
  • April 2008

Proceedings of SPIE - The International Society for Optical Engineering

... The query returned 440 articles. By applying the inclusion and exclusion criteria in Table 1, the final number of articles selected was reduced to 19 (Massa et al. 2006;Singh et al. 2010;Yao et al. 2010;Xiong et al. 2012;Bergerman et al. 2012;Zhong et al. 2015;Denis et al. 2016;Felezi et al. 2016;da Silva Ferreira et al. 2018;Romero et al. 2018;Ushimi 2019;Ma et al. 2020;Cruz et al. 2020;Gola et al. 2021;Low et al. 2021;Cheng et al. 2022;Mathew et al. 2021;Dogra et al. 2022;Liu et al. 2022). We then extended the search to Google Scholar using the query "Reconfigurable Manufacturing Systems" agriculture. ...

Comprehensive Automation for Specialty Crops: Year 1 results and lessons learned

Intelligent Service Robotics

... Additionally, when dealing with a mobile node, care needs to be taken to properly model the motion of the moving node. Whether odometry information is available or a random walk model is assumed, it needs to be incorporated into the filter correctly [12]. We demonstrate the effectiveness of our proposed network mapping algorithm on two types of experiments. ...

Modeling Mobile Robot Motion with Polar Representations

... For this reason, and owing to recent technological developments, decentralized MRS architectures are getting prominence. For example in [9], a decentralized algorithm was used to localize a flock of robotic sensor networks. Environmental monitoring tasks were performed by a decentralized swarm of robots in [10]- [12], including with heterogeneous swarms [13]. ...

Decentralized mapping of robot-aided sensor networks

Proceedings - IEEE International Conference on Robotics and Automation