
Julie Stephany Berrio - Doctor of Engineering
- Researcher at The University of Sydney
About
- 68 Publications
- 13,151 Reads
- 563 Citations
Additional affiliations
- February 2009 - July 2012
Education
- February 2004 - February 2009, Universidad Autonoma de Occidente
- Field of study: Mechatronics
Publications (68)
The safe operation of autonomous vehicles (AVs) is highly dependent on their understanding of the surroundings. For this, the task of 3D semantic occupancy prediction divides the space around the sensors into voxels, and labels each voxel with both occupancy and semantic information. Recent perception models have used multisensor fusion to perform...
To operate safely, autonomous vehicles (AVs) need to detect and handle unexpected objects or anomalies on the road. While significant research exists for anomaly detection and segmentation in 2D, research progress in 3D is underexplored. Existing datasets lack high-quality multimodal data that are typically found in AVs. This paper presents a novel...
Intersections are geometric and functional key points in every road network. They offer strong landmarks to correct GNSS dropouts and anchor new sensor data in up-to-date maps. Despite that importance, intersection detectors either ignore the rich semantic information already computed onboard or depend on scarce, hand-labeled intersection datasets....
Road damage can create safety and comfort challenges for both human drivers and autonomous vehicles (AVs). This damage is particularly prevalent in rural areas due to less frequent surveying and maintenance of roads. Automated detection of pavement deterioration can be used as an input to AVs and driver assistance systems to improve road safety. Cu...
Autonomous Vehicles (AVs) are being partially deployed and tested across various global locations, including China, the USA, Germany, France, Japan, Korea, and the UK, but with limited demonstrations in Australia. The integration of machine learning (ML) into AV perception systems highlights the need for locally labelled datasets to develop and tes...
3D semantic occupancy prediction aims to forecast detailed geometric and semantic information of the surrounding environment for autonomous vehicles (AVs) using onboard surround-view cameras. Existing methods primarily focus on intricate inner structure module designs to improve model performance, such as efficient feature sampling and aggregation...
Existing autonomous driving datasets are predominantly oriented towards well-structured urban settings and favorable weather conditions, leaving the complexities of rural environments and adverse weather conditions largely unaddressed. Although some datasets encompass variations in weather and lighting, bad weather scenarios do not appear often. Ra...
Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. However, existing V2X datasets are limited in scope, diversity, and quality. To address these gaps, we present Mixed Signals, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bo...
High-definition (HD) maps aim to provide detailed road information with centimeter-level accuracy, essential for enabling precise navigation and safe operation of autonomous vehicles (AVs). Traditional offline construction methods involve several complex steps, such as data collection, point cloud generation, and feature extraction, but these metho...
The increasing transition of human-robot interaction (HRI) context from controlled settings to dynamic, real-world public environments calls for enhanced adaptability in robotic systems. This can go beyond algorithmic navigation or traditional HRI strategies in structured settings, requiring the ability to navigate complex public urban systems cont...
Autonomous vehicles are being tested in diverse environments worldwide. However, a notable gap exists in evaluating datasets representing natural, unstructured environments such as forests or gardens. To address this, we present a study on localisation at the Australian Botanic Garden Mount Annan. This area encompasses open grassy areas, paved path...
High-Definition (HD) maps aim to provide comprehensive road information with centimeter-level accuracy, essential for precise navigation and safe operation of Autonomous Vehicles (AVs). Traditional offline construction methods involve multiple complex steps—such as data collection, point cloud map generation, and feature extraction—which not only i...
Occlusion is a major challenge for LiDAR-based object detection methods as it renders regions of interest unobservable to the ego vehicle. A proposed solution to this problem comes from collaborative perception via Vehicle-to-Everything (V2X) communication, which leverages a diverse perspective thanks to the presence of connected agents (vehicles a...
IEEE Young Professionals (IEEE YP) is a dedicated section of IEEE created for recent graduates and those in the early stages of their careers. It offers a range of resources, networking events, and professional development programs aimed at individuals within the first 10 years following the completion of their initial professional degree. However,...
In an era where innovation doesn’t stop, the need for a supportive community for those in the early stages of their careers is more critical than ever. Recognizing this, the IEEE Robotics and Automation Society (RAS) presents the new Young Professionals (YPs) Committee. The committee aims to foster the growth and development of young engineers, sci...
Deploying 3D detectors in unfamiliar domains has been demonstrated to result in a significant 70-90% drop in detection rate due to variations in lidar, geography, or weather from their training dataset. This domain gap leads to missing detections for densely observed objects, misaligned confidence scores, and increased high-confidence false positiv...
A comprehensive understanding of 3D scenes is crucial in autonomous vehicles (AVs), and recent models for 3D semantic occupancy prediction have successfully addressed the challenge of describing real-world objects with varied shapes and classes. However, existing methods for 3D semantic occupancy prediction heavily rely on surround-view camera imag...
In the past decade, automotive companies have invested significantly in autonomous vehicles (AV), but achieving widespread deployment remains a challenge in part due to the complexities of safety evaluation. Traditional distance-based testing has been shown to be expensive and time-consuming. To address this, experts have proposed scenario-based te...
For smart vehicles driving through signalised intersections, it is crucial to determine whether the vehicle has right of way given the state of the traffic lights. To address this issue, camera-based sensors can be used to determine whether the vehicle has permission to proceed straight, turn left or turn right. This paper proposes a novel end to e...
We introduce Multi-Source 3D (MS3D), a new self-training pipeline for unsupervised domain adaptation in 3D object detection. Despite the remarkable accuracy of 3D detectors, they often overfit to specific domain biases, leading to suboptimal performance in various sensor setups and environments. Existing methods typically focus on adapting a single...
Deploying 3D detectors in unfamiliar domains has been demonstrated to result in a drastic drop of up to 70-90% in detection rate due to variations in lidar, geographical region, or weather conditions from their original training dataset. This domain gap leads to missing detections for densely observed objects, misaligned confidence scores, and incr...
In this paper, we improve the single-vehicle 3D object detection models using LiDAR by extending their capacity to process point cloud sequences instead of individual point clouds. In this step, we extend our previous work on rectification of the shadow effect in the concatenation of point clouds to boost the detection accuracy of multi-frame detec...
The IEEE Robotics and Automation Society (RAS) Women in Engineering Committee organized a virtual event for the 2022 International Conference on Robotics and Automation (ICRA) and was honored to host Lydia Kavraki, Katherine J. Kuchenbecker, and Vandi Verma as keynote speakers and panelists. These distinguished women have made significant contribut...
The paper addresses the vehicle-to-X (V2X) data fusion for cooperative or collective perception (CP). This emerging and promising intelligent transportation systems (ITS) technology has enormous potential for improving efficiency and safety of road transportation. Recent advances in V2X communication primarily address the definition of V2X messages...
Every autonomous driving dataset has a different configuration of sensors, originating from distinct geographic regions and covering various scenarios. As a result, 3D detectors tend to overfit the datasets they are trained on. This causes a drastic decrease in accuracy when the detectors are trained on one dataset and tested on another. We observe...
Autonomous vehicles have the potential to lower the accident rate when compared to human driving; this potential has been the driving force behind automated vehicles' rapid development over the last few years. At the higher Society of Automotive Engineers (SAE) automation levels, responsibility for the vehicle's and passengers' safety is transferred from the dr...
Sampling discrepancies between different manufacturers and models of lidar sensors result in inconsistent representations of objects. This leads to performance degradation when 3D detectors trained for one lidar are tested on other types of lidars. Remarkable progress in lidar manufacturing has brought about advances in mechanical, solid-state, and...
Recent Autonomous Vehicles (AV) technology includes machine learning and probabilistic techniques that add significant complexity to the traditional verification and validation methods. The research community and industry have widely accepted scenario-based testing in the last few years. As it is focused directly on the relevant crucial road situat...
Wide-scale deployment of Autonomous Vehicles (AVs) appears imminent despite many safety challenges yet to be resolved. Modern autonomous vehicles will undoubtedly include machine learning and probabilistic techniques that add significant complexity to traditional verification and validation methods. Road testing is essential before the deploy...
For autonomous vehicles to operate persistently in a typical urban environment, it is essential to have high accuracy position information. This requires a mapping and localisation system that can adapt to changes over time. A localisation approach based on a single-survey map will not be suitable for long-term operation as it does not incorporate...
An automated vehicle operating in an urban environment must be able to perceive and recognise objects and obstacles in a three-dimensional world for navigation and path planning. In order to plan and execute accurate and sophisticated driving maneuvers, a high-level contextual understanding of the surroundings is essential. Due to the recent progre...
The fusion of sensor data from heterogeneous sensors is crucial for robust perception in various robotics applications that involve moving platforms, for instance, autonomous vehicle navigation. In particular, combining camera and lidar sensors enables the projection of precise range information of the surrounding environment onto visual images. It...
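The camera-lidar fusion described above rests on projecting lidar range returns onto the image plane through a pinhole camera model. A minimal sketch of that projection step (an illustration, not the paper's implementation; the intrinsic matrix `K` and the lidar-to-camera extrinsic transform are assumed to be known from calibration):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D lidar points into pixel coordinates via a pinhole model.

    points_lidar: (N, 3) points in the lidar frame.
    T_cam_lidar:  (4, 4) homogeneous extrinsic transform (lidar -> camera).
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]           # points in camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                # keep positive depth only
    uvw = (K @ pts_cam.T).T                             # perspective projection
    return uvw[:, :2] / uvw[:, 2:3]                     # normalise by depth

# Example: with identity extrinsics, a point on the optical axis
# lands on the principal point (cx, cy).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
pix = project_lidar_to_image(np.array([[0.0, 0.0, 5.0]]), np.eye(4), K)
print(pix)  # [[640. 360.]]
```

In practice the projected pixels are then used to look up per-point colour or semantic labels in the image, which is the direction of fusion these papers build on.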
An automated vehicle operating in an urban environment must be able to perceive and recognise objects and obstacles in a three-dimensional world while navigating in a constantly changing environment. In order to plan and execute accurate, sophisticated driving maneuvers, a high-level contextual understanding of the surroundings is essential. Due to the r...
Vision and lidar are complementary sensors that are incorporated into many applications of intelligent transportation systems. These sensors have been used to great effect in research related to perception, navigation and deep-learning applications. Despite this success, the validation of algorithm robustness has recently been recognised as a major...
One of the fundamental challenges in the design of perception systems for autonomous vehicles is validating the performance of each algorithm under a comprehensive variety of operating conditions. In the case of vision-based semantic segmentation, there are known issues when encountering new scenarios that are sufficiently different to the training...
To navigate through urban roads, an automated vehicle must be able to perceive and recognize objects in a three-dimensional environment. A high-level contextual understanding of the surroundings is necessary to plan and execute accurate driving maneuvers. This paper presents an approach to fuse different sensory information, Light Detection and Ran...
For autonomous vehicles, a high-level understanding of the 3D world will allow vehicles to navigate along urban roads. By providing geographical locations and semantic understanding of the environment, the vehicle can gain the capability to perform correct driving manoeuvres. This work presents a novel methodology to geo-reference semantically labe...
This paper proposes an automated method to obtain the extrinsic calibration parameters between a camera and a 3D lidar with as low as 16 beams. We use a checkerboard as a reference to obtain features of interest in both sensor frames. The calibration board centre point and normal vector are automatically extracted from the lidar point cloud by expl...
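Once corresponding features such as board centre points have been extracted in both sensor frames, the final step of such a calibration is typically a least-squares rigid alignment. A minimal sketch using the standard Kabsch/SVD method (an illustration of that alignment step, not the paper's exact pipeline):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares R, t such that dst ~= R @ src + t (Kabsch/Umeyama)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation and translation
# from noiseless point correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform_svd(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With real checkerboard detections the correspondences are noisy, so the recovered transform is a least-squares estimate rather than an exact recovery; the normal-vector constraints mentioned in the abstract further stabilise the rotation.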
To navigate through urban roads, an automated vehicle must be able to perceive and recognize objects in a three-dimensional environment. A high level contextual understanding of the surroundings is necessary to execute accurate driving maneuvers. This paper presents a novel approach to build three dimensional semantic octree maps from lidar scans a...
To operate in an urban environment, an automated vehicle must be capable of accurately estimating its position within a global map reference frame. This is necessary for optimal path planning and safe navigation. To accomplish this over an extended period of time, the global map requires long-term maintenance. This includes the addition of newly ob...
Sign language is the native language used by deaf people to communicate. It is made up of movements and expressions performed with different parts of the body. In Colombia, there is a notable lack of technologies aimed at learning and interpreting it; it is therefore a social commitment to carry out initiatives that pro...
Current autonomous driving applications require not only the occupancy information of the close environment but also reactive maps to represent dynamic surroundings. There is also benefit from incorporating semantic classification into the map to assist the path planning in changing scenarios. This paper presents an approach to building...
This invention relates to a computer-monitored mobile robot that works in confined spaces. It is able to monitor all areas where gas concentrations need to be analysed, using several gas sensors with specific operating ranges. To that end, this mechatronic device is intended to travel on almost any surface and atmosp...
The following paper presents several navigation algorithms, the methodologies used, and the test results for driving a Lego NXT mobile robot to a set goal, using three global techniques (Voronoi diagrams, occupancy maps, visibility graphs) and three local ones (neural networks, fuzzy logic, potential fields). Analyzing the techniques'...
This paper presents a robust algorithm that is implemented for segmentation and characterization of traces obtained through a sweep process performed by a laser sensor. The process yields polar parameters that define segments of straight lines, which describe the scanning environment. A Mean-Shift Clustering strategy that uses the average of laser...
This paper presents a robust algorithm for segmentation and characterization of lines detected by a laser sensor. We propose a strategy of Mean Shift Clustering which using the points of the laser scan performs a classification stage based on an ellipsoidal orientable window previous to the line segment parameterization. Each data set is processed...
This article presents a method of straight lines extraction, based principally in an algorithm of robust line extraction named Hough transform, for the parameterization of straight lines that are found in the set of points obtained by a 2D laser scan, then the parameters of the straight lines (obtained by the Hough transform) and the original point...
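The Hough transform at the core of this method can be sketched in a few lines: each scan point votes for every (theta, rho) line it could lie on, since a point (x, y) satisfies rho = x*cos(theta) + y*sin(theta), and collinear points pile up in one accumulator cell. A minimal illustration (not the article's implementation; the resolution parameters are assumptions):

```python
import numpy as np

def hough_strongest_line(points, n_theta=180, rho_res=0.05):
    """Vote 2D points into a (theta, rho) accumulator; return the best line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho for every point at every candidate angle: (N, n_theta)
    rhos = points @ np.stack([np.cos(thetas), np.sin(thetas)])
    rho_max = np.abs(rhos).max() + rho_res
    n_rho = int(np.ceil(2 * rho_max / rho_res))
    acc = np.zeros((n_theta, n_rho), dtype=int)
    rho_idx = ((rhos + rho_max) / rho_res).astype(int)
    for ti in range(n_theta):                      # accumulate votes per angle
        np.add.at(acc[ti], rho_idx[:, ti], 1)
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], (ri + 0.5) * rho_res - rho_max   # bin centre

# A vertical wall at x = 2.0 should come back as theta ~ 0, rho ~ 2.
pts = np.stack([np.full(50, 2.0), np.linspace(-1.0, 1.0, 50)], axis=1)
theta, rho = hough_strongest_line(pts)
print(theta, rho)
```

The article's refinement, fusing the Hough parameters with the original points via an adjustment method, would follow this voting stage by re-fitting each detected line to its inlier points.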