
Subek Sharma
Tribhuvan University · Department of Electronics and Computer Engineering
Computer Engineering
Artificial Intelligence, Computer Vision, Medical Image Analysis, Robotics, Biomedical Informatics
About
13 Publications · 2,822 Reads · 2 Citations
Introduction
Subek Sharma holds a Bachelor's degree in Computer Engineering from Paschimanchal Campus, Lamachaur, Pokhara, affiliated with Tribhuvan University. His academic interests revolve around Artificial Intelligence, Deep Learning, and Computer Vision, with a focus on their interdisciplinary applications.
Education
July 2013 - March 2018
Sainik Awasiya Mahavidyala Chitwan
Publications (13)
Biodiversity plays a crucial role in maintaining the balance of the ecosystem. However, poaching and unintentional human activities contribute to the decline in the population of many species. Hence, active monitoring is required to preserve these endangered species. Current human-led monitoring techniques are prone to errors and are labor-intensiv...
This study evaluates the performance of various deep learning models, specifically DenseNet, ResNet, VGGNet, and YOLOv8, for wildlife species classification on a custom dataset. The dataset comprises 575 images of 23 endangered species sourced from reputable online repositories. The study utilizes transfer learning to fine-tune pre-trained models o...
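The transfer-learning setup summarized above maps onto a fairly standard fine-tuning recipe. A minimal sketch (PyTorch/torchvision) is shown below; the ResNet-50 backbone, frozen-feature strategy, and hyperparameters are illustrative assumptions rather than the study's exact configuration, though the 23-class output head follows the abstract.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision): take an ImageNet
# pre-trained ResNet, freeze its backbone, and train a new 23-class head.
# Backbone choice and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 23  # endangered species classes, per the abstract

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_one_epoch(loader, device="cpu"):
    """One pass over a DataLoader yielding (images, labels) batches."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```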
This project aimed to tackle the limitations of manual techniques for anaemia detection, which are time-consuming and rely heavily on medical expertise, often causing delays and possible misdiagnosis. Implementing the U-Net++ architecture, the system automates the identification and categorization of anaemia stages. Using conjunctiva, nail and palm...
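As a rough illustration of the segmentation component, a U-Net++ model can be instantiated with the segmentation_models_pytorch library as sketched below; the encoder, input size, and single-class mask are assumptions for illustration, not the project's actual setup.

```python
# Rough U-Net++ segmentation sketch, assuming the segmentation_models_pytorch
# library; encoder and class count are illustrative choices only.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="resnet34",        # ImageNet-pre-trained encoder (assumed)
    encoder_weights="imagenet",
    in_channels=3,                  # RGB conjunctiva / nail / palm images
    classes=1,                      # binary mask of the region of interest
)
model.eval()

x = torch.randn(1, 3, 256, 256)     # dummy input batch
with torch.no_grad():
    mask_logits = model(x)          # (1, 1, 256, 256) segmentation logits
    mask = torch.sigmoid(mask_logits) > 0.5
```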
Landmarks are culturally, historically, and architecturally significant places that aid in navigation, foster community identity, and offer insight into a region's heritage and character. However, most people, including tourists, are unaware of the whereabouts and significance of landmarks, and much time is wasted searching for det...
Neural networks have revolutionized the field of deep learning, enabling remarkable advancements in various domains. One crucial aspect that greatly influences the performance of a neural network is the initialization of its parameters, particularly the weights. In this article, we will explore three common initialization methods: Zeros Initializat...
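The comparison the article sets up can be reproduced in a few lines of NumPy. In the sketch below the layer sizes are arbitrary, and since the excerpt truncates the list after Zeros Initialization, small random and Xavier/Glorot initialization are assumed as the other two methods.

```python
# Contrast of common weight-initialization schemes for a dense layer (NumPy).
# Layer sizes are arbitrary; the second and third methods are assumed since
# the excerpt is truncated.
import numpy as np

fan_in, fan_out = 128, 64
rng = np.random.default_rng(0)

# 1) Zeros initialization: every neuron computes the same output, so all
#    gradients are identical and the layer never breaks symmetry.
W_zeros = np.zeros((fan_in, fan_out))

# 2) Small random initialization: breaks symmetry, but a poorly chosen scale
#    can shrink or blow up activations layer after layer.
W_random = rng.normal(loc=0.0, scale=0.01, size=(fan_in, fan_out))

# 3) Xavier/Glorot initialization: scales the range by fan-in/fan-out to keep
#    activation magnitudes roughly constant across layers.
limit = np.sqrt(6.0 / (fan_in + fan_out))
W_xavier = rng.uniform(-limit, limit, size=(fan_in, fan_out))

for name, W in [("zeros", W_zeros), ("random", W_random), ("xavier", W_xavier)]:
    acts = rng.normal(size=(32, fan_in)) @ W
    print(f"{name:>6}: std of activations = {np.std(acts):.4f}")
```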
This research proposal, titled "Anemia Detection in Pregnant Women in Nepal using Deep Learning," is prompted by the critical public health concern of anemia during pregnancy in Nepal. Anemia poses significant risks to maternal and fetal health, necessitating more accurate and timely detection methods. The dataset for this study will be meticulousl...
Machine learning algorithms play a crucial role in the field of natural language processing (NLP). Naive Bayes is one of the popular and effective algorithms used in NLP. It is a probabilistic machine learning algorithm based on Bayes' theorem. The term "naive" indicates the assumption of independence among the features. Despite this assumption, Na...
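The Bayes'-theorem-with-independence idea can be illustrated with a toy scikit-learn classifier; the training sentences and labels below are invented purely for illustration.

```python
# Toy multinomial Naive Bayes text classifier (scikit-learn), illustrating
# Bayes' theorem with the "naive" independence assumption over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "great movie, loved the acting",
    "wonderful plot and characters",
    "terrible pacing, waste of time",
    "boring and predictable story",
]
train_labels = ["positive", "positive", "negative", "negative"]

# P(class | words) is proportional to P(class) * product of P(word | class),
# assuming the words are conditionally independent given the class.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

print(clf.predict(["loved the story"]))          # expected: ['positive']
print(clf.predict_proba(["waste of acting"]))    # per-class posterior estimates
```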
Locality Sensitive Hashing (LSH) is a powerful technique utilized in Natural Language Processing (NLP) for grouping similar words based on their word embeddings. This paper provides an overview of LSH and its application in NLP tasks, focusing on the organization of similar words in translation tasks. Traditional hashing techniques often fail to gu...
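The grouping behaviour described here is easiest to see with the random-hyperplane variant of LSH; the sketch below uses NumPy, with the embedding dimensionality, number of hyperplanes, and toy vectors chosen only for illustration.

```python
# Random-hyperplane LSH over word embeddings (NumPy): vectors whose projections
# onto a set of random planes share signs land in the same bucket, so nearby
# embeddings tend to collide. Dimensions and data are illustrative.
import numpy as np

rng = np.random.default_rng(42)
embed_dim, n_planes = 300, 10          # 300-d embeddings, 10 hyperplanes
planes = rng.normal(size=(n_planes, embed_dim))

def lsh_bucket(vec: np.ndarray) -> int:
    """Map an embedding to a bucket id from the sign pattern of its projections."""
    signs = (planes @ vec) >= 0                   # one bit per hyperplane
    return int("".join("1" if s else "0" for s in signs), 2)

# Toy embeddings: two similar vectors and one unrelated vector.
word_a = rng.normal(size=embed_dim)
word_b = word_a + 0.05 * rng.normal(size=embed_dim)   # near-duplicate of word_a
word_c = rng.normal(size=embed_dim)

print(lsh_bucket(word_a) == lsh_bucket(word_b))  # likely True: similar words collide
print(lsh_bucket(word_a) == lsh_bucket(word_c))  # likely False
```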
Questions (3)
Can artificial intelligence truly grasp the perceptual intricacies of optical illusions, or does its 'understanding' merely scratch the surface of statistical patterns? How can we bridge the gap between AI image recognition and the nuanced, subjective experience of optical illusions that human perception effortlessly captures?
In navigating the complex landscape of medical research, addressing interpretability and transparency challenges posed by deep learning models is paramount for fostering trust among healthcare practitioners and researchers. One formidable challenge lies in the inherent complexity of these algorithms, often operating as black boxes that make it challenging to decipher their decision-making processes. The intricate web of interconnected nodes and layers within deep learning models can obscure the rationale behind predictions, hindering comprehension. Additionally, the lack of standardized methods for interpreting and visualizing model outputs further complicates matters. Striking a balance between model sophistication and interpretability is a delicate task, as simplifying models for transparency may sacrifice their intricate capacity to capture nuanced patterns. Overcoming these hurdles requires concerted efforts to develop transparent architectures, standardized interpretability metrics, and educational initiatives that empower healthcare professionals to confidently integrate and interpret deep learning insights in critical scenarios.
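One widely used, if partial, remedy for the black-box problem is a gradient-based saliency map, which highlights the input pixels that most influence a prediction. The sketch below illustrates the idea in PyTorch with an off-the-shelf ResNet-18 and a random tensor standing in for a medical image; it is an illustrative example, not a standardized interpretability method.

```python
# Gradient-based saliency map (PyTorch): a common, partial way to peek inside
# a "black-box" image classifier by asking which input pixels most affect the
# predicted class. The backbone and input are illustrative stand-ins.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a scan
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top class score to the input pixels.
logits[0, top_class].backward()

# Saliency = maximum absolute gradient across colour channels, per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)   # shape (224, 224)
print(saliency.shape, saliency.max().item())
```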
Adversarial attacks in deep learning involve manipulating input data to mislead models, leading to incorrect predictions. By exploiting vulnerabilities, attackers can craft adversarial examples that appear normal to humans but deceive the model. Mitigating these attacks is crucial for building robust and secure deep learning models.
How can adversarial attacks be mitigated in deep learning models, and what are the most effective defense mechanisms to ensure robustness and security?
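The fast gradient sign method (FGSM) is the textbook way of crafting such adversarial examples. The sketch below (PyTorch) uses an off-the-shelf ResNet-18, a random image, and an assumed perturbation budget purely for illustration.

```python
# Fast Gradient Sign Method (FGSM) sketch (PyTorch): nudge an input in the
# direction that increases the loss, producing an adversarial example that
# looks almost unchanged to a human. Model and epsilon are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

epsilon = 0.03                                  # perturbation budget (assumed)
image = torch.rand(1, 3, 224, 224)              # stand-in for a real image
image.requires_grad_(True)

logits = model(image)
label = logits.argmax(dim=1)                    # treat current prediction as truth

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(logits, label)
loss.backward()

# FGSM step: move each pixel by epsilon in the sign of its gradient.
adv_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction changed:", model(adv_image).argmax(dim=1).item() != label.item())
```

Adversarial training, which mixes such perturbed examples back into the training set, is one of the standard defenses the question asks about; input preprocessing and certified-robustness methods are other common directions.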