Science topic

Image Processing - Science topic

All kinds of image processing approaches.
Questions related to Image Processing
  • asked a question related to Image Processing
Question
85 answers
How do you think artificial intelligence can affect medicine in the real world? There are many science-fiction dreams in this regard, but what about real life in the next 2-3 decades?
Relevant answer
Answer
AI could pose pandemic-scale biosecurity risks. Here’s how to make it safer
"Advances in artificial intelligence (AI) promise to transform biomedical research, but could pose significant biosafety and biosecurity risks, argue three public health researchers and two policy researchers. They urge governments and AI developers to work with safety and security experts to mitigate harms that could result in the greatest loss of life and disruption to society, such as outbreaks of transmissible pathogens. That means building a scientific consensus through processes that engage diverse, independent experts..."
  • asked a question related to Image Processing
Question
3 answers
Image Processing Algorithms, Quantum Computing.
Relevant answer
Answer
Dear Doctor
Go To
Quantum Computing in Image Processing
  • January 2023
  • DOI: 10.3233/ATDE221232
  • In book: Recent Developments in Electronics and Communication Systems
  • License CC BY-NC 4.0
  • Kumud Sachdeva, Rajan Sachdeva, Himanshu Gupta
Abstract:
If you read that quantum machine learning applications solve some traditional machine learning problems at an amazing speed, be sure to check if they return quantum results. Quantum outputs, such as magnitude-encoded vectors, limit the use of applications and require additional specifications on how to extract practical and useful results. In satellite images, degradation of image contrast and color quality is a common problem, which brings great difficulties for information extraction and visibility of image objects. This is due to atmospheric conditions, which can cause a loss of color and contrast in the image. Regardless of the band, image enhancement plays a vital role in presenting details more clearly. The research includes new computational algorithms that accelerate multispectral satellite imagery to improve land cover mapping and study feature extraction. The first method is a quantum Fourier transform algorithm based on quantum computing, and the second method is a parallel modified data algorithm based on parallel computing. These calculation algorithms are applied to multispectral images for improvement. The quantum-processed image is compared with the original image, and better results are obtained in terms of visual interpretation of the extracted information. The performance of the improved method is evaluated quantitatively, and the applicability of the improved technology to different land types is assessed.
  • asked a question related to Image Processing
Question
1 answer
Dear Researcher,
I hope this message finds you well. My professor and I are looking for a skilled and advanced programmer proficient in Python and MATLAB to join our research group. Our focus is on publishing high-quality, Q1 papers in the field of Artificial Intelligence-based Structural Health Monitoring in Civil Engineering.
The ideal candidate should have expertise in:
  • Deep Learning and Machine Learning
  • Signal and Image Processing
  • Optimization Algorithms
  • Coding and Programming with Python/MATLAB
If you are passionate about research, enjoy publishing, and have sufficient time to dedicate to our projects, we would be delighted to invite you to join us.
Please send your CV to hosein_saffaryosefi@alumni.iust.ac.ir .
Best regards, Hossein Safar Yousefifard, School of Civil Engineering, Iran University of Science and Technology
Relevant answer
Answer
Welcome
  • asked a question related to Image Processing
Question
6 answers
The sample is a silicon wafer; as the source of illumination approaches perpendicular incidence, the surface is masked by the reflection.
Relevant answer
Answer
You could try inserting a sheet of white paper (not too thick) between your wafer and the light source. This should minimize the reflection on your wafer. Otherwise maybe an illumination with a very small incident angle?
  • asked a question related to Image Processing
Question
3 answers
Looking for research collaboration on an edited book and other kinds of publications.
Domain: computer science, preferably image processing.
Relevant answer
Answer
I am interested in collaboration on an edited book. Kindly contact me at mkhankku@gmail.com for further discussion. Pranshu Saxena
Dr Mudassir Khan
  • asked a question related to Image Processing
Question
1 answer
A prospective researcher interested in a database for image processing of geotechnical issues such as slope stability and landslides during seismic motions. I hope this text finds you well. I would like to find a valid database of images for a data-driven investigation of these phenomena and their risk. In this regard, would you mind introducing me to a reliable database? Best Regards
Relevant answer
Answer
Yes, there are several databases available that provide images and data related to landslides and slope stability, often used for geotechnical analysis and image processing research. Here are some notable options:
1. Landslide Inventory Maps – The US Geological Survey (USGS) and the European Space Agency (ESA) provide comprehensive landslide inventories with georeferenced images, which are valuable for analyzing landslide occurrences and patterns.
2. Global Landslide Catalog (GLC) – Managed by NASA, this catalog contains landslide data with images and geolocations, primarily associated with rainfall-triggered landslides, but it also includes a variety of slope stability scenarios.
3. Japanese Landslide Inventory – Available through the National Research Institute for Earth Science and Disaster Resilience (NIED) in Japan, this database focuses on earthquake-triggered landslides and offers high-resolution images that are highly relevant for seismic studies.
4. SLID: Landslide Image Dataset – This dataset is curated for machine learning purposes and includes labeled images of landslides, making it suitable for image processing and classification tasks.
These databases are widely used in geotechnical research and offer credible, high-quality resources for data-driven analyses of slope stability and landslide risks.
  • asked a question related to Image Processing
Question
1 answer
[CFP] 2024 4th International Symposium on Artificial Intelligence and Big Data (AIBDF 2024) - December
AIBDF 2024 will be held in Ganzhou during December 27-29, 2024. The conference will focus on artificial intelligence and big data, discussing the key challenges and research directions facing this field, in order to promote the development and application of its theories and technologies in universities and enterprises, and to provide scholars, engineers, and industry experts focused on this research area with a favorable platform for exchanging new ideas and presenting research results.
Conference Link:
Topics of interest include, but are not limited to:
◕Track 1:Artificial Intelligence
Natural language processing
Fuzzy logic
Signal and image processing
Speech and natural language processing
Learning computational theory
......
◕Track 2:Big data technology
Decision support system
Data mining
Data visualization
Sensor network
Analog and digital signal processing
......
Important dates:
Full Paper Submission Date: December 23, 2024
Registration Deadline: December 23, 2024
Conference Dates: December 27-29, 2024
Submission Link:
Relevant answer
Answer
Please, is this conference hybrid?
  • asked a question related to Image Processing
Question
3 answers
Hi there. I am new to the field of sensor corrections, and I was wondering: given a dark-field and a flat-field image from a test, is it possible to derive a formula with offset and gain values that I can apply to any image coming from the sensor, in order to perform a relative radiometric correction?
From what I've read, the dark image gives me the offset value, right? And the relative gain value is (dark - flat)/mean(dark - flat). Is this right?
Considering that these are test images, if I want to apply this formula to real images from the sensor, I'm guessing that I have to obtain a default value from the gain and offset matrices. Maybe by taking their mean?
I'm not sure if this is the way to go. I've also seen that for the dark field I could make a histogram and choose the value at the highest peak as my offset value, but I'm not sure how that would work for the flat image.
Any help is appreciated, as I am a little bit lost on what the best steps are here.
Relevant answer
Answer
First of all, we have to consider the possibility of averaging a stack of several captures (>10) to eliminate temporal noise. So when I say "image", I mean the average of the stack. In this way, we can minimize the problem of spatial non-uniformity. What we have to do is:
1) to obtain an image in darkness, I'll call it DSNU;
2) to obtain an image in front of a Flat Field at the end of the scale in the dynamic range of the instrument, but making sure that there is no pixel in the image that enters into saturation; I'll call this FF;
3) to obtain PRNU = FF - DSNU;
4) to obtain the average value of PRNU, I'll call it PRNUm;
5) and considering that we already have the reference image, a raw image, which I'll call RAW;
6) finally, you can obtain your flat-field-corrected image (Icorr), assuming a linear behavior throughout the entire dynamic range of the sensor, as follows: Icorr = (RAW - DSNU) x PRNUm / (FF - DSNU).
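A minimal Python/NumPy sketch of steps 1-6, assuming each capture is supplied as a stack of frames of shape (n_frames, H, W); the small clamp on the denominator is an added guard against dead pixels, not part of the recipe itself:

import numpy as np

def flat_field_correct(raw_stack, dark_stack, flat_stack):
    # Average each stack to suppress temporal noise (steps 1, 2 and 5)
    dsnu = dark_stack.mean(axis=0)   # dark signal non-uniformity (offset)
    ff = flat_stack.mean(axis=0)     # averaged flat field
    raw = raw_stack.mean(axis=0)     # averaged raw image to correct
    prnu = ff - dsnu                 # step 3: photo-response non-uniformity
    prnu_m = prnu.mean()             # step 4: scalar gain reference
    # Step 6: Icorr = (RAW - DSNU) x PRNUm / (FF - DSNU), assuming linearity
    return (raw - dsnu) * prnu_m / np.clip(prnu, 1e-9, None)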
Regards!
  • asked a question related to Image Processing
Question
2 answers
Call for Papers: The 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024)
Call for papers: 2024 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024) will be held on November 1-3, 2024 in Wuhan, China as a hybrid meeting.
Conference website (English): https://ais.cn/u/IJfQVv
Important Information
Official website (submission link): https://ais.cn/u/IJfQVv
Dates: November 1-3, 2024
Location: Wuhan, China
Indexing: EI Compendex, Scopus
Conference Details
The 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024) will be held on November 1-3, 2024 in Wuhan, China. Centered on the latest research in computer science, the Internet of Things, and control engineering, CITCE 2024 offers experts, professors, scholars, and engineers from universities, research institutes, and enterprises at home and abroad an international platform for sharing professional experience, expanding professional networks, and presenting research results, with the aim of promoting the development and application of theory and technology in this field in universities and enterprises, and of helping participants establish business or research contacts and find global partners for their future careers. Experts, scholars, and industry professionals from home and abroad are cordially invited to attend and exchange ideas.
Topics
1. Computer Science: algorithms, image processing, computer vision, machine learning, intelligent data analysis and data mining, mathematical and computer modeling, artificial intelligence, neural networks, system security, robotics and automation, information systems, high-performance computing, network communication, human-computer interaction, computer modeling, etc.
2. Internet of Things: artificial intelligence technology and applications, CPS technology and intelligent information systems, multi-network resource sharing in IoT environments, IoT architectures, cloud computing and big data in IoT, edge intelligence and blockchain, smart cities, IoT wearable devices, smart homes, IoT and sensor technology, etc.
3. Control Engineering: systems and automation, electrical systems, process control, industrial control technology, computer science and engineering, electronic engineering, software engineering, control technology, sensor networks, mobile Internet, wireless networks and systems, computer control systems, adaptive and optimal control, intelligent control, electrical automation, intelligent control and intelligent systems, intelligent management and decision-making, distributed control, drive motors and control technology, power battery management and maintenance technology, micro-sensors and actuators, mobile robots, etc.
**All other related topics are also welcome.
Publication
After rigorous review by 2-3 committee experts, accepted papers will be published in the ACM ICPS (ACM International Conference Proceeding Series) proceedings (ISBN: 979-8-4007-1184-8) and submitted to the ACM Digital Library, EI Compendex, and Scopus for indexing. Indexing for this conference series has so far been very stable.
Participation
1. Authors: each accepted paper entitles one author to attend free of charge.
2. Participation types:
(1) Oral presentation: a 10-15 minute presentation in English. Open to all submitting authors and self-funded attendees; give a 10-15 minute English report on your paper or the research behind it, with self-prepared slides (no template required), submitted before the conference as instructed by email; contact the conference secretary for details.
(2) Poster presentation: a self-made electronic poster displayed at the conference. Open to all submitting authors and self-funded attendees; format: English, A1 size, portrait, self-made; submit the poster image to the conference email IC_CITCE@163.com, with the subject and file name in the format "Poster presentation + name + order number".
(3) Attendance only: non-submitting authors attending as on-site audience. Open to self-funded attendees only.
(4) Submission and registration link: https://ais.cn/u/IJfQVv
Relevant answer
Answer
For more details about the conference, please visit the official website of the conference: https://ais.cn/u/IJfQVv
  • asked a question related to Image Processing
Question
6 answers
For my master's thesis, I am working on Mobile Laser Scanner data; my task is the extraction of power-line poles. My dataset is about 10 kilometers long and contains approximately 60 power-line poles. Fortunately, my algorithm extracted 58 poles correctly; the two others were not completely captured by the Mobile Laser Scanner system, which prevented the proposed algorithm from extracting them. The proposed algorithm is fully automatic and does not need many parameters for the extraction.
My main question is: what does my implementation need in order to be published in a good ISI journal?
Relevant answer
Answer
To get your thesis accepted in a reputable ISI journal, ensure that your implementation is:
1. Original: Offers a novel contribution or solution.
2. Well-Researched: Thoroughly reviews and builds on existing literature.
3. Methodologically Sound: Uses clear, reproducible methods.
4. Significant: Demonstrates clear advantages or improvements over existing work.
5. Well-Written: Communicates ideas clearly and is well-organized.
Meeting these criteria increases your chances of acceptance.
  • asked a question related to Image Processing
Question
4 answers
Call for Papers: hosted by the China Computational Power Conference, the 2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI 2024) is scheduled for June 21-23, 2024 in Zhengzhou, China. The conference mainly discusses algorithms, high-performance computing, artificial intelligence, and related research areas.
Call for papers-2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence
Important Information
Official website (submission link): https://ais.cn/u/AvAzUf
Dates: June 21-23, 2024
Location: Zhengzhou, China
Acceptance/rejection notification: about one week after submission
Indexing: EI Compendex, Scopus
Topics
1. Algorithms (algorithm analysis / approximation algorithms / computability theory / evolutionary algorithms / genetic algorithms / numerical analysis / online algorithms / quantum algorithms / randomized algorithms / sorting algorithms / algorithmic graph theory and combinatorics / computational geometry / computing technologies and applications, etc.)
2. Artificial Intelligence (natural language processing / knowledge representation / intelligent search / machine learning / perception / pattern recognition / logic programming / soft computing / management of imprecision and uncertainty / artificial life / neural networks / complex systems / genetic algorithms / computer vision, etc.)
3. High Performance Computing (network computing technology / HPC software and tool development / computer system evaluation technology / cloud computing systems / mobile computing systems / peer-to-peer computing / grid and cluster computing / service and Internet computing / utility computing, etc.)
4. Image Processing (image recognition / image detection networks / robot vision / clustering / image digitization / image enhancement and restoration / image data encoding / image segmentation / analog image processing / digital image processing / image description / optical image processing / digital signal processing / image transmission, etc.)
5. Other related topics
Publication
1. After rigorous review by 2-3 committee experts, accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X) and submitted to EI Compendex and Scopus for indexing after publication.
*The previous three editions of AHPCAI have all been indexed by EI, reliably and quickly!
2. This conference also runs an open call as a session of the 2024 China Computational Power Conference; high-quality papers will be selected for that conference, published by IEEE, and submitted to EI Compendex and Scopus for indexing after publication.
Participation
1. Authors: each accepted paper entitles one author to attend free of charge.
2. Keynote speakers: apply for a keynote talk, subject to committee review.
3. Oral presentations: apply for a 10-minute oral report.
4. Poster presentations: apply for a poster display, A1 size, color print.
5. Audience: attend without submitting; talks and presentations may also be requested.
6. Registration: https://ais.cn/u/AvAzUf
Relevant answer
Answer
Wishing you every success, International Journal of Complexity in Applied Science and Technology
  • asked a question related to Image Processing
Question
2 answers
Dear RG-Community,
I have been using Agisoft Metashape for UAV imagery processing for quite a while now. Also a while ago, I stumbled upon the Micasense GitHub repository and saw that individual radiometric correction procedures are recommended there (e.g., vignetting and row gradient removal -> https://micasense.github.io/imageprocessing/MicaSense%20Image%20Processing%20Tutorial%201.html). Now I was checking which of those radiometric corrections are also performed during processing in Agisoft Metashape. In the manual (Professional version) I could only find that vignetting is mentioned.
Does anyone know how to learn more about the detailed processing in Agisoft Metashape, or, even better, how to perform a thorough radiometric image correction that removes any radiometric bias without the risk of colliding with the Agisoft Metashape processing?
Thanks for your help,
Rene
Relevant answer
Answer
Thanks for responding, Fulgence Hatangimana. Your answer provides a brief overview of first doing the radiometric correction by following the GitHub repo by Micasense. However, my question was trying to get into the details of the radiometric correction done by Metashape. For example, if I apply the row-gradient correction before processing my images in Metashape, how do I know that Metashape is not repeating the same correction?
  • asked a question related to Image Processing
Question
4 answers
How does thermal image processing work in the agriculture sector?
Relevant answer
Answer
Thank you, sir @Alexey Chernyavskiy.
  • asked a question related to Image Processing
Question
3 answers
I want to analyze some images (nearly 1000) in a loop, measuring HSV and RGB values. I masked those images in ImageJ (binary masks). I tried to explore them in R, but all the results came out as NA. I also checked some of those images separately in R to determine whether they were correctly masked, and the result was in matrices of 0,0,0 and 1,1,1. But still, the result is NA. I used a chatbot to generate and analyze code. Can anyone suggest codes and packages?
Relevant answer
Answer
I think Python is perfect for it.
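Following up on that suggestion, here is a minimal Python/OpenCV sketch that loops over the images, applies each binary mask, and writes per-channel RGB and HSV means to a CSV. The images/ and masks/ folder layout is only an assumed convention; adapt the paths to your data.

import csv
import glob
import cv2

rows = []
for img_path in glob.glob("images/*.png"):
    mask_path = img_path.replace("images/", "masks/")   # hypothetical layout
    bgr = cv2.imread(img_path)                          # OpenCV loads BGR
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    if bgr is None or mask is None:
        continue                                        # skip unreadable files
    sel = mask > 0                                      # foreground pixels only
    if not sel.any():
        continue                                        # empty mask would give NaN
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    b, g, r = (bgr[..., i][sel].mean() for i in range(3))
    h, s, v = (hsv[..., i][sel].mean() for i in range(3))
    rows.append([img_path, r, g, b, h, s, v])

with open("color_stats.csv", "w", newline="") as f:
    csv.writer(f).writerows([["file", "R", "G", "B", "H", "S", "V"]] + rows)

One likely cause of the NA results in R, reproduced by the guard above, is a mask that selects no pixels (or a mask read at a different scale, e.g. 0-1 versus 0-255).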
  • asked a question related to Image Processing
Question
2 answers
IEEE 2024 4th International Symposium on Computer Technology and Information Science (ISCTIS 2024) will be held during July 12-14, 2024 in Xi’an, China.
Conference Website: https://ais.cn/u/Urm6Vn
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Computer Engineering and Technology
Computer Vision & VR
Multimedia & Human-computer Interaction
Image Processing & Understanding
PDE for Image Processing
Video compression & Streaming
Statistic Learning & Pattern Recognition
......
2. Information Science
Digital Signal Processing (DSP)
Advanced Adaptive Signal Processing
Optical communication technology
Communication and information system
Physical Electronics and Nanotechnology
Wireless communication technology
......
All accepted papers of ISCTIS 2024 will be published in conference proceedings by IEEE, which will be submitted to IEEE Xplore, EI Compendex, and Scopus for indexing.
Important Dates:
Full Paper Submission Date: June 20, 2024
Registration Deadline: June 25, 2024
Final Paper Submission Date: June 26, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Relevant answer
Answer
Thanks for sharing. I wish you every success in your task.
  • asked a question related to Image Processing
Question
3 answers
I have a video of Brownian motion of microbes under a microscope. From this video, I need to calculate the angular velocity of a particular prominent cell. From my understanding, what I know is that I need to identify some points within the cell, track its location at successive time frames and thus get the velocity by dividing the distance traveled by that point by time.
Now my question is: given the quality of the video I have, what is the best way to track some points within it? The video quality is such that manual identification of points at different time snapshots is not possible.
A snapshot of the video at a particular time is attached as an image.
Relevant answer
Answer
I can suggest the following approach.
1. Cell detection, using instance segmentation or plain detection; there are many approaches.
2. Cell tracking; the simplest approach is the SORT tracking algorithm.
3. Analysis of each cell (each track):
3.1 Align the cell crops using circle detection and build a "video" for each cell.
3.2 Calculate the optical flow.
3.3 Convert the optical flow from Cartesian to polar coordinates.
3.4 The mean amplitude of the resulting vector field is the rotation speed.
You will probably need to crop this field or apply additional transformations in the other steps; see the sketch after this list.
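A minimal Python/OpenCV sketch of steps 3.2-3.4, assuming grayscale, roughly centered cell crops from consecutive frames; projecting the flow onto the tangential direction replaces an explicit polar conversion but yields the same rotation estimate:

import cv2
import numpy as np

def angular_velocity(prev_crop, next_crop, fps):
    # Dense optical flow between two aligned 8-bit grayscale crops (step 3.2)
    flow = cv2.calcOpticalFlowFarneback(prev_crop, next_crop, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_crop.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    rx, ry = x - w / 2.0, y - h / 2.0    # assumes the cell is centered (step 3.1)
    r = np.hypot(rx, ry)
    valid = r > 2                        # avoid the singular center
    # Project flow onto the tangential unit vector (-ry, rx)/r (steps 3.3-3.4)
    tangential = (-ry * flow[..., 0] + rx * flow[..., 1]) / np.where(valid, r, 1)
    omega = (tangential[valid] / r[valid]).mean()   # rad/frame
    return omega * fps                              # rad/s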
  • asked a question related to Image Processing
Question
1 answer
Hi, I am working on domain adaptation for emotion detection, trained on Hollywood actor/actress data, and I want to adapt it to pictures of Pakistani actors/actresses. Is there any dataset available online, or does anyone have one? Please share; it's urgent, as I have a research project to complete.
Relevant answer
Answer
Maybe emotion detection would quicken if you expanded to cartoons like Mulan: https://www.researchgate.net/publication/380540439_Mulan's_Critical_Whiteness
  • asked a question related to Image Processing
Question
2 answers
2024 4th International Conference on Computer, Remote Sensing and Aerospace (CRSA 2024) will be held in Osaka, Japan on July 5-7, 2024.
Conference Website: https://ais.cn/u/MJVjiu
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Algorithms
Image Processing
Data processing
Data Mining
Computer Vision
Computer Aided Design
......
2. Remote Sensing
Optical Remote Sensing
Microwave Remote Sensing
Remote Sensing Information Engineering
Geographic Information System
Global Navigation Satellite System
......
3. Aeroacoustics
Aeroelasticity and structural dynamics
Aerothermodynamics
Airworthiness
Autonomy
Mechanisms
......
All accepted papers will be published in the Conference Proceedings, and submitted to EI Compendex, Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Registration Deadline: May 31, 2024
Conference Date: July 5-7, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
Relevant answer
Answer
Dear Kazi Redwan, the regular registration fee (4-6 pages) is 485 USD. Online presentation is accepted. All accepted papers will be published in the Conference Proceedings and submitted to EI Compendex and Scopus for indexing.
For more details about registration, please visit http://www.iccrsa.org/registration_all
For paper submission: https://ais.cn/u/MJVjiu
  • asked a question related to Image Processing
Question
2 answers
Call for Papers
The CMC-Computers, Materials & Continua new special issue “Emerging Trends and Applications of Deep Learning for Biomedical Signal and Image Processing” is open for submission now.
Submission Deadline: 31 March 2025
Guest Editors
  • Prof. Batyrkhan Omarov, Al-Farabi Kazakh National University, Kazakhstan
  • Prof. Aigerim Altayeva, International Information Technology University, Kazakhstan
  • Prof. Bakhytzhan Omarov, International University of Tourism and Hospitality, Kazakhstan
Summary
In this special issue, we delve into the cutting-edge advancements and transformative applications of deep learning techniques within the realms of biomedical engineering and healthcare. Deep learning, a subset of artificial intelligence, has emerged as a groundbreaking tool, offering unparalleled capabilities in interpreting complex biomedical signals and images. This issue brings together a collection of research articles, reviews, and case studies that highlight the innovative integration of deep learning methodologies for analyzing physiological signals (such as EEG, ECG, and EMG) and medical images (including MRI, CT scans, X-rays, and etc.).
The content spans a broad spectrum, from theoretical frameworks and algorithm development to practical applications and case studies, providing insights into the current state-of-the-art and future directions in this rapidly evolving field. Key themes include, but are not limited to, the development of novel deep learning models for disease diagnosis and prognosis, enhancement of image quality and interpretation, real-time monitoring and analysis of biomedical signals, and personalized healthcare solutions.
Contributors to this issue showcase the significant impact of deep learning on improving diagnostic accuracy, enabling early detection of abnormalities, and facilitating personalized treatment plans. Furthermore, discussions extend to ethical considerations, data privacy, and the challenges of implementing AI technologies in clinical settings, offering a comprehensive overview of the landscape of deep learning applications in biomedical signal and image processing.
Through a blend of technical depth and accessibility, this special issue aims to inform and inspire researchers, clinicians, and industry professionals about the potential of deep learning to revolutionize healthcare, paving the way for more innovative, efficient, and personalized medical care.
For submission guidelines and details, visit: https://www.techscience.com/.../special.../biomedical-signal
Relevant answer
Answer
Dear Paulo Bolinhas, your enthusiasm for the topic is contagious, and we couldn't agree more that this special issue is a treasure trove of ideas and discoveries. We hope it will inspire further research and innovation in the field.
  • asked a question related to Image Processing
Question
7 answers
I am delighted to announce that, with endless effort and in cooperation with my brother Prof. Mostafa Elhosseini, we succeeded in wrapping up our special issue, entitled "Deep and Machine Learning for Image Processing: Medical and Non-medical Applications," with a nice Editorial paper that highlights the research innovations of the valued contributors and opens the way for future endeavors. It is worth mentioning that this special issue attracted more than 35 contributions, of which only 12 were published in the end. Please enjoy reading it, and a shout-out to my professional co-Editor, Prof. Mostafa Elhosseini, all the contributors, and the Electronics Editorial Office.
The link for the paper can be found here:
Relevant answer
Answer
Dear Mohamed Shehata, you call yourselves successful people, but you are unable to answer my simple questions regarding your role as an editor?!
Remember, now and forever, Mohamed: real scientists don't leave or block, but stay and answer their critics scientifically and logically. Also remember, Mohamed, that real research is product- and/or service-oriented, not just manuscripts published with the force of money that change nothing in our real world!
  • asked a question related to Image Processing
Question
2 answers
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber optic sensing technology in the application of intelligent photoelectric system
......
All accepted papers will be published in conference proceedings, and submitted to EI Compendex, Inspec and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
Relevant answer
Answer
Thank you
  • asked a question related to Image Processing
Question
1 answer
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing. Conference proceedings papers cannot be shorter than 4 pages.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
Relevant answer
Please let me know if anyone is interested.
  • asked a question related to Image Processing
Question
1 answer
2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) will be held in Kunming, China on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
Relevant answer
Answer
Useful thing
  • asked a question related to Image Processing
Question
2 answers
The 3rd International Conference on Optoelectronic Information and Functional Materials (OIFM 2024) will be held in Wuhan, China from April 5 to 7, 2024.
The annual Optoelectronic Information and Functional Materials conference (OIFM) offers delegates and members a forum to present and discuss the most recent research. Delegates and members will have numerous opportunities to join in discussions on these topics. Additionally, it offers fresh perspectives and brings together academics, researchers, engineers, and students from universities and businesses throughout the globe under one roof.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Optoelectronic information science
- Optoelectronics
- Optical communication and optical network
- Optical fiber communication and system
......
2. Information and Communication Engineering
- Communication and information system
- Wireless communication, data transmission
- Switching and broadband network
......
3. Materials science and Engineering
- New materials
- Optoelectronic functional materials and devices
- Bonding material
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: February 5, 2024
Registration Deadline: March 22, 2024
Final Paper Submission Date: March 29, 2024
Conference Dates: April 5-7, 2024
For More Details please visit:
Relevant answer
Answer
Dear Sergey Alexandrovich Shoydin, only English manuscripts are accepted at the conference.
  • asked a question related to Image Processing
Question
3 answers
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code named Test_2 is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. Thereafter, the saved images are used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model that is supposed to predict the mean value of the mentioned random vectors (Y). However, the RMSE of the model is far too high; in other words, the model does not train, despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
Relevant answer
Answer
Dear Renjith Vijayakumar Selvarani and Dear Qamar Ul Islam,
Many thanks for your notice.
  • asked a question related to Image Processing
Question
5 answers
I am working on lane line detection using lidar point clouds, with a sliding window to detect the lane lines. As lane lines have higher intensity values than asphalt, the intensity values can be used to differentiate lane-line points from low-intensity non-lane-line points. However, my lane detection suffers from noisy points, i.e., high-intensity non-lane-line points. I've tried intensity thresholding and statistical outlier removal based on intensity, but they don't seem to work, as I am dealing with some pretty noisy point clouds. Please suggest some non-AI-based methods I can use to get rid of the noisy points.
Relevant answer
Answer
Can you show some example images?
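While waiting for example images: one further classical, non-AI option is a local median-consistency test on intensity, which rejects isolated bright returns that a global threshold keeps. A minimal Python/SciPy sketch; the radius and thresholds are placeholders to tune to the sensor:

import numpy as np
from scipy.spatial import cKDTree

def median_intensity_filter(points, intensity, radius=0.3, min_neighbors=5, k_mad=3.0):
    # points: (N, 3) array; neighborhoods are searched in the ground plane
    tree = cKDTree(points[:, :2])
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points[:, :2]):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbors:
            continue                        # isolated point: likely noise
        local = intensity[idx]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-6   # robust spread
        keep[i] = abs(intensity[i] - med) <= k_mad * mad
    return keep

Because lane paint forms dense, consistently bright strips, its points agree with their local median, while sporadic bright returns on asphalt do not.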
  • asked a question related to Image Processing
Question
4 answers
In the rapidly evolving landscape of the Internet of Things (IoT), the integration of blockchain, machine learning, and natural language processing (NLP) holds promise for strengthening cybersecurity measures. This question explores the potential synergies among these technologies in detecting anomalies, ensuring data integrity, and fortifying the security of interconnected devices.
Relevant answer
Answer
Imagine we're talking about a superhero team-up in the world of tech, with blockchain, machine learning (ML), and natural language processing (NLP) joining forces to beef up cybersecurity in IoT environments.
First up, blockchain. It's like the trusty sidekick ensuring data integrity. By nature, it's transparent and tamper-proof. So, when you have a bunch of IoT devices communicating, blockchain can help keep that data exchange secure and verifiable. It's like having a digital ledger that says, "Yep, this data is legit and hasn't been messed with."
Then, enter machine learning. ML is the brains of the operation, constantly learning and adapting. It can analyze data patterns from IoT devices to spot anything unusual. Think of it as a detective that's always on the lookout for anomalies or suspicious activities.
And finally, there's NLP. It's a bit like the communicator of the group. In this context, NLP can be used to sift through tons of textual data from IoT devices or networks, helping to identify potential security threats or unusual patterns that might not be obvious at first glance.
Put them all together, and you've got a powerful team. Blockchain keeps the data trustworthy, ML hunts down anomalies, and NLP digs deeper into the data narrative. This combo can seriously level up cybersecurity in IoT, making it harder for bad actors to sneak in and cause havoc. Cool, right?
  • asked a question related to Image Processing
Question
1 answer
Seeking insights on optimizing CNNs to meet low-latency demands in real-time image processing scenarios. Interested in efficient model architectures or algorithmic enhancements.
Relevant answer
Answer
Here are several optimization strategies for Convolutional Neural Networks (CNNs) to achieve real-time image processing with stringent latency requirements:
1. Model Architecture Optimization:
  • Reduce Model Size: Employ depthwise separable convolutions to reduce parameters and computations. Utilize smaller-sized filters (e.g., 3x3 instead of 5x5). Reduce the number of filters in convolutional layers. Consider efficient model architectures like MobileNet, ShuffleNet, or EfficientNet.
  • Employ Depthwise Separable Convolutions: These split a standard convolution into two separate operations, significantly reducing computations and parameters.
  • Channel Pruning: Identify and remove less-important channels from convolutional layers to reduce model size without compromising accuracy.
2. Quantization:
  • Reduce Precision: Quantize weights and activations from 32-bit floating-point to lower precision formats (e.g., 8-bit integers) for faster computations and smaller model size.
3. Hardware Acceleration:
  • Utilize Specialized Hardware: Deploy CNNs on GPUs, TPUs, or specialized AI accelerators (e.g., Intel Movidius, NVIDIA Jetson) optimized for deep learning computations.
4. Software Optimization:
  • Efficient Libraries: Leverage highly optimized deep learning libraries like TensorFlow Lite, PyTorch Mobile, or OpenVINO for efficient model deployment on resource-constrained devices.
  • Kernel Fusion: Combine multiple computations into a single kernel for reduced memory access and improved performance.
5. Input Optimization:
  • Reduce Image Resolution: Process lower-resolution images to reduce computational load while ensuring acceptable accuracy.
6. Model Pruning:
  • Remove Unnecessary Parameters: Identify and eliminate redundant or less-significant parameters from the trained model to reduce its size and computational complexity.
7. Knowledge Distillation:
  • Transfer Knowledge: Train a smaller, faster model to mimic the behavior of a larger, more accurate model, benefiting from its knowledge while achieving real-time performance.
8. Early Exiting:
  • Terminate Early: Allow for early decision-making in the model, especially for applications with varying levels of confidence requirements. This can reduce computations for easier-to-classify inputs.
By carefully combining these techniques, developers can create CNN-based real-time image processing systems that meet stringent latency requirements while maintaining high accuracy.
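As a concrete illustration of point 1, here is a minimal PyTorch sketch of a depthwise separable convolution block, the MobileNet-style drop-in replacement for a dense 3x3 convolution; the layer sizes are arbitrary examples:

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # The depthwise conv filters each channel separately; the 1x1 pointwise
    # conv then mixes channels. For 3x3 kernels this cuts multiply-adds
    # roughly 8-9x compared with a standard convolution.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(32, 64)
y = block(torch.randn(1, 32, 224, 224))   # -> torch.Size([1, 64, 224, 224])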
  • asked a question related to Image Processing
Question
3 answers
This question blends various emerging technologies to spark discussion. It asks if sophisticated image recognition AI, trained on leaked bioinformatics data (e.g., genetic profiles), could identify vulnerabilities in medical devices connected to the Internet of Things (IoT). These vulnerabilities could then be exploited through "quantum-resistant backdoors" – hidden flaws that remain secure even against potential future advances in quantum computing. This scenario raises concerns for cybersecurity, ethical hacking practices, and the responsible development of both AI and medical technology.
Relevant answer
Answer
Combining image-trained neural networks, bioinformatics breaches, and quantum-resistant backdoors has major limitations.
Moving from image-trained neural networks to bioinformatics data requires significant domain transfer, which is not straightforward given the distinct nature of these data types and tasks.
Secure IoT medical devices are designed and deployed with robust security features in mind. A successful attack requires exploiting a specific vulnerability in the implementation of those security measures, not the capabilities of a neural network.
Deliberately inserting backdoors, let alone quantum-resistant ones, poses ethical and legal questions that go against the norms and standards of cybersecurity practitioners. Such actions would violate privacy rights at the federal level, breach ethical standards and codes of conduct, and carry severe legal consequences; and those are only the domestic ones, assuming the products stay in the US.
Quantum computers powerful enough to break current cryptographic systems are not yet available. Knowingly developing quantum-resistant backdoors anticipates a future scenario that today remains largely theoretical, neither proven nor true.
  • asked a question related to Image Processing
Question
4 answers
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Relevant answer
Answer
Great opportunity!
  • asked a question related to Image Processing
Question
4 answers
Today AI is emerging more rapidly than ever. When it comes to production, we have many arguments about how or where to add artificial intelligence, and even monitoring production performance is still done manually. Image-processing AI gathering data from live video needs a lot of processing power and high-quality infrastructure. What do you think about monitoring production performance: can we use image processing technology, and how can we make it more precise?
Relevant answer
Answer
At its core, AI image processing is the marriage of two cutting-edge fields: artificial intelligence (AI) and computer vision. It's the art and science of bestowing computers with the remarkable ability to understand, interpret, and manipulate visual data—much like the human visual system. Imagine an intricate dance between algorithms and pixels, where machines "see" images and glean insights that elude the human eye.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
2 answers
It would be very helpful for me to get answers to my questions above.
Relevant answer
Answer
MDPI is a publisher that allows you to publish in a very short time. Specifically, I suggest considering the journals 'Mathematics,' 'Applied Sciences,' and 'Journal of Imaging.' These publications are notable for their quick turnaround times. Personally, I have had the opportunity to publish in all three and have also served as a reviewer for articles in the first two journals, all of which have been positive experiences.
If you are interested in publishing in 'Mathematics' and 'Applied Sciences,' I would recommend exploring special issues related to imaging.
  • asked a question related to Image Processing
Question
1 answer
I am trying to get insight into the above-mentioned research paper, especially the filtering process used to remove grid artifacts. However, I find it difficult to understand correctly.
I would be much grateful if anyone could help me to clarify a few questions that I have.
My questions are as follows:
1) what are the pixel values of the Mean filter they used? they mention about using an improved Mean filter, but what is the improvement?
2) do they apply the Mean filter in the whole patch image (seems like it), or only in the grid signal region (characteristic peak range)?
3) what do they mean by (u1,v1) being Fmax value? does that mean that the center pixel of the Mean filter is replaced by this max value?
Thanks in advance!
Relevant answer
Answer
This is a Fourier-domain filtering technique. Note that the area zeroed out before the IFFT corresponds to the frequency components of the grid noise. This technique is used in medical imaging for mammography, CT, and MRI.
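A minimal NumPy sketch of this kind of notch filtering, assuming the characteristic peaks have already been located in the centered spectrum (the peak list and notch radius below are placeholders):

import numpy as np

def notch_filter(image, centers, radius=4):
    # Forward FFT, shifted so the zero frequency sits in the center
    F = np.fft.fftshift(np.fft.fft2(image))
    rr, cc = np.ogrid[:image.shape[0], :image.shape[1]]
    for (r0, c0) in centers:
        # Zero a small disk around each grid-frequency peak
        F[(rr - r0) ** 2 + (cc - c0) ** 2 <= radius ** 2] = 0
    # Inverse transform: the periodic grid pattern is suppressed
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))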
  • asked a question related to Image Processing
Question
5 answers
Hello,
I have the ImageJ image processing software and would like to calculate and plot the curvature of a beam using it. I searched online, and the suggestion is to download the PhotoBend plugin. Can someone suggest a solution for determining the curvature of a beam using image processing software?
Relevant answer
Answer
To optimize the bending curvature by controlling the scribing parameters—the depth, number, and interval of the scribed grooves, finite element analysis was conducted on the bending tests of scribed polyethylene terephthalate films. Moreover, the influences of the parameters on the stress/strain near the grooves were investigated. The maximum stress/strain and curvature generally increased with an increase in depth, whereas these values decreased with an increase in number and intervals.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
7 answers
Recently I have started using ChimeraX, version 1.1. Unfortunately, I could not find any options to export images at a resolution greater than 96 dpi. I have tried the steps written here (http://plato.cgl.ucsf.edu/pipermail/chimerax-users/2020-September/001508.html), but this did not work for me.
Is there any way to solve this issue? It will be a great help to me.
Relevant answer
Answer
Use the command line:
save filename [ format format-name ] [[ width w ] [ height h ] | pixelSize p ] [ supersample N ] [ quality M ] [ transparentBackground true | false ]
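For example, assuming you want roughly a 10 x 8 inch figure at 300 dpi, you can request the pixel dimensions directly and supersample to smooth edges:
save myfigure.png width 3000 height 2400 supersample 3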
  • asked a question related to Image Processing
Question
2 answers
Hello. I have a series of SEM images containing some nanorods and nanoparticles. I want to determine what percentage are nanorods and what percentage are nanoparticles, and to separate them by color. Does anyone know how to do this with MATLAB software?
Relevant answer
Answer
The answer provided by Qamar Ul Islam is obviously AI generated, but the proposed code would somehow work. If you need real help with the software I have experience doing similar things in Matlab, so you may contact me. :)
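For a rough idea of the approach (sketched here in Python with scikit-image; MATLAB's regionprops exposes the equivalent MajorAxisLength/MinorAxisLength properties), threshold the image, label the blobs, and classify each by aspect ratio. The Otsu threshold and the cutoff of 3.0 are illustrative assumptions to tune on real SEM images:

import numpy as np
from skimage import filters, io, measure

img = io.imread("sem_image.tif", as_gray=True)      # placeholder file name
binary = img > filters.threshold_otsu(img)          # global threshold
labels = measure.label(binary)
rods = particles = 0
for region in measure.regionprops(labels):
    if region.minor_axis_length == 0:
        continue
    aspect = region.major_axis_length / region.minor_axis_length
    if aspect > 3.0:
        rods += 1         # elongated blob: nanorod
    else:
        particles += 1    # roughly round blob: nanoparticle
total = max(rods + particles, 1)
print(f"nanorods: {100 * rods / total:.1f}%, nanoparticles: {100 * particles / total:.1f}%")

Coloring the two classes is then a matter of painting the corresponding labels into an RGB image.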
  • asked a question related to Image Processing
Question
1 answer
What is the best way to convert events (x, y, polarity, timestamp) obtained from an event camera via ROS into frames for a real-time application?
Or is there a way to deal with these events directly without conversion?
Relevant answer
Answer
Murana Awad Dealing with events from an event camera in real-time applications often requires specialized processing due to their asynchronous and continuous nature. Event cameras provide data in the form of pixel-level events, such as changes in brightness (polarity), and timestamps when these changes occur. To utilize this data effectively, you have several options:
  1. Frame Reconstruction: One common approach is to convert events into frames or images, making them compatible with traditional computer vision techniques. You can accumulate events over short time intervals (e.g., milliseconds) to reconstruct frames. Event data can be aggregated into intensity images (e.g., by counting events) or used to create event-driven frames. The choice depends on your specific application. You can use libraries like DVS128, jAER, or custom scripts for this.
  2. Direct Processing: Some real-time applications, especially those focused on object tracking or optical flow, can benefit from processing events directly without frame reconstruction. Various algorithms are available for direct event processing. The event data is often processed using techniques like event-driven optical flow or event-based algorithms for object tracking. Libraries like EVT-Stream can be used for direct processing.
  3. Sensor Fusion: In certain cases, event data can be fused with data from other sensors, such as traditional cameras or LIDAR, to enhance perception and enable more comprehensive real-time applications. Sensor fusion algorithms can combine the strengths of different sensor modalities.
  4. Deep Learning: Deep learning approaches, especially convolutional neural networks (CNNs), can be trained on event data directly, bypassing frame reconstruction. Event-based CNNs have shown promise in tasks like object recognition and tracking. Training neural networks on event data requires specialized datasets and architectures.
  5. ROS Integration: If you are working with Robot Operating System (ROS), you can utilize ROS packages and libraries specifically designed for event cameras. These packages simplify data acquisition and integration with other ROS components.
The choice of the best approach depends on your specific real-time application and its requirements. Consider factors such as the desired output, computational resources available, and the nature of the tasks you need to perform. It's often beneficial to start with existing libraries and frameworks tailored to event cameras, as they can save you significant development time. Additionally, experimenting with different approaches and assessing their performance is essential for optimizing your real-time event-based system.
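As a minimal sketch of option 1 (frame reconstruction), assuming the events arrive as an (N, 4) array of (x, y, polarity, timestamp) rows with time in seconds; the 10 ms window is an arbitrary choice to tune against your latency budget:

import numpy as np

def events_to_frames(events, height, width, dt=0.01):
    t0, t1 = events[:, 3].min(), events[:, 3].max()
    n_frames = max(int(np.ceil((t1 - t0) / dt)), 1)
    frames = np.zeros((n_frames, height, width), dtype=np.int16)
    idx = np.minimum(((events[:, 3] - t0) / dt).astype(int), n_frames - 1)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = np.where(events[:, 2] > 0, 1, -1)     # signed accumulation
    np.add.at(frames, (idx, y, x), pol)         # correct even for repeated pixels
    return frames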
  • asked a question related to Image Processing
Question
17 answers
Hi everyone
I'm facing a real problem when trying to export data results from ImageJ (Fiji) to Excel for later processing.
The problem is that I have to change the dots (.) and commas (,) manually, even after changing the properties in Excel (from , to .), so that the numbers are not counted as thousands: let's say I have 1,302 (one point three zero two); it is counted as 1302 (one thousand three hundred and two) when I transfer it to Excel.
Lately I found a nice plugin (Localized copy...) that can change the number format locally in ImageJ so it can be used easily by Excel.
Unfortunately, this plugin has some bugs: it can only copy one line of the huge dataset that I have, and only once (so I have to close and reopen the image again).
Has anyone else faced this problem? Can anyone please suggest other solutions?
Thanks in advance
Problem finally solved: I got the new version of the 'Localized copy' plugin from the owner, Mr Wolfgang Gross (not sure if I have permission to upload it here).
Relevant answer
Answer
Jonas Petersen cool! some answers after years XD
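For anyone hitting the same locale issue, one workaround outside ImageJ is to save the Results table as CSV and convert it with pandas, which writes genuine numbers so Excel's decimal-separator locale no longer matters; file names below are placeholders:

import pandas as pd

# decimal="." matches ImageJ's output; set decimal="," for comma-formatted files
df = pd.read_csv("Results.csv", decimal=".")
df.to_excel("Results.xlsx", index=False)   # requires the openpyxl package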
  • asked a question related to Image Processing
Question
5 answers
I have worked on image processing for image fusion and image watermarking.
At present I want to work on big data analysis and apply it to medical image processing.
Relevant answer
Answer
Dear Professor Mahendra Kumar,
Could you please have a look at my following article:
Best regards,
Nidhal
  • asked a question related to Image Processing
Question
5 answers
I have a brain MRI dataset which contains four image modalities: T1, T2, FLAIR and T1 contrast-enhanced. From this dataset, I want to segment the non-enhancing tumor core, peritumoral edema and GD-enhancing tumor. I'm confused about which modality I should use for each of these tumor sub-regions.
I will be thankful for any kind of help to clear up my confusion.
Relevant answer
Answer
The choice of MRI modality for the detection of brain tumors depends on the specific characteristics of the tumor and the clinical question being addressed. MRI is a versatile imaging modality that offers different sequences, each providing unique information about the brain and its pathologies. In the context of brain tumor detection, the following MRI sequences are commonly used:
  1. T1-weighted (T1W) Imaging: T1-weighted images provide good anatomical detail and are useful for visualizing brain structures. They can help identify the location and size of tumors based on their contrast with surrounding tissues. However, T1W images may not always be sufficient for characterizing tumor tissue types.
  2. T2-weighted (T2W) Imaging: T2-weighted images are sensitive to water content and are particularly useful for detecting edema and peritumoral changes. T2W images can help identify tumor margins and assess the tumor's relationship with the surrounding brain tissue.
  3. Fluid-Attenuated Inversion Recovery (FLAIR) Imaging: FLAIR imaging is a T2-weighted sequence that suppresses the signal from cerebrospinal fluid (CSF). This sequence is highly sensitive to edema and is often used to highlight peritumoral edema, making it valuable in tumor detection and characterization.
  4. T1-weighted with Gadolinium Contrast Enhancement (T1-Gd): Gadolinium-based contrast agents enhance the signal in regions with increased vascular permeability, such as tumor tissue. T1-Gd images can enhance the visibility of tumors, especially when there is a blood-brain barrier disruption.
  5. Diffusion-Weighted Imaging (DWI): DWI measures the diffusion of water molecules within tissues. DWI is valuable for evaluating tissue cellularity and can aid in differentiating between solid tumors and abscesses or cystic lesions.
  6. Perfusion-Weighted Imaging (PWI): PWI assesses cerebral blood flow and perfusion in brain tissue. It can be helpful in characterizing tumor vascularity and distinguishing between high-grade and low-grade tumors.
In clinical practice, a combination of these MRI sequences is often used to improve the accuracy of brain tumor detection and characterization. The initial evaluation usually includes T1W, T2W, and FLAIR sequences, which provide essential information about the tumor's location and extent. The addition of contrast-enhanced T1-weighted imaging (T1-Gd) can enhance the visibility of tumors and provide additional information about tumor vascularity.
  • asked a question related to Image Processing
Question
12 answers
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your ideas about new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
Convolutional Neural Networks (CNNs) have been highly successful in various image analysis tasks, including cancer detection. However, traditional CNNs treat all image regions equally when making predictions, which might not be optimal when certain regions contain critical information for cancer detection. To address this, incorporating an attention mechanism into CNNs can significantly improve performance.
Attention mechanisms allow the model to focus on the most informative parts of the image while suppressing less relevant regions. The attention mechanism can be applied to different levels of CNN architectures, such as at the pixel level, spatial level, or channel level. By paying more attention to relevant regions, the CNN with an attention mechanism can enhance the model's ability to detect subtle patterns and features associated with cancerous regions in medical images.
When using CNNs with attention mechanisms for cancer detection, it is crucial to have a sufficiently large dataset with labeled medical images to train the model effectively. Transfer learning with pre-trained models on large-scale image datasets can also be useful to leverage existing knowledge and adapt it to the cancer detection task with a smaller dataset.
Remember that implementing and training deep learning models for cancer detection requires expertise in both deep learning and medical image analysis. Additionally, obtaining annotated medical image datasets and ensuring proper validation and evaluation are essential for developing an accurate and robust cancer detection system. Collaborating with medical professionals and researchers is often necessary to ensure the clinical relevance and accuracy of the developed methods.
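As one concrete example of the channel-level attention described above, here is a minimal PyTorch sketch of a squeeze-and-excitation block, which can be inserted after any convolutional stage of a diagnostic CNN; the reduction factor of 16 is the common default rather than a tuned value:

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                         # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # excitation: reweight channels

feat = SEBlock(64)(torch.randn(1, 64, 56, 56))    # usage after a conv stage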
  • asked a question related to Image Processing
Question
2 answers
I have an image (most likely a spectrogram); it may be square or rectangular, I won't know until it is received. I need to downsample one axis, say the x axis. So if it is a spectrogram, I will be downsampling the frequencies (the time, y axis, would remain the same). I was thinking of doing a nearest-neighbor on the frequency components. Any idea how I can go about this? Any suggestions would be appreciated. Thanks...
Relevant answer
Answer
Mohammad Imam, thanks for the input; I'll look into this.
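For reference, nearest-neighbor downsampling of a single axis is a one-liner with scipy.ndimage.zoom; the sketch below assumes the array is laid out as (time, frequency) and shrinks only the frequency axis:

import numpy as np
from scipy.ndimage import zoom

def downsample_freq_axis(spec, factor):
    # order=0 is nearest-neighbor; order=1 would give linear interpolation
    return zoom(spec, (1.0, 1.0 / factor), order=0)

spec = np.random.rand(256, 512)          # (time, frequency) placeholder
small = downsample_freq_axis(spec, 4)    # -> shape (256, 128)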
  • asked a question related to Image Processing
Question
6 answers
I want to overlay a processed image onto an elevation view of an ETABS model using OpenCV and the ETABS API in C#!
Relevant answer
Answer
Professionally, nobody is teaching these tools in our industry yet, but if you are willing to do it, you must learn a programming language like Python, VBA, or MATLAB. Then you can start writing the API code for the specific task you want to do. The OAPI changed my life once I implemented it in my research studies. I hope this is useful.
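Since the overlay step itself is plain OpenCV, here is a minimal sketch of the blending in Python (OpenCvSharp exposes the same AddWeighted call from C#). "elevation.png" is a placeholder for a view exported or captured from ETABS; obtaining it through the ETABS API is a separate step not shown here:

import cv2

base = cv2.imread("elevation.png")        # hypothetical exported ETABS view
overlay = cv2.imread("processed.png")     # hypothetical processed image
overlay = cv2.resize(overlay, (base.shape[1], base.shape[0]))  # match sizes
blended = cv2.addWeighted(base, 0.6, overlay, 0.4, 0)          # 60/40 blend
cv2.imwrite("blended.png", blended)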
  • asked a question related to Image Processing
Question
6 answers
I'm eagerly waiting to get a chance in the research field of image processing.
Relevant answer
Answer
First, go deep into the finer details of image processing: its history, how it used to work, and how it works right now. Based on this, you can dig for aspects you can improve. Then, with that solution, you can write a research paper on it.
I would say that is at least a start.
  • asked a question related to Image Processing
Question
2 answers
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques;
  1. RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding and Masking)
  2. Image Segmentation(GLCM matrices, Wavelets(DWT))
  3. Classifications (SVM, CNN ,KNN, Random Forest)
  4. Results with Matlab Codings.
  • But I am confused about the final scores for the confusion matrices, so I need a technique to check which extraction method is best for this dataset.
  • My main target is the detection of normal and abnormal (diseased) leaves, with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
Relevant answer
Answer
There are two commonly used extraction techniques appropriate for rice plant disease detection: (1) color-based extraction and (2) texture-based extraction. The most appropriate technique depends on the specific requirements and characteristics of the dataset and the detection algorithm being used.
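A minimal Python/OpenCV sketch of step 1 of the pipeline above (HSV conversion, thresholding, and masking); the HSV bounds are illustrative guesses for green, healthy tissue and must be tuned on the actual dataset, since lesions typically fall outside this range:

import cv2
import numpy as np

bgr = cv2.imread("leaf.jpg")                     # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
lower = np.array([25, 40, 40])                   # assumed lower HSV bound (green)
upper = np.array([95, 255, 255])                 # assumed upper HSV bound
mask = cv2.inRange(hsv, lower, upper)            # thresholding
healthy = cv2.bitwise_and(bgr, bgr, mask=mask)   # masking
lesions = cv2.bitwise_and(bgr, bgr, mask=cv2.bitwise_not(mask))
cv2.imwrite("healthy_regions.png", healthy)
cv2.imwrite("candidate_lesions.png", lesions)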
  • asked a question related to Image Processing
Question
3 answers
Actually, I am working in this field. Sometimes I don't understand what I should do. If anyone can supervise me, I will be thankful.
I have a dataset of rice leaves for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques:
  1. RGB image acquisition & preprocessing (HSV conversion, thresholding and masking)
  2. Image segmentation (GLCM matrices, wavelets (DWT))
  3. Classification (SVM, CNN, KNN, Random Forest)
  4. Results with MATLAB code.
  • But I am confused about the final scores for the confusion matrices, so I need a technique to check which feature-extraction method is best for the dataset.
  • My main target is detecting normal and abnormal (diseased) leaves with labels.
The attached image is taken from a paper.
Relevant answer
Answer
Sure, I'd be happy to provide you with guidelines for your MATLAB project. Please reach out to me via email at erickkirui@kabarak.ac.ke, and I will promptly assist you with the necessary guidance for your project.
  • asked a question related to Image Processing
Question
3 answers
Greetings!
I want to create an artificial neural network in MATLAB R2015a for recognition of 8 classes of bacteria images.
Genuinely, I'm having a hard time with the feature-extraction part of the image-processing task in MATLAB. I am unsure which method to use to extract the features: one based on threshold pixel values from binarizing the images, or edge detection first followed by the generalized Hough transform to obtain the desired shape. Honestly, I don't know which approach to take with the methods I've mentioned.
For data splitting of the extracted features I plan to use cvpartition.
The ANN architectures I'm planning to use are:
1. feedforwardnet (backpropagation ANN with gradient descent, MSE error)
2. Cascade feedforward network
I'm also interested in using the Cascade-Correlation learning architecture.
Also, is there any information out there that explains the GUI that MATLAB shows when the neural network completes training? I want to learn more about how the performance window works, and about the regression plot, error histogram, and confusion matrix.
Thanks for your time!
Relevant answer
Answer
The ability of bacteria to recognize kin provides a means to form social groups. In turn these groups can lead to cooperative behaviors that surpass the ability of the individual. Kin recognition involves specific biochemical interactions between a receptor(s) and an identification molecule(s). To ensure that nonkin are excluded and kin are included, recognition specificity is critical and depends on the number of loci and polymorphisms involved.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
4 answers
If we acquire a tomography dataset, we can extract a lot of physical properties from it, including porosity and permeability. These properties are not directly measured in conventional experiments. Instead, they are calculated using different image-processing algorithms. To this end, is there any guideline on how to report such results in terms of significant digits?
Thanks.
Relevant answer
Answer
You bring up a good point. In the case where the tomography dataset is provided with a resolution in unit length, it may not be straightforward to estimate the measurement uncertainty of more complex properties such as permeability or porosity.
In this case, one approach is to use the resolution as a guide and estimate the measurement uncertainty based on the expected level of variation in the property. For example, if the resolution of the tomography dataset is 1 micrometer and the expected level of variation in the permeability or porosity is on the order of 10%, then a reasonable estimate for the measurement uncertainty might be on the order of 0.1 times the average value.
When reporting physical properties derived from tomography datasets, it is important to balance the need for accuracy and precision with the practical limitations of the measurement and the significance of the results. In general, it is recommended to report physical properties with the appropriate number of significant digits to convey the level of uncertainty and enable meaningful comparison with other results, but not to report more digits than necessary.
Ultimately, the appropriate number of significant digits to report will depend on the specific context and level of uncertainty associated with the measurement. If there is uncertainty about the appropriate number of significant digits to use, it may be helpful to consult with a subject matter expert or refer to relevant standards or guidelines in the field.
I hope this helps you.
Thank you
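As a small worked illustration of the rounding step, here is a hedged Python sketch (the helper and the numbers are hypothetical, not from the dataset under discussion):

from math import floor, log10

def round_to_sig(x, sig):
    # hypothetical helper: round x to `sig` significant digits
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

porosity = 0.23741        # value computed from the segmented tomogram
uncertainty = 0.02        # ~10 % estimated relative uncertainty
print(round_to_sig(porosity, 2))   # -> 0.24; quoting 0.23741 would overstate precision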
  • asked a question related to Image Processing
Question
3 answers
Hello
In image processing and image segmentation studies, are these values the same?
mIoU
IoU
DSC (Dice similarity coefficient)
F1 score
Can we convert them together?
Relevant answer
Answer
As far as I know, mIoU is just the mean IoU computed over a batch of images (or over the classes).
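On the conversion question: for a single binary mask, DSC and F1 are the same number, and Dice relates to IoU monotonically via Dice = 2*IoU/(1 + IoU) (equivalently IoU = Dice/(2 - Dice)), so per-image values are interconvertible; note, however, that a mean of Dice values cannot in general be recovered from a mean of IoU values. A minimal NumPy sketch:

import numpy as np

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

pred = np.random.rand(64, 64) > 0.5   # toy binary masks
gt = np.random.rand(64, 64) > 0.5
i, d = iou(pred, gt), dice(pred, gt)
assert np.isclose(d, 2 * i / (1 + i))  # Dice = 2*IoU/(1+IoU)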
  • asked a question related to Image Processing
Question
3 answers
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Relevant answer
Answer
Please read my paper
An Adaptive Filter to Pick up a Wiener Filter from the Error using MSE with and Without Noise
This is a system that is able to learn.
The paper is about signals and systems.
The topic is AI.
I think the two fields support each other.
Thank you
Ziad
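For readers who want a feel for such a "learning" filter without the paper at hand, here is a generic least-mean-squares (LMS) adaptive filter in Python; this is a standard textbook sketch, not necessarily the method of the paper above:

import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    # Adapt weights w so that w . u approximates the desired signal d,
    # descending the gradient of the instantaneous squared error.
    w = np.zeros(n_taps)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for i in range(n_taps, len(x)):
        u = x[i - n_taps:i][::-1]   # most recent samples first
        y[i] = w @ u                # filter output
        e[i] = d[i] - y[i]          # error drives the adaptation
        w += 2 * mu * e[i] * u      # LMS weight update
    return y, e, w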
  • asked a question related to Image Processing
Question
7 answers
Dear Colleagues, I started this discussion to collect data on the use of the Azure Kinect camera in research and industry. It is my intention to collect data about libraries, SDKs, scripts and links, which may be useful to make life easier for users and developers using this sensor.
Notes on installing on various operating systems and platforms (Windows, Linux, Jetson, ROS)
SDKs for programming
Tools for recording and data extraction (update 10/08/2023)
Tools for fruit sizing and yield prediction (update 19/09/2023)
  • AK_SW_BENCHMARKER. Python based GUI tool for fruit size estimation and weight prediction. (https://pypi.org/project/ak-sw-benchmarker/)
  • AK_VIDEO_ANALYSER. Python based GUI tool for fruit size estimation and weight prediction from videos recorded with the Azure Kinect DK sensor camera in Matroska format. It receives as input a set of videos to analyse and gives as result reports in CSV datasheet format with measures and weight predictions of each detected fruit. (https://pypi.org/project/ak-video-analyser/).
Demo videos to test the software (update 10/08/2023)
Papers, articles (update 09/05/2024)
Agricultural
Clinical applications/ health
Keywords:
#python #computer-vision #computer-vision-tools
#data-acquisition #object-detection #detection-and-simulation-algorithms
#camera #images #video #rgb-d #rgb-depth-image
#azure-kinect #azure-kinect-dk #azure-kinect-sdk
#fruit-sizing #apple-fruit-sizing #fruit-yield-trials #precision-fruticulture #yield-prediction #allometry
Relevant answer
Answer
Thank you Cristina, your work is interesting and helpful.
  • asked a question related to Image Processing
Question
1 answer
Hello,
I am working on a research project that involves detecting cavities and other dental problems in panoramic X-rays. I am looking for datasets that I can use to train my convolutional neural network. I have been searching the internet for such datasets, but I haven't found anything so far. Any suggestions are greatly appreciated! Thank you in advance!
Relevant answer
Answer
you may have a look at:
Good luck and
best regards
G.M.
  • asked a question related to Image Processing
Question
2 answers
I need to publish a research paper in an impact-factor journal with a higher acceptance rate and a faster review time.
Relevant answer
Answer
There are several fast publication journals that focus on image processing, including:
IEEE Transactions on Image Processing: This journal is published by the Institute of Electrical and Electronics Engineers (IEEE) and focuses on research related to image processing, including image enhancement, restoration, segmentation, and analysis. It typically takes around 3-6 months to get a decision on a submitted manuscript.
IEEE Signal Processing Letters: Another publication from IEEE, this journal focuses on research in signal processing, including image processing, audio processing, and speech processing. The journal aims to provide a rapid turnaround time for accepted manuscripts, with a typical review time of around 2-3 months.
Journal of Real-Time Image Processing: This Springer journal focuses on research related to real-time image and video processing, including algorithms, architectures, and systems. The journal has a fast publication process, with accepted papers published online within a few weeks of acceptance.
  • asked a question related to Image Processing
Question
3 answers
My protein levels appear to vary across cell types, layers, and subcellular localization (cytoplasm/nucleus) in the root tip of Arabidopsis (in the background of wild-type and mutant plants).
I wonder what my approach should be to compare differences in protein expression levels and localization between two genotypes.
I take Z-stacks on a confocal microscope; usually I make a maximum-intensity projection of the Z-stack and try to understand the differences. But since the differences are not only in intensities but also in cell types and layers, how should I choose the layers between two samples?
My concern is how to find the exact corresponding layers between two genotypes, as the root thickness is not always the same and some z-stacks have, for example, 55 slices while others have 60.
thanks!
Relevant answer
Answer
Hi, the answer provided by Prof. Janak Trivedi is pretty comprehensive; I agree with it. The ideal approach would be to capture an equal number of slices for each stack, but I guess some samples have the signal spread over a greater depth (axially), so you don't want to miss that signal. Also, you mentioned you make a maximum-intensity projection of the Z-stack; I suggest you also average the stack and make a montage of your stacks (ImageJ options), and then compare the intensity profiles. Additionally, check out this article:
Hope it helps.
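If you end up scripting the comparison, a small Python sketch with tifffile and NumPy for depth-independent projections of stacks with unequal slice counts (file names are placeholders):

import tifffile

stack_a = tifffile.imread("wildtype_roottip.tif")  # shape (z, y, x), e.g. 55 slices
stack_b = tifffile.imread("mutant_roottip.tif")    # e.g. 60 slices
# the mean projection is less sensitive to differing slice counts than a sum
proj_a = stack_a.mean(axis=0)
proj_b = stack_b.mean(axis=0)
# 1-D intensity profiles along the root axis, for side-by-side comparison
profile_a = proj_a.mean(axis=1)
profile_b = proj_b.mean(axis=1)

Keep the acquisition settings (laser power, gain, z-step) identical between genotypes; otherwise the profiles are not comparable.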
  • asked a question related to Image Processing
Question
3 answers
I am trying to open fMRI images on my PC, but (I think) no appropriate software is installed. Hence I am not able to open individual images on my PC.
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
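If a scripted route is acceptable, NIfTI-format fMRI files can also be opened with the Python package nibabel (dedicated viewers such as FSLeyes or MRIcron are common GUI alternatives); a minimal sketch with a placeholder file name:

import nibabel as nib
import matplotlib.pyplot as plt

img = nib.load("run1.nii.gz")   # placeholder path to a 4-D fMRI file
data = img.get_fdata()          # typically (x, y, z, time)
print(data.shape)
# show the middle axial slice of the first volume
plt.imshow(data[:, :, data.shape[2] // 2, 0].T, cmap="gray", origin="lower")
plt.show()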
  • asked a question related to Image Processing
Question
5 answers
I have a photo of bunches of walnut fruits in rows, and I want to develop a semi-automated ImageJ workflow to label them and create a new image from the edges of each selected ROI.
What I have done until now: segmenting the walnuts from the background with a suitable threshold, then selecting all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI, count them, and add them to the ROI manager. Finally, these ROIs must be cropped at their edges and a new image of each walnut saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
Relevant answer
Answer
Hi,
  1. duplicate item.
  2. fill the current ROI with max/min intensity color (or perhaps invert selection and delete everything else?)
  3. use segmentation to make an ROI for each of those blobs.
  4. add those ROIs to the manager.
  5. For more information about this subject, I suggest you see the links on the topic.
Best regards
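If scripting outside ImageJ is an option, the label-count-crop-save pipeline is a few lines of Python with scikit-image (an equivalent ImageJ route is Analyze Particles plus a crop-and-save macro); file names here are placeholders:

import imageio.v2 as imageio
from skimage.measure import label, regionprops

img = imageio.imread("walnuts.png")            # original photo
mask = imageio.imread("walnuts_mask.png") > 0  # your thresholded segmentation
labels = label(mask)                           # number each connected walnut blob
for region in regionprops(labels):
    r0, c0, r1, c1 = region.bbox               # edges of this walnut's ROI
    imageio.imwrite(f"walnut_{region.label}.png", img[r0:r1, c0:c1])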
  • asked a question related to Image Processing
Question
2 answers
Basically, I am interested in skin disease detection using image processing.
Kindly suggest a technology to be used and a research problem.
Relevant answer
Answer
I suggest you use deep neural network models for disease diagnosis.
This field is very interesting.
  • asked a question related to Image Processing
Question
6 answers
I'm currently doing research in image processing using tensors, and I found that many test images repeatedly appear across related literature. They include: Airplane, Baboon, Barbara, Facade, House, Lena, Peppers, Giant, Wasabi, etc. However, they are not referenced with a specific source. I found some of them from the SIPI dataset, but many others are missing. I'm wondering if there are "standards" for the selection of test images, and where can the standardized images be found. Thank you!
Relevant answer
Answer
Often, known datasets like COCO are used for testing because they are well standardized and balanced. I don't know what kind of research you are doing, but you can see popular datasets here: https://imerit.net/blog/22-free-image-datasets-for-computer-vision-all-pbm/
If this is not what you are looking for, then you can search on Roboflow or Kaggle.
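One licence-clean, pip-installable source of standard test images (camera, astronaut, coffee, and others; not the classic Lena/Baboon set, which mostly traces back to USC-SIPI) is scikit-image's data module:

from skimage import data

cam = data.camera()       # classic 512x512 greyscale test image
astro = data.astronaut()  # 512x512 RGB test image
print(cam.shape, astro.shape)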
  • asked a question related to Image Processing
Question
12 answers
I'm currently training an ML model that can estimate sex based on dimensions of the proximal femur from radiographs. I've taken x-ray images of ALL the samples in the osteological collection in Chiang Mai, left side only, which came to a total of 354 samples. I also took x-ray images of the right femur and a posterior-anterior view of the same samples (randomized, and only a selected few, n=94 in total) to test the difference dimension-wise. I have exhausted all the samples for training and validating the model (5-fold), which results in great sexing accuracy. So, I am wondering whether it is appropriate to test the models with right-femur and posterior-anterior-view radiographs, which will then be flipped to resemble left-femur x-ray images, given the limitations of our skeletal collection?
Relevant answer
It depends on whether the image-identification results of the software system are invariant with respect to rotation, scaling, and translation (and, since you are mirroring images, reflection).
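A quick empirical check along those lines is to mirror the right-side and PA radiographs and verify that the extracted dimensions and model outputs match those from true left-side images; a short sketch with OpenCV (placeholder file name):

import cv2

img = cv2.imread("right_femur.png", cv2.IMREAD_GRAYSCALE)
flipped = cv2.flip(img, 1)   # flipCode=1 mirrors horizontally (left-right)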
  • asked a question related to Image Processing
Question
9 answers
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
Relevant answer
Answer
Therefore, you have 20x20 = 400 control points. If you do georeferencing in QGIS, you can use all control points or some of them, e.g. every 5 km (16 points). During resampling, all pixels receive coordinates in the ground system.
If you do not do georeferencing (no resampling), then you can calculate the coordinates of unknown pixels by interpolation. Suppose a pixel size of a [m]; then in one km you have p = 1000/a pixels, and the first (x1, y1) and last (x2, y2) pixels of that kilometre have known coordinates. The slope angle between the first and last pixel is s = arctan[(x2 - x1)/(y2 - y1)], so a pixel at distance d from the first pixel has coordinates x = x1 + d·sin(s) and y = y1 + d·cos(s). You can do row or column interpolation, or both and take the average.
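The row (or column) interpolation above is one NumPy call per line, assuming the pixel columns with known coordinates and their ground coordinates are given (all numbers below are placeholders):

import numpy as np

ncols = 1000
cols_known = np.array([0, 499, 999])                 # pixel columns with known coords
x_known = np.array([300000.0, 300500.0, 301000.0])   # e.g. eastings in metres
y_known = np.array([4500000.0, 4500010.0, 4500020.0])
x_all = np.interp(np.arange(ncols), cols_known, x_known)
y_all = np.interp(np.arange(ncols), cols_known, y_known)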
  • asked a question related to Image Processing
Question
1 answer
Hi everyone. In the field of magnetometry there is a vast body of work on the identification of various ferromagnetic field conditions, but very little devoted to diamagnetic anomalies in datasets from both airborne and satellite sources. For my current application we're utilizing satellite-based magnetometry data and are already working on image-processing algorithms that can enhance the spatial resolution of the dataset for more localized ground-based analysis. However, we're having difficulty creating any form of machine-learning system that can identify the repelling forces of diamagnetic anomalies underground, primarily due to the weakness of the reversed field itself. I was just wondering if anyone had any sources relating to this kind of remote-sensing application, or any technical principles that we could apply to help jumpstart the project's development. Thanks for any and all information.
Relevant answer
Answer
Satellite magnetometers sometimes pass through regions of plasma, such as the terrestrial ionosphere, where the ionization is large enough that a portion of the original ambient field is excluded from the plasma. This reduction of the field inside the plasma region comes from the 'diamagnetic' effect of the charged particles in their helical trajectories around the magnetic field lines.
CNNs are a strong class of algorithms for image processing; they are presently the best algorithms we have for the automated processing of pictures. You could use them for your work.
  • asked a question related to Image Processing
Question
8 answers
Hello dear RG community.
I started working with PIV some time ago. It's been an excruciating time figuring out how to deal with the thing (even though I like PIV).
Another person I know spent about 2.5 months figuring out how to do smoke viz. And yet another person I know is desperately trying to figure out how to do LIF (with no success so far).
As a newcomer to the area, I can't emphasize enough how valuable any piece of help is.
I noticed there is not one nice forum covering everything related to flow visualization.
There are separate forums on PIV analysis and general image processing (let me take an opportunity here to express my sincere gratitude to Dr. Alex Liberzon for the OpenPIV Google group that he is actively maintaining). Dantec and LaVision tech support is nice indeed.
But, still, I feel like I want one big forum about absolutely anything related to flow vis: how to troubleshoot hardware, how to pick particles, best practices in image preprocessing, how to use commercial GUIs, how to do smoke vis, how to do LIF, refraction-index matching for flow vis in porous media, PIV in very-high-speed flows, shadowgraphy, schlieren, and so on.
Reading about theory of PIV and how to do it is one thing. But when it comes to obtaining images - oh, that can easily turn to a nightmare! I want a forum where we can share practical skills.
I'm thinking about creating a flow vis StackExchange website.
Area51 is the part of StackExchange where one can propose a StackExchange website. They have pretty strict rules for proposals. Proposals have to go through 3 stages of a life cycle before they are allowed to become full-blown StackExchange websites. The main criterion is how many people visit the proposed website and ask and answer questions.
Before a website is proposed, one needs to ensure there are people interested in the subject. Once the website has been proposed, one has 3 days to get at least 5 questions posted and answered, preferably by the people who had expressed their interest in the topic. If the requirement is fulfilled, the proposal is allowed to go on.
So I'm wondering: what does the dear RG community think? Are there people interested in the endeavor? Is there a "seeding community" of enthusiasts who are ready to post and answer at least 5 questions within the first 3 days?
If so, let me know in the comments, please. I will propose a community and post instructions on how to register in Area51, verify your email, and post and answer the questions.
Bear in mind that, since we have not only to post the questions but also to answer them, the "seeding community" should ideally include flow-vis experts.
Relevant answer
Answer
Our Flow Visualization StackExchange proposal is up and running!
We need 5 example questions and 5 users within the first 3 days, lest it be taken down. Those interested, please hurry up.
Note, StackExchange didn't give me specific instructions on how to register; it just gave me the link I have provided above. Go ahead and try it; if you experience any issues, please post your experience here.
  • asked a question related to Image Processing
Question
6 answers
How can I plot a '+' at the center of a circle after detecting the circle with the Hough transform?
I obtained the centers in the workspace as "centers [a, b]".
When I am plotting with this command
plot(centers ,'r+', 'MarkerSize', 3, 'LineWidth', 2);
then I get the '+' at a and b on the same axis.
Relevant answer
Answer
Danishtah Quamar centers = imfindcircles(A, radius) locates the circles in image A with radii approximately equal to radius. The result, centers, is a two-column matrix holding the (x, y) coordinates of the circles' centers in the image.
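Because centers is a two-column matrix, passing it to plot on its own draws each column against its row index, which is why the '+' markers land at a and b on the same axis. Pass the columns explicitly instead, completing the snippet from the question (assuming the image is displayed first):

imshow(A); hold on;
plot(centers(:,1), centers(:,2), 'r+', 'MarkerSize', 3, 'LineWidth', 2);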
  • asked a question related to Image Processing