Science topic
Image Processing - Science topic
All kinds of image processing approaches.
Questions related to Image Processing
How do you think artificial intelligence can affect medicine in the real world? There are many science-fiction dreams in this regard,
but what about real life in the next two to three decades?
Image Processing Algorithms, Quantum Computing.
Dear Researcher,
I hope this message finds you well. My professor and I are looking for a skilled and advanced programmer proficient in Python and MATLAB to join our research group. Our focus is on publishing high-quality, Q1 papers in the field of Artificial Intelligence-based Structural Health Monitoring in Civil Engineering.
The ideal candidate should have expertise in:
- Deep Learning and Machine Learning
- Signal and Image Processing
- Optimization Algorithms
- Coding and Programming with Python/MATLAB
If you are passionate about research, enjoy publishing, and have sufficient time to dedicate to our projects, we would be delighted to invite you to join us.
Please send your CV to hosein_saffaryosefi@alumni.iust.ac.ir .
Best regards,
Hossein Safar Yousefifard
School of Civil Engineering
Iran University of Science and Technology
The sample is a silicon wafer. As the illumination source approaches perpendicular incidence, the surface is masked by the reflection.
Looking for research collaboration on an edited book and other kinds of publications.
Domain: computer science, preferably image processing.
A prospective researcher interested in "a database for image processing of geotechnical issues such as slope stability and landslides during seismic motions"
I hope this text finds you well. I would like to find a valid database of images for a data-driven investigation of these phenomena and the associated risk. In this regard, would you mind recommending a reliable database?
Best Regards
[CFP] 2024 4th International Symposium on Artificial Intelligence and Big Data (AIBDF 2024) - December
AIBDF 2024 will be held in Ganzhou during December 27-29, 2024. The conference will focus on artificial intelligence and big data, discussing the key challenges and research directions in the development of this field, in order to promote the development and application of its theories and technologies in universities and enterprises, and to provide scholars, engineers and industry experts focused on this area with a favorable platform for exchanging new ideas and presenting research results.
Conference Link:
Topics of interest include, but are not limited to:
◕ Track 1: Artificial Intelligence
Natural language processing
Fuzzy logic
Signal and image processing
Speech and natural language processing
Learning computational theory
......
◕ Track 2: Big Data Technology
Decision support system
Data mining
Data visualization
Sensor network
Analog and digital signal processing
......
Important dates:
Full Paper Submission Date: December 23, 2024
Registration Deadline: December 23, 2024
Conference Dates: December 27-29, 2024
Submission Link:
Hi there. I am new to the field of sensor corrections, and I was wondering: given a dark-field and a flat-field image from a test, is it possible to derive a formula with offset and gain values that I can apply to any image from the sensor, in order to perform a relative radiometric correction?
From what I've read, the dark image gives me the offset value, right? And the relative gain value is (flat - dark)/mean(flat - dark). Is this right?
Considering that these are test images, if I want to apply this formula to real images from the sensor, I'm guessing that I have to obtain default values from the gain and offset matrices, perhaps by taking their means?
I'm not sure if this is the way to go. I've also seen that, for the dark field, I could build a histogram and choose the value at the highest peak as my offset, but I'm not sure how that would work for the flat image.
Any help is appreciated, as I am a little lost about the best steps here.
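The usual flat-field model treats the dark frame as the per-pixel offset and (flat - dark), normalised to unit mean, as the relative gain, so each raw image is corrected as (raw - offset)/gain. A minimal numpy sketch with synthetic calibration frames (the frame values are stand-ins for your averaged dark and flat test images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration frames (assumptions for illustration only;
# replace with your averaged dark-field and flat-field test images).
dark = 10.0 + rng.normal(0, 0.1, (64, 64))                 # sensor offset + noise
flat = dark + 100.0 * (0.8 + 0.4 * rng.random((64, 64)))   # uneven pixel response

# Per-pixel offset and unit-mean relative gain.
offset = dark
gain = (flat - dark) / np.mean(flat - dark)

def correct(raw):
    """Apply the relative radiometric correction to a raw sensor image."""
    return (raw - offset) / gain

# Sanity check: correcting the flat itself yields a uniform image.
corrected_flat = correct(flat)
print(round(float(np.std(corrected_flat) / np.mean(corrected_flat)), 3))   # 0.0
```

In practice the dark and flat should each be an average of many frames to suppress shot noise; the per-pixel matrices themselves are then the "default values" applied to every subsequent image, rather than scalar means.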
Call for papers: The 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024)
Call for papers: 2024 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024) will be held on November 1-3, 2024 in Wuhan, China as a hybrid meeting.
Conference website (English): https://ais.cn/u/IJfQVv
Important Information
Conference website (submission link): https://ais.cn/u/IJfQVv
Dates: November 1-3, 2024
Location: Wuhan, China
Indexing: EI Compendex, Scopus
Conference Details
The 4th International Conference on Computer, Internet of Things and Control Engineering (CITCE 2024) will be held on November 1-3, 2024 in Wuhan, China. Focusing on the latest research in computer science, the Internet of Things and control engineering, CITCE 2024 will provide an international platform for experts, professors, scholars and engineers from universities, research institutes and enterprises at home and abroad to share professional experience, expand professional networks and present research results, with the aim of advancing the theory and technology of this field in universities and industry, and of helping participants establish business or research contacts and find global partners for their future careers. Experts and scholars from universities and research institutes, industry professionals, and other interested parties are cordially invited to attend.
Topics
1. Computer Science: algorithms, image processing, computer vision, machine learning, intelligent data analysis and data mining, mathematical and computer modelling, artificial intelligence, neural networks, system security, robotics and automation, information systems, high-performance computing, network communication, human-computer interaction, computer modelling, etc.;
2. Internet of Things: AI technology and applications, CPS technology and intelligent information systems, multi-network resource sharing in IoT environments, IoT architectures, cloud computing and big data in the IoT, edge intelligence and blockchain, smart cities, IoT wearable devices, smart homes, IoT and sensor technology, etc.;
3. Control Engineering: systems and automation, electrical systems, process control, industrial control technology, computer science and engineering, electronic engineering, software engineering, control technology, sensor networks, mobile internet, wireless networks and systems, computer control systems, adaptive and optimal control, intelligent control, electrical automation, intelligent control and intelligent systems, intelligent management and decision-making, distributed control, drive motors and control technology, power battery management and maintenance technology, micro-sensors and actuators, mobile robots, etc.
** Other related topics are also welcome.
Publication
Submissions are rigorously reviewed by 2-3 organizing-committee experts. Accepted papers will be published in the conference proceedings in the ACM International Conference Proceeding Series (ACM ICPS, ISBN: 979-8-4007-1184-8) and submitted to the ACM Digital Library, EI Compendex, and Scopus for indexing. Indexing of this conference's proceedings has so far been stable.
Participation
1. Authors: one author of each accepted paper may attend free of charge;
2. Attendance types
(1) Oral presentation: a 10-15 minute presentation in English with slides;
* Open to all submitting authors and self-funded attendees. Prepare a 10-15 minute English talk on your paper or the research behind it, with your own slides (no template required), and submit it before the conference as instructed by email; contact the conference secretary for details.
(2) Poster: a self-made electronic poster displayed at the conference;
* Open to all submitting authors and self-funded attendees. Format: English, A1 size, portrait, self-made. Send the poster image to the conference email IC_CITCE@163.com with the subject and file name in the format: Poster + name + order number.
(3) Attendance only: non-submitting authors attending as audience.
* Open to self-funded attendees only
(4) Submission and registration link: https://ais.cn/u/IJfQVv
For my master's thesis, I am working on mobile laser scanner data, and my task is the extraction of power-line poles. My dataset is about 10 kilometers long and contains approximately 60 power-line poles. Fortunately, my algorithm has correctly extracted 58 of the poles; the other two were not completely captured by the mobile laser scanner system, which prevented the proposed algorithm from extracting them. The proposed algorithm is fully automatic and does not need many parameters for the extraction.
My main question is: what does my implementation need in order to be published in a good ISI journal?
Call for papers (hosted by the China Computing Power Conference): The 2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI 2024) will be held on June 21-23, 2024 in Zhengzhou, China. The conference will focus on algorithms, high-performance computing, artificial intelligence and related research areas.
Call for papers - 2024 International Conference on Algorithms, High Performance Computing and Artificial Intelligence
Important Information
Conference website (submission link): https://ais.cn/u/AvAzUf
Dates: June 21-23, 2024
Location: Zhengzhou, China
Acceptance/rejection notification: about one week after submission
Indexing: EI Compendex, Scopus
Topics
1. Algorithms (algorithm analysis / approximation algorithms / computability theory / evolutionary algorithms / genetic algorithms / numerical analysis / online algorithms / quantum algorithms / randomized algorithms / sorting algorithms / algorithmic graph theory and combinatorics / computational geometry / computational techniques and applications, etc.)
2. Artificial Intelligence (natural language processing / knowledge representation / intelligent search / machine learning / perception / pattern recognition / logic programming / soft computing / management of imprecision and uncertainty / artificial life / neural networks / complex systems / genetic algorithms / computer vision, etc.)
3. High-Performance Computing (network computing technology / HPC software and tool development / computer system evaluation / cloud computing systems / mobile computing systems / peer-to-peer computing / grid and cluster computing / service and internet computing / utility computing, etc.)
4. Image Processing (image recognition / image detection networks / robot vision / clustering / image digitization / image enhancement and restoration / image data coding / image segmentation / analog image processing / digital image processing / image description / optical image processing / digital signal processing / image transmission, etc.)
5. Other related topics
Publication
1. Submissions are rigorously reviewed by 2-3 organizing-committee experts. Accepted papers will be published by SPIE - The International Society for Optical Engineering (ISSN: 0277-786X) and, after publication, submitted to EI Compendex and Scopus for indexing.
* The previous three editions of AHPCAI have all been EI-indexed, stably and quickly.
2. This edition is also an open call for a session of the 2024 China Computing Power Conference; selected high-quality papers will be published by IEEE and, after publication, submitted by the publisher to EI Compendex and Scopus for indexing.
Participation
1. Authors: one author of each accepted paper may attend free of charge;
2. Keynote speakers: apply to give a keynote, subject to organizing-committee approval;
3. Oral presentations: apply for a 10-minute oral presentation;
4. Posters: apply for a poster display (A1 size, color print);
5. Audience: attend without submitting a paper; presentations and displays may also be requested.
6. Registration: https://ais.cn/u/AvAzUf
Dear RG-Community,
I have been using Agisoft Metashape for UAV imagery processing for quite a while now. Also a while ago, I stumbled upon the Micasense GitHub repository and saw that individual radiometric correction procedures are recommended there (e.g., vignetting and row gradient removal -> https://micasense.github.io/imageprocessing/MicaSense%20Image%20Processing%20Tutorial%201.html). Now I was checking which of those radiometric corrections are also performed during processing in Agisoft Metashape. In the manual (Professional version) I could only find that vignetting is mentioned.
Does anyone know how to learn more about the detailed processing in Agisoft Metashape, or, even better, how to perform a thorough radiometric image correction that removes any radiometric bias without the risk that it collides with the Agisoft Metashape processing?
Thanks for your help,
Rene
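For the corrections Metashape's manual does not document, one option is to apply them to the raw images yourself before (or instead of) relying on the built-in handling. A minimal sketch of a radial vignetting correction, assuming a simple even-order polynomial falloff model; note that MicaSense cameras store their own per-band vignette polynomial in the image metadata, which should be preferred over these illustrative coefficients when available:

```python
import numpy as np

def devignette(img, k):
    """Remove radial vignetting with a polynomial falloff model
    V(r) = 1 + k1*r^2 + k2*r^4, where r is the normalised distance
    from the image centre; corrected = raw / V. The model form and
    the coefficients k are assumptions for illustration.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    vignette = 1.0 + k[0] * r2 + k[1] * r2 ** 2
    return img / vignette

# Synthetic check: a flat scene darkened by the same model is restored.
flat = np.full((40, 60), 100.0)
k = (-0.3, -0.1)
h, w = flat.shape
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
observed = flat * (1.0 + k[0] * r2 + k[1] * r2 ** 2)
print(float(np.ptp(devignette(observed, k))))   # ~0: uniform again
```

If you pre-correct the images this way, the safest path is to disable the corresponding step in Metashape (or feed it already-corrected TIFFs) so the same effect is not removed twice.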
How does thermal image processing work in the agriculture sector?
I want to analyze some images (nearly 1000) in a loop, extracting HSV and RGB statistics. I masked those images in ImageJ (binary masks). I tried to explore them in R, but all the results came out NA. I also checked some of those images separately in R to determine whether they were correctly masked, and the values were matrices of 0,0,0 and 1,1,1. Still, the result is NA. I used a chatbot to generate and analyze code. Can anyone suggest codes and packages?
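All-NA results in such a loop often mean the mask never matches any pixels (or an image is read with an unexpected shape), so it helps to guard the empty-mask case explicitly. A sketch of the per-image computation, here in Python with numpy (the same logic ports directly to R):

```python
import colorsys
import numpy as np

def masked_color_stats(rgb, mask):
    """Mean R, G, B inside the mask, plus the HSV of that mean colour.

    rgb  : (H, W, 3) float array in [0, 1]
    mask : (H, W) boolean array (True = pixel of interest)
    Returns None for an empty mask -- the usual cause of all-NA
    results when statistics are taken over zero pixels.
    """
    if not mask.any():
        return None
    r, g, b = (rgb[..., c][mask].mean() for c in range(3))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {"R": r, "G": g, "B": b, "H": h, "S": s, "V": v}

# Synthetic example: a red square on black, mask covering the square.
img = np.zeros((32, 32, 3))
img[8:24, 8:24, 0] = 1.0
mask = img[..., 0] > 0.5
print(masked_color_stats(img, mask))
```

Also check that the binary mask and the colour image are aligned in size and that the mask is actually boolean (0/255 masks from ImageJ must be thresholded first), since mismatches there silently produce empty selections.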
IEEE 2024 4th International Symposium on Computer Technology and Information Science (ISCTIS 2024) will be held during July 12-14, 2024 in Xi’an, China.
Conference Website: https://ais.cn/u/Urm6Vn
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Computer Engineering and Technology
Computer Vision & VR
Multimedia & Human-computer Interaction
Image Processing & Understanding
PDE for Image Processing
Video compression & Streaming
Statistic Learning & Pattern Recognition
......
2. Information Science
Digital Signal Processing (DSP)
Advanced Adaptive Signal Processing
Optical communication technology
Communication and information system
Physical Electronics and Nanotechnology
Wireless communication technology
......
All accepted papers of ISCTIS 2024 will be published in the conference proceedings by IEEE and submitted to IEEE Xplore, EI Compendex, and Scopus for indexing.
Important Dates:
Full Paper Submission Date: June 20, 2024
Registration Deadline: June 25, 2024
Final Paper Submission Date: June 26, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
I have a video of the Brownian motion of microbes under a microscope. From this video, I need to calculate the angular velocity of a particular prominent cell. From my understanding, I need to identify some points within the cell, track their locations at successive time frames, and thus get the velocity by dividing the distance traveled by each point by the elapsed time.
Now my question is: given the quality of the video I have, what is the best way to track points within it? The video quality is such that manual identification of points at different time snapshots is not possible.
A snapshot of the video at a particular time is attached as an image.
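With low-quality footage, a simple non-learning approach is to track a small patch around each chosen point by normalised cross-correlation between successive frames; the angular velocity then follows from the change in each point's bearing about the cell centre. A minimal numpy sketch (patch size and search radius are assumptions to tune to your frame rate and noise):

```python
import numpy as np

def track_patch(prev, curr, pt, half=5, search=8):
    """Track one point between two grayscale frames by normalised
    cross-correlation of a (2*half+1)-sized patch over a +/-search
    pixel window. Returns the best-matching (row, col) in `curr`.
    """
    r, c = pt
    tpl = prev[r - half:r + half + 1, c - half:c + half + 1]
    tpl = tpl - tpl.mean()
    best, best_score = pt, -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            win = curr[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if win.shape != tpl.shape:
                continue            # window ran off the frame edge
            w = win - win.mean()
            denom = np.linalg.norm(tpl) * np.linalg.norm(w)
            if denom == 0:
                continue            # flat window, correlation undefined
            score = float((tpl * w).sum() / denom)
            if score > best_score:
                best, best_score = (rr, cc), score
    return best

# Synthetic check: a bright blob shifted by (3, -2) pixels between frames.
f0 = np.zeros((64, 64)); f0[30:35, 30:35] = 1.0
f1 = np.zeros((64, 64)); f1[33:38, 28:33] = 1.0
print(track_patch(f0, f1, (32, 32)))   # expected near (35, 30)
```

For rotation, track two or more points on the cell, compute each point's angle atan2(y - yc, x - xc) about the cell centroid per frame, and divide the unwrapped angle difference by the frame interval.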
Hi, I am working on domain adaptation for emotion detection using data of Hollywood actors and actresses, and I want to adapt it to pictures of Pakistani actors and actresses. Is there any data available online, or does anyone have some to share? It's urgent, as I have a research project to complete.
2024 4th International Conference on Computer, Remote Sensing and Aerospace (CRSA 2024) will be held at Osaka, Japan on July 5-7, 2024.
Conference Website: https://ais.cn/u/MJVjiu
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Algorithms
Image Processing
Data processing
Data Mining
Computer Vision
Computer Aided Design
......
2. Remote Sensing
Optical Remote Sensing
Microwave Remote Sensing
Remote Sensing Information Engineering
Geographic Information System
Global Navigation Satellite System
......
3. Aeroacoustics
Aeroelasticity and structural dynamics
Aerothermodynamics
Airworthiness
Autonomy
Mechanisms
......
All accepted papers will be published in the Conference Proceedings, and submitted to EI Compendex, Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Registration Deadline: May 31, 2024
Conference Date: July 5-7, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
Call for Papers
The CMC-Computers, Materials & Continua new special issue “Emerging Trends and Applications of Deep Learning for Biomedical Signal and Image Processing” is open for submission now.
Submission Deadline: 31 March 2025
Guest Editors
- Prof. Batyrkhan Omarov, Al-Farabi Kazakh National University, Kazakhstan
- Prof. Aigerim Altayeva, International Information Technology University, Kazakhstan
- Prof. Bakhytzhan Omarov, International University of Tourism and Hospitality, Kazakhstan
Summary
In this special issue, we delve into the cutting-edge advancements and transformative applications of deep learning techniques within the realms of biomedical engineering and healthcare. Deep learning, a subset of artificial intelligence, has emerged as a groundbreaking tool, offering unparalleled capabilities in interpreting complex biomedical signals and images. This issue brings together a collection of research articles, reviews, and case studies that highlight the innovative integration of deep learning methodologies for analyzing physiological signals (such as EEG, ECG, and EMG) and medical images (including MRI, CT scans, and X-rays).
The content spans a broad spectrum, from theoretical frameworks and algorithm development to practical applications and case studies, providing insights into the current state-of-the-art and future directions in this rapidly evolving field. Key themes include, but are not limited to, the development of novel deep learning models for disease diagnosis and prognosis, enhancement of image quality and interpretation, real-time monitoring and analysis of biomedical signals, and personalized healthcare solutions.
Contributors to this issue showcase the significant impact of deep learning on improving diagnostic accuracy, enabling early detection of abnormalities, and facilitating personalized treatment plans. Furthermore, discussions extend to ethical considerations, data privacy, and the challenges of implementing AI technologies in clinical settings, offering a comprehensive overview of the landscape of deep learning applications in biomedical signal and image processing.
Through a blend of technical depth and accessibility, this special issue aims to inform and inspire researchers, clinicians, and industry professionals about the potential of deep learning to revolutionize healthcare, paving the way for more innovative, efficient, and personalized medical care.
For submission guidelines and details, visit: https://www.techscience.com/.../special.../biomedical-signal
I am delighted to announce that, with endless effort and in cooperation with my brother Prof. Mostafa Elhosseini, we have wrapped up our special issue, "Deep and Machine Learning for Image Processing: Medical and Non-medical Applications," with an editorial paper that highlights the research innovations of the valued contributors and opens the way for future endeavors. It is worth mentioning that this special issue attracted more than 35 submissions, of which only 12 were published in the end. Please enjoy reading it, and a shout-out to my professional co-editor, Prof. Mostafa Elhosseini, all the contributors, and the Electronics Editorial Office.
The link for the paper can be found here:
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber optic sensing technology in the application of intelligent photoelectric system
......
All accepted papers will be published in conference proceedings, and submitted to EI Compendex, Inspec and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing. Conference proceedings papers must be at least 4 pages long.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) in Kunming, China on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on submission system/registration can get priority review and feedback
The 3rd International Conference on Optoelectronic Information and Functional Materials (OIFM 2024) will be held in Wuhan, China from April 5 to 7, 2024.
The annual Optoelectronic Information and Functional Materials conference (OIFM) offers delegates and members a forum to present and discuss the most recent research. Delegates and members will have numerous opportunities to join in discussions on these topics. Additionally, it offers fresh perspectives and brings together academics, researchers, engineers, and students from universities and businesses throughout the globe under one roof.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Optoelectronic information science
- Optoelectronics
- Optical communication and optical network
- Optical fiber communication and system
......
2. Information and Communication Engineering
- Communication and information system
- Wireless communication, data transmission
- Switching and broadband network
......
3. Materials science and Engineering
- New materials
- Optoelectronic functional materials and devices
- Bonding material
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: February 5, 2024
Registration Deadline: March 22, 2024
Final Paper Submission Date: March 29, 2024
Conference Dates: April 5-7, 2024
For More Details please visit:
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code, named Test_2, is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model that is supposed to predict the mean value of the mentioned random vectors (Y). However, the RMSE of the model is too high; in other words, the model does not train well despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
I am working on lane-line detection using lidar point clouds, with a sliding window to detect the lane lines. As lane lines have higher intensity values than asphalt, the intensity values can be used to separate lane-line points from low-intensity non-lane-line points. However, my lane detection suffers from noisy points, i.e. high-intensity non-lane-line points. I've tried intensity thresholding and statistical outlier removal based on intensity, but they don't seem to work, as I am dealing with some pretty noisy point clouds. Please suggest some non-AI-based methods that I can use to get rid of the noisy points.
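One non-AI option beyond per-point thresholding is to exploit spatial coherence: lane markings form dense linear clusters of bright returns, while spurious bright asphalt points tend to be isolated. A minimal numpy sketch of a neighbour-support filter on the thresholded candidates (the radius and neighbour count are assumptions to tune to your point density):

```python
import numpy as np

def filter_lane_candidates(xy, intensity, i_thresh, radius=0.5, min_neighbors=3):
    """Keep high-intensity points only if enough other high-intensity
    points lie within `radius` of them. Lane markings are dense along a
    line, so isolated bright returns are rejected. O(n^2) for clarity;
    use a KD-tree for large clouds.
    """
    cand = np.flatnonzero(intensity >= i_thresh)
    pts = xy[cand]
    keep = []
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        if (d < radius).sum() - 1 >= min_neighbors:   # -1: exclude the point itself
            keep.append(cand[i])
    return np.asarray(keep, dtype=int)

# Synthetic check: 20 bright points along a line + 3 isolated bright outliers.
line = np.column_stack([np.linspace(0, 5, 20), np.zeros(20)])
noise = np.array([[2.0, 3.0], [4.0, -2.5], [0.5, 4.0]])
xy = np.vstack([line, noise])
inten = np.full(len(xy), 200.0)
kept = filter_lane_candidates(xy, inten, i_thresh=100, radius=0.6, min_neighbors=2)
print(len(kept))   # the isolated outliers are dropped
```

This combines naturally with your sliding window: filter first, then fit; a RANSAC line/polyline fit inside each window adds a further non-learning layer of robustness against the survivors.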
In the rapidly evolving landscape of the Internet of Things (IoT), the integration of blockchain, machine learning, and natural language processing (NLP) holds promise for strengthening cybersecurity measures. This question explores the potential synergies among these technologies in detecting anomalies, ensuring data integrity, and fortifying the security of interconnected devices.
Seeking insights on optimizing CNNs to meet low-latency demands in real-time image processing scenarios. Interested in efficient model architectures or algorithmic enhancements.
This question blends various emerging technologies to spark discussion. It asks if sophisticated image recognition AI, trained on leaked bioinformatics data (e.g., genetic profiles), could identify vulnerabilities in medical devices connected to the Internet of Things (IoT). These vulnerabilities could then be exploited through "quantum-resistant backdoors" – hidden flaws that remain secure even against potential future advances in quantum computing. This scenario raises concerns for cybersecurity, ethical hacking practices, and the responsible development of both AI and medical technology.
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Today AI is emerging more rapidly than ever. When it comes to production, there are many arguments about how and where to add artificial intelligence; even monitoring production performance is still done manually. Image-processing AI that works on live video data needs a lot of processing power and high-quality infrastructure. What do you think about monitoring production performance: can we use image-processing technology, and how can we make it more precise?
It would be very helpful for me to hear your thoughts on these questions.
I am trying to gain insight into the above-mentioned research paper, especially the filtering process used to remove grid artifacts. However, I find it difficult to understand correctly.
I would be very grateful if anyone could help me clarify a few questions.
My questions are as follows:
1) What are the pixel values of the mean filter they used? They mention using an improved mean filter, but what is the improvement?
2) Do they apply the mean filter to the whole patch image (it seems so), or only in the grid-signal region (the characteristic peak range)?
3) What do they mean by (u1,v1) being the Fmax value? Does that mean that the center pixel of the mean filter is replaced by this maximum value?
Thanks in advance!
Hello,
I have the ImageJ image-processing software and would like to calculate and plot the curvature of a beam using it. I searched online, and the suggestion is to download the PhotoBend plugin. Can someone suggest a solution for determining the curvature of a beam using image-processing software?
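If PhotoBend does not fit your workflow, a software-agnostic route is to digitise the beam centre line in ImageJ (e.g. with the multi-point tool), export the (x, y) coordinates, and compute the curvature from a smooth fit. A sketch, assuming a polynomial fit whose degree is an illustrative choice:

```python
import numpy as np

def curvature_from_points(x, y, deg=4):
    """Curvature profile of a deflected beam from digitised centre-line
    points: fit y(x) with a polynomial, then apply
    kappa = y'' / (1 + y'^2)^(3/2). The degree is an assumption; pick
    the lowest that follows the deflected shape without oscillating.
    """
    p = np.poly1d(np.polyfit(x, y, deg))
    dp, ddp = p.deriv(1), p.deriv(2)
    return ddp(x) / (1 + dp(x) ** 2) ** 1.5

# Synthetic check: points on a circular arc of radius 100 -> kappa ~ 1/100.
R = 100.0
x = np.linspace(-30, 30, 61)
y = R - np.sqrt(R**2 - x**2)
kappa = curvature_from_points(x, y)
print(round(float(np.mean(np.abs(kappa))), 4))
```

Remember to convert pixel coordinates to physical units with ImageJ's scale calibration first, since curvature has units of 1/length.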
Recently I started using ChimeraX, version 1.1. Unfortunately, I could not find an option to export images at a resolution greater than 96 dpi. I have tried the steps described here (http://plato.cgl.ucsf.edu/pipermail/chimerax-users/2020-September/001508.html), but they did not work for me.
Is there any way to solve this issue? It will be a great help to me.
Hello. I have a series of SEM images containing some nanorods and nanoparticles. I want to determine what percentage are nanorods and what percentage are nanoparticles, and to separate them by color. Does anyone know how to do this with MATLAB?
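The usual pipeline is: binarise, label the connected components, classify each object by its elongation, then recolour by class. The question asks about MATLAB (where bwlabel/regionprops do the same job); a sketch in Python with scipy, where the elongation threshold separating rods from particles is an assumption to tune for your images:

```python
import numpy as np
from scipy import ndimage

def classify_shapes(binary, elong_thresh=3.0):
    """Label a binarised SEM image and split the objects into 'rods'
    and 'particles' by elongation, i.e. the square root of the ratio of
    the eigenvalues of the pixel-coordinate covariance (principal axis
    lengths). Returns (percent_rods, percent_particles, labels, is_rod).
    """
    lbl, n = ndimage.label(binary)
    is_rod = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(lbl == i)
        cov = np.cov(np.vstack([ys, xs]))
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        elong = np.sqrt(evals[0] / max(evals[1], 1e-9))
        is_rod.append(elong >= elong_thresh)
    n_rod = sum(is_rod)
    return 100.0 * n_rod / n, 100.0 * (n - n_rod) / n, lbl, is_rod

# Synthetic check: one 20x2 rod and one 5x5 roundish particle.
img = np.zeros((40, 40), dtype=bool)
img[5:25, 5:7] = True     # elongated rod
img[30:35, 30:35] = True  # compact particle
pr, pp, _, _ = classify_shapes(img)
print(pr, pp)   # 50.0 50.0
```

For colour separation, paint each labelled region according to its class flag (e.g. rods red, particles blue) into an RGB overlay; in MATLAB, label2rgb plus the regionprops 'MajorAxisLength'/'MinorAxisLength' ratio gives the equivalent result.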
What is the best way to convert events (x, y, polarity, timestamp) obtained from an event camera via ROS into frames for a real-time application?
Or is there a way to process these events directly, without conversion?
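A common real-time approach is to accumulate all events within a fixed time window into a 2-D frame, signed by polarity, which standard image-processing code then consumes; frame-free alternatives include time surfaces and per-event filters. A minimal numpy sketch of the accumulation (the window length and array layout are assumptions; adapt to your ROS message fields):

```python
import numpy as np

def events_to_frame(events, shape, t0, t1):
    """Accumulate events with timestamps in [t0, t1) into a signed
    frame: +1 per positive-polarity event, -1 per negative one.

    events : (N, 4) array with columns (x, y, polarity, t)
    shape  : (height, width) of the sensor
    """
    frame = np.zeros(shape, dtype=np.int32)
    x, y, p, t = events.T
    sel = (t >= t0) & (t < t1)
    np.add.at(frame, (y[sel].astype(int), x[sel].astype(int)),
              np.where(p[sel] > 0, 1, -1))
    return frame

# Synthetic check: three events, two inside the 33 ms window.
ev = np.array([[2, 3, 1, 0.010],
               [2, 3, 0, 0.012],
               [5, 5, 1, 0.050]])
f = events_to_frame(ev, (8, 8), 0.0, 0.033)
print(f[3, 2], f[5, 5])   # 0 (a +1 and a -1 cancel), 0 (outside window)
```

Choosing the window by a fixed event count rather than a fixed duration keeps the frames equally "full" when the scene activity varies, which often matters more for real-time robustness than the exact window length.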
Hi everyone
I'm facing a real problem when trying to export data results from ImageJ (Fiji) to Excel for later processing.
The problem is that I have to change the dots (.) and commas (,) manually, even after changing the properties in Excel (from , to .), so that the numbers are not counted as thousands: if I have 1,302 (one point three zero two), it is read as 1302 (one thousand three hundred and two) when I transfer it to Excel.
Recently I found a nice plugin (Localized copy...) that can change the number format locally in ImageJ so the data can be used easily by Excel.
Unfortunately, this plugin has some bugs: it can only copy one line of the huge dataset that I have, and only once (so I have to close and reopen the image again).
Has anyone else faced this problem? Can anyone please suggest another solution?
Thanks in advance
Problem finally solved: I got the new version of the 'Localized copy' plugin from its owner, Mr. Wolfgang Gross (I'm not sure I have permission to upload it here).
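A locale-independent alternative is to normalise the decimal marks in code before Excel ever sees the numbers. A stdlib Python sketch that parses an exported ImageJ results table whose values use comma decimals (the tab-separated layout is an assumption; adjust the delimiter to match your export):

```python
import csv
import io

def _to_float(value, decimal):
    """Convert one cell to float after normalising the decimal mark;
    leave non-numeric cells (labels, headers) unchanged."""
    try:
        return float(value.replace(decimal, "."))
    except ValueError:
        return value

def read_imagej_table(text, decimal=",", delimiter="\t"):
    """Parse an exported ImageJ results table whose numbers use
    `decimal` as the decimal mark, independent of the system locale."""
    return [[_to_float(v, decimal) for v in rec]
            for rec in csv.reader(io.StringIO(text), delimiter=delimiter)]

# Example table with comma decimal marks, as a comma-locale system saves it.
txt = "Label\tArea\tMean\nroi1\t1,302\t45,7\nroi2\t2,5\t48,9\n"
table = read_imagej_table(txt)
print(table[1])   # ['roi1', 1.302, 45.7]
```

Writing the converted rows back out with the csv module (or pasting them via the clipboard) gives Excel unambiguous dot-decimal numbers regardless of the regional settings.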
I have worked on image processing for image fusion and image watermarking.
At present I want to work on big data analysis and apply it to medical image processing.
I have a brain MRI dataset which contains four image modalities: T1, T2, FLAIR and T1 contrast-enhanced. From this dataset, I want to segment the non-enhancing tumor core, peritumoral edema and GD-enhancing tumor. I'm confused about which modality I should use for each of the mentioned regions.
I will be thankful for any kind of help in clearing up my confusion.
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what is your idea of a new image processing and deep learning method for cancer detection?
Thank you in advance for participating in this discussion.
I have an image (most likely a spectrogram); it may be square or rectangular, but I won't know until it is received. I need to downsample one axis, say the x axis. So if it is a spectrogram, I will be downsampling the frequencies (the time axis, y, would remain the same). I was thinking of doing a nearest-neighbor selection of the frequency components. Any idea how I can go about this? Any suggestions would be appreciated. Thanks.
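Nearest-neighbour downsampling of a single axis is just index selection along that axis, which leaves the other axis untouched regardless of whether the image turns out square or rectangular. A minimal numpy sketch:

```python
import numpy as np

def downsample_axis(img, factor, axis=1):
    """Downsample one axis of a 2-D array by nearest-neighbour
    selection: keep every `factor`-th sample along `axis` (e.g. reduce
    the frequency bins of a spectrogram while keeping every time row).
    """
    idx = np.arange(0, img.shape[axis], factor)
    return np.take(img, idx, axis=axis)

# 6 time rows x 8 frequency columns -> keep every 2nd frequency bin.
spec = np.arange(48).reshape(6, 8)
small = downsample_axis(spec, 2, axis=1)
print(small.shape)   # (6, 4)
```

If the discarded bins carry energy you care about, averaging each pair of adjacent bins (or low-pass filtering along that axis first) avoids the aliasing that pure nearest-neighbour selection can introduce.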
I want to overlay a processed image onto an elevation view of an ETABS model using OpenCV and the ETABS API in C#.
I'm eagerly waiting for a chance in the image-processing research field.
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques;
- RGB image acquisition & preprocessing (HSV conversion, thresholding and masking)
- Image segmentation and feature extraction (GLCM matrices, wavelets (DWT))
- Classification (SVM, CNN, KNN, Random Forest)
- Results with MATLAB code.
- But I am confused about the final scores of the confusion matrices, so I need a technique to check which extraction method is better for the dataset.
- My main target is the detection of normal and abnormal (diseased) leaves, with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
Actually, I am working in these fields. Sometimes I don't understand what I should do. If anyone could supervise me, I would be thankful.
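One simple way to check which feature-extraction method suits the dataset is to score each feature set with the same cross-validated classifier and compare the accuracies. A minimal numpy sketch using a nearest-centroid classifier (the classifier and fold count are placeholder choices; the identical comparison works with SVM/KNN and cvpartition in MATLAB):

```python
import numpy as np

def cv_accuracy(features, labels, k=5, seed=0):
    """k-fold cross-validated accuracy of a nearest-centroid classifier,
    as a quick, model-agnostic score for comparing feature-extraction
    methods (e.g. GLCM vs. DWT features) on the same labelled images.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        classes = np.unique(labels[train])
        cents = np.array([features[train][labels[train] == c].mean(axis=0)
                          for c in classes])
        d = np.linalg.norm(features[f][:, None] - cents[None], axis=2)
        pred = classes[d.argmin(axis=1)]
        accs.append((pred == labels[f]).mean())
    return float(np.mean(accs))

# Synthetic check: separable features should score higher than random ones.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
good = rng.normal(y[:, None] * 3.0, 1.0, (100, 4))   # informative features
bad = rng.normal(0.0, 1.0, (100, 4))                 # uninformative features
print(cv_accuracy(good, y) > cv_accuracy(bad, y))    # True
```

Running the same folds over each candidate feature set keeps the comparison fair; the set with the higher cross-validated score (not the raw confusion matrix of a single split) is the one to prefer.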
The attached image is taken from a paper.
Greetings!
I want to create an artificial neural network in MATLAB r2015a for the recognition of 8 classes of bacteria images.
Honestly, I'm having a hard time with the feature-extraction part of the image-processing task in MATLAB. I am unsure which method to use to extract the features: a method based on threshold pixel values in the binarization of the images, or edge detection first and then a generalized Hough transform to obtain the desired shape. I honestly don't know which of these approaches to take.
For data splitting of the extracted features I will use cvpartition.
The ANN architectures I'm planning to use are:
1. feedforwardnet (backpropagation ANN with gradient descent, MSE error)
2. Cascade feedforward network
I am also interested in using the cascade-correlation learning architecture.
Also, is there any information out there that explains the MATLAB GUI once the neural network completes training? I want to learn more about how the performance window works, the regression plot, the error histogram and the confusion matrix.
Thanks for your time!
If we acquire a tomography dataset, we can extract a lot of physical properties from it, including porosity and permeability. These properties are not directly measured by conventional experiments; instead, they are calculated using different image processing algorithms. To this end, is there any guideline on how to report such results in terms of significant digits?
Thanks.
Hello
In image processing and image segmentation studies, are these values the same?
mIoU
IoU
DSC (Dice similarity coefficient)
F1 score
Can we convert between them?
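IoU and DSC measure the same overlap and convert exactly into each other, and DSC equals the F1 score for binary masks (both are 2TP/(2TP + FP + FN)); mIoU, however, is the per-class average of IoU, so in general it cannot be recovered from an average Dice. The conversions:

```python
def dice_from_iou(iou):
    """DSC (= F1 for binary masks) from IoU. Both count the same
    overlap: IoU = TP/(TP+FP+FN), Dice = 2TP/(2TP+FP+FN), so each is a
    monotonic transform of the other."""
    return 2 * iou / (1 + iou)

def iou_from_dice(dice):
    """Inverse conversion: IoU from DSC/F1."""
    return dice / (2 - dice)

# A perfect overlap and a half overlap:
print(dice_from_iou(1.0), round(dice_from_iou(0.5), 4))   # 1.0 0.6667
print(round(iou_from_dice(2 / 3), 4))                      # 0.5
```

Because averaging does not commute with these nonlinear maps, convert per image (or per class) first and average afterwards; a mean Dice pushed through `iou_from_dice` is not the mIoU.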
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Dear Colleagues, I started this discussion to collect data on the use of the Azure Kinect camera in research and industry. It is my intention to collect data about libraries, SDKs, scripts and links, which may be useful to make life easier for users and developers using this sensor.
Notes on installing on various operating systems and platforms (Windows, Linux, Jetson, ROS)
- Azure Kinect camera setup (automated scripts for Linux). https://github.com/juancarlosmiranda/azure_kinect_notes
- Azure Kinect ROS Driver. https://github.com/microsoft/Azure_Kinect_ROS_Driver
SDKs for programming
- Microsoft SDK C/C++. https://learn.microsoft.com/en-us/azure/kinect-dk/sensor-sdk-download
- Azure Kinect Body Tracking SDK. https://learn.microsoft.com/en-us/azure/kinect-dk/body-sdk-download
- Github Azure Kinect SDK. https://github.com/microsoft/Azure-Kinect-Sensor-SDK
- KinZ an Azure Kinect toolkit for Python and Matlab.
- pyk4a - a simple and pythonic wrapper in Python 3 for the Azure-Kinect-Sensor-SDK. https://github.com/etiennedub/pyk4a
Tools for recording and data extraction (update 10/08/2023)
- Azure Kinect DK recorder. https://learn.microsoft.com/en-us/azure/kinect-dk/azure-kinect-recorder
- Azure Kinect Viewer. https://learn.microsoft.com/en-us/azure/kinect-dk/azure-kinect-viewer
- AK_SM_RECORDER. A simple GUI recorder based on Python to manage Azure Kinect camera devices in a standalone mode. (https://pypi.org/project/ak-sm-recorder/)
- AK_ACQS is a software solution for data acquisition in fruit orchards using a sensor system mounted on a terrestrial vehicle. It allows the coordination of computers and sensors by sending remote commands via a GUI. At the same time, it adds an abstraction layer on top of each sensor's library stack, facilitating its integration. This software solution is supported by a local area network (LAN), which connects computers and sensors from different manufacturers (cameras of different technologies, GNSS receiver) for in-field fruit yield testing. (https://github.com/GRAP-UdL-AT/ak_acquisition_system)
- AK_FRAEX is a desktop tool created for post-processing tasks after field acquisition. It enables the extraction of information from videos recorded in MKV format with the Azure Kinect camera. Through a GUI, the user can configure initial parameters to extract frames and automatically create the necessary metadata for a set of images. (https://pypi.org/project/ak-frame-extractor/)
Tools for fruit sizing and yield prediction (update 19/09/2023)
- AK_SW_BENCHMARKER. Python based GUI tool for fruit size estimation and weight prediction. (https://pypi.org/project/ak-sw-benchmarker/)
- AK_VIDEO_ANALYSER. Python based GUI tool for fruit size estimation and weight prediction from videos recorded with the Azure Kinect DK sensor camera in Matroska format. It receives as input a set of videos to analyse and produces reports in CSV datasheet format with measurements and weight predictions for each detected fruit. (https://pypi.org/project/ak-video-analyser/).
Demo videos to test the software (update 10/08/2023)
- AK_FRAEX - Azure Kinect Frame Extractor demo videos. https://doi.org/10.5281/zenodo.6968103
- AK_FRAEX - Azure Kinect Frame Extractor demo videos (updated with BGRA32 videos for 3D point cloud extraction). https://doi.org/10.5281/zenodo.8232445
Papers, articles (update 09/05/2024)
Agricultural
- AKFruitData: A dual software application for Azure Kinect cameras to acquire and extract informative data in yield tests performed in fruit orchard environments. [https://www.sciencedirect.com/science/article/pii/S2352711022001492]
- AKFruitYield: Modular benchmarking and video analysis software for Azure Kinect cameras for fruit size and fruit yield estimation in apple orchards. [https://www.sciencedirect.com/science/article/pii/S2352711023002443]
- Assessing automatic data processing algorithms for RGB-D cameras to predict fruit size and weight in apples. [https://www.sciencedirect.com/science/article/pii/S0168169923006907]
Clinical applications/ health
- Experimental Procedure for the Metrological Characterization of Time-of-Flight Cameras for Human Body 3D Measurements. [ ]
- Hand tracking for clinical applications: validation of the Google MediaPipe Hand (GMH) and the depth-enhanced GMH-D frameworks. [ ]
Keywords:
#python #computer-vision #computer-vision-tools
#data-acquisition #object-detection #detection-and-simulation-algorithms
#camera #images #video #rgb-d #rgb-depth-image
#azure-kinect #azure-kinect-dk #azure-kinect-sdk
#fruit-sizing #apple-fruit-sizing #fruit-yield-trials #precision-fruticulture #yield-prediction #allometry
Hello,
I am working on a research project that involves detecting cavities and other dental problems in panoramic X-rays. I am looking for datasets that I can use to train my convolutional neural network. I have been searching the internet for such datasets, but I haven't found anything so far. Any suggestions are greatly appreciated! Thank you in advance!
I need to publish a research paper in an impact-factor journal with a high acceptance rate and fast review time.
My protein levels appear to vary across cell types, layers, and subcellular localization (cytoplasm/nucleus) in the Arabidopsis root tip (in wild-type and mutant backgrounds).
I wonder what my approach should be to compare differences in protein expression levels and localization between the two genotypes.
I take Z-stacks on a confocal microscope; usually I make a maximum intensity projection of the Z-stack and try to assess the differences. But since the differences are not only in intensities but also in cell types and layers, how should I choose matching layers between two samples?
My concern is how to identify the corresponding layers between the two genotypes, as root thickness is not always the same: some Z-stacks have, for example, 55 slices while others have 60.
thanks!
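One note on the maximum intensity projection step: a MIP is just a voxel-wise max over the slices, so it discards exactly the layer information being asked about. Comparing sub-stacks selected by fractional depth may be more informative when stacks have different slice counts. A minimal sketch, using plain Python lists as a stand-in for a Z-stack (function names are mine, and the fractional-depth matching assumes the stacks span comparable physical depths):

```python
def max_intensity_projection(stack):
    """stack: list of 2D slices (lists of rows). Returns the pixel-wise max."""
    return [
        [max(sl[r][c] for sl in stack) for c in range(len(stack[0][0]))]
        for r in range(len(stack[0]))
    ]

def normalized_substack(stack, start_frac, end_frac):
    """Select slices by fractional depth, so stacks of e.g. 55 vs 60 slices
    can be compared over the same relative depth range."""
    n = len(stack)
    return stack[int(start_frac * n):int(end_frac * n)]

# toy 2-slice, 2x2 stack
stack = [[[1, 2], [3, 4]], [[5, 0], [1, 9]]]
mip = max_intensity_projection(stack)  # [[5, 2], [3, 9]]
```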
I am trying to open fMRI images on my PC, but (I think) no appropriate software is installed. Hence I am not able to open individual images.
I have a photo of bunches of walnut fruits in rows, and I want to develop a semi-automated ImageJ workflow to label them and create a new image from the edges of each selected ROI.
What I have done so far: segmented the walnuts from the background with a suitable threshold, then selected all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI, count them, and add them to the ROI Manager. Finally, these ROIs must be cropped at their edges, and a new image of each walnut saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
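In ImageJ itself, Analyze > Analyze Particles with "Add to Manager" will typically label and count the regions, and duplicating over each ROI's bounding box gives the per-walnut crops. The underlying idea is connected-component labeling; as a language-agnostic sketch, here is a stdlib-only Python version (4-connectivity, toy binary mask, all names mine):

```python
from collections import deque

def label_components(mask):
    """Label 4-connected foreground regions of a binary mask.
    Returns (labels, boxes); boxes[k] = (min_row, min_col, max_row, max_col)
    for label k+1."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not labels[r][c]:
                lab = len(boxes) + 1
                q = deque([(r, c)])
                labels[r][c] = lab
                r0, c0, r1, c1 = r, c, r, c
                while q:
                    y, x = q.popleft()
                    r0, c0 = min(r0, y), min(c0, x)
                    r1, c1 = max(r1, y), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = lab
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return labels, boxes

# two separate "walnuts" in a 4x5 mask
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
labels, boxes = label_components(mask)
# crop each component at its bounding box, one image per walnut
crops = [[row[c0:c1 + 1] for row in mask[r0:r1 + 1]]
         for r0, c0, r1, c1 in boxes]
```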
Basically, I am interested in skin disease detection using image processing.
Kindly suggest a technology to use and a research problem.
I'm currently doing research in image processing using tensors, and I found that many test images repeatedly appear across related literature. They include: Airplane, Baboon, Barbara, Facade, House, Lena, Peppers, Giant, Wasabi, etc. However, they are not referenced with a specific source. I found some of them from the SIPI dataset, but many others are missing. I'm wondering if there are "standards" for the selection of test images, and where can the standardized images be found. Thank you!
I’m currently training a ML model that can estimate sex based on dimensions of proximal femur from radiographs. I’ve taken x-ray images from ALL of the samples in the osteological collection in Chiang Mai, left side only, which came to a total of 354 samples. I also took x-ray photos of the right femur and posterior-anterior view of the same samples (randomized, and only selective few n=94 in total) to test the difference, dimension wise. I have exhausted all the samples for training the model and validating (5-fold), which results in great accuracy of sexing. So, I am wondering whether it is appropriate to test the models with right-femur and posterior-anterior view radiographs, which will then be flipped to resemble left femur x-ray images, given the limitations of our skeletal collection?
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
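Assuming the image is axis-aligned with the coordinate grid, each pixel's coordinate follows from one known reference pixel plus the pixel size (the simple, rotation-free case of an affine geotransform; with rotation you would fit all six affine parameters from several known pixels). A sketch, with hypothetical values:

```python
def pixel_to_coord(row, col, ref_row, ref_col, ref_x, ref_y,
                   px_size_x, px_size_y):
    """Map pixel (row, col) to map coordinates given one reference pixel.
    Assumes a north-up image: y decreases as the row index increases."""
    x = ref_x + (col - ref_col) * px_size_x
    y = ref_y - (row - ref_row) * px_size_y
    return x, y

# hypothetical example: pixel (0, 0) lies at (500000, 4000000), 10 m pixels
x, y = pixel_to_coord(3, 7, 0, 0, 500000.0, 4000000.0, 10.0, 10.0)
```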
Hi everyone. In the field of magnetometry there is a vast body of work on identifying various ferromagnetic field conditions, but very little devoted to diamagnetic anomalies in datasets from both airborne and satellite sources. For our current application we're utilizing satellite-based magnetometry data and are already working on image processing algorithms that can enhance the spatial resolution of the dataset for more localized ground-based analysis. However, we're having difficulty creating any form of machine learning system that can identify the repelling forces of underground diamagnetic anomalies, primarily due to the weakness of the reversed field itself. I was wondering if anyone had any sources on this kind of remote sensing application, or any technical principles we could apply to help jumpstart the project's development. Thanks for any and all information.
Hello dear RG community.
I started working with PIV some time ago. It's been an excruciating time figuring out how to deal with the thing (even though I like PIV).
Another person I know spent about 2.5 months figuring out how to do smoke viz. And yet another person I know is desperately trying to figure out how to do LIF (with no success so far).
As a newcomer to the area, I can't emphasize enough how valuable any piece of help is.
I noticed there is no single forum covering everything related to flow visualization.
There are separate forums on PIV analysis and general image processing (let me take an opportunity here to express my sincere gratitude to Dr. Alex Liberzon for the OpenPIV Google group that he is actively maintaining). Dantec and LaVision tech support is nice indeed.
But, still, I feel like I want one big forum about absolutely anything related to flow vis: how to troubleshoot hardware, how to pick particles, best practices in image preprocessing, how to use commercial GUIs, how to do smoke vis, how to do LIF, refraction index matching for flow vis in porous media, PIV in very high speed flows, shadowgraphy, schlieren, and so on.
Reading about the theory of PIV and how to do it is one thing. But when it comes to actually obtaining images, that can easily turn into a nightmare! I want a forum where we can share practical skills.
I'm thinking about creating a flow vis StackExchange website.
Area51 is the part of StackExchange where one can propose a new StackExchange website. They have pretty strict rules for proposals: a proposal has to go through 3 stages of a life cycle before it is allowed to become a full-blown StackExchange website. The main criterion is how many people visit the proposed website and ask and answer questions.
Before a website is proposed, one needs to ensure there are people interested in the subject. Once the website has been proposed, one has 3 days to get at least 5 questions posted and answered, preferably by the people who had expressed their interest in the topic. If the requirement is fulfilled, the proposal is allowed to go on.
Thus, I'm wondering: what does the dear RG community think? Are there people interested in the endeavor? Is there a "seeding community" of enthusiasts ready to post and answer at least 5 questions within the first 3 days?
If so, please let me know in the comments. I will propose a community and post instructions on how to register in Area51, verify your email, and post and answer the questions.
Bear in mind that since we not only have to post the questions but also answer them, the "seeding community" should ideally include flow vis experts.
How can I plot a '+' at the center of a circle after detecting the circle with the Hough transform?
I obtained the center in the workspace as centers = [a, b].
When I plot with this command
plot(centers, 'r+', 'MarkerSize', 3, 'LineWidth', 2);
I get the '+' at a and b on the same axis instead of at the center point.