Data Science - Science topic
Data science combines the power of computer science and applications, modeling, statistics, engineering, economics, and analytics.
Whereas a traditional data analyst may look only at data from a single source, such as a single measurement result, data scientists will most likely explore and examine data from multiple disparate sources.
According to IBM, "the data scientist will sift through all incoming data with the goal of discovering a previously hidden insight, which in turn can provide a competitive advantage or address a pressing business problem. A data scientist does not simply collect and report on data, but also looks at it from many angles, determines what it means, then recommends ways to apply the data."
Data Science has grown in importance with Big Data and will be used to extract value from the Cloud for businesses across domains.
Questions related to Data Science
I have a B.Sc. degree in Civil Engineering, an M.Sc. degree in Road and Transport Engineering, and another M.Sc. degree in Project Management Analysis and Evaluation. As a researcher and lecturer at a university, I have been doing research focused on pavement materials such as concrete, subgrade material stabilization, driver behavior, and risks related to construction projects. Lately I have become more interested in sustainable infrastructure and environmental research, and I am planning to apply for a Ph.D. scholarship; to that end, I am learning the concepts of Data Science, Machine Learning, and AI with Python. Since sustainable infrastructure is one of the hottest topics nowadays, I would be grateful if anyone could recommend and assist me with topics focused on this specific area.
Thank you!
Modernizing civil engineering education involves incorporating new technologies, teaching methodologies, and industry practices to equip students with the necessary skills and knowledge to meet the challenges of the future.
Here are some key strategies to modernize civil engineering education:
- Update Curriculum: Regularly review and update the curriculum to include emerging technologies and trends in civil engineering. Introduce courses on topics like sustainable design, renewable energy, smart infrastructure, and digital construction.
- Incorporate Digital Tools: Integrate computer-aided design (CAD), Building Information Modeling (BIM), and other software tools into the curriculum to familiarize students with modern engineering workflows and industry standards.
- Hands-on Learning: Emphasize practical, hands-on experiences in addition to theoretical knowledge. Incorporate real-world projects and case studies to give students a taste of actual engineering challenges.
- Interdisciplinary Approach: Promote collaboration with other engineering disciplines and fields like architecture, environmental science, and data science. Encourage students to work in cross-functional teams to solve complex problems.
- Sustainability Focus: Highlight sustainable practices throughout the curriculum. Encourage students to think about environmental impact, life cycle assessments, and green infrastructure solutions.
- Industry Partnerships: Establish strong partnerships with industry professionals and companies. Invite guest speakers, organize workshops, and facilitate internships to expose students to the latest industry practices.
- Research and Innovation: Encourage faculty and students to engage in research and innovation. Support projects that address real-world challenges and have the potential for practical implementation.
- Online Learning: Utilize online platforms and digital resources to provide flexible learning options. This could include recorded lectures, virtual labs, and interactive simulations.
- Soft Skills Development: Emphasize the development of soft skills like communication, teamwork, leadership, and problem-solving, which are vital for success in the modern engineering workplace.
- Diversity and Inclusion: Foster an inclusive learning environment that welcomes individuals from diverse backgrounds, cultures, and perspectives. Encourage diversity in the engineering workforce.
- Ethics and Social Responsibility: Integrate ethical considerations and social responsibility principles into the curriculum, helping students understand the impact of engineering decisions on society and the environment.
- Continuing Education and Lifelong Learning: Encourage a culture of continuous learning among both students and faculty. Offer professional development opportunities for faculty to stay updated with the latest advancements.
- International Exposure: Promote international collaborations and exchange programs to expose students to global engineering challenges and diverse cultural perspectives.
- Entrepreneurship and Business Skills: Provide opportunities for students to learn about entrepreneurship and business aspects related to civil engineering projects, encouraging them to think beyond technical aspects.
By implementing these strategies, civil engineering education can better equip students with the skills and mindset required to tackle the challenges of a rapidly evolving world. It ensures that graduates are ready to make a positive impact on society and contribute to sustainable and innovative engineering practices.
Although the title of the question may seem odd, it has been a while since I was last in touch with research. I have been building on the basics of the above topics, and I am looking for recent research topics to work or collaborate on in any of them.
Wishing you all a HAPPY NEW YEAR 2025.
Looking forward to your support and good wishes.
Hi All,
I am actively seeking research assistant opportunities in molecular biology or bioinformatics. I recently completed my Master’s in Molecular Biology and Bioinformatics and have extensive experience analyzing NGS data and am proficient in Python, R, and Bash scripting. I'm keen on bioinformatics, data analysis, and data science opportunities where I can apply my skills. I'm open to both onsite and offsite opportunities. Any leads will be greatly appreciated. Thanks!
In what applications are AI and Big Data technologies, including Big Data Analytics and/or Data Science, combined?
In my opinion, AI and Big Data technologies are being combined in a number of areas where analysis of large data sets combined with intelligent algorithms allows for better results and automation of processes. One of the key applications is personalization of services and products, especially in the e-commerce and marketing sectors. By analyzing behavioral data and consumer preferences, AI systems can create personalized product recommendations, dynamic advertisements or tailored pricing strategies. The process is based on the analysis of huge datasets, which allow precise prediction of consumer behavior.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

Hello, I am seeking opportunities to contribute as a co-author on research papers in the fields of data science and machine learning. If you are currently working on a relevant research topic, I would be delighted to collaborate and offer my expertise.
I hope this message finds you well. I am reaching out to share a recent implementation of real-time image dataset creation using MATLAB, which I believe offers significant potential for practical applications in computer vision, machine learning, and data science. Our implementation focuses on capturing and processing images in real time, which could be beneficial in areas such as object detection, surveillance, healthcare, and robotics.
Given the growing importance of real-time data processing in research and industry, I am exploring the possibility of publishing a research article that outlines the methodology, experimental setup, and potential applications of this implementation. Furthermore, I am actively seeking potential collaborators who may be interested in contributing to or extending this work.



In some cases, we need all the partial derivatives of a multi-variable function. If it is a scalar-valued function (as usual), the collection of the first partial derivatives is called the gradient. If it is a vector-valued multi-variable function, the matrix of first partial derivatives is called the Jacobian matrix.
In other cases, we need just one partial derivative, with respect to one specific variable.
Here is where my problem starts:
In neural networks, the gradient of the loss function with respect to an individual parameter (for example, ∂L/∂w11, where w11 represents the first weight of the first layer)
can, in my opinion, theoretically be computed directly using the chain rule, without explicitly relying on Jacobians. By tracing the dependencies of a single weight through the network, it is possible to compute its gradient step by step, because all the functions in the individual neurons are scalar functions involving scalar relationships with individual parameters, without the need to consider all the linear transformations across the layers.
An example chain rule representation for 1 layer network:
∂L/∂w11 = ∂L/∂a11 * ∂a11/∂z11 * ∂z11/∂w11. This chain can be applied to multiple-layer networks as well.
However, it is noted that Jacobians are necessary when propagating gradients through entire layers or networks because they compactly represent the relationship between inputs and outputs in vector-valued functions. But this requires all the partial derivatives, instead of one.
This raises a question: if it is possible to compute gradients directly for individual weights, why are Jacobians necessary in the chain rule of the backpropagation? Why do we need to compute all the partial derivatives at once?
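To make the question concrete, here is a minimal NumPy sketch (the one-layer network, sigmoid activation, and squared-error loss are my own illustrative choices): it computes ∂L/∂w11 once by the scalar chain rule and once by the vectorized, Jacobian-style backward pass, and the two values agree.

```python
import numpy as np

# Minimal one-layer "network": a = sigmoid(W x), L = 0.5 * ||a - t||^2.
# Shapes, activation, and loss are illustrative assumptions.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))        # weight matrix; w11 is W[0, 0]
x = rng.normal(size=2)             # input vector
t = rng.normal(size=3)             # target vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = W @ x                          # pre-activations
a = sigmoid(z)                     # activations

# (1) Scalar chain rule for the single weight w11:
#     dL/dw11 = dL/da1 * da1/dz1 * dz1/dw11
dL_da1 = a[0] - t[0]
da1_dz1 = a[0] * (1.0 - a[0])
dz1_dw11 = x[0]
grad_w11 = dL_da1 * da1_dz1 * dz1_dw11

# (2) Jacobian-style backward pass: all partials at once.
dL_dz = (a - t) * a * (1.0 - a)    # elementwise, since sigmoid's Jacobian is diagonal
grad_W = np.outer(dL_dz, x)        # full gradient matrix dL/dW

print(np.isclose(grad_w11, grad_W[0, 0]))  # True: identical values
```

So the Jacobian form computes nothing that the scalar chain rule cannot; it simply produces every entry in one vectorized outer product, which is what makes backpropagation efficient in practice.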
I am waiting for your response. #DeepLearning #NeuralNetworks #MachineLearning #MachineLearningMathematics #DataScience #Mathematics
Hi
I am looking to contribute to research/review papers in the field of AI/ML.
If anyone out there is researching any topic, I am willing to contribute as a co-author. I want to gain experience so that I can conduct my own research.
How do you think artificial intelligence can affect medicine in the real world? There are many science-fiction dreams in this regard!
But what about real life in the next 2-3 decades?
Publisher:
Emerald Publishing
Book Title:
Data Science for Decision Makers: Leveraging Business Analytics, Intelligence, and AI for Organizational Success
Editors:
· Dr. Miltiadis D. Lytras, The American College of Greece, Greece
· Dr. Lily Popova Zhuhadar, Western Kentucky University, USA
Book Description
As the digital landscape evolves, the integration of Business Analytics (BA), Business Intelligence (BI), and Artificial Intelligence (AI) is revolutionizing Decision-Making processes across industries. Data Science for Decision Makers serves as a comprehensive resource, exploring these fields' convergence to optimize organizational success. With the continuous advancements in AI and data science, this book is both timely and essential for business leaders, managers, and academics looking to harness these technologies for enhanced Decision-Making and strategic growth.
This book combines theoretical insights with practical applications, addressing current and future challenges and providing actionable guidance. It aims to bridge the gap between advanced analytical theories and their applications in real-world business scenarios, featuring contributions from global experts and detailed case studies from various industries.
Book Sections and Chapter Topics
Section 1: Foundations of Business Analytics and Intelligence
· The evolution of business analytics and intelligence
· Key concepts and definitions in BA and BI
· Data management and governance
· Analytical methods and tools
· The role of descriptive, predictive, and prescriptive analytics
Section 2: Artificial Intelligence in Business
· Overview of AI technologies in business
· AI for data mining and pattern recognition
· Machine learning algorithms for predictive analytics
· Natural language processing for business intelligence
· AI-driven decision support systems
Section 3: Integrating AI with Business Analytics and Intelligence
· Strategic integration of AI in business systems
· Case studies on AI and BI synergies
· Overcoming challenges in AI adoption
· The impact of AI on business reporting and visualization
· Best practices for AI and BI integration
Section 4: Advanced Analytics Techniques
· Advanced statistical models for business analytics
· Deep learning applications in BI
· Sentiment analysis and consumer behavior
· Real-time analytics and streaming data
· Predictive and prescriptive analytics case studies
Section 5: Ethical, Legal, and Social Implications
· Data privacy and security in AI and BI
· Ethical considerations in data use
· Regulatory compliance and standards
· Social implications of AI in business
· Building trust and transparency in analytics
Section 6: Future Trends and Directions
· The future of AI in business analytics
· Emerging technologies and their potential impact
· Evolving business models driven by AI and analytics
· The role of AI in sustainable business practices
· Preparing for the next wave of digital transformation
Objectives of the Book
· Provide a deep understanding of AI’s role in transforming business analytics and intelligence.
· Present strategies for integrating AI to enhance Decision-Making and operational efficiency.
· Address ethical and regulatory considerations in data analytics.
· Serve as a practical guide for executives, data scientists, and academics in a data-driven economy.
Important Dates
· Chapter Proposal Submission Deadline: 25 November 2024
· Full Chapter Submission Deadline: 31 January 2025
· Revisions Due: 4 April 2025
· Submission to Publisher: 1 May 2025
· Anticipated Publication: Winter 2025
Target Audience
· Business Professionals and Executives: Seeking insights to improve Decision-Making.
· Data Scientists and Business Analysts: Expanding their toolkit with AI and analytics techniques.
· Academic Researchers and Educators: Using it as a resource for teaching and research.
· IT and MIS Professionals: Enhancing their understanding of BI systems and data management.
· Policy Makers and Regulatory Bodies: Understanding the social and regulatory impacts of AI and analytics.
Keywords
· Artificial Intelligence
· Business Analytics
· Business Intelligence
· Data Science
· Decision-Making
Submission Guidelines
We invite chapter proposals that align with the outlined sections and objectives. Proposals should include:
· Title
· Authors and affiliations
· Abstract (200-250 words)
· Keywords
Contact Information
Dr. Miltiadis D. Lytras: miltiadis.lytras@gmail.com
Dr. Lily Popova Zhuhadar: lily.popova.zhuhadar@wku.edu
We are excited to invite researchers and practitioners to submit their work to the upcoming Workshop on Combating Illicit Trade, organized by Working Group 4 of the EU COST Action GLITSS. This workshop will focus on leveraging data science, artificial intelligence (AI), machine learning, and blockchain to address the global challenge of illicit trade.
Scope:
Illicit trade spans a wide range of domains, from the trafficking of historical artifacts, humans, and wildlife to environmental crimes. In this workshop, we aim to:
- Address challenges in collecting reliable datasets and developing robust performance measures.
- Explore the use of advanced technologies such as remote sensing, deep learning, network analysis, and blockchain to combat illicit trade.
- Foster collaboration across academia, industry, and policy to innovate and share methodologies for the detection and prevention of illicit trade.
Topics of Interest:
- Machine Learning, Deep Learning, and Reinforcement Learning
- Explainable AI and Computer Vision
- Remote Sensing and Spatial Data Analysis
- Pattern Recognition and Predictive Analytics
- Illicit Trade: Human and Wildlife Trafficking, Artefacts, Cultural Property
- Environmental and Endangered Species Crimes
- Financial and Cyber Crimes
- Drugs, Arms, and Counterfeits
- Blockchain and Cryptography
Important Dates:
- Paper Submission: November 15, 2024
- Authors Notification: January 6, 2025
- Camera Ready and Registration: January 22, 2025
This workshop offers a unique opportunity to contribute to the global fight against illicit trade using cutting-edge technologies. We encourage authors to submit their research and join us in advancing this important field.
For more details on submission guidelines and registration, please visit https://icpram.scitevents.org/DSAIB-IllicitTrade.aspx.
Looking forward to your submissions!
The exponential development of quantum computing presents both enhanced opportunities and significant challenges in the field of cybersecurity. Quantum computing has the potential to revolutionize areas such as cryptography, data science, and artificial intelligence due to its ability to process information exponentially faster than classical computers. However, this power also introduces new vulnerabilities that could compromise the security of existing encryption methods.
Emerging Cybersecurity Threats from Quantum Computing:
- Breaking Classical Cryptographic Protocols: Classical cryptographic algorithms like RSA, Diffie-Hellman, and ECC (Elliptic Curve Cryptography) are foundational to modern cybersecurity, protecting everything from personal data to financial transactions. These methods rely on the complexity of certain mathematical problems (e.g., factoring large numbers or solving discrete logarithms), which are computationally difficult for classical computers to solve. However, Shor’s algorithm, a quantum algorithm, can solve these problems in polynomial time, making many classical encryption schemes vulnerable to decryption by sufficiently powerful quantum computers. This poses a serious threat to sensitive data stored or transmitted today.
- Quantum Key Distribution (QKD) Vulnerabilities: Quantum Key Distribution is a quantum encryption method that leverages the principles of quantum mechanics to securely exchange cryptographic keys. However, despite its potential, QKD is still in the experimental stage and faces scalability and technical challenges. A widespread, practical implementation could introduce new vulnerabilities, especially in the transmission of quantum keys over large-scale networks.
- Post-Quantum Cryptography Threats: Quantum computers may also disrupt the development and deployment of post-quantum cryptography (PQC) algorithms designed to be resistant to quantum attacks. As governments and organizations transition to quantum-safe encryption, the timeline for safe adoption may leave systems exposed to quantum-enabled attacks before quantum-resistant cryptographic systems are widely implemented.
Adapting Classical Encryption Techniques to Quantum Computing:
To mitigate the risks posed by quantum computing, there is a growing push towards developing and implementing quantum-resistant encryption methods. This includes adapting classical encryption techniques to maintain security in a quantum world.
- Post-Quantum Cryptography (PQC): PQC algorithms are being developed to resist quantum computing's ability to break traditional encryption schemes. These algorithms rely on problems that are believed to be hard for quantum computers to solve, such as: lattice-based cryptography, which uses the complexity of lattice problems to create encryption systems that are hard for quantum computers to break; code-based cryptography, which utilizes error-correcting codes to form cryptographic systems that quantum computers are less likely to break; hash-based cryptography, which uses cryptographic hash functions to create digital signatures that are resistant to quantum attacks; and multivariate polynomial cryptography, which relies on the difficulty of solving systems of multivariate polynomial equations over finite fields.
- Hybrid Encryption Models: A more immediate approach to securing systems in the quantum era is the use of hybrid encryption models that combine both classical and quantum-safe cryptographic methods. For example, an encrypted communication could use both RSA for immediate security and a PQC algorithm for future-proofing, ensuring the data remains protected even after quantum computers become more powerful (see the sketch after this list).
- Quantum-Safe Key Exchange Protocols: Traditional key exchange protocols, like Diffie-Hellman, need to be adapted to withstand quantum decryption capabilities. Researchers are investigating new key exchange mechanisms, such as lattice-based or code-based protocols, that can resist quantum algorithms. This would ensure secure key generation and distribution even in the presence of quantum threats.
- Quantum Cryptography and Quantum Key Distribution (QKD): As quantum computing advances, QKD techniques are being explored for their ability to provide theoretically unbreakable encryption. QKD relies on the principles of quantum mechanics, such as the no-cloning theorem and quantum superposition, to ensure secure key exchanges. However, practical, large-scale deployment is still in development, and integrating QKD into global systems will require overcoming significant technical and scalability challenges.
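As a toy illustration of the hybrid model above, the sketch below derives a single session key from two shared secrets: one assumed to come from a classical exchange (e.g. ECDH) and one from a PQC KEM (e.g. ML-KEM/Kyber). The secrets are random placeholders and the HKDF labels are my own assumptions, not a production design.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder secrets: in practice, from an ECDH exchange and a PQC KEM.
secret_classical = os.urandom(32)
secret_pq = os.urandom(32)

# Concatenating both secrets means the derived session key stays secure
# as long as EITHER component scheme remains unbroken.
prk = hkdf_extract(salt=b"hybrid-kex-v1", ikm=secret_classical + secret_pq)
session_key = hkdf_expand(prk, info=b"session key")
print(session_key.hex())
```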
Conclusion:
The emergence of quantum computing is a transformative development in the field of technology, but it poses serious threats to traditional cybersecurity protocols. To safeguard sensitive data, researchers and industry experts are focusing on the development of quantum-resistant encryption algorithms, along with hybrid encryption systems that combine classical and post-quantum techniques. Adapting to the quantum era will require a collaborative, multi-disciplinary approach that spans cryptography, quantum physics, and cybersecurity. This research is crucial to preparing our global digital infrastructure for the future and ensuring that systems remain secure in the face of powerful quantum capabilities.
A postgraduate student in pedodontics in India aimed to evaluate the diagnostic accuracy of the Diagnodent device for detecting early caries in school children. With a sample of 100 children aged 6 to 12 years, the student conducted a study at a local school. Each child underwent a clinical examination followed by a Diagnodent assessment, which uses laser fluorescence to identify carious lesions.
Suggest a statistical solution for the above scenario, and
appraise how this study underscores the potential of integrating data science methods in clinical settings, paving the way for evidence-based dental practices.
How can one distinguish between data science and data analysis?
A public health researcher conducted a longitudinal study to evaluate the effectiveness of three preventive dental procedures—topical fluoride application, pit and fissure sealants, and atraumatic restorative treatment (ART)—in reducing dental caries among school children. A sample of 60 students aged 6-12 years was randomly selected from three primary schools. Each child received the three treatments in a random order at different intervals over a 12-month period, with caries measurements taken at six time points: baseline, and at 1, 3, 6, 9, and 12 months post-treatment. The main goal was to assess the effectiveness of each procedure in preventing the progression of dental caries.
Suggest relevant statistical analysis for the above scenario with relevant justifications (Add online resources and citations from scientific portals)?
Hi,
I am a new Master's student in Data Science, looking for a remote research assistant position or collaboration opportunities. I want to deepen my applied knowledge in data science and explore its applications in various fields.
I am familiar with quantitative trading and recommendation systems but am also eager to learn how data science is applied in areas such as climate change, environmental studies, and pandemics.
If there are any available opportunities, I would love to contribute and expand my expertise.
Understanding Data in Machine Learning
In this video, I break down the essentials of data in machine learning—covering everything from data collection, cleaning, and preprocessing to its pivotal role in training accurate models. Whether you're a beginner or looking to strengthen your understanding of how data powers machine learning algorithms, this video will guide you through the core concepts!
Watch now: https://youtu.be/9Q_r73M03vQ
Key Topics Covered:
Types of data used in machine learning
How to handle missing and inconsistent data
Data normalization and transformation techniques
Best practices for preparing data for model training
Perfect for anyone eager to dive deeper into AI and data science!
Don't forget to like, share, and subscribe for more AI-based insights!
#MachineLearning #DataScience #AI #DataPreparation #MLBasics #DataCleaning #DeepLearning #AIForBeginners #ML2024
I am currently in the process of selecting a topic for my dissertation in Data Science. Given the rapid advancements and the increasing number of studies in this field, I want to ensure that my research is both original and impactful.
I would greatly appreciate your insights on which topics or areas within Data Science you feel have been overdone or are generally met with fatigue by the academic community. Are there any specific themes, methods, or applications that you think should be avoided due to their oversaturation in recent dissertations?
Your guidance would be invaluable in helping me choose a research direction that is both fresh and relevant.
Thank you in advance for your assistance!
I'm seeking co-authors for a research paper on enhancing malware detection using Generative Adversarial Networks (GANs). The paper aims to present innovative approaches to improving cybersecurity frameworks by leveraging GANs for synthetic data generation. We are targeting submission to a Scopus-indexed journal.
If you have expertise in cybersecurity, machine learning (especially GANs), or data science and are interested in contributing to this paper, please reach out to me.
I'm currently seeking postdoctoral research opportunities in multidisciplinary areas within Computer Science, with an interest in both academic and industry settings. My research interests include advanced cloud-based data management for smart buildings, NLP for low-resource languages like Amharic, AI and machine learning, data science and big data, human-computer interaction, and robotics. I'm open to discussing potential opportunities and collaborations in these fields. Please feel free to contact me if you are aware of any suitable positions.
AI has already started increasing unemployment in the market, hasn't it?
AI is taking the jobs of poets, the musical fraternity, IT people, Data Science people, and more to come.
Please do write your views.
Why specifically does Wolfram Mathematica theorize that reality is discrete in nature?
Maybe because reality is too unpredictable to be continuous. Plus, discrete data suggests either every entity is unique or simply too different for perfect predictions.
Hello All,
I am looking for researchers currently in academia who are interested in research in AI and machine learning applications for the telecom industry to collaborate and write research papers. I am a data scientist with 7 years of experience in the industry and with major telecom clients in the US. My research interests are Network Optimization, Network Operations, Data science for Telecom, Machine Learning, and AI.
Best,
Dileesh.
Please, could anyone help me find a proper research question? I am in a pickle finding one in my data science field and have no idea where to go with my research question. Please help.
Thank you.
Chalmers, in his book What is this thing called Science?, mentions that science is knowledge obtained from information. The most important endeavors of science are prediction and explanation of phenomena. The emergence of Big (massive) Data leads us to the field of Data Science (DS), with its main focus on prediction. Indeed, data belong to a specific field of knowledge or science (physics, economics, ...).
If DS is able to realize prediction for the field of sociology (for example), to whom the merit is given: Data Scientist or Sociologist?
10.1007/s11229-022-03933-2
#DataScience #ArtificialIntelligence #Naturallanguageprocessing #DeepLearning #Machinelearning #Science #Datamining
My Awesomest Network, as you may know, I am in a process of continuous learning and upskilling. I am now attending a Data Science course (SQL, Python, et cetera) and have to do some projects. Could you help me with them, please?
I'm currently pursuing my Master's in Data Science and I'm at the point of writing my research proposal. I will be writing my dissertation on THE ROLE OF DATA SCIENCE AND DECISION-MAKING IN ACHIEVING SUCCESSFUL PROJECT DELIVERY.
I would really appreciate any materials or support I can get in making this a success.
I'm currently working on a research project on wavelet transform denoising. Due to my lack of statistical knowledge, I'm not able to do research on thresholding methods, so I'm curious whether there are any other research directions (I would prefer an engineering project). Thank you for your answer.
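For context, the baseline I am starting from is the standard thresholding recipe, which is only a few lines with PyWavelets; a minimal sketch (the test signal, wavelet choice, and universal threshold are illustrative assumptions, not recommendations):

```python
import numpy as np
import pywt  # pip install PyWavelets

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)        # multilevel DWT
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal (VisuShrink) threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")
print(np.mean((denoised[: t.size] - clean) ** 2))   # MSE vs. the clean signal
```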
Combining Data Science and Physics in the BSc Physics syllabus is possible and beneficial. It will increase employment opportunities for Physics graduates, enhance their education, and make the curriculum more innovative. As a result, enrollment in the BSc Physics course is likely to increase.
I am currently exploring research opportunities in data science, C++ string manipulation, algorithm hybrid approaches, and healthcare-related machine learning. Are there any ongoing projects or research initiatives in these domains where my skills in data analysis using R and Python, coupled with expertise in algorithmic string manipulation, could be of value? Additionally, I am eager to contribute to collaborative efforts or co-authorship opportunities in these areas. If you have any relevant projects or suggestions, I would greatly appreciate your insights and potential for collaboration.
Thank you for your consideration.
Evaluation Metrics | L-01 | Basic Overview
Welcome to our playlist on "Evaluation Metrics in Machine Learning"! In this series, we dive deep into the key metrics used to assess the performance and effectiveness of machine learning models. Whether you're a beginner or an experienced data scientist, understanding these evaluation metrics is crucial for building robust and reliable ML systems.
📷 Check out our comprehensive guide to Evaluation Metrics in Machine Learning, covering topics such as:
Accuracy
Precision and Recall
F1 Score
Confusion Matrix
ROC Curve and AUC
MSE (Mean Squared Error)
RMSE (Root Mean Squared Error)
MAE (Mean Absolute Error)
Stay tuned as we explore each metric in detail, discussing their importance, calculation methods, and real-world applications. Whether you're working on classification, regression, or another ML task, these evaluation metrics are fundamental to measuring model performance accurately.
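For a quick reference, here is a minimal scikit-learn sketch of the metrics listed above (the labels and scores are toy values, purely illustrative):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score,
    confusion_matrix, roc_auc_score,
    mean_squared_error, mean_absolute_error,
)

# Classification metrics on toy binary labels and predicted probabilities.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.2, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.9])

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_score))

# Regression metrics on toy targets and predictions.
y_reg_true = np.array([3.0, 2.5, 4.1, 5.0])
y_reg_pred = np.array([2.8, 2.7, 3.9, 5.3])
mse = mean_squared_error(y_reg_true, y_reg_pred)
print("MSE:", mse, "RMSE:", np.sqrt(mse))
print("MAE:", mean_absolute_error(y_reg_true, y_reg_pred))
```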
Don't forget to subscribe for more insightful content on machine learning and data science! 📷
#MachineLearning #DataScience #EvaluationMetrics #ModelPerformance #DataAnalysis #AI #MLAlgorithms #Precision #Recall #Accuracy
Feedback link: https://maps.app.goo.gl/UBkzhNi7864c9BB1A
LinkedIn link for professional queries: https://www.linkedin.com/in/professorrahuljain/
Join my Telegram link for Free PDFs: https://t.me/+xWxqVU1VRRwwMWU9
Connect with me on Facebook: https://www.facebook.com/professorrahuljain/
Watch Videos: Professor Rahul Jain Link: https://www.youtube.com/@professorrahuljain
I have observed a notable trend wherein individuals from diverse fields are transitioning into domains such as data science, data analytics, and machine learning (ML). Concurrently, there is a growing interest in exploring the synergies between these fields and ML to augment productivity. However, beyond mere application knowledge, an important question arises: how can one effectively impart a deeper understanding of the conceptual frameworks and fundamental principles underlying these ML algorithms, while abstracting away from the technical details?
InfoScience Trends offers comprehensive scientific content that delves into various facets of information science research. This includes but is not limited to topics such as information retrieval, data management, information behavior ethics and policy, human-computer interaction, information visualization, information literacy, digital libraries, information technology, information systems, social informatics, data science, and more.
I need a suggestion for the best articles related to management studies in data science.
Could I have some good, relevant topics related to data science for my dissertation, please? The topic should yield good results.
How are AI, ML, and data science used in core industry?
What type of tools should you be equipped with?
Hello,
I have the following problem. I have made three measurements of the same event under the same measurement conditions.
Each measurement has a unique probability distribution. I have already calculated the mean and standard deviation for each measurement.
My goal is to combine my three measurements to get a general result of my experiment.
I know how to calculate the combined mean: (x_comb = (x1_mean+x2_mean+x3_mean)/3)
I don't know how to calculate the combined standard deviation.
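For concreteness, here is a minimal sketch of one common pooling formula (assuming each measurement used the same number of samples; the numbers are placeholders):

```python
import numpy as np

# Hypothetical per-measurement statistics -- replace with your own values.
means = np.array([10.1, 9.8, 10.4])   # x1_mean, x2_mean, x3_mean
stds = np.array([0.5, 0.6, 0.4])      # s1, s2, s3

combined_mean = means.mean()

# Pooled variance = average within-measurement variance
#                 + variance of the three means around the combined mean.
# (This treats the stds as population values; with sample stds and n points
#  per measurement, weight the two terms by (n - 1) and n, divide by N - 1.)
combined_var = np.mean(stds**2) + np.mean((means - combined_mean) ** 2)
combined_std = np.sqrt(combined_var)
print(combined_mean, combined_std)
```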
Please let me know if you can help me. If you have any other questions, don't hesitate to ask me.
Thank you very much! :)
What aspects of working with data are the most time-consuming in your research activities?
- Data collection
- Data processing and cleaning
- Data analysis
- Data visualization
What functional capabilities would you like to see in an ideal data work platform?
Colleagues, good day!
We would like to reach out to you for assistance in verifying the results we have obtained.
We employ our own method for performing deduplication, clustering, and data matching tasks. This method allows us to obtain a numerical value of the similarity between text excerpts (including data table rows) without the need for model training. Based on this similarity score, we can determine whether records match or not, and perform deduplication and clustering accordingly.
This is a direct-action algorithm, relatively fast and resource-efficient, requiring no specific configuration (it is versatile). It can be used for quickly assessing previously unexplored data or in environments where data formats change rapidly (but not the core data content), and retraining models is too costly. It can serve as the foundation for creating personalized desktop data processing systems on consumer-grade computers.
We would like to evaluate the quality of this algorithm in quantitative terms, but we cannot find widely accepted methods for such an assessment. Additionally, we lack well-annotated datasets for evaluating the quality of matching.
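One widely used option is pairwise evaluation against a small gold-standard annotation: compare the record pairs the algorithm groups together with the pairs the annotation groups together. A minimal sketch with toy cluster labels:

```python
from itertools import combinations

gold      = [0, 0, 1, 1, 2, 2]   # ground-truth duplicate groups (toy)
predicted = [0, 0, 1, 2, 2, 2]   # groups produced by the algorithm (toy)

def positive_pairs(labels):
    # All record pairs assigned to the same group.
    return {
        (i, j)
        for i, j in combinations(range(len(labels)), 2)
        if labels[i] == labels[j]
    }

gold_pairs, pred_pairs = positive_pairs(gold), positive_pairs(predicted)
tp = len(gold_pairs & pred_pairs)
precision = tp / len(pred_pairs)
recall = tp / len(gold_pairs)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```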
If anyone is willing and able to contribute to the development of this topic, please step forward.
Sincerely, The KnoDL Team
I am working with a time series dataset using the `fable` package (R). I have fitted several models (e.g., ARIMA) and generated forecasts. The accuracy calculation is resulting in NaN values, and the warning suggests incomplete out-of-sample data.
I am seeking guidance on how to handle this incomplete out-of-sample data issue and successfully calculate accuracy metrics for my time series forecasts. If anyone has encountered a similar problem or has expertise in time series analysis with R, your insights would be greatly appreciated.

How can the development of artificial intelligence technologies and applications help the development of science, the conduct of scientific research, the processing of results obtained from scientific research?
In recent discussions on the ongoing rapid development of artificial intelligence technologies, including generative artificial intelligence and general artificial intelligence, and their rapidly growing applications, a number of positive determinants of this development are emerging, but a number of potential risks and threats are also being identified.
Recently, the key risks associated with the development of artificial intelligence technologies include not only the possibility of AI technologies being used by cybercriminals and in hacking activities; the use of open-access tools based on generative artificial intelligence on the Internet to create crafted texts, photos, graphics and videos and their posting on social media sites to create fake news and generate disinformation; the use of "creations" created with applications based on intelligent chatbots in the field of marketing communications; and the potential threat of many jobs being replaced by AI technology; but also the development of increasingly superior generative artificial intelligence technology, which may soon be creating new, even more superior AI technologies that could escape human control.
Currently, all leading technology and Internet companies are developing their intelligent chatbots and AI-based tools, including generative AI and/or general AI, which they are already making available on the Internet or will soon do so. In this way, a kind of technological arms race is currently being realized between major technology companies at the forefront of ICT, Internet and Industry 4.0/5.0 information technologies. The technological progress that is currently taking place is accelerating as part of the transition from Industry 4.0 to Industry 5.0 technologies.
In the context of the emerging threats mentioned above, many companies, enterprises and banks are already implementing and developing certain AI-based tools and applications in order to increase the efficiency of certain processes carried out within the framework of their business, logistics, financial activities, etc. In addition, the ongoing discussions also address the possibility of applying AI technologies in positively interpreted aspects, in solving various problems of the current development of civilization, including to support ongoing scientific research and the development of science in various disciplines. Accordingly, an important area of positive applications of AI technology is the use of this technology to improve the efficiency of reliably and ethically conducted scientific research. Thus, the development of science could be supported by the implementation of AI technology into the realm of science.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the development of artificial intelligence technologies and applications help the development of science, the conduct of scientific research, the processing of results obtained from scientific research?
How can the development of artificial intelligence help the development of science and scientific research?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research. In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

What are the possibilities of applying AI-based tools, including ChatGPT and other AI applications in the field of predictive analytics in the context of forecasting economic processes, trends, phenomena?
The ongoing technological advances in ICT and Industry 4.0/5.0, including Big Data Analytics, Data Science, cloud computing, generative artificial intelligence, Internet of Things, multi-criteria simulation models, digital twins, Blockchain, etc., make it possible to carry out advanced data processing on increasingly large volumes of data and information. The aforementioned technologies contribute to the improvement of analytical processes concerning the operation of business entities, including, among others, in the field of Business Intelligence, economic analysis as well as in the field of predictive analytics in the context of forecasting processes, trends, economic phenomena. In connection with the dynamic development of generative artificial intelligence technology over the past few quarters and the simultaneous successive increase in the computing power of constantly improved microprocessors, the possibilities of improving predictive analytics in the context of forecasting economic processes may also grow.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applying AI-based tools, including ChatGPT and other AI applications for predictive analytics in the context of forecasting economic processes, trends, phenomena?
What are the possibilities of applying AI-based tools in the field of predictive analytics in the context of forecasting economic processes?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Good morning everyone! I've just finished reading Shyon Baumann's paper on "Intellectualization and Art World Development: Film in the United States." This excellent paper includes a substantial section of textual analysis where various film reviews are examined. These reviews are considered a fundamental space for the artistic legitimation of films, which, during the 1960s, increasingly gained artistic value. To achieve this, Baumann focuses on two dimensions: critical devices and lexical enrichment. The paper is a bit dated, and the methodologies used can be traced back to a time when text analysis tools were not as widespread or advanced. On the other hand, they are not as advanced yet. The question is: are you aware of literature/methodologies that could provide insights to extend Baumann's work using modern text analysis technologies?
In particular, following the dimensions analyzed by Baumann:
a) CHANGING LANGUAGE
- Techniques for the formation of artistic dictionaries that can replace the manual construction of dictionaries for artistic vocabulary (Baumann reviews a series of artistic writings and extracts terms, which are then searched in film reviews). Is it possible to do this automatically?
b) CHANGING CRITICAL DEVICES
- Positive and negative commentary -> I believe tools capable of performing sentiment analysis can be successfully applied to this dimension. Are you aware of any similar work?
- Director is named -> forming a giant dictionary of directors might work. But what about the rest of the crew who worked on the film? Is there a way to automate the collection of information on people involved in films? (See the sketch after this list.)
- Comparison of directors -> Once point 2, which is more feasible, is done, how to recognize when specific individuals are being discussed? Does any tool exist?
- Comparison of films -> Similar to point 3.
- Film is interpreted -> How to understand when a film is being interpreted? What dimensions of the text could provide information in this regard? The problem is similar for all the following dimensions:
- Merit in failure
- Art vs. entertainment
- Too easy to enjoy
Expanding methods in the direction of automation would allow observing changes in larger samples of textual sources, deepening our understanding of certain historical events. The data could go more in-depth, providing a significant advantage for those who want to view certain artistic phenomena in the context of collective action.
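On the "Director is named" device, a modern off-the-shelf NER model already gets much of the way there; here is a minimal spaCy sketch (the review text and the director lookup set are illustrative assumptions):

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

review = (
    "Antonioni's Blow-Up is less a thriller than a meditation; "
    "the director frames London as an abstraction."
)

doc = nlp(review)
people = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]
print(people)  # candidate person mentions, e.g. ['Antonioni']

# Cross-reference candidates against a film-credit database (e.g. IMDb
# dumps or Wikidata) to separate directors from the rest of the crew.
known_directors = {"Antonioni", "Godard", "Kubrick"}  # hypothetical lookup set
director_mentions = [p for p in people if p in known_directors]
print(director_mentions)
```

Matching the recognized names against credit databases such as IMDb or Wikidata would extend the same approach to the rest of the crew and to comparisons between directors or films.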
Thank you in advance!
The topic of my master's thesis is "The use of Big Data and Data Science technologies to assess the investment attractiveness of companies."
I plan to design and implement a system for market analysis using graphs.
I will be grateful to you for links to scientific articles on this topic.
One of the most essential foundations of artificial intelligence and several other fields is linear algebra. For the first time, the new version of Sheldon Axler's essential book Linear Algebra Done Right, one of the most trusted linear algebra textbooks, has been made accessible to everyone for free:
Have fun reading it.
I have a netCDF4 (.nc) file containing ocean SST data, with coordinates (lat, lon, time). I want to predict and plot maps for the future. How can I do this using Python?
Please recommend Python code for time series forecasting based on this approach.
I have a monthly netCDF4 file containing chlorophyll-a values, and I aim to forecast these values using time series analysis.
My approach involves computing monthly spatial averages for this entire region and then forecasting these averages. Is this methodology valid?
Additionally, could you recommend a Python code for time series forecasting based on this approach?
Is it feasible to predict values for individual grid points without considering spatial averaging?
My study area encompasses an oceanic region of approximately 45,000 sq km near the southern coast of Sri Lanka.
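For reference, here is the kind of pipeline I have in mind for the spatial-average approach, using xarray and statsmodels (the file name, the variable name "chlor_a", and the SARIMA order are assumptions to adapt to your data):

```python
import xarray as xr
from statsmodels.tsa.statespace.sarimax import SARIMAX

ds = xr.open_dataset("chlorophyll_monthly.nc")      # hypothetical file name
series = ds["chlor_a"].mean(dim=["lat", "lon"])     # monthly spatial average
ts = series.to_series()                             # pandas Series indexed by time

# Seasonal ARIMA with a 12-month cycle; (1,1,1)(1,1,1,12) is a starting
# guess to be refined via diagnostics (ACF/PACF, AIC), not a tuned choice.
fit = SARIMAX(ts, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
print(fit.forecast(steps=24))                       # two years ahead
```

Forecasting individual grid points without averaging is feasible too (fit one model per pixel in a loop), but it is slower and noisier than the regional average, so the averaging approach seems a reasonable first pass.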
I have an exciting opportunity for you to contribute your expertise and help with the case study as part of my doctoral research.
🔬 About the Research: I'm on a mission to uncover innovative approaches for enhancing cybersecurity through the power of machine learning, with a case study in the format of a quantitative survey.
I'm looking for specialists and experts with a background in Cybersecurity, Machine Learning, and Data Science.
I understand your time is precious, which is why the case study is designed to be concise, requiring just a short 10-15 minute commitment.
You can contribute by clicking on the following link: https://forms.gle/HWhH7dvJEpBU3rMTA
I am conducting a research project involving the use of the MACD (Moving Average Convergence Divergence) signal indicator for analyzing multivariate time series data, possibly for trading purposes.
I've defined some initial parameters such as ema_short_period, ema_long_period, and signal_period. However, I'm interested in insights and best practices for parameter selection in such analyses.
I used these values to calculate and implement this indicator.
ema_short_period = 12
ema_long_period = 26
signal_period = 9
What parameters should I consider when dealing with multivariate data, and how can I optimize these parameters for my specific analysis goals?
Additionally, if anyone has experience with using the MACD in multivariate time series analysis, I'd appreciate any advice or insights you can provide.
I'm implementing this using python.
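For concreteness, here is a minimal pandas sketch of the MACD with exactly these parameters (the 'close' price Series is an assumed input; one simple option for multivariate data is applying it column by column):

```python
import pandas as pd

def macd(close: pd.Series,
         ema_short_period: int = 12,
         ema_long_period: int = 26,
         signal_period: int = 9) -> pd.DataFrame:
    ema_short = close.ewm(span=ema_short_period, adjust=False).mean()
    ema_long = close.ewm(span=ema_long_period, adjust=False).mean()
    macd_line = ema_short - ema_long              # fast EMA minus slow EMA
    signal_line = macd_line.ewm(span=signal_period, adjust=False).mean()
    return pd.DataFrame({"macd": macd_line,
                         "signal": signal_line,
                         "histogram": macd_line - signal_line})

# For a multivariate DataFrame of close prices (one column per asset):
# macd_lines = prices.apply(lambda col: macd(col)["macd"])
```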
Thank you!
Let's find the most essential and reliable no-code data science tools to speed up the elaboration of the research results. Thanks to Avi Chawla (source: LinkedIn post), I have some suggestions for you here. Let us know your tips.
Gigasheet
- Browser-based no-code tool to analyze data at scale
- Use AI to conduct data analysis
- It's like a combination of Excel + Pandas with no scale limitations
- Analyze up to 1B rows
Mito
- Create a spreadsheet interface in Jupyter Notebook
- Use Mito AI to conduct data analysis
- Automatically generates Python code for each analysis
PivotTableJS
- Create Pivot tables, aggregations, and charts using drag-and-drop
- Add heatmaps to tables
- Works within Jupyter notebook
Drawdata
- Draw any 2D scatter dataset by dragging the mouse
- Export the data as DataFrame, CSV, or JSON
- Create a histogram and line plot by dragging the mouse
PyGWalker
- Open a Tableau-style interface in Jupyter notebook
- Analyze a DataFrame as you would in Tableau
Visual Python
- A GUI-based Python code generator
- Import libraries, perform data I/O, create plots, and write code for ML models by clicking buttons
Tensorflow Playground
- Provides an elegant UI to build, train, and visualize neural networks
- Browser-based tool
- Change data, model architecture, hyperparameters, etc. by clicking buttons
ydata-profiling
- Generate a standardized EDA report for your dataset
- Works in a Jupyter notebook
- Covers info about missing values, data statistics, correlation, and data interactions
If I have a matrix of 16x12 and I want to create 3 classes, is there any machine learning technique that can identify the lower and upper boundary levels for each of the classes?
This part seems extremely difficult to optimize.
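A minimal illustration of one possible approach: k-means on the cell values, with class boundaries read off each cluster (the matrix below is random stand-in data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(loc=50, scale=15, size=(16, 12))   # stand-in for the 16x12 matrix

values = X.reshape(-1, 1)                         # cluster on the scalar values
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(values)

for k in range(3):
    cluster_vals = values[labels == k]
    print(f"class {k}: lower={cluster_vals.min():.2f}, upper={cluster_vals.max():.2f}")
```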
How can data science and statistical analysis be used to improve the shipping and logistics industry?
Data augmentation creates something from nothing?
Hello people, I have a dataset of inhibitors with binary labels (zeros - inactive, ones - active). I have my ML/AI model working; now I would like to know which are the best inhibitors among these. Could anyone advise what I should do and what can be done to resolve my problem?
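A common starting point is to rank compounds by the model's predicted probability of being active; a minimal, self-contained scikit-learn sketch (the toy data and random-forest model stand in for your real descriptors and fitted model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # stand-in molecular descriptors
y = rng.integers(0, 2, size=200)               # binary labels: 1 = active
compound_ids = [f"CMPD-{i:03d}" for i in range(200)]

model = RandomForestClassifier(random_state=0).fit(X, y)

proba_active = model.predict_proba(X)[:, 1]    # P(active) per compound
order = np.argsort(proba_active)[::-1]         # highest-confidence first
for i in order[:10]:                           # top-10 candidate inhibitors
    print(compound_ids[i], round(proba_active[i], 3))
```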
TIA
#DrugDesign #ML #AI #DataScience #DrugDiscovery
Is it possible to build a highly effective forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science analytics, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies?
Is it possible to build a highly effective, multi-faceted, intelligent forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science analytics, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies as part of a forecasting system for complex, multi-faceted economic processes in such a way as to reduce the scale of the impact of the paradox of a self-fulfilling prediction and to increase the scale of the paradox of not allowing a predicted crisis to occur due to pre-emptive anti-crisis measures applied?
What do you think about the involvement of artificial intelligence in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies for the development of sophisticated, complex predictive models for estimating current and forward-looking levels of systemic financial, economic risks, debt of the state's public finance system, systemic credit risks of commercially operating financial institutions and economic entities, forecasting trends in economic developments and predicting future financial and economic crises?
Research and development work is already underway to teach artificial intelligence to 'think', i.e. to replicate the conscious thought process realised in the human brain. The aforementioned thinking process, awareness of one's own existence, the ability to think abstractly and critically, and the ability to separate knowledge acquired in the learning process from its processing in the abstract, conscious thinking process are just some of the abilities attributed exclusively to humans. However, as part of technological progress and improvements in artificial intelligence technology, attempts are being made to create "thinking" computers or androids, and in the future there may be attempts to create an artificial consciousness that is a digital creation but functions in a similar way to human consciousness.
At the same time, as part of improving artificial intelligence technology, creating its next generation, and teaching artificial intelligence to perform work requiring creativity, systems are being developed to process the ever-increasing amount of data and information stored on Big Data Analytics platform servers and taken, for example, from selected websites. In this way, it may be possible in the future to create "thinking" computers which, based on online access to the Internet, data downloaded according to the needs of the tasks performed, and real-time processing of that data and information, will be able to develop predictive models and specific forecasts of future processes and phenomena, based on models composed of algorithms resulting from previously applied machine learning processes.
When such technological solutions become possible, the question arises of how to take into account, in the intelligent, multifaceted forecasting models being built, paradoxes known for years concerning forecasted phenomena, which are to appear only in the future and for which there is no 100% certainty that they will appear. Among the various paradoxes of this kind, two particular ones can be pointed out: one is the paradox of the self-fulfilling prophecy, and the other is the paradox of not allowing a predicted crisis to occur due to pre-emptive anti-crisis measures applied. If these two paradoxes were taken into account within the framework of the intelligent, multi-faceted forecasting models being built, their effects could be correlated asymmetrically and inversely proportionally.
In view of the above, in the future, once artificial intelligence has been appropriately improved by teaching it to "think" and to process huge amounts of data and information in real time in a multi-criteria, creative manner, it may be possible to build a highly effective, multi-faceted, intelligent forecasting system for future financial and economic crises based on artificial intelligence technology: a system for forecasting complex, multi-faceted economic processes in such a way as to reduce the scale of the impact of the paradox of the self-fulfilling prophecy and to increase the scale of the paradox of not allowing a predicted crisis to occur due to pre-emptive anti-crisis measures applied.
Multi-criteria processing of large data sets conducted with the involvement of artificial intelligence, Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies makes it possible to operate effectively and increasingly automatically on large sets of data and information, thus increasing the possibility of developing advanced, complex forecasting models for estimating current and future levels of systemic financial and economic risks, indebtedness of the state's public finance system, and systemic credit risks of commercially operating financial institutions and economic entities, as well as forecasting economic trends and predicting future financial and economic crises.
In view of the above, I address the following questions to the esteemed community of scientists and researchers:
Is it possible to build a highly effective, multi-faceted, intelligent forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies in a forecasting system for complex, multi-faceted economic processes in such a way as to reduce the scale of the impact of the paradox of the self-fulfilling prophecy and to increase the scale of the paradox of not allowing a forecasted crisis to occur due to pre-emptive anti-crisis measures applied?
What do you think about the involvement of artificial intelligence in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies to develop advanced, complex predictive models for estimating current and forward-looking levels of systemic financial risks, economic risks, debt of the state's public finance system, systemic credit risks of commercially operating financial institutions and economic entities, forecasting trends in economic developments and predicting future financial and economic crises?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

Hey guys, I'm working on a new project where I need to transfer Facebook ads campaign data for visualization in Tableau or Microsoft Power BI, and this job should run automatically daily, weekly, or monthly. I'm planning to use Python to build a data pipeline for this. Do you have any suggestions, any resources I can read, or any similar projects I can get inspired by? Thank you.
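For context, here is the kind of daily extract step I have in mind (a hedged sketch; the API version, fields, and environment-variable names are assumptions to check against the Facebook Marketing API docs):

```python
import csv
import os
import requests

# Hypothetical environment variables holding credentials.
ACCESS_TOKEN = os.environ["FB_ACCESS_TOKEN"]
AD_ACCOUNT_ID = os.environ["FB_AD_ACCOUNT_ID"]      # e.g. "act_123456789"

url = f"https://graph.facebook.com/v19.0/{AD_ACCOUNT_ID}/insights"
params = {
    "access_token": ACCESS_TOKEN,
    "level": "campaign",
    "fields": "campaign_name,impressions,clicks,spend",
    "date_preset": "yesterday",
}
rows = requests.get(url, params=params, timeout=30).json().get("data", [])

# Write a CSV that Tableau / Power BI can refresh from on a schedule.
with open("fb_campaigns.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["campaign_name", "impressions", "clicks", "spend"],
        extrasaction="ignore",   # the API returns extra keys like date_start
    )
    writer.writeheader()
    writer.writerows(rows)

# Schedule the script with cron (Linux) or Task Scheduler (Windows), e.g.:
#   0 6 * * * /usr/bin/python3 /path/to/this_script.py
```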