Science topic
Deep Learning - Science topic
Explore the latest questions and answers in Deep Learning, and find Deep Learning experts.
Questions related to Deep Learning
Less training data, less model performance. Is it inevitable that pre-training plus few-shot learning will fall short of training on sufficient data in a specific field?
I am looking for research focused on classifying Arabic words into the three traditional classes: verb, noun and particle (ḥarf, sometimes glossed as "letter").
Most of what I have found concerns stemming and general deep learning work, not word-class classification.
Any help, please?
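A minimal sketch of how such a word-class classifier could be started, assuming a small labelled word list is available (the words and labels below are illustrative placeholders only; a real system needs an annotated corpus such as a POS-tagged Arabic treebank). Character n-grams capture sub-word cues such as the imperfect verb prefix:

```python
# Sketch: classify Arabic words as noun/verb/particle with character n-gram
# features. The tiny labelled list is illustrative only; a real system needs
# a properly annotated corpus (e.g. a POS-tagged treebank).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words = ["كتب", "يكتب", "كتاب", "مدرسة", "في", "من", "ذهب", "إلى"]
labels = ["verb", "verb", "noun", "noun", "particle", "particle", "verb", "particle"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # sub-word cues like prefixes
    LogisticRegression(max_iter=1000),
)
model.fit(words, labels)
print(model.predict(["يذهب"]))  # expected: verb (imperfect prefix "ي")
```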
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources taken from online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications?
I'm curious to know what you think about this. A solution of this kind, based on an intelligent publication search system and an intelligent system for analysing the content of retrieved publications on an online scientific portal, could be of great help to researchers and scientists.
In my opinion, creating a new generation of something similar to ChatGPT that would use databases built solely on online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications makes sense, provided that basic issues of copyright are respected and that such tools use continuously updated, objectively and scientifically verified resources of knowledge, data and information. With such a solution, researchers and scientists working on a specific topic could review the literature across the millions of scientific publications collected in specific online scientific portals and publication-indexing databases. What is particularly important, such a partially automated literature review could probably be completed in a relatively short time. An intelligent system for searching and analysing the content of scientific publications would quickly select, from among the millions of texts archived in specific indexing databases, those publications in which other researchers and scientists have described analogous, similar, related or correlated issues and research results, within the same scientific discipline, the same topic, or an interdisciplinary field. Such a system could also categorise the retrieved publications into those in which other researchers confirmed analogous conclusions from similar research, polemicised with the results of other researchers' work on a specific topic, obtained different results, suggested other practical applications of results obtained on the same or a similar topic, and so on.
However, for ethical reasons and for properly conducted research, i.e. respecting the research results of other researchers and scientists, it would be unacceptable for this kind of intelligent system for searching and analysing the content of publications to enable plagiarism, i.e. to provide research results and retrieved content on specific issues without accurately citing the source of the data, the description of the source data, the names of the publications' authors, etc., with unreliable researchers taking advantage of this. Such a system should provide, for every retrieved publication, a full bibliographic description, source description and footnotes containing all the data necessary to build complete source footnotes for any citation of specific studies, research results, theses, data, etc. contained in publications written by other researchers and scientists.
So, building this kind of intelligent tool would make sense if ChatGPT-type tools were suitably improved and the legal framework for their use appropriately supplemented, so that such tools do not violate copyright, are used ethically and do not generate misinformation. Improving these tools so that they do not generate disinformation and do not create "fictitious facts" in the form of nicely described but never-witnessed events presented as descriptions, essays, photos, videos, etc. requires keeping the underlying Big Data systems current, i.e. updating the data and information sets on the basis of which they create answers to questions, descriptions, photos and so on. This is important because current online tools like ChatGPT often create "nicely described fictitious facts", which is exploited to generate fake news and disinformation in online social media.
If all of the above were corrected and the legal framework completed, and not only in some parts of the world but on a global scale, then creating a new generation of something similar to ChatGPT, using databases built solely on online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications, would make sense and could prove helpful to people, including researchers and scientists. Besides, current online ChatGPT-type tools are not perfect: they do not draw data directly, in real time, from specific databases and knowledge contained in selected websites and portals, but from an offline database created some time ago. For example, the currently most popular ChatGPT still relies on a database of data, information, etc. drawn from texts downloaded from selected websites and web portals, but not today or yesterday: only up to 2021! On many issues these data and information are already outdated. Hence the absurdities, inconsistencies with the facts, and creation of "fictitious facts" by ChatGPT in a significant share of the answers it generates to Internet users' questions. In view of the above, such intelligent systems should be improved in a number of respects, technological, organisational, formal, normative, etc., so that they can be used in open access in the applications I wrote about above.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources taken from online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications?
What do you think about creating a new generation of something similar to ChatGPT, which will use exclusively online scientific knowledge resources?
And what is your opinion about it?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Counting on your opinions, on getting to know your personal opinion, on a fair approach to the discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz

Recently I have learned that deep learning is widely used in atmospheric science. I am interested in how to use deep learning for drought prediction, and how I could start learning machine learning.
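As a hedged starting point, here is a minimal sketch of the usual workflow: turn a drought-index time series (for instance an SPI series computed from precipitation records) into sliding windows and fit a small LSTM. The series below is synthetic and stands in for real station data:

```python
# Minimal starting point: forecast next month's drought index (e.g. SPI)
# from the previous 12 months with an LSTM. Data here is synthetic;
# replace `series` with a real index computed from precipitation records.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.3 * rng.standard_normal(400)

window = 12
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-1:]).ravel())  # one-step-ahead forecast
```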
I would like to add some questions that have had me stuck for some time. I have an issue with imbalanced segmentation data for landslide modelling using U-Net (my landslide data is far scarcer than my non-landslide data). So my questions are:
1. Should I try to find a proper loss function for the imbalance problem, or should I focus on balancing the data to improve my model? (See the sketch below.)
2. Some suggest using SMOTE (oversampling), but since my data are images (3D), I have found that SMOTE is not suitable for them. Any other suggestions?
Thank you,
Your suggestions will be appreciated.
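On question 1, one widely used option is an overlap-based loss such as Dice loss (often combined with focal or cross-entropy loss), which is largely insensitive to the huge number of easy background pixels; a common alternative to SMOTE for images is simply sampling more training patches that contain landslide pixels. A minimal tf.keras sketch of Dice loss, assuming binary masks:

```python
# One common answer to question 1: use an overlap-based loss such as Dice
# (or combine it with focal/cross-entropy) instead of rebalancing pixels.
# Sketch for binary segmentation; y_true/y_pred are masks with values in [0, 1].
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

# model.compile(optimizer="adam", loss=dice_loss)  # drop-in for a U-Net
```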
All these terms are so confusing! I want to understand these terminologies in a crisp and simplified manner. Can someone help me out of this confusion by explaining their differences with real-life examples? Any authoritative books from reputed publishers? Thanks in advance.
Alessio Plebe and Giorgio Grasso, "The Unbearable Shallow Understanding of Deep Learning", Minds and Machines (2019) 29:515–553.
Thank you.
Francisco Sercovich
I am trying to implement a deep learning-based approach for dynamic spectrum sensing in a 5G network. I am facing an issue with how to compute the probability of detection and the probability of false alarm. Kindly help me; if someone can share any Python code I will be very grateful. I am using the RadioML2016.10b dataset.
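For reference, a minimal sketch of how Pd and Pfa are typically computed from a detector's outputs, assuming binary ground truth where 1 means "signal present" (the arrays below are placeholders for real model scores):

```python
# Sketch: probability of detection (Pd) and false alarm (Pfa) from binary
# decisions, where label 1 = "signal present". y_score could come from a
# network's sigmoid output; sweeping the threshold traces a ROC-style curve.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # ground-truth occupancy
y_score = np.array([0.1, 0.4, 0.9, 0.7, 0.6, 0.2, 0.8, 0.3])  # model outputs

for thr in (0.3, 0.5, 0.7):
    y_pred = (y_score >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    pd_ = tp / (tp + fn)   # detection probability
    pfa = fp / (fp + tn)   # false-alarm probability
    print(f"thr={thr}: Pd={pd_:.2f}, Pfa={pfa:.2f}")
```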
I'm trying to create an image classification model that classifies plants from an image dataset made up of 33 classes; the total number of images is 41,808. The images are unbalanced, but that is something my thesis team and I will work on using K-fold; but back to the main problem.
The VGG16 model itself is a pre-trained model from Keras.
My source code should be attached to this question (paste_1292099).
The results of a 15-epoch run are attached as well.
What I have done so far is change the optimizer from SGD to Adam, but the results are generally the same.
Am I doing something wrong, or is there anything I can do to improve this model to get it at least into a "working" state, regardless of whether it is overfitting or the like, as that can be fixed later?
This is also the link to our dataset:
It is specifically a dataset consisting of medicinal plants and herbs in our region with their augmentations. They are not yet resized or normalized in the dataset.
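Without seeing the attached code, here is a hedged sketch of the usual VGG16 transfer-learning recipe that often unsticks results like these: freeze the convolutional base, train a small head with a low learning rate, and counter the imbalance with class weights (the input shape, class count and the commented-out fit call are assumptions, not your actual setup):

```python
# A common transfer-learning recipe for a case like this (33 plant classes):
# freeze the convolutional base, train only a small head, and pass class
# weights to offset imbalance. Shapes and dataset names are placeholders.
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # unfreeze the top block later for fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(33, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=15, class_weight=weights)
```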

If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also develop search engines among their Internet information services are working on technological solutions to implement ChatGPT-type artificial intelligence into these search engines. There are currently discussions about the social and ethical implications of potentially combining these technologies and offering the result in open access on the Internet. The considerations concern the possible level of risk of manipulation of the information message in the new media, potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of deliberately shaping public opinion, and so on.
This raises a further issue for consideration: the legitimacy of creating a control institution that would continuously monitor the objectivity, independence, ethics, etc. of the algorithms used in technological solutions implementing ChatGPT-type artificial intelligence in Internet search engines, including the search engines that top the rankings of tools Internet users rely on for increasingly precise and efficient searches for specific information on the Internet. If such a system of institutional control on the part of the state is not established, or if a control system covering the companies developing these technologies does not function effectively and/or does not keep up with the technological progress taking place, there may be serious negative consequences in the form of an increase in the scale of disinformation realised in the new Internet media.
How important this may become is evident from what is currently happening around the social media platform TikTok. On the one hand, it has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities.
It cannot be ruled out that new types of social media will emerge in the future in which the above-mentioned solutions implementing ChatGPT-type artificial intelligence into online search engines will find application: search engines operated by Internet users on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria searches controlled by the Internet user for specific, precisely searched information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies, institutions, etc. on social media sites and/or on web-based publication-indexing sites and knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

How to create a system of digital, universal tagging of various kinds of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
How to create a system of digital, universal labelling of different types of works (texts, photos, publications, graphics, videos, innovations, patents, etc.) performed by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different because they are the product of artificial intelligence?
Two days ago, in an earlier post, I started a discussion on the necessity of improving the security of the development of artificial intelligence technology and asked the following questions: How should the system of institutional control of the development of advanced artificial intelligence models and algorithms be structured, so that this development does not get out of control and lead to negative consequences that are currently difficult to foresee? Should the development of artificial intelligence be subject to control, and if so, who should exercise this control? How should an institutional system for controlling the development of artificial intelligence applications be built? Why are the creators of leading technology companies developing ICT, Internet technologies and Industry 4.0, including those developing artificial intelligence technologies, now calling for the development of this technology to be periodically and deliberately slowed down, so that it remains fully under control and does not get out of hand?
Continuing my reflections on the indispensability of improving the security of artificial intelligence development, and analysing the potential risks of the dynamic and uncontrolled development of this technology, I hereby propose to continue these deliberations and invite you to participate in a discussion aimed at identifying the key determinants of building an institutional control system for the development of artificial intelligence, including advanced models composed of algorithms similar to, or more advanced than, the ChatGPT 4.0 system developed by OpenAI and available on the Internet. It is necessary to regulate normatively a number of issues related to artificial intelligence: the development of advanced models composed of algorithms that form artificial intelligence systems; the posting of these technological solutions in open access on the Internet; enabling these systems to improve themselves through automated learning of new content, knowledge, information, abilities, etc.; and building an institutional system of control over the development of this technology and its current and future applications in the various fields of activity of people, companies, enterprises and institutions operating in different sectors of the economy.
Recently, realistic-looking photos of well-known, highly recognisable people, including politicians and heads of state, in unusual situations, created by artificial intelligence, have appeared on online social media sites. What has already appeared on the Internet as a kind of "free creativity" of artificial intelligence, both "fictitious facts" in descriptions of events that never happened, created as answers to questions posed to the ChatGPT system, and photographs of "fictitious events", already indicates the potentially enormous scale of disinformation currently developing on the Internet, thanks to artificial intelligence systems whose products of "free creativity" find their way online.
With the help of artificial intelligence, in addition to texts describing "fictitious facts" and photographs depicting "fictitious events", it is also possible to create films depicting "fictitious events" in cinematic terms. All of these products of the "free creation" of artificial intelligence can be posted on social media and, in the formula of viral marketing, can spread rapidly on the Internet, and can thus be a source of serious disinformation realised potentially on a large scale. Dangerous opportunities have therefore arisen for using this technology to generate disinformation about, for example, a competing company, enterprise, institution, organisation or individual. Within the framework of building an institutional control system for the development of artificial intelligence technology, it is necessary to take into account the creation of a digital, universal marking system for the various types of works (texts, photos, publications, graphics, films, innovations, patents, etc.) performed by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification should be different because they are the product of artificial intelligence. It is therefore necessary to create a system of digital, universal labelling of the various types of works made by artificial intelligence and not by humans. The only issue for discussion is therefore how this should be done.
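As one illustrative building block, and only a sketch: a machine-readable provenance tag could bind a content hash to generator metadata, so that downstream systems can recognise AI-generated files. Real labelling standards under discussion (e.g. C2PA) go much further, embedding and cryptographically signing such manifests:

```python
# Illustrative building block only: record a machine-readable provenance tag
# for a generated file (content hash + generator metadata). Real labelling
# standards (e.g. C2PA) add cryptographic signatures and embedded manifests.
import hashlib
import json
import datetime

def provenance_tag(path, generator, version):
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return json.dumps({
        "sha256": digest,               # binds the tag to this exact content
        "generated_by": generator,      # e.g. a model name (assumption)
        "generator_version": version,
        "created": datetime.datetime.utcnow().isoformat() + "Z",
        "ai_generated": True,
    }, indent=2)

# print(provenance_tag("image.png", "ExampleModel", "4.0"))  # hypothetical file
```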
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to create a system for the digital, universal marking of different types of works (texts, photos, publications, graphics, videos, innovations, patents, etc.) made by artificial intelligence and not by humans, i.e. works whose legal, ethical, moral, business and security qualification, etc. should be different because they are the product of artificial intelligence?
How to create a system of digital, universal labelling of different types of works, texts, photos, publications, graphics, videos, etc. made by artificial intelligence and not by humans?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

What has been missing from the open-access availability of ChatGPT-type artificial intelligence on the Internet? What is missing to make the use of this type of technology compliant with the norms of text publishing law, tax law, copyright law, property law and intellectual property law, to make it fully ethical, practical and effective, and to make it safe and free of disinformation for Internet users?
How should an automated system for verifying the authorship of texts and other works be structured and made openly available on the Internet, in order to verify whether the phrases, text fragments, wordings, etc. present in a text submitted to journal editors or to publishers of books and other text-based publications were generated by artificial intelligence, and if so, to what extent and from which source texts the artificial intelligence extracted specific phrases and fragments, thereby giving a detailed description of the source texts and providing footnotes and bibliographic descriptions of sources, as efficient and effective computerised anti-plagiarism systems do?
The recent appeal by the creators of ChatGPT-type artificial intelligence technology, by businessmen and by founders and co-founders of start-ups developing this technology, calling for the development of this type of technology to be halted for at least six months, confirms the thesis that something was not thought through when OpenAI made ChatGPT openly available on the Internet, that something was forgotten, that something was missing from the openly available ChatGPT-type system. I have already written about the potential massive generation of disinformation in my earlier posts and comments on previously formulated questions about ChatGPT technology, posted on my discussion profile on this Research Gate portal.
To the issue of information security and the potential growth of disinformation in the public space of the Internet we should add the lack of a structured system for digitally marking the "works" created by artificial intelligence, including texts, publications, photographs, films, innovative solutions, patents, artistic works, etc. In this regard, it is also necessary to improve the systems for verifying the authorship of texts sent to journal editors, so as to verify that a text has been written in full compliance with copyright law, intellectual property law, the rules of ethics and good journalistic practice, the rules for writing texts as works of intellectual value, and the rules for writing and publishing professional, popular-science, scientific and other articles.
It is necessary to improve the processes of verifying the authorship of texts sent to the editorial offices of magazines and to the publishing houses of various text publications, including the systems of text verification used by editors and reviewers working for popular-science, trade, scientific, daily and monthly titles, by creating for their needs anti-plagiarism systems equipped with text-analysis algorithms that can identify which fragments of text, phrases and paragraphs were created not by a human but by an artificial intelligence of the ChatGPT type, and whose work those fragments are. An improved anti-plagiarism system of this kind should also include tools for precisely identifying text fragments, phrases, statements, theses, etc. of other authors, i.e. providing full information in the form of bibliographic descriptions of source publications and footnotes to sources. An anti-plagiarism system improved in this way should, like ChatGPT, be made available to Internet users in an open-access format.
In addition, it remains to be considered whether journal editors and publishers of various types of textual and other publications should be legally obliged to use this kind of anti-plagiarism system when verifying the authorship of texts. Arguably, editors and publishers will be interested in applying this kind of automated verification system to the publication works they produce; at the very least, those editors of journals and publishers of books that regard themselves, and are regarded, as reputable will be interested in using such an improved system to verify the authorship of texts sent to them.
Another issue is the identification of technological determinants, including the types of technologies with which it will be possible to suitably improve such an automated verification system. Paradoxically, here again artificial intelligence technology comes into play, which can and should prove to be of great help in verifying the authorship of texts and other works.
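A hedged sketch of the core mechanism such a system could rely on: word n-gram (shingle) matching between a submitted text and an indexed source corpus, so that every matched fragment can be traced back to a source for footnoting. Real anti-plagiarism systems add normalisation, hashing and fuzzy matching on top of this; the corpus below is a placeholder:

```python
# Core mechanism sketch: find word n-grams shared between a submitted text
# and indexed source documents, so each matched fragment can be traced to a
# source for footnoting. Real systems add normalisation, hashing, fuzziness.
def shingles(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_sources(submission, corpus, n=5):
    sub = shingles(submission, n)
    return {doc_id: sorted(sub & shingles(doc, n))
            for doc_id, doc in corpus.items() if sub & shingles(doc, n)}

corpus = {"source_A": "deep learning models require large labelled data sets to train well"}
print(match_sources("it is known that deep learning models require large labelled data sets", corpus))
```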
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should an automated, open-access online system for verifying the authorship of texts and other works be structured, in order to verify whether the phrases, text fragments, wordings, etc. in a specific text sent to journal editors or to publishers of books and other textual publications were generated by artificial intelligence, and if so, to what extent and from which source texts the artificial intelligence retrieved specific phrases and fragments, thereby giving detailed characteristics of the source texts and providing footnotes and bibliographic descriptions of sources, as efficient and effective computerised anti-plagiarism systems do?
What was missing when the ChatGPT-type artificial intelligence system was made available on the Internet in an open-access format? What is missing to make the use of this type of technology compliant with the norms of text publishing law, tax law, copyright law, property law and intellectual property law, to make it fully ethical, practical and effective, and to make it safe and free of disinformation for Internet users?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

Regarding small-sample learning: why is it called "few-shot learning" and not "few-data learning"?
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning, artificial intelligence, ... what's next? Intelligent thinking autonomous robots?
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning and artificial intelligence.
Machine learning, machine self-learning and machine learning systems are synonymous terms relating to the field of artificial intelligence, with a particular focus on algorithms that can improve themselves automatically through experience gained from exposure to large data sets. Machine learning algorithms build a mathematical model of data processing from sample data, called a training set, in order to make predictions or decisions without being explicitly programmed by a human to do so. Machine learning algorithms are used in a wide variety of applications, such as spam protection, i.e. filtering Internet messages for unwanted correspondence, or image recognition, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
Deep learning is a subcategory of machine learning which involves the creation of deep neural networks, i.e. networks with multiple levels of neurons. Deep learning techniques are designed to improve, among other things, automatic speech processing, image recognition and natural language processing. The structure of deep neural networks consists of multiple layers of artificial neurons. Simple neural networks can be designed manually, so that a specific layer detects specific features and performs specific data processing, while learning consists of setting appropriate weights, significance levels and value systems for the components of specific issues, defined on the basis of processing and learning from large amounts of data. In large neural networks, the deep learning process is automated and, to a certain extent, self-contained. The network is then not designed to detect specific features; it detects them on the basis of processing appropriately labelled data sets. Both such data sets and the operation of the neural networks themselves must be prepared by specialists, but the features are detected by the program itself. Large amounts of data can therefore be processed and the network can automatically learn higher-level feature representations, which means it can detect complex patterns in the input data. In view of the above, deep learning systems are built on Big Data Analytics platforms constructed in such a way that the deep learning process is performed on a sufficiently large amount of data.
Artificial intelligence, denoted by the acronym AI, is, respectively, the "intelligent", multi-criteria, advanced, automated processing of complex, large amounts of data carried out in a way that alludes to certain characteristics of human intelligence exhibited by thought processes. As such, it is the intelligence exhibited by artificial devices, including certain advanced ICT and Industry 4.0 systems and devices equipped with these technological solutions. The concept of artificial intelligence is contrasted with that of natural intelligence, i.e. the intelligence of humans. Artificial intelligence thus has two basic meanings. On the one hand, it is a hypothetical intelligence realised through a technical rather than a natural process.
On the other hand, it is the name of a technology and a research field of computer science and cognitive science that also draws on the achievements of psychology, neurology, mathematics and philosophy. In computer science and cognitive science, artificial intelligence also refers to the creation of models and programs that simulate at least partially intelligent behaviour. Artificial intelligence is also considered in philosophy, within which a theory concerning the philosophy of artificial intelligence is being developed; in addition, it is a subject of interest in the social sciences.
The main task of research and development work on artificial intelligence technology and its new applications is the construction of machines and computer programs capable of performing selected functions analogously to those performed by the human mind working with the human senses, including processes that do not lend themselves to numerical algorithmisation. Such problems are sometimes referred to as AI-hard and include processes such as decision-making in the absence of all data, analysis and synthesis of natural languages, logical reasoning (also referred to as rational reasoning), automated theorem proving, computer logic games such as chess, intelligent robots, and expert and diagnostic systems, among others. Artificial intelligence can be developed and improved by integrating it with machine learning, fuzzy logic, computer vision, evolutionary computing, neural networks, robotics and artificial life.
Artificial intelligence (AI) technologies have been developing rapidly in recent years, driven by their combination with other Industry 4.0 technologies, by the use of microprocessors, digital machines and computing devices with ever-increasing capacity for multi-criteria processing of ever-growing amounts of data, and by the emergence of new fields of application. Recently, the development of artificial intelligence has become a topic of discussion in various media due to ChatGPT, an open-access, automated, AI-based solution with which Internet users can hold a kind of conversation. The solution is based on, and learns from, a collection of large amounts of data extracted in 2021 from specific data and information resources on the Internet. The development of artificial intelligence applications is so rapid that it is outpacing the adaptation of regulations. The new applications being developed do not always generate exclusively positive impacts; the potentially negative effects include the generation of disinformation on the Internet, i.e. information crafted using artificial intelligence that is not in line with the facts and is disseminated on social media sites. This raises a number of questions regarding the development of artificial intelligence and its new applications, the possibilities that will arise under future generations of artificial intelligence, and the possibility of teaching artificial intelligence to think, i.e. to realise artificial thought processes in a manner analogous or similar to the thought processes realised in the human mind.
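To make the "multiple layers of neurons" point above concrete, here is a toy sketch: a two-hidden-layer network learns XOR, a function no single linear layer can represent, with the weights set by training rather than by hand (the architecture and hyperparameters are illustrative assumptions):

```python
# The "deep" in deep learning, concretely: a stack of neuron layers whose
# weights are set by training on data rather than by hand. Toy example:
# learn XOR, which a single linear layer cannot represent.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([0, 1, 1, 0], dtype="float32")  # XOR

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(8, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)
print(model.predict(X).round().ravel())  # should approximate [0, 1, 1, 0]
```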
In view of the above, I address the following question to the esteemed community of scientists and researchers:
The fourth technological revolution currently taking place is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning technologies, deep learning, artificial intelligence... what's next? Intelligent thinking autonomous robots?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

Can deep learning be harnessed to predict and prevent zero-day attacks in cloud environments, bolstering overall security posture?
Can you explain the concept of the vanishing gradient problem in deep learning? How does it affect the training of deep neural networks, and what techniques or architectures have been developed to mitigate this issue?
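A small self-contained demonstration of the effect (PyTorch, with an illustrative 20-layer stack): gradients reaching the early layers of a sigmoid network are far smaller than in a ReLU network, which is why ReLU activations, residual connections, batch normalisation and careful initialisation are the standard mitigations:

```python
# Demonstration of the vanishing gradient: in a deep stack of sigmoid layers,
# gradient norms shrink toward the early layers; swapping in ReLU (or adding
# residual connections / batch norm) keeps them usable.
import torch
import torch.nn as nn

def grad_norms(activation):
    layers = []
    for _ in range(20):
        layers += [nn.Linear(32, 32), activation()]
    net = nn.Sequential(*layers)
    x = torch.randn(64, 32)
    loss = net(x).pow(2).mean()
    loss.backward()
    return [l.weight.grad.norm().item() for l in net if isinstance(l, nn.Linear)]

sig, relu = grad_norms(nn.Sigmoid), grad_norms(nn.ReLU)
print("first/last layer grad norm, sigmoid:", sig[0], sig[-1])
print("first/last layer grad norm, relu:   ", relu[0], relu[-1])
```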
Can ChatGPT be used in conducting market and other analyses that are helpful in managing a business entity?
What could be other applications of generative artificial intelligence, the ChatGPT language model or other similar artificial intelligence technologies in the business of companies and enterprises operating in the SME sector?
Currently and prospectively, generative artificial intelligence, a ChatGPT-type language model, is finding various applications helpful in analyses carried out for business purposes. ChatGPT can be helpful, for example, in quickly drawing up a competitive analysis for a specific company, enterprise, start-up or other type of business entity. This kind of real-time analysis can be helpful in the effective management of a business entity. For the time being, however, an issue may be the outdatedness of the data and information contained in the database that ChatGPT uses to answer questions: the aforementioned database is a kind of Big Data resource built from data and information collected from a selection of many websites, not currently but only up to 2021.
Large corporations, including large technology companies, have the financial capacity to create research and development departments within their company structures, where they develop technological innovations, new technologies, technology standards, etc., in order to stay at the technological forefront and maintain their strong position in specific markets. Consequently, large technology companies also have an interest in ensuring that emerging technologies and innovations are quickly incorporated into their business. This also applies to the currently developing Industry 4.0 technologies, including machine learning, deep learning and artificial intelligence.
On the other hand, companies and enterprises operating in the SME sector, above all micro-enterprises and start-ups, have much more limited means of financing research and development departments within their organisations, where new technologies, innovations and technological standards would be created. Of course, SME companies are also interested in incorporating new technological solutions and innovations into their businesses; however, the new technologies they implement are mostly not the result of their own research activity, but are bought and then implemented as ready-made solutions, proven technologies or patents already applied in other companies. Still, some new technologies and open-access online information services are available to companies and enterprises in the SME sector even in the early stages of business development and with relatively small financial outlays: for example, the new online media, social media and online information services offered by large Internet-based technology companies, which have set standards in the implementation of certain Industry 4.0 technologies and have established a strong position in the market for certain online information and other services.
Consequently, the currently rapidly developing artificial intelligence technology, including specific solutions made available on the Internet such as ChatGPT, can be used in specific business-development applications by various types of business entities, including companies and enterprises in the SME sector. For example, ChatGPT can be used in carrying out market and other analyses that are helpful in managing a business entity, including one operating in the SME sector.
However, the applicability of this kind of shared technological tool, a language model of generative artificial intelligence made available on the Internet under an open-access formula, and of other artificial intelligence solutions that the technology companies creating them will probably soon offer, is much greater.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the applications of ChatGPT in the business of companies and enterprises operating in the SME sector?
Can ChatGPT be used in conducting market and other analyses that are helpful in managing a business entity?
What could be other applications of generative artificial intelligence, the ChatGPT language model or other similar artificial intelligence technologies in the business of companies and enterprises operating in the SME sector?
What do you think about this topic?
What is your opinion on this subject?
Please answer,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

In your opinion, will a thinking artificial intelligence without human feelings be a more effective 'employee' in a company or a more effective and dangerous tool employed by a competing company?
Will the development of artificial intelligence technology help in terms of the management of business entities, public institutions, financial institutions, etc., and help to achieve the strategic business goals of certain commercially operating companies and businesses and perhaps public sector institutions as well?
The development of artificial intelligence may in the future lead to the creation of digital surrogates that mimic thought processes, artificial consciousness, emotional intelligence, etc. Many technology companies building their business on new information technologies and Industry 4.0 are conducting research to create new generations of artificial intelligence enriched with thought processes, artificial consciousness, emotional intelligence, and so on. Some large technology companies that are also active online are working on their own equivalents of generative artificial intelligence built on an advanced language model, as ChatGPT is. Internet users, companies and institutions are currently exploring the possibilities of practical applications of ChatGPT. The generation of this tool currently available online is already the fourth, ChatGPT 4.0, and this is probably not the end of the development of this technology and its applications.
In addition, some technology companies at the forefront of artificial intelligence development, using various machine learning and deep learning solutions together with access to large sets of information and data collected on Big Data platforms, are teaching artificial intelligence to perform various activities and jobs and to solve increasingly complex tasks that until now have been performed only by humans. As part of these learning processes, and given the continual technological advances within ICT and Industry 4.0, leading technology companies are attempting to create a highly advanced artificial intelligence capable of carrying out what we know as thought processes, which have so far taken place only in the human brain (and perhaps in some animals). Perhaps in the future an artificial intelligence will be created that is capable of simulating, digitally generating, what we call human emotions and emotional intelligence. Perhaps an artificial intelligence will be created that is equipped with digitally generated artificial consciousness. What if, in the future, autonomous robots are created that are equipped not only with artificial intelligence, but also with digitally generated reactions symbolising human emotions, with digitally generated thought processes, with artificial consciousness, and that behave as if they were also equipped with artificial emotional intelligence? Perhaps this will happen.
But the solutions currently being developed, and the growing range of applications of artificial intelligence, are devoid of typically human characteristics: thought processes such as abstract thinking, emotional intelligence, human emotions and feelings, consciousness and so on. Will the rapidly developing artificial intelligence technology, and the rapidly multiplying applications of it, solve many problems, or will more new problems be generated? Perhaps a kind of thinking artificial intelligence will soon emerge, but one that has no human feelings, e.g. empathy, in addition to having no digital equivalent of emotional intelligence. New applications for this kind of enhanced artificial intelligence will probably quickly emerge.
Will it then be associated with the possibility of solving a multitude of problems, or could this kind of AI development generate new risks and dangers for humans? Technology companies pursuing this kind of technological advancement and improved artificial intelligence assume that a thinking artificial intelligence without human feelings can be a more effective "employee" in a company. On the other hand, a thinking artificial intelligence that is a more efficient and effective "employee" may also be a more dangerous tool when employed by a competing company. The developers of these technological solutions usually start from the assumption that the development of artificial intelligence technology will help in the management of business entities, public institutions, financial institutions, etc. and will help to achieve the strategic business objectives of commercially operating companies and enterprises, and perhaps also of public sector institutions. However, we do not know whether this will be the case, whether the technological advances taking place in artificial intelligence and the emergence of new generations of these technologies will generate only safe and positive applications for people, or whether new risks and threats will also emerge.
Counting on your opinions, on getting to know your personal opinion, on an honest approach to the discussion of scientific issues and not the ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
In view of the above, I address the following questions to the esteemed community of scientists and researchers:
Will the development of artificial intelligence technology help in the field of management of business entities, public institutions, financial institutions, etc. and will it help in achieving the strategic business goals of certain commercially operating companies and enterprises and perhaps also public sector institutions?
In your opinion, will a thinking artificial intelligence without human feelings be a more effective 'employee' in a company or a more effective and dangerous tool employed by a competitive company?
Will a thinking artificial intelligence without human feelings solve many problems or will it generate more new problems?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

Could you discuss the inherent limitations and challenges faced by deep learning algorithms, especially in terms of data requirements, interpretability, and adversarial attacks?
Can the Python language act as a versatile tool for learning deep learning?
How can artificial intelligence break through the existing deep learning/neural network framework, and what are the directions?
What are the applications of machine learning, deep learning and/or artificial intelligence technologies to securities market analysis, including stock market analysis, bonds, derivatives?
ICT information technologies have already been implemented in banking and large companies operating in non-financial sectors of the economy since the beginning of the third technological revolution. Subsequently, the Internet was used to develop online and mobile banking. Perhaps in the future, virtual banking will be developed on the basis of the increasing scale of application of technologies typical of the current fourth technological revolution and the growing scale of implementation of Industry 4.0 technologies to businesses operating in both the financial and non-financial sectors of the economy.
In recent years, various technologies for advanced, multi-criteria data processing have increasingly been applied to business entities in order to improve organisational management processes, risk management, customer and contractor relationship management, management of supply logistics systems, procurement, production, etc., and to improve the profitability of business processes. In order to improve the profitability of business processes, improve marketing communications, offer products and services remotely to customers, etc., such Industry 4.0 technologies as the Internet of Things, cloud computing, Big Data Analytics, Data Science, Blockchain, robotics, multi-criteria simulation models, digital twins, but also machine learning, deep learning and artificial intelligence are increasingly being used.
In the field of improving the processes of equity investment management, the processes of carrying out economic and financial analyses, fundamental analyses concerning the valuation of specific categories of investment assets, including securities, i.e. improving the processes carried out in investment banking, ICT information technologies and Industry 4.0 have also been used for many years now. In this connection, there are also emerging opportunities to apply machine learning, deep learning and/or artificial intelligence technologies to the analysis of the securities market, including the analysis of the stock market, bonds, derivatives, etc., i.e. key aspects of business analytics carried out in investment banking. Improving such analytics through the use of the aforementioned technologies should, in addition to the issue of optimising investment returns, also take into account important aspects of the financial security of capital markets transactions, including issues of credit risk management, market risk management, systemic risk management, etc.
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
What are the applications of machine learning, deep learning and/or artificial intelligence technologies for securities market analysis, including equity, bond, derivatives market analysis?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz

This is further to the use of machine learning and deep learning models. I won't be using physical models for the above calculations.
How can deep learning models improve anomaly detection for enhancing cloud security against sophisticated cyber threats?
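One standard answer, sketched here under simplifying assumptions: train an autoencoder on normal traffic features only, then flag inputs whose reconstruction error is unusually high. The synthetic Gaussian features below stand in for real flow statistics:

```python
# One standard deep-learning approach to anomaly detection: train an
# autoencoder on normal traffic only, then flag inputs whose reconstruction
# error exceeds a threshold. Features here are synthetic placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (1000, 20)).astype("float32")  # "normal" feature vectors

ae = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(8, activation="relu"),  # bottleneck
    tf.keras.layers.Dense(20),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal, normal, epochs=20, verbose=0)

errors = np.mean((ae.predict(normal) - normal) ** 2, axis=1)
threshold = np.percentile(errors, 99)  # tolerate ~1% false alarms on normal data
anomaly = rng.normal(4, 1, (1, 20)).astype("float32")
print(np.mean((ae.predict(anomaly) - anomaly) ** 2) > threshold)  # expect True
```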
What are the optimal cloud deployment strategies to accelerate training and inference of deep learning models?
Can cloud resources mitigate resource constraints for training deep learning models on massive datasets?
Could you elaborate on the distinctions between supervised and unsupervised deep learning approaches, highlighting their respective use cases and advantages in various applications?
What are the trade-offs between using specialized cloud-based AI services versus building and training custom deep learning models?
How does cloud-based distributed computing impact the speed and performance of large-scale deep learning tasks?
What are the trade-offs between different cloud providers for cost-effective machine and deep learning model deployment?
I want to develop a neural-network-based system that can recognize human actions accurately and quickly in real time, both from live webcam feeds and from pre-recorded videos. My goal is to employ state-of-the-art techniques that can handle diverse actions and varying environmental conditions.
I would greatly appreciate any insights, recommendations, or research directions that experts could provide me with.
Thank you so much in advance.
I was exploring differential privacy (DP), which is an excellent technique for preserving the privacy of data. However, I am wondering what performance metrics could be used to compare schemes with DP against schemes without DP.
Are there any performance metrics by which a comparison can be made between a scheme with DP and a scheme without DP?
Thanks in advance.
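One common comparison is utility versus privacy budget: fix a query, add Laplace noise calibrated to its sensitivity for several values of epsilon, and report the resulting error against the non-private answer. A minimal numpy sketch for a mean query over values in [0, 1]:

```python
# A typical way to compare "with DP" vs "without DP": fix a query, add
# calibrated Laplace noise for several privacy budgets (epsilon), and report
# the utility loss (here, absolute error of a mean over values in [0, 1]).
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, 1000)
true_mean = data.mean()
sensitivity = 1.0 / len(data)  # changing one record moves the mean by <= 1/n

for eps in (0.1, 1.0, 10.0):
    noisy = true_mean + rng.laplace(0, sensitivity / eps, size=10000)
    print(f"eps={eps}: mean abs error = {np.abs(noisy - true_mean).mean():.5f}")
```

For a learning task, the analogous comparison is model accuracy (or loss) with and without DP training at each epsilon, alongside the privacy guarantee itself.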
Currently, I am exploring federated learning (FL). FL seems likely to become a major trend soon because of its promising functionality. Please share your valuable opinion regarding the following concerns (a minimal FedAvg sketch follows after the list for context).
- What are the current trends in FL?
- What are the open challenges in FL?
- What are the open security challenges in FL?
- Which emerging technology can be a suitable candidate to merge with FL?
Thanks for your time.
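For context, here is a minimal sketch of FedAvg, the canonical FL baseline: each client takes local gradient steps on its private data, and the server averages the resulting models weighted by client data size. Linear regression on synthetic data keeps it readable (client sizes, learning rate and round counts are illustrative):

```python
# Minimal FedAvg rounds (the canonical FL baseline): clients train locally,
# the server averages the models weighted by client data size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

def make_client(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 100, 200)]

global_w = np.zeros(3)
for _ in range(20):                                       # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(5):                                # local epochs
            w -= 0.05 * (2 / len(X)) * X.T @ (X @ w - y)  # MSE gradient step
        updates.append(w)
        sizes.append(len(X))
    global_w = np.average(updates, axis=0, weights=sizes) # weighted server average

print(global_w)  # approaches [1.0, -2.0, 0.5] without sharing raw client data
```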
Nowadays, machine learning and deep learning techniques are employed to tackle various kinds of security problems, such as malware mitigation. Is the difference in their performance obvious? Which is better?
Could you explain the foundational principle that underlies deep learning and how it differs from traditional machine learning methods?
Why is a CNN better than an SVM for image classification, and which is better for image classification: machine learning or deep learning?
What deep learning algorithms are used for image processing, and which CNN algorithm is used for image classification?
Can anybody explain this to me? Is there any formula or equation to manually predict the number of images that can be generated?
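One hedged way to reason about it: if augmentation uses k independent transforms and transform i has n_i discrete settings (counting "off" as a setting), a single image can yield up to n_1 × n_2 × ... × n_k distinct variants, so N originals give at most N · (n_1 × ... × n_k) images. For example, 4 rotation angles, 2 flip states and 3 brightness levels give 4 × 2 × 3 = 24 variants per image. With continuous parameters (arbitrary rotation angles, random crops), the count is unbounded, and the number actually generated is simply the number of random samples you choose to draw.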
The concept of Circular Economy (CE) in the Construction Industry (CI) is mainly about the R-principles: Rethink, Reduce, Reuse, Repair, and Recycle. Thus, if the design stage following an effective job site management would include consideration of the whole lifecycle of the building with further directions of the possible use of the structure elements, the waste amount could be decreased or eliminated. Analysis of the current literature has shown that CE opportunities in CI are mostly linked to materials reuse. Other top-researched areas include the development of different circularity measures, especially during the construction period.
In the last decade, AI emerged as a powerful method. It solved many problems in various domains, such as object detection in visual data, automatic speech recognition, neural translation, and tumor segmentation in computed tomography scans.
Despite the broad range of work on the circular economy, AI has not been widely utilized in this field. Thus, I would like to ask whether you have an opinion or idea on how artificial intelligence (AI) can be useful in developing or applying circular construction activities.
Within the technological developments associated with artificial intelligence and mobile communication systems applied to healthcare, chatbots represent a trend that is growing in popularity as an efficient mechanism promoting interactions between application users across different sectors, since they provide personalized information and allow interactions at any time, with a capacity to reach millions of people simultaneously. From the patient's perspective, chatbot technologies as a representation of natural language processing, along with deep learning and virtual reality (together referred to as cognitive services), have been identified as healthcare drivers owing to their potential for creating high-impact applications for medical and preventive health services.
Source: Chatbots - The AI-Driven Front-Line Services for Customers (edited volume), https://www.intechopen.com/online-first/86857
How can machine and deep learning techniques be adapted or fine-tuned to accommodate variations in brain tumor types, locations, and patient populations for more personalized diagnosis and treatment?
How can machine and deep learning models be integrated with medical imaging technologies, such as MRI, CT, and PET scans, to improve brain tumor detection and classification?
What are the most commonly used machine and deep learning algorithms, specifically for forecasting hourly solar power generation?
What differentiates PyTorch in terms of usability and flexibility compared to other deep learning frameworks?
I am exploring the application of Deep Learning in the aspect of Inventory Management in Supply Chain. What are the specific techniques, models, or algorithms being used? Are there practical cases that have demonstrated significant improvement in stock management, cost reduction, or service level enhancement?
In your opinion, does it make sense to create a new generation of something similar to ChatGPT that would use databases built solely on continuously updated data, information and objectively verified knowledge resources: a kind of online business advisor drawing on defined business websites and portals and financial and economic information portals, which would answer questions from entrepreneurs, businessmen and managers in charge of companies and enterprises about the future development of their business, company, enterprise or corporation?
In my opinion, it makes sense to create a new generation of something similar to ChatGPT that would use databases built solely on continuously updated data, information and objectively verified knowledge resources: a kind of online business advisor drawing on defined business websites and portals and financial and economic information portals, which would answer questions from entrepreneurs, businessmen and managers about the future development of their business, company, enterprise or corporation. Such intelligent systems, drawing on large data and information resources, processing large sets of economic and financial information and data in real time on Big Data Analytics platforms and feeding current analytical data to the business intelligence systems that support management processes, could prove very useful as tools that facilitate organisational management, forecast various scenarios of abnormal events and developments in the business environment, diagnose escalating risks, support early warning systems, diagnose and forecast opportunities and threats to the development of a company or enterprise, and provide warning signals for contingency and risk management systems.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does it make sense to create a new generation of something similar to ChatGPT that would use databases built solely on the basis of continuously updated data, information, and objectively verified knowledge resources: a kind of online business advisor that draws on defined business websites and portals and on financial and economic information portals, and that answers the questions of entrepreneurs, businessmen, and managers in charge of companies and enterprises about the future development of their business, company, enterprise, or corporation?
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, a kind of intelligent online business advisor?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Counting on your opinions and hoping to learn your personal views in a fair discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

What kind of innovative startups do you think can be created using a new generation of smart tools similar to ChatGPT and/or whose business activities would be helped by such smart tools and/or certain new business concepts would be based on such smart tools?
There is a growing body of data suggesting that innovative startups may be created using the next generation of ChatGPT-like smart tools, that the business activities of existing firms may be helped by such tools, and that certain new business concepts may be built on them. On the one hand, Internet startups are already emerging that are based on artificial intelligence systems specialized in creating textual, graphic, video, and other materials, i.e., variants of something similar to ChatGPT. On the other hand, some of these kinds of solutions may arguably turn in the future into a kind of online business advisor generating advice for entrepreneurs developing new innovative startups.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What kind of innovative startups do you think could be developed using a new generation of smart tools similar to ChatGPT and/or whose business activities would be helped by such smart tools and/or certain new business concepts would be based on such smart tools?
What kind of innovative startups can be created based on the next generation of ChatGPT-like smart tools?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Counting on your opinions and hoping to learn your personal views in a fair discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz

Is the main benefit of a large language model its sheer capability, rather than its few-shot learning ability?
What are the ethical and regulatory considerations associated with the integration of machine and deep learning technologies in brain tumor identification, and how can patient privacy and data security be ensured?
As a CS/SE student, I am currently looking for an area for research.
My idea was to investigate recommendation systems using machine learning and deep learning, but the time available to complete the project is limited.
Within this topic, the remaining research gaps appear to be narrow.
Please suggest some research gaps for me to consider.
I'm expecting to use stock prices from the pre-COVID period up to now to build a model for stock price prediction. I am unsure which periods to include in my training and test sets. Should I use the pre-COVID period as my training set and the COVID period as my test set, or should I use the pre-COVID period plus part of the COVID period for training and the rest of the COVID period for testing?
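A commonly recommended alternative to a single fixed split is walk-forward evaluation: train always on earlier data and test on later data across several folds, so the model is scored on both pre-COVID and COVID-era regimes rather than on one arbitrary boundary. Below is a minimal sketch with scikit-learn's TimeSeriesSplit; the stand-in price series, window length, and Ridge baseline are illustrative assumptions.
```python
# Sketch of walk-forward evaluation: each fold trains on all data before
# the test segment, so training always precedes testing in time.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

prices = np.cumsum(np.random.default_rng(1).normal(size=1500)) + 100  # stand-in series
window = 20
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

for fold, (tr, te) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    model = Ridge().fit(X[tr], y[tr])
    mae = mean_absolute_error(y[te], model.predict(X[te]))
    print(f"fold {fold}: train ends at index {tr[-1]}, test MAE {mae:.3f}")
```
If the later folds (covering the COVID period) score much worse than the earlier ones, that is direct evidence of regime shift, which argues for including part of the COVID period in training rather than testing on it exclusively.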
How can transfer learning and data augmentation techniques be leveraged to overcome data scarcity and improve the generalizability of machine and deep learning models for brain tumor analysis?
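A typical recipe combines an ImageNet-pretrained backbone with modest, anatomically plausible augmentations. The sketch below assumes a recent torchvision (the weights enum requires roughly 0.13 or later); the four-class setup (e.g., glioma/meningioma/pituitary/no tumor) and the augmentation choices are illustrative assumptions, not a validated protocol.
```python
# Minimal sketch: transfer learning with augmentation for MRI slice
# classification, fine-tuning an ImageNet-pretrained ResNet-18.
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),        # mild augmentations chosen to
    transforms.RandomRotation(10),            # stay anatomically plausible
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4) # assumed 4-class tumor setup

# Only model.fc.parameters() would then be passed to the optimizer;
# unfreezing the deeper residual blocks is a common second fine-tuning stage.
```
Freezing the backbone first keeps the small dataset from overwriting general-purpose features; the two-stage unfreezing schedule is one common way to trade off data scarcity against domain shift between natural images and MRI.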
Would you like to use a completely new generation of ChatGPT-type tool that would be based on those online databases you would choose yourself?
What do you think about such a business concept for an innovative startup: creating a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use only those online databases, knowledge bases, portals and websites that individual Internet users will select themselves?
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built exclusively on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use exclusively those Internet databases, knowledge bases, portals and websites that individual Internet users themselves will select, determine, define?
In my opinion, it makes sense to create a new generation of something similar to ChatGPT that would use databases built exclusively on the basis of continuously updated data, information, and objectively verified knowledge resources, and that would draw only on those Internet databases, knowledge bases, portals, and websites that individual Internet users themselves select, determine, and define. This kind of solution, by allowing the functionality of such generative artificial intelligence systems to be personalized, would significantly increase their usefulness for individual users, Internet users, and citizens. In addition, it would significantly broaden the scale of innovative practical applications of such personalized intelligent systems for analyzing the content and data contained in the selected Internet resources.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use only those Internet databases, knowledge bases, portals and websites that individual Internet users themselves will select, specify, define?
What do you think of such a business concept for an innovative startup: the creation of a new generation of something similar to ChatGPT, which will use databases built exclusively on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use exclusively those online databases, knowledge bases, portals and websites that individual Internet users will themselves select?
Would you like to use a completely new generation of ChatGPT-type tool, which would be based on those online databases that you yourself would select?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Counting on your opinions and hoping to learn your personal views in a fair discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
Dariusz Prokopowicz

Deep Learning Frameworks for Image Recognition, Natural Language Processing (NLP), Speech Recognition, Recommender Systems, Generative Models, Time Series Analysis, Autonomous Vehicles, Healthcare, Robotics, and Financial Analysis, etc.
What are the state-of-the-art machine and deep learning techniques used for brain tumor identification, and how do they compare in terms of accuracy and efficiency?
I'm looking for opportunities as a research assistant, or for any other kind of involvement in research, in the fields of Machine Learning, Deep Learning, or NLP. I am eager to contribute my efforts and dedication to research endeavors. Please let me know if you have any openings for this kind of work.
Email: sharifulprince97@gmail.com
I am interested in publishing research on deep learning-based automatic speech recognition (ASR) for a specific low-resource language. What are the key areas or research gaps that I should focus on to contribute to this field? Are there any influential papers or research works that I should read to gain a comprehensive understanding of the current state-of-the-art in this area? I would greatly appreciate any guidance or recommendations from experts in this field. Thank you in advance!
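For orientation, the common starting point in low-resource ASR is fine-tuning a multilingual self-supervised checkpoint such as XLSR wav2vec 2.0 with a CTC head. The sketch below assumes a recent version of the Hugging Face transformers library; the vocabulary size and the dummy batch are placeholders, and a real setup would first build a character tokenizer for the target language.
```python
# Hedged sketch: fine-tuning the multilingual XLSR wav2vec 2.0 checkpoint
# with a randomly initialized CTC head for a new target language.
import torch
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    vocab_size=40,                 # size of the target-language character vocab (assumed)
)
model.freeze_feature_encoder()     # keep the convolutional front end fixed

# One training step on dummy data; real inputs are 16 kHz waveform values
# and `labels` are character IDs from the target-language tokenizer.
inputs = torch.randn(2, 16000)
labels = torch.randint(1, 40, (2, 20))   # 0 is reserved for the CTC blank/pad
out = model(input_values=inputs, labels=labels)
out.loss.backward()
print("CTC loss:", out.loss.item())
```
Typical research gaps around this recipe include data augmentation and pseudo-labeling under extreme data scarcity, language-model fusion for the target language, and cross-lingual transfer from related higher-resource languages; the XLSR and wav2vec 2.0 papers are the usual entry points to that literature.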
With the development of artificial intelligence and the increase in the scale of its applications in the production of goods and the provision of services, should people's working time be reduced?
In connection with the development of artificial intelligence, which is increasingly replacing humans in the performance of various activities and professions, and in a situation of increasing labor productivity through its greater automation and objectification, should people's working time be reduced?
Accordingly, should people's working time be reduced, for example, from 5 working days per week to 4 working days?
Features of highly developed economies include high levels of productivity, innovation in the economic activities of companies and enterprises, the use of new technologies in manufacturing processes, high labor productivity, and high incomes. High labor productivity is largely due to the use of new technologies in the processes of producing goods and offering services. In recent years, new ICT and Industry 4.0 technologies typical of the current fourth technological revolution, including machine learning, deep learning, and artificial intelligence, have been applied to the manufacturing processes of companies and enterprises operating in various sectors of the economy. These technologies are already changing labor markets and will continue to do so over the next several years, replacing people in certain jobs or supporting and improving the work people do. As some of the work done by humans is taken over by artificial intelligence, there may be an opportunity to reduce people's working hours. If this kind of solution is applied, labor productivity should not decline: productivity would be maintained or would increase, and employed citizens would have more time for personal activities, leisure, hobbies, family, and the development of personal passions, and could thus be more productive and creative during working time.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In view of the development of artificial intelligence, which is increasingly replacing humans in the performance of various activities and professions, and in a situation of increasing labor productivity through its greater automation and objectification, should people's working time be reduced?
Accordingly, should people's working hours be reduced, for example, from 5 working days per week to 4 working days?
With the development of artificial intelligence and the increase in the scale of its applications in the production of goods and the provision of services, should people's working time be reduced?
Will the increase in the scale of applications of artificial intelligence make it possible to reduce people's working time?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite you all to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
