Science topic
Automation - Science topic
Controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human organs of observation, effort, and decision. (From Webster's Collegiate Dictionary, 1993)
Questions related to Automation
Please share your valuable opinions.
How does artificial intelligence answering aesthetic questions result in automated emotions? My answer: artificial intelligence may plausibly acquire something like emotions by learning aesthetics.
AI is expected to have a significant impact on the job market, influencing the nature of work and creating new opportunities while also automating certain tasks. Here are some ways AI may affect the job market, and roles that are at risk of automation. What do you think?
- Automation of Routine Tasks
- Routine Cognitive Tasks
- Transportation and Delivery Services
- Manufacturing and Assembly
- Customer Support
- Data Entry and Analysis
- Certain Healthcare Tasks
- Financial Services
- Retail Jobs
However, it's important to note that while automation may eliminate certain jobs, it can also create new opportunities. Many experts argue that AI will lead to the creation of new roles that require uniquely human skills, such as creativity, emotional intelligence, critical thinking, and complex problem-solving. Additionally, there will be a growing demand for jobs that involve developing, maintaining, and managing AI systems.
To adapt to these changes, workers may need to acquire new skills and engage in lifelong learning to stay relevant in the evolving job market. Policies and initiatives that support retraining and upskilling will be crucial for helping the workforce navigate the transition brought about by AI and automation.
Hope to hear from you! Thanks!
Can artificial intelligence read minds? If so, how does that impact automated language translation? My answer: artificial intelligence either can already read minds or plausibly will be able to. Assuming the information in thoughts is translatable regardless of the language the person thinks in, translation between humans may become faster. Humans are very visual creatures and language is fluid (it always represents imagery in thought, including extreme abstractions); thus, artificial intelligence translating one language into another could end the need for human language altogether, or at least greatly facilitate translation.
The Project Management Institute notes that 81% of professionals say that AI is impacting their organizations, a number likely to increase further in the coming years. Automation holds immense promise: by automating low-value-add tasks, project managers can focus their effort and energy on the tasks that will most dramatically benefit their businesses, allowing them to effect greater change and increasing the likelihood of each project reaching its strategic goals.
Recent years have seen AI adoption on a larger scale by organizations to ensure successful project completion in several ways such as:
-> Generating performance insights
-> Supporting the decision-making processes
-> Making estimates and predictions
-> Optimizing resource scheduling
-> Enabling data visualization
-> Performing risk analysis
Good morning everyone! I've just finished reading Shyon Baumann's paper on "Intellectualization and Art World Development: Film in the United States." This excellent paper includes a substantial section of textual analysis in which various film reviews are examined. These reviews are considered a fundamental space for the artistic legitimation of films, which, during the 1960s, increasingly gained artistic value. To achieve this, Baumann focuses on two dimensions: critical devices and lexical enrichment. The paper is a bit dated, and its methodology comes from a time when text-analysis tools were neither as widespread nor as advanced as they are today; on the other hand, those methods have not yet been revisited with more advanced tools. The question is: are you aware of literature/methodologies that could provide insights to extend Baumann's work using modern text-analysis technologies?
In particular, following the dimensions analyzed by Baumann:
a) CHANGING LANGUAGE
- Techniques for the formation of artistic dictionaries that can replace the manual construction of dictionaries of artistic vocabulary (Baumann reviews a series of artistic writings and extracts terms, which are then searched for in film reviews). Is it possible to do this automatically?
b) CHANGING CRITICAL DEVICES
- Positive and negative commentary -> I believe tools capable of performing sentiment analysis can be successfully applied to this dimension. Are you aware of any similar work?
- Director is named -> forming a giant dictionary of directors might work. But what about the rest of the crew who worked on the film? Is there a way to automate the collection of information on people involved in films?
- Comparison of directors -> Once point 2 (which is more feasible) is done, how can we recognize when specific individuals are being discussed? Does any tool exist?
- Comparison of films -> Similar to point 3.
- Film is interpreted -> How to understand when a film is being interpreted? What dimensions of the text could provide information in this regard? The problem is similar for all the following dimensions:
- Merit in failure
- Art vs. entertainment
- Too easy to enjoy
Expanding methods in the direction of automation would allow observing changes in larger samples of textual sources, deepening our understanding of certain historical events. The data could go more in-depth, providing a significant advantage for those who want to view certain artistic phenomena in the context of collective action.
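As a toy illustration of how the positive/negative-commentary device might be automated, here is a minimal lexicon-based polarity scorer (the vocabulary is invented for the example; in practice the lexicon could itself be built from the artistic writings Baumann surveys, which also speaks to the dictionary-construction question above):

```python
# Invented mini-lexicons; a real study would derive these from corpora.
POSITIVE = {"masterful", "profound", "luminous", "brilliant"}
NEGATIVE = {"tedious", "shallow", "clumsy", "dull"}

def lexicon_score(review):
    """Crude polarity score in [-1, 1]:
    (positive hits - negative hits) / total hits."""
    tokens = [w.strip(".,;:!?\"'()").lower() for w in review.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)
```

Modern transformer-based sentiment models would of course outperform such a lexicon, but the lexicon variant has the advantage of being fully inspectable, which matters for historical arguments.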
Thank you in advance!
I was thinking about the Teensy 3.5 because it has a DAC, but unfortunately it has been discontinued. The Raspberry Pi Pico seems interesting, but it has PWM outputs only, which can be converted to a DC voltage signal using a PWM-to-voltage converter (essentially a low-pass filter).
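For sizing that PWM-to-voltage stage, the standard first-order approximations are easy to script; the component values and PWM frequency below are assumptions for illustration, not a recommendation:

```python
import math

def rc_cutoff_hz(r, c):
    """-3 dB corner frequency of a single-pole RC low-pass."""
    return 1.0 / (2.0 * math.pi * r * c)

def pwm_ripple_v(f_pwm, r, c, duty=0.5, v_high=3.3):
    """Worst-case (duty ~ 50%) peak-to-peak ripple of a PWM signal
    smoothed by a single RC pole; valid when f_pwm * r * c >> 1."""
    return v_high * duty * (1.0 - duty) / (f_pwm * r * c)

# Example: Pico PWM at 100 kHz into a 10 kOhm / 1 uF filter
print(rc_cutoff_hz(10e3, 1e-6))        # corner around 16 Hz
print(pwm_ripple_v(100e3, 10e3, 1e-6)) # ripple below 1 mV
```

The trade-off is the usual one: a lower corner frequency gives less ripple but a slower-settling "DC" output, so the RC values should follow from how fast the voltage needs to change.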
We are going to use the Heidelberg OCT Spectralis for mouse OCT acquisition. However, although several papers report the use of this instrument for rodents, we have not been able to do it successfully. We have two separate devices: (1) the Heidelberg OCT Spectralis and (2) the Heidelberg HRA 2. The Heidelberg OCT Spectralis has two lenses, a 30° standard objective lens and an anterior segment lens. The Heidelberg HRA 2 has a 55° widefield lens; unfortunately, the 55° lens cannot be mounted on the Heidelberg OCT Spectralis.
As far as we know, given the high dioptre of the mouse eye, we should use the 55° widefield lens. However, using the standard 30° lens we get a fairly acceptable cSLO image, but no OCT image is displayed.
Can anyone help solve this problem? We have already tried placing an additional lens in front of the device lens, but it still did not work; perhaps the total dioptre of the added lens was not enough.
Also, a paper suggests minor software modifications (using Alt+Ctrl+Shift+O in the Heidelberg Eye Explorer software), but we could not figure out how this should be done. (Spectral domain optical coherence tomography in mouse models of retinal degeneration. Invest Ophthalmol Vis Sci. 2009 Dec;50(12):5888-95. doi: 10.1167/iovs.09-3724.)
These are some papers about using the Heidelberg OCT Spectralis for rodents:
1- Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images. Trans. Vis. Sci. Tech. 2015;4(4):9. doi: https://doi.org/10.1167/tvst.4.4.9.
2- Tracking Longitudinal Retinal Changes in Experimental Ocular Hypertension Using the cSLO and Spectral Domain-OCT. Invest. Ophthalmol. Vis. Sci. 2010;51(12):6504-6513. doi: https://doi.org/10.1167/iovs.10-5551.
3- Giannakaki-Zimmermann H, Kokona D, Wolf S, Ebneter A, Zinkernagel MS. Optical Coherence Tomography Angiography in Mice: Comparison with Confocal Scanning Laser Microscopy and Fluorescein Angiography. Transl Vis Sci Technol. 2016 Aug 18;5(4):11. doi: 10.1167/tvst.5.4.11. PMID: 27570710; PMCID: PMC4997887.
Hello,
I’m conducting market research on the management tools researchers use to automate workflows, visualize processes, and manage budgets and supplies.
Anyone willing to answer a few questions in this regard, please connect.
I'm interested in researchers from all around the world!
How is automation transforming agriculture? How will it impact jobs on farms and rural economies?
The term NDE 4.0 originates from the idea of the fourth industrial revolution, or Industry 4.0. It is defined as the cyber-physical confluence of non-destructive testing and evaluation (NDT&E) and digital technologies including Artificial Intelligence, Digital Twins, Robotics, Advanced Sensing, and Sensor Networks. These ideas have the potential to become a cyber-physical NDE ecosystem (similar to the ongoing transformations in the medical and mobile communication sectors) that reshapes many traditional NDE, structural health monitoring, and prognosis practices. NDE 4.0 is also about the inspection of new generations of multi-functional components, such as additively manufactured parts, and about fully automated inspection systems integrated into smart factories. Interest in this interdisciplinary field is growing, and it has the potential to involve all aspects of NDE. This special issue is focused on NDE 4.0, especially from a physics perspective.
Guest editors:
- Antonello Tamburrino
- Johannes Vrana
- Norbert Meyendorf
- Zheng Liu

In a molecule I am working on, there is a certain O-H bond. I wanted to study the variation of energy as the bond length changes, so I ran a relaxed-scan computation in g16. In the output .log file I have 50 different structures, each with its energy.
As my next step, I want to calculate the vertical excitation energy for each of the 50 structures. I can do it manually by switching to each structure in the window shown in the attachment and using the GUI to generate an energy-calculation .com file for each of them.
But this process gets too tedious, and eventually impossible as the number of structures in my scan increases to >100. Is there any way to automate this with a script? Any help will be appreciated.

We submitted a manuscript to PLOS ONE and were accepted for publication on July 12, 2023. We paid the article processing charge a few days later, but the article has still not been published (after 40 days). A DOI has been generated but it leads nowhere (10.1371/journal.pone.0289151).
We have tried contacting PLOS ONE through multiple email addresses and have received no response. We also tried to call the phone number listed on our invoice to confirm payment, but were directed to an automated message that told us to send them an email. Due to 'high volume' they expected to get back to us in 7-10 business days.
Has anyone else had delays in publishing with PLOS lately?
How can end-to-end workflow automation be achieved for cloud-based machine and deep learning pipelines?
What is a smart agriculture system using IoT in India, and how can the Internet of Things help farming by automating farming techniques?
Can artificial-intelligence farming make agriculture more sustainable, and how are IoT and machine learning automating agriculture?
Please suggest topics related to automation or relays; if you know of a previous design project that needs innovation, that would be good as well.
What has been missing from the open-source availability of ChatGPT-type artificial intelligence on the Internet? What is missing to make it possible to comply with the norms of text publishing law, tax law, copyright law, property law and intellectual property law; to make it fully ethical, practical and effective; and to make it safe, so that it does not generate misinformation for the Internet users who use this type of technology?
How should an automated system for verifying the authorship of texts and other works be structured and made openly available on the Internet, in order to verify whether particular phrases, text fragments, wordings, etc. are present in a specific text submitted to the editors of journals or to publishers of books and other text-based publications? And, if so, to determine to what extent and from which source texts the artificial intelligence extracted those phrases and text fragments, thus giving a detailed description of the source texts and providing footnotes to sources, bibliographic descriptions of sources, etc., as efficient and effective computerised anti-plagiarism systems already do?
The recent appeal by the creators of ChatGPT-type artificial intelligence technology, including businessmen, founders and co-founders of start-ups developing this technology, calling for a halt to its development for at least six months, confirms the thesis that something was overlooked, forgotten or missing when OpenAI made ChatGPT openly available on the Internet. I have already written about the potential massive generation of disinformation in earlier posts and comments on questions about ChatGPT technology posted on my discussion profile on this ResearchGate portal. To the issue of information security and the potential spread of disinformation in the public space of the Internet, we should add the lack of a structured system for digitally marking "works" created by artificial intelligence, including texts, publications, photographs, films, innovative solutions, patents, artistic works, etc., in order to ensure the security of information. In this regard, it is also necessary to improve the systems for verifying the authorship of texts sent to journal editors, so as to verify that a text has been written in full compliance with copyright law, intellectual property law, the rules of ethics and good journalistic practice, the rules for writing texts as works of intellectual value, and the rules for writing and publishing professional, popular science, scientific and other articles.
It is necessary to improve the processes of verifying the authorship of texts sent to the editorial offices of magazines and publishing houses, including the text-verification systems used by the editors and reviewers of popular-science, trade, scientific, daily and monthly magazines, etc. This means creating for their needs anti-plagiarism systems equipped with text-analysis algorithms that can identify which fragments of text, phrases and paragraphs were created not by a human but by a ChatGPT-type artificial intelligence, and whose authorship those fragments are. An improved anti-plagiarism system of this kind should also include tools for the precise identification of text fragments, phrases, statements, theses, etc. of other authors, providing full information in the form of bibliographic descriptions of the source publications and footnotes to sources. Like ChatGPT, such an improved system should be made available to Internet users in an open-access format. It also remains to be considered whether editors of journals and publishers of textual and other publications should be legally obliged to use this kind of anti-plagiarism system when verifying the authorship of texts. Arguably, editors and publishers will be interested in doing so anyway, in order to apply this kind of automated verification to the works they publish; at the very least, those editors of journals and publishers of books and other textual publications that regard themselves, and are regarded, as reputable will want to use such an improved system to verify the authorship of the texts they receive.
Another issue is the identification of the technological determinants, including the types of technologies, with which it will be possible to improve such an automated system for verifying the authorship of texts and other works. Paradoxically, artificial intelligence itself comes into play here again: it can, and should, prove a great help in this verification task.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should an automated, open-access online system for verifying the authorship of texts and other works be structured, in order to verify whether particular phrases, text fragments, wordings, etc. are present in a specific text sent to the editors of journals or to publishers of books and other textual publications? And, if so, to determine to what extent and from which source texts the artificial intelligence retrieved those phrases and text fragments, thus giving detailed characteristics of the source texts and providing footnotes to sources, bibliographic descriptions of sources, etc., as efficient and effective computerised anti-plagiarism systems already do?
What was missing when ChatGPT-type artificial intelligence systems were made available on the Internet in an open-access format? What is missing to make it possible to comply with the norms of text publishing law, tax law, copyright law, property law and intellectual property law; to make this technology fully ethical, practical and effective; and to make it safe, so that it does not generate disinformation for the Internet users who use it?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

In the past month, August 2023, I have tried repeatedly to contact Researchgate.net staff about some papers I have posted here. One paper is posted only as an abstract, with the main text missing; the other was posted for three days (abstract and full text) and then disappeared from my research record. I wish to find out why.
My repeated attempts to contact Researchgate.net staff to get answers to these questions have failed. I received an automated message stating that "access to the 'contact' site has been denied".
I do not understand what the problem is here.
If anyone else has had similar experiences, please let me know.
Thank you very much!
To what extent can robots and AI replace human jobs across different sectors? What are the socioeconomic implications of widespread automation? How can governments and industries manage potential job displacement and ensure a smooth transition for the workforce?
What is the role of automation in agriculture, and what are the applications of robotic automation in food processing?
What is the effect of artificial intelligence and robotics in agriculture, and what is the role of robotics and automation in precision farming?
Automated hematology analyzers often fail to count platelets of abnormal size, particularly macroplatelets. Every case should therefore be examined manually to exclude macrothrombocythemia.

"AI and ML technologies are streamlining and optimizing warehouse operations. Let's discuss their current and potential applications, from inventory optimization to automation. Share your experiences, studies, or questions on this topic."
Dear academic community,
I would like to invite you to participate in a survey that focuses on artificial intelligence, technostress, and their impact on productivity. Artificial intelligence (AI) is a field of technology that focuses on simulating human intelligence and automating tasks. AI can have a significant impact on academic productivity in a number of ways, such as automating tasks, helping with data analysis, and providing personalized learning experiences. However, AI can also lead to technostress among academic workers. Technostress is an emotional state that is caused by the use of technology and is characterized by feelings of stress, anxiety, and frustration. Technostress can be caused by a number of factors, such as overload, technological problems, and uncertainty about how to use technology.
Technostress can have a negative impact on academic productivity. Those who experience technostress may be less productive, more likely to make mistakes, and more likely to feel burned out. Technostress can also have a negative impact on the learning experience. Those who experience technostress are less likely to be interested in learning, less likely to participate actively in class, and more likely to feel overwhelmed.
It is important to research the relationship between AI, technostress, and academic productivity in order to understand how to use AI safely and effectively in the academic environment.
The survey is anonymous and voluntary; we will analyze the responses once the survey is concluded. Contact:
E-mail address: simon.alzbeta.research@gmail.com
Note:
If you have any insights or suggestions regarding the topic, please don't hesitate to contact me via e-mail. After the research is concluded, we would be pleased to send you a summary of the study.
Best regards,
Author:
PhDr. Alžbeta Simon
Department of Management,
J. Selye University, Slovakia
What is the role of automation, drones and robotics in agriculture, and how could drones be the future of Indian farming?
Which sensors can be used in IoT-based agriculture, and how are sensors used in the field of automation and control?
Would you like to use a completely new generation of ChatGPT-type tool that would be based on those online databases you would choose yourself?
What do you think about such a business concept for an innovative startup: creating a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use only those online databases, knowledge bases, portals and websites that individual Internet users will select themselves?
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built exclusively on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use exclusively those Internet databases, knowledge bases, portals and websites that individual Internet users themselves will select, determine, define?
In my opinion, it makes sense to create a new generation of something similar to ChatGPT, which will use databases built exclusively on the basis of continuously updated data, information and objectively verified knowledge resources, and which will use exclusively those Internet databases, knowledge bases, portals and websites that individual Internet users themselves select, determine and define. This kind of solution, which would allow personalization of the functionality of such generative artificial intelligence systems, would significantly increase their usefulness for individual users, Internet users and citizens. In addition, the scale of innovative solutions for practical applications of such personalized intelligent systems for analyzing content and data contained in selected Internet resources would increase significantly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use only those Internet databases, knowledge bases, portals and websites that individual Internet users themselves will select, specify, define?
What do you think of such a business concept for an innovative startup: the creation of a new generation of something similar to ChatGPT, which will use databases built exclusively on the basis of continuously updated data, information, objectively verified knowledge resources, and which will use exclusively those online databases, knowledge bases, portals and websites that individual Internet users will themselves select?
Would you like to use a completely new generation of ChatGPT-type tool, which would be based on those online databases that you yourself would select?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Counting on your opinions, on getting to know your personal opinion, on a fair approach to the discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
Dariusz Prokopowicz

We have been dealing with digitization as a 'new concept' for more than 10 years now. Previously, it was referred to as automation and ERP/MES/CRM. Nowadays, the latest systems are encompassed under the term digitization, which seems to be a fundamental novelty. For years, the terms Data Mining, Process Mining and Artificial Intelligence (AI; in German, KI) have also been part of the discussion. Suddenly, the term AI appears to overshadow everything. How do you see the connections between automation, ERP/MES/CRM, digitization, Data/Process Mining, and AI?
The widespread automation of jobs could lead to mass unemployment. Preparing for a future with increased leisure time and rethinking economic and societal structures becomes crucial to mitigate the negative impacts.

AI has the potential to automate and enhance various aspects of our lives, but it is unlikely to completely replace humans in the near future. While AI technology has made significant advancements in areas such as image recognition, natural language processing, and automation, there are several reasons why complete human replacement is unlikely in the near term:
- Complexity of human skills
- Ethical and social considerations
- Contextual understanding
- Social acceptance
Instead of outright replacement, the future is more likely to involve collaboration between humans and AI systems, with AI augmenting human capabilities and assisting in various tasks. This collaboration can lead to increased productivity, improved decision-making, and the automation of routine and repetitive tasks, freeing up humans to focus on higher-level activities that require uniquely human skills.
Hello, I'm trying to find pre-labelled 90 mm Petri dishes to include in an automation workflow for a new biotech lab. Does anyone know a brand? I can't find one! Thanks!
Nidia
Hello everyone,
I am currently conducting a stencil printing simulation using ABAQUS. The simulation needs to be performed in 20 different locations on the stencil, requiring a separate simulation for each of these locations. In my case, all components remain fixed, and only the location of the blade changes across these 20 locations. The simulation consists of seven steps. Throughout these 20 simulations, all conditions remain identical from the first step until the fifth step. However, after the fifth step, I change the blade's location in the sixth step and continue the simulation in the seventh step.
Given that the first five steps are the same in all simulations, I would like to explore if there is a way to execute these steps only once and then reuse or restart the results for the remaining 19 simulations. In other words, I aim to find a method that avoids repeating the first five steps in the subsequent simulations. Although I have attempted to utilize the restart option, it did not prove successful due to the blade's location change in the sixth step.
I have used the IEC 61850 communication protocol for substation automation through OPAL-RT.
I know how it works and I have verified my results through OPAL-RT, but I am having a problem with its block diagram: how do I represent its working components in my normal block diagram?
What prompts work well when using ChatGPT for automated literature reviews?
Hello all, I have been trying the 5x MagMAX 96 viral RNA isolation kit (AM1836-5) for extraction of total RNA and DNA from swab samples (assuming that the viral titer is too low). We have a MagMAX Express 96 magnetic particle processor to automate the extraction procedure. I prepare the plates as per the user-guide instructions and run the script on the instrument. Please note that I add a spike-in RNA control to all of my samples, and I add two extraction controls to the plate (with the spike-in control and nuclease-free water instead of sample) to verify that the extraction went okay. Following extraction, I run conventional PCR using primers for the spike-in RNA control. I do a quality check using NanoDrop and Qubit, and the readings look okay. However, the PCR did not work for any of the samples: no peak even for the spike-in control in the TapeStation and QIAxcel runs. The PCR positive control worked perfectly and gave the expected band size, which indicates the PCR itself worked okay. There might be something wrong in the extraction procedure that I can't figure out. I have tried this process a couple of times with different amounts of the spike-in control (1 and 2 ul in lysis/binding solution along with carrier RNA, buffer and isopropanol). Can you please enlighten me a bit about where the issue in the extraction process might be? Thanks a lot for your time and help!
I need short explanations on this area.
Thank you for your support.
Humans play a critical role in the development of next-generation AI in several ways:
1. AI developers: AI systems are developed by humans, and therefore, AI development is driven by the knowledge, skills, and creativity of the developers. AI developers are responsible for designing, coding, testing, and improving the AI systems.
2. Training AI: Humans also play a crucial role in training AI systems. AI systems are trained using large datasets, and humans are responsible for collecting and curating those datasets.
3. Human-in-the-loop AI: Humans can also be included in the AI development process by building "human-in-the-loop" AI systems. Such systems require humans to be involved in critical decision-making processes that cannot be fully automated.
4. Ethical considerations: Human intervention is needed to ensure that AI is developed ethically, with a focus on human values and societal well-being. Humans must ensure that AI systems are transparent, responsible, and aligned with human values.
5. Interpreting AI output: Finally, humans play a critical role in interpreting the output of AI systems and analyzing the decisions made by AI systems. Humans must be involved in critical decision-making processes that involve AI systems and must be able to override AI decisions when needed.
Reference:
1. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
2. Adamson, D. (2019). Human and Machine Intelligence: An Overview. AI Magazine, 40(4), 11-24.
3. Bryson, J. J. (2018). Artificial intelligence and human rights. Philosophy & Technology, 31(4), 589-604.
4. Russell, S. J., & Norvig, P. (2010). Artificial intelligence: a modern approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
5. Yudkowsky, E. (2013). The AI Alignment Problem: Why It’s Hard, and Where to Start. Machine Intelligence Research Institute.
In conclusion, the development of next-generation AI is heavily dependent on humans. Human input is needed at every stage of the AI development process. The collaboration between humans and AI systems leads to more effective systems that can achieve better results. Therefore, the role of humans in the development of AI cannot be overstated.
I am wondering if there is any way to refresh the input data from a dynamic text file in COMSOL for each iteration.
I have attempted to do this in Python, but COMSOL only solves the equation for the first input text file and not for any newly generated files. The reason is that I have coupled COMSOL with DEM-based software that feeds the input to COMSOL at each iteration (the same applies to the output, which is saved as a text file).
While the connection is established through Python code, I am unsure whether Python can trigger the refresh action for each iteration.
Any suggestions would be greatly appreciated.
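One route worth trying is to drive COMSOL through the mph Python package and re-read the interpolation file between solves. The sketch below is an assumption-laden outline, not a tested recipe: the interpolation-function tag (`int1`), study tag (`std1`), and especially the `refresh()` call on the Java API object (meant to mimic the GUI's Refresh button) should be verified against your model tree and the COMSOL Programming Reference for your version.

```python
from pathlib import Path

def write_comsol_table(path, rows):
    """Write (x, value) pairs as whitespace-separated columns, the plain
    text format COMSOL interpolation functions read."""
    lines = ["%g %g" % (x, v) for x, v in rows]
    Path(path).write_text("\n".join(lines) + "\n")

def refresh_and_solve(model, func_tag, study_tag):
    """Re-read the interpolation file and re-solve one study.

    `model` is an mph.Model; `func_tag` / `study_tag` are the tags of the
    interpolation function and study in the .mph file (assumptions --
    check them in your model tree).  The refresh() method name is also an
    assumption; confirm it in the COMSOL Programming Reference."""
    model.java.func(func_tag).refresh()   # assumed Java API call
    model.solve(study_tag)

# Coupling loop (needs COMSOL + the mph package, so commented out here):
# import mph
# client = mph.start()
# model = client.load("coupled_model.mph")
# for i in range(n_iterations):
#     rows = run_dem_step(i)              # hypothetical call into your DEM code
#     write_comsol_table("input.txt", rows)
#     refresh_and_solve(model, "int1", "std1")
```

If the refresh call turns out not to be exposed, an alternative is to reload the whole .mph file each iteration, which is slower but only uses documented mph calls.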

Hello,
Can anyone advise me on storing state-variable values in an array at different time steps? I'm trying to make computations at TIME(1) = 0.1, 0.6, and 0.9. I am able to compute the state variables, but I need to make calculations at the end of TIME(1) = 0.9 that use the values at T(0.1), T(0.6), and T(0.9), and I'm unable to call them. These values are stored in the ODB, but I would like to automate the process so the parameters are updated for the next steps.
Thanks in advance
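Two common routes: inside the user subroutine, copy the current value into spare SDV slots whenever TIME(1) crosses each target time, so all three values are still in memory at TIME(1) = 0.9; or extract the frame-by-frame history afterwards with `abaqus python` and the odbAccess module and pick the frames you need. For the second route, a small helper like this (plain Python, assuming you have already pulled the (time, value) pairs out of the ODB) does the time matching, since frame times rarely land exactly on 0.1, 0.6, 0.9:

```python
def values_at_times(history, targets, tol=0.05):
    """history: list of (time, value) pairs extracted from the ODB.
    Returns {target: value} using the frame whose time is closest to each
    target, provided it lies within tol; targets with no frame that close
    are simply omitted from the result."""
    out = {}
    for t in targets:
        time, value = min(history, key=lambda p: abs(p[0] - t))
        if abs(time - t) <= tol:
            out[t] = value
    return out
```

To guarantee frames exist near the target times, it also helps to request output at fixed time intervals (or force increments to land on 0.1, 0.6, 0.9 with *TIME POINTS).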
Hi folks,
I would like to automate our experiment. The mirror/prism, as well as the photodetector, are mounted on rotation stages (the detector and mirror stages have separate motors/controllers). I'm attempting to automate the entire experiment through a graphical user interface. The problem is that once the rotation begins, the reflected ray no longer lands precisely on the same location on the photodetector, and at larger angles it eventually misses the sensor area altogether.
I think the issue is that as the mirror rotates, the laser beam hits a different spot on the mirror, so the theta/2-theta relationship is not exactly maintained. What is the solution?
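Your hypothesis can be checked with a simple 2-D model (an idealisation, not your exact optics): take the incident beam horizontal at height h above the rotation axis, and the mirror's front surface at perpendicular distance d from that axis. Working through the reflection geometry, the perpendicular miss of the reflected ray from the centre of a detector co-rotating at 2*alpha comes out as 2*d*cos(alpha) - h. So if the mirror face contains the rotation axis (d = 0), the miss is a constant -h, which a one-time detector alignment absorbs; a surface offset d makes the spot walk with angle, which matches what you observe.

```python
import math

def spot_miss(alpha_deg, h, d):
    """Perpendicular miss distance of the reflected ray from the centre of
    a detector co-rotating at 2*alpha about the stage axis.

    2-D thin-mirror model (an assumption): incident beam horizontal at
    height h above the rotation axis, mirror front surface at perpendicular
    distance d from the axis.  Geometry gives miss = 2*d*cos(alpha) - h:
    constant when d = 0, angle-dependent walk when d != 0."""
    a = math.radians(alpha_deg)
    return 2.0 * d * math.cos(a) - h
```

The practical fix this suggests: shim or re-mount the mirror so its reflecting face contains the rotation axis, and steer the beam through that axis; any residual walk can then be tracked with a larger or position-sensitive detector.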
Hello everyone, I am working on the elaboration of macroporous materials for bone reconstruction. I mainly characterize the porosity using SEM observation. At the moment I use ImageJ to analyze the porous structure (size, shape, orientation, …), but I haven't found an efficient way to automate the image processing. Does anyone know an accurate, automated, and efficient method or software that could save me time?
Thank you in advance,
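In practice this is usually automated either with an ImageJ macro (threshold, then Analyze Particles in batch mode) or in Python with scikit-image's `label` and `regionprops`. To make the core step concrete, here is a dependency-free sketch of what those tools do: label connected pore regions in a thresholded binary mask and report area and equivalent circular diameter per pore.

```python
import math
from collections import deque

def pore_stats(mask):
    """mask: 2-D list of 0/1 (1 = pore pixel, e.g. after thresholding an
    SEM image).  Labels 4-connected pores by breadth-first search and
    returns per-pore area (px) and equivalent circular diameter (px)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    stats = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                stats.append({"area": area,
                              "eq_diameter": 2.0 * math.sqrt(area / math.pi)})
    return stats
```

Multiply pixel measures by your SEM pixel size to get micrometres; for shape and orientation, scikit-image's `regionprops` additionally gives eccentricity and the orientation of the fitted ellipse, so the whole folder of images can be processed in one loop.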
I want to automate the measurement of neurite outgrowth in PC12 cells. Is there any program or Fiji plugin that can measure cell-body length, neurite length, and the number of neurites in unstained cells?
It is not certain that artificial intelligence will completely eliminate the accounting profession, but it can lead to changes in the way accountants work and the skills they require. Artificial intelligence can help improve the accuracy and efficiency of accounting work through automation and smart analyses of financial data. What is your opinion?
In your opinion, is the development of artificial intelligence, which consists, among other things, of new generations of this technology creating ever more perfect generative artificial intelligence solutions, processing ever more data, performing ever more complex work and ever more creatively performing human-ordered tasks, a threat to people's creative and critical thinking?
It used to be that people remembered the telephone numbers of those they called frequently. Nowadays, numbers are stored in smartphones and no longer need to be remembered. Various online information services are available on smartphones, and we are using them more and more. In many countries, taxi drivers increasingly use GPS navigation and no longer have to pass an exam on the topography and street names of a city. Technology is increasingly relieving people of various tasks and of the need to remember large amounts of data. On the other hand, threats are emerging in the form of disinformation generated on online social media, with pictures and videos showing 'fictitious facts' created by artificial intelligence. As deepfakes are now recognised as one of the greatest threats arising from artificial intelligence applications, it is urgently necessary to create legal regulations governing the proper use of artificial-intelligence-based tools, including respect for copyright when artificial intelligence creates new works, texts, graphics, etc. using publications taken from Internet resources. How much the development of artificial intelligence and its applications will change labour markets in the future is suggested by the results of predictive and futurological analyses, according to which up to half of human jobs globally could disappear by 2050. On the other hand, surveys of companies and enterprises show that over the next few years the majority of businesses plan to carry out investment processes involving the implementation of new Industry 4.0 technologies, including artificial intelligence, into their operations. Predictive analyses and the futurological visions built on them indicate that, with technological progress and the emergence of successive generations of artificial intelligence, ever more capable artificial intelligence systems will be created in the next few years.
In addition, a tool is already available on the Internet in the form of an intelligent language model based on generative artificial intelligence which, by generating answers to questions, creates texts in an automated way, drawing on knowledge resources taken from a large number of websites, online article databases, online book libraries, etc. On the other hand, ChatGPT's kind of creativity is not yet perfected: within it, 'fictitious facts', i.e. nicely described events that never happened, may appear in the texts ChatGPT creates. Arguably, these imperfections will be corrected in the next generations of this tool and in other ChatGPT-like intelligent, automated digital chatbots created by other technology companies. When they are corrected, it will become increasingly common for people to use such tools made available on the Internet, commissioning artificial intelligence to write specific texts, which it will create in an increasingly creative manner while making fewer and fewer mistakes. Consequently, humans will commission ever more complex tasks from artificial intelligence, tasks that increasingly require creativity, innovation, artistry, etc. Thus, another threat to humanity may emerge: the abandonment of creative activities when they can be performed by artificial intelligence. A new category of threat to humanity may therefore arise from the technological advances made by artificial intelligence. It may happen in the future that the development of artificial intelligence becomes a threat to people's creative and critical thinking.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, is the development of artificial intelligence, which consists, among other things, of new generations of this technology creating ever more perfect generative artificial intelligence solutions, processing ever more data, performing ever more complex work and ever more creatively performing human-ordered tasks, a threat to people's creative and critical thinking?
Is the development of artificial intelligence a threat to people's creative and critical thinking?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Hoping to hear your personal opinions and an honest discussion of scientific issues rather than ready-made answers generated by ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Best wishes,
Dariusz Prokopowicz

The auditing industry is profoundly affected by artificial intelligence (AI). It is transforming how auditors carry out their responsibilities and enhancing the accuracy and efficiency of the auditing process. Tools propelled by artificial intelligence can automate routine tasks such as data extraction, analysis, and report generation, enabling auditors to focus on complex tasks that require human judgement and expertise. AI can also assist auditors in identifying risks and anomalies that may be overlooked by conventional auditing techniques, allowing them to provide clients with more valuable insights. AI can further improve the quality of audits by providing real-time monitoring and analysis of financial data, thereby reducing the risk of fraud and errors. In general, AI is revolutionising the auditing industry by enhancing audit quality, increasing audit efficiency, and decreasing audit costs.
I am currently working on automating our research lab to be able to process a higher number of samples. Our work consists of detecting and quantifying pathogens in mammalian samples (e.g., blood, faeces, tissue, swabs). As such, I am interested in an automated extraction system which produces good DNA/RNA yields to be sent for NGS.
Until now, we have been using Qiagen kits for our manual extractions, so I thought that a machine from the same brand would do for us. However, I've been told that the QIAcube Connect does not really take that much work out of your hands, and that the sample volume obtained at the end of the process with the QIAcube HT is way lower than the one with the manual kit. I have also checked other machines, such as Thermo Scientific's Kingfisher Flex and its kits, but do not know how well they do in comparison with Qiagen's kits.
Based on your experience, which automated extraction system would you recommend? And which brand of kits have you used with it? The system and kits do not need to be from the brands mentioned here (as long as they produce good results).
Thank you very much in advance.
RNAComposer is a great web server for large-scale automated modeling of RNA structures up to 500 nt. We have a larger RNA to predict. Does anyone have good ideas? Many thanks!
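One common workaround (whether it suits your molecule is for you to judge) is divide-and-conquer: split the sequence into overlapping fragments under the 500 nt limit, model each fragment separately with its own secondary structure, then superpose and assemble the pieces on the shared overlap regions. A minimal sketch of the fragmenting step:

```python
def split_sequence(seq, max_len=500, overlap=50):
    """Split a long RNA sequence into overlapping fragments no longer
    than max_len, so each can be submitted to a size-limited modelling
    server; the overlap provides shared residues for superposing the
    modelled pieces afterwards.  Returns (start_index, fragment) pairs."""
    if len(seq) <= max_len:
        return [(0, seq)]
    frags, start = [], 0
    step = max_len - overlap
    while start < len(seq):
        frags.append((start, seq[start:start + max_len]))
        if start + max_len >= len(seq):
            break
        start += step
    return frags
```

Cutting at domain or helix boundaries of the secondary structure (rather than at arbitrary positions) generally gives fragments that assemble more cleanly.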
Does anyone use the TC20 automated cell counter to count PBMC?
I am looking for case studies, articles, and papers about graphoscopy software. The aim is to create solutions using graphics computation.
I was wondering what changes I should make to the MD simulation protocol to account for the Zn ion in the active site of a metalloprotein.
Also, I would be grateful if there was a bash script available to automate the process.
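On the protocol side, the usual choices for a structural Zn site are a bonded model (explicit Zn-ligand bonds and angles added to the topology, e.g. ZAFF-style parameters) or a non-bonded model with suitable Lennard-Jones and charge parameters (e.g. a cationic dummy-atom model); which is appropriate depends on your force field and on whether ligand exchange matters. For the automation part, here is a hedged sketch of a runner (Python rather than bash, but the same idea) that chains the standard GROMACS preparation commands. File names, force field, and water model are placeholders, and you must confirm that your chosen force field actually carries Zn parameters, or add a bonded Zn model to the topology by hand first.

```python
import subprocess

def gromacs_setup_commands(pdb="protein_zn.pdb", ff="amber99sb-ildn", water="tip3p"):
    """Standard GROMACS preparation pipeline as argument lists.  All file
    names and the force-field/water choices are placeholders -- adjust to
    your system, and verify Zn handling in the generated topology."""
    return [
        ["gmx", "pdb2gmx", "-f", pdb, "-o", "processed.gro", "-p", "topol.top",
         "-ff", ff, "-water", water],
        ["gmx", "editconf", "-f", "processed.gro", "-o", "boxed.gro",
         "-c", "-d", "1.0", "-bt", "dodecahedron"],
        ["gmx", "solvate", "-cp", "boxed.gro", "-p", "topol.top", "-o", "solv.gro"],
        ["gmx", "grompp", "-f", "ions.mdp", "-c", "solv.gro", "-p", "topol.top",
         "-o", "ions.tpr"],
        ["gmx", "genion", "-s", "ions.tpr", "-p", "topol.top", "-o", "ionized.gro",
         "-neutral"],
    ]

def run_pipeline(commands):
    """Execute the pipeline (requires a GROMACS installation on PATH)."""
    for cmd in commands:
        subprocess.run(cmd, check=True)
```

Energy minimisation, equilibration, and production steps follow the same grompp/mdrun pattern and can be appended to the list.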
I want to integrate new-age technologies like artificial intelligence and machine learning with marketing to automate the complete marketing process: customer and market analytics, forecasting, and studying customer behavior. Which theories should I incorporate into this model, i.e., what should be the theoretical contribution of this study?
Various domains have methods for compatibility testing, for example: software, design, electrical engineering (keyword: E-Plan), control systems, and network technology.
It does not have to be exclusively domains in the mechatronics field.
As of April 2023, we have a working subscription-based cloud service. It contains the functions for list operations (import, export, comparison, cleaning) usually needed as regular MDM system operations, and it is capable of comparing lists of items such as product descriptions from different vendors. Those lists are commonly created manually (without the help of automated systems) and therefore contain a variety of fields and types of information. The data usually contain many mistakes, typos, etc. Two lists with 1,000 positions each give 1,000,000 pairs to compare. In a couple of minutes, our associative index determines only the pairs that really need to be compared (usually tenths of a percent of all positions), so the similarity is computed in minutes instead of hours on a basic PC with 8 CPUs and 8 GB RAM.
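The pruning described above is essentially what the record-linkage literature calls blocking: build an inverted index from tokens to items, and only score pairs that share at least one reasonably rare token, which collapses the full cross product to a small fraction. A minimal sketch of the idea (not the service's actual index, whose details are not public):

```python
from collections import defaultdict

def candidate_pairs(list_a, list_b, max_bucket=50):
    """Return only the (i, j) index pairs worth comparing: items that
    share at least one token.  Tokens occurring in more than max_bucket
    items of list_b (e.g. generic words like 'cable') are skipped as
    uninformative, which keeps the candidate set small."""
    index = defaultdict(list)
    for j, text in enumerate(list_b):
        for tok in set(text.lower().split()):
            index[tok].append(j)
    pairs = set()
    for i, text in enumerate(list_a):
        for tok in set(text.lower().split()):
            matches = index.get(tok, [])
            if 0 < len(matches) <= max_bucket:
                pairs.update((i, j) for j in matches)
    return pairs
```

Only the surviving pairs are then passed to the expensive similarity function (edit distance, embedding cosine, etc.), which is where the minutes-instead-of-hours saving comes from.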
As we know, Industry 4.0 uses many new technologies, including robotics, but does implementing Industry 4.0 always require automation? Automation was already heavily discussed in Industry 3.0, especially computerised automation. Or we can ask: is using automation Industry 4.0 or Industry 3.0?
Does anyone know of examples or a tutorial on how to use the biomaRt R package to automate gene-name lookup against the Ensembl database?
Best wishes,
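In biomaRt itself the pattern is `mart <- useEnsembl(biomart = "genes", dataset = "hsapiens_gene_ensembl")` followed by `getBM(attributes = c("ensembl_gene_id", "hgnc_symbol"), filters = "hgnc_symbol", values = my_genes, mart = mart)`; the package vignette walks through this. If you are open to doing the same lookup outside R, the Ensembl REST API exposes an equivalent endpoint. A small Python sketch that just builds the query URL (the species and symbol here are examples; fetch the URL with any HTTP client and parse the returned JSON):

```python
def ensembl_xref_url(symbol, species="homo_sapiens"):
    """URL for the Ensembl REST endpoint that maps an external gene name
    (e.g. an HGNC symbol) to Ensembl identifiers -- the same lookup that
    biomaRt's getBM() performs via the BioMart service."""
    return ("https://rest.ensembl.org/xrefs/symbol/%s/%s"
            "?content-type=application/json" % (species, symbol))
```

Looping such lookups over a gene list, with a short delay between requests to respect the server's rate limits, automates the recognition step end to end.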
- Recent advances in artificial intelligence (AI) are leading to the emergence of a new class of robot.
- In the next five years, our households and workplaces will become dependent upon the role of robots.
source: AI and robotics: How will robots help us in the future? | World Economic Forum (weforum.org)
Would like to connect if you have explored using quantitative methods for intersectionality analysis.
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Is the use of artificial intelligence in agriculture ethical, or does it contribute to the further automation and potential loss of jobs in the industry?
How can the implementation of artificial intelligence, Big Data Analytics and other Industry 4.0 technologies help in the process of automated generation of marketing innovations applied on online social media sites?
In recent years, the application of new Industry 4.0 technologies in the process of generating marketing innovations applied to online social media portals has been on the rise. For the purpose of improving marketing communication processes, including advertising campaigns conducted on social media portals and promoting specific individuals, brands of companies, institutions, their product offers, services, etc., sentiment analysis of Internet users' activity in social media is conducted, including analysis of changes in social opinion trends, general social awareness of citizens by verifying the content of banners, posts, entries, comments, etc. entered by Internet users in social media using computerised, analytical Big Data Analytics platforms. I have described this issue in my articles following their publication on my profile of this Research Gate portal. I invite you to collaborate with me on team research projects conducted in this area. Currently, an important developmental issue is also the application of Big Data Analytics platforms used to analyse the sentiment of Internet user activity in social media, which uses new technologies of Industry 4.0, including, among others, artificial intelligence, deep learning, machine learning, etc. Besides, the implementation of artificial intelligence, Big Data Analytics and other Industry 4.0 technologies can help in the process of automated generation of marketing innovations applied on online social media portals. An important issue in this topic is the proper construction of a computerised platform for the automated generation of marketing innovations applied on online social media portals, in which the new generations of Artificial Intelligence, Big Data Analytics and other Industry 4.0 technologies are used.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the implementation of artificial intelligence, Big Data Analytics and other Industry 4.0 technologies help in the process of automated generation of marketing innovations applied to online social media portals?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz

What is your experience of talking to a chatbot that acts as a call-centre adviser on the hotline of a company or institution whose offer you sometimes or permanently use?
Do you like talking to a Chatbot, which, equipped with artificial intelligence, is a kind of IT robot that acts as an adviser to the call centre of the company or institution whose offer you sometimes or permanently use?
Could the use of Big Data Analytics technology help to solve the problems of this type of automated advice, by autonomously and automatically improving the chatbot system's database of questions and answers (supplementing it with new questions and answers created by artificial intelligence on the basis of knowledge available on the Internet) and by improving machine-learning-based algorithms so as to raise the content quality and professionalism of call-centre advice?
More and more companies and institutions, in order to "optimise costs" (which usually means mainly reducing the number of employees), are employing chatbots as advisers in their call-centre departments; equipped with artificial intelligence, these are a kind of computerised robot that replaces humans. The costs of such computerised solutions equipped with smart technology are falling, making them available to more and more companies, enterprises, and financial and public institutions. The quality of the answers chatbots provide, both to initial questions and to follow-up questions in the course of a conversation, is constantly being improved in terms of content, syntax, logic, etc. The database of ready-made questions and answers can also be continuously expanded. Such solutions can be equipped with permanent self-improvement systems that refine the algorithms by removing errors resulting from incorrect answers or from a kind of looping of the chatbot's responses when callers reply. People calling a hotline in search of specific information usually act as potential or current customers of the product or service offers of specific companies and institutions. They would generally like the phone call not to take too long and to obtain the information they need or specific, factually sound advice. In principle, a company, enterprise, or institution that engages chatbots in its call-centre departments has the analogous goal of improving the conversations that chatbots have with customers.
However, it is often the case that the artificial intelligence built into these chatbots is of an outdated generation. Algorithms that do not use the latest generation of machine learning technology cause a telephone conversation with a chatbot to take much longer than with the human call-centre adviser whom the chatbot has replaced. Outdated machine learning technology and previous generations of artificial intelligence ask the potential customer calling the hotline several or even a dozen questions, only to finally redirect the caller, who is interested in a specific product or service or in need of specific advice, to a human adviser who provides the factual and professional answer the chatbot could not. The question then arises as to why some companies and institutions replace their call-centre employees, i.e. call-centre advisers, with chatbots, since they have used outdated technology to create them, and the resulting solutions generate an embarrassing situation and distaste for this type of telephone conversation instead of helping many potential customers. Is this an attempt to improve the image of a company presented in marketing communications as modern and using modern technology, especially when a competing company already uses similar technological solutions? This happens more than once. Chatbots that replace human hotline advisers in call centres are presented as an example of modernity in the marketing communication of a company or institution, while potential or existing customers, put off by this kind of pseudo-advice, often look for the information they need on the website of the company or institution instead of calling the hotline again.
And perhaps this is precisely the point, to redirect a potential customer to the website of a particular company or institution, because this type of communication will be the cheapest for those offering certain products or services. But the cheapest solution for companies and institutions does not always mean the highest level of satisfaction for existing or potential customers. The application of deep learning technologies could help to improve this kind of automated call centre advisors. And, perhaps, the use of Big Data Analytics technology to improve the autonomous, automated improvement of the chatbot system's database of questions and answers, supplemented by new questions and answers added to the database, created by artificial intelligence on the basis of available knowledge on the Internet, and improved algorithms based on machine learning technology and improving the content quality and professionalism of call centre advice could help solve the problems described above.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Do you like talking to a chatbot which, being equipped with artificial intelligence, is a kind of IT robot acting as a call-centre adviser for the company or institution whose offer you sometimes or permanently use?
Could the use of Big Data Analytics technology help to solve the problems of this type of automated advice, by autonomously and automatically improving the chatbot system's database of questions and answers (supplementing it with new questions and answers created by artificial intelligence on the basis of knowledge available on the Internet) and by improving machine-learning-based algorithms so as to raise the content quality and professionalism of call-centre advice?
What do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz

How could automation help blood banks enhance the standard and reliability of their laboratories?
State differences in terms of:
- validity of results (accuracy and specificity)
- advantages and disadvantages
I feel that, in water-resource management activities (especially flood management), most of the developed software tools are not widely or continuously used. The reason may be that decision-makers either work independently from project to project or fully/partially automate the required processes, which are unique to each project.
I would like to know your experiences as well as comments on the utilization of the software tools to assist flood management decisions.
Hi everyone,
I'm running a shell-buckling analysis with a shell of perfect geometry, considering geometric nonlinearities. The Riks algorithm is set to automatic incrementation. In many cases, the solver gives a warning message reporting negative eigenvalues, which means that the bifurcation load may have been exceeded. However, the algorithm still increases the load proportionality factor and simply 'runs over' the bifurcation load. This also happens if I decrease the initial and maximum increments. My solution so far is to check the message file for negative eigenvalues, but this is inconvenient when automating the evaluation. Do you know of any other solution?
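Short of a solver-side fix (e.g. bracketing the load with a preliminary *BUCKLE step), the message-file check itself can at least be automated. The sketch below assumes the warning text contains the phrase "NEGATIVE EIGENVALUE" and that increments are announced by lines like "INCREMENT 12 STARTS"; both patterns should be checked against a real .msg file from your Abaqus version before relying on them.

```python
import re

def negative_eigenvalue_increments(msg_text):
    """Scan Abaqus .msg file content and return the increment numbers in
    whose output a negative-eigenvalue warning appears.  The two patterns
    (increment header and warning phrase) are assumptions -- verify them
    against a .msg file from your Abaqus version."""
    flagged, current = [], None
    for line in msg_text.splitlines():
        m = re.search(r"INCREMENT\s+(\d+)\s+STARTS", line)
        if m:
            current = int(m.group(1))
        elif "NEGATIVE EIGENVALUE" in line.upper() and current is not None:
            if not flagged or flagged[-1] != current:
                flagged.append(current)
    return flagged
```

Run over the .msg file after (or during) the job, this gives the first flagged increment automatically; the load proportionality factor at that increment can then be taken as the bifurcation estimate, and any arc-length results beyond it treated as the post-bifurcation equilibrium path rather than the critical load.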