Machine Intelligence - Science topic
Explore the latest questions and answers in Machine Intelligence, and find Machine Intelligence experts.
Questions related to Machine Intelligence
Hello everyone,
I’m planning to submit a paper to the IEEE International Geoscience and Remote Sensing Symposium (IGARSS)/Machine Intelligence for GeoAnalytics and Remote Sensing (MIGARS) (2025). Could anyone share details about the publication and registration costs?
Additionally, as I’m based in Iran, financial constraints make it challenging to afford these fees. Are there any known funding opportunities, fee waivers, or sponsorships available for participants from developing countries? I’d appreciate any guidance or tips for managing these costs.
Thank you for your help!
1—The organism must be mobile (James 1890), which would exclude plants as well as rocks and other inanimate objects from this category.
2—The organism must be uni- or multi-cellular and eukaryotic, and dependent on oxygen to control its metabolism (Margulis 1970).
3—The organism must be able to replicate and be subjected to natural selection (Darwin 1859; Noble and Noble 2023).
4—Through its movements the organism must demonstrate volitional control and an ability to learn (Hebb 1949, 1960, 1968; Noble and Noble 2023). In short, does the brain of the organism respond to feedback from the environment and from its internal states, e.g., the drive to procreate and homeostasis? In the vertebrate telencephalon, volition can be expressed as a readiness potential (Varela 1999ab). Organisms that have the foregoing characteristics range from the amoeba to the primates (including Homo sapiens) and other large-brained mammals, the Cetaceans and Elephantidae.
5—The telencephalon of mammals is continuously active during the waking state. The activity per neuron remains high (based on glucose consumption) to maintain consciousness, and this activity is not related to locomotion. Whether the foregoing properties extend to all vertebrates and to specific ganglia of invertebrates needs immediate empirical attention.
6—Metrics used to quantify consciousness: the total amount of information stored in an organism, declaratively (i.e., in terms of sensation) and executably (i.e., in terms of body movement), and the maximal rate of internal information transfer by way of conducting neurons. Stephen Hawking had a minimal throughput of 0.1 bits per second (because of ALS), yet he could still perform internal computations.
7—Species intelligence should be based on an organism's evolutionary presence. Crocodiles have thus far survived for over 200 million years, through two mass extinctions.
8—AI machines will never have the consciousness of animals, because they are not biological entities, and they will always need to be maintained by a human programmer/CEO—the liability holder—even if the device becomes extremely autonomous: Bezos will always be legally responsible for his algorithms, as the New York Times is responsible for what it publishes (Harari 2024).
9—And what about the AI Singularity? Depending on how one defines intelligence, machines are already smarter than the best chess or Go players, but an AI's ability to solve problems the way humans do, without resorting to brute force (namely, high-energy-consuming supercomputers for back-propagated learning and massive memory storage), is not yet available (LeCun 2023), and may never be.
10—Human intelligence (not machine intelligence as suggested by Geoffrey Hinton) will extinguish humankind by nuclear or environmental annihilation or by synthesizing an unstoppable pathogen or by some combination of these (Chomsky 2023; Ellsberg 2023; Sachs 2023).
Farmers no longer have to apply water, fertilizers, and pesticides uniformly across entire fields. Instead, they can use the minimum quantities required and target very specific areas, or even treat individual plants differently. Benefits include higher crop productivity.
Currently, the AI world seems to be dividing into three directions. "AI" is now used synonymously with "generative AI". In addition, "MI" is becoming established for machine intelligence, including ML, i.e., machine learning. For classic AI, that is, symbolic AI [Wooldridge (2020) The Road to Conscious Machines, p. 42], and its further development into "digital intelligence" including digital thinking, "DI" could be considered the third branch. What do you think? Is there "one AI", or how many different branches are there?
Assume an AI system is provided with the data and computing resources required to achieve a complex task. The task will not be achieved unless the machine is powered by a well-written algorithm. Since an algorithm is a program or set of instructions that a machine executes, applying one arguably defies the concept of machine intelligence. Do you agree?
Consider Turing's proposal to follow the example of the development of intelligence in humans and apply it to machines: specifically, the example of a child who, in addition to learning discipline, needs to acquire a spirit of initiative and decision-making. Could this kind of intelligence be compared to what Turing called the "child machine", a simple machine provided with basic instructions that enable it to learn and to make its own decisions later?
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
What are the analytical tools supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks available on the Internet that can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business, implementation of economic, investment, business projects, etc.?
Since OpenAI brought ChatGPT online in November 2022, interest among business entities in using intelligent chatbots for various aspects of business operations has strongly increased. Originally, intelligent chatbots mainly enabled conversations and discussions, answering questions on the basis of data, information, and knowledge taken from a selection of websites.
In the following months, OpenAI released further intelligent applications on the Internet that allow users to generate images, photos, graphics, and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents on the basis of formulated commands. In addition, in 2023 other technology companies also began to make their intelligent applications available on the Internet, through which complex tasks can be carried out to facilitate certain processes and aspects of companies, enterprises, financial institutions, etc., and thus facilitate business.
There is a steady increase in the number of intelligent applications and tools available on the Internet that can support various aspects of business activities carried out in companies and enterprises, and the number of new business applications of these smart applications is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the analytical tools available on the Internet supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks, which can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business activity, implementation of economic, investment, business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
This is quite a topic to ponder, but I am particularly interested in the strategic placement of human-centered inquiries in the advancement of contemporary AI systems, despite the remarkable progress in machine learning and particularly in computer vision. How should we label the fusion, if any, and explicate the various intriguing topics of discourse for a mutually beneficial association of two (ostensibly) mechanistically distinct intelligences? Why might human-inspired or human-assisted AI be important? Do we really need this association, or should we let machines figure out their own way somehow through continuous exploration?
What makes humans special and/or not so special? Perhaps as humans we have biases towards our own species. For one thing, humans are not expert in "detection" missions; particularly in the absence of "attention", their performance degrades sharply, leaving no room to excel in task completion. Humans are not good at resolving "hidden correlation" tasks either: their ability to estimate the joint probability distribution of a very large set of random events is extremely limited. However, they are good at trying, searching, and finding things out; in other words, they are quite good at ontologies.
Since current AI research is oriented around reaching the level of human intelligence, one of the essential questions could be: "What is missing to achieve human-level AI?" One argument given is this: without abstraction, reasoning, compositionality, and factuality, it is almost impossible to reach human-level AI in the near future. Such a question brings out other interesting and relevant questions, such as "What are the most obscure unsolved problems in the development of human-level AI?" or "Can human-specific attributes (consciousness, morality, humor, causality, trust) be transferred to machines? If yes, how and to what extent?"
Even if we hypothetically assume human-inspired AI is in place, we still need to answer questions like "Would this system be fully automatic, or partially automatic and open to human intervention?" Should human consciousness position itself somewhere, and if so, where in the generic design? I think for futuristic designs of AI we need to look into the possibility of hybrid designs and think about their implementation details along the way...
How should the learning algorithms contained in ChatGPT technology be improved so that the answers to questions generated by this form of artificial intelligence are free of factual errors and fictitious 'facts'?
How can ChatGPT technology be improved so that the answers provided by the artificial intelligence system are free of factual errors and inconsistent information?
ChatGPT generates answers to questions on the basis of an outdated set of data and information downloaded from selected websites in 2021. In addition, the learning algorithms contained in ChatGPT are not perfect, so the answers generated by this form of artificial intelligence may contain factual errors and a kind of fictitious 'facts'. A serious drawback of this type of tool is therefore that people asking about something specific may receive an answer that is not factually correct. ChatGPT often answers questions eloquently, but its answers may not correspond to existing facts: it can generate fictitious 'facts', i.e., stylistically and phraseologically correct sentences containing descriptions and characteristics of objects presented as really existing but not actually existing. In the future, ChatGPT-type systems will presumably be refined and improved, becoming self-learning and self-improving in the analysis of large data sets and taking into account newly emerging data and information, so that answers can be generated without the numerous mistakes made at present.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the learning algorithms contained in the ChatGPT technology be improved so that the answers generated by this form of artificial intelligence to the questions asked are free of factual errors and fictitious "facts"?
How can ChatGPT technology be improved so that the answers provided by the artificial intelligence system are free of factual errors and inconsistent information?
What do you think about this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
What do you think about ChatGPT and the ways it is being used?
A few new AI tools:
1.- https://cleanup.pictures/
Remove unwanted objects from #photos: people, text, and defects in any picture.
2.- www.resumeworded.com
An online #resume and #LinkedIn grader that instantly scores your resume and LinkedIn profile and gives you detailed feedback on how to get more opportunities and interviews.
3.- https://soundraw.io/
Soundraw is a #music generator for creators. Select the type of music you want (genre, instruments, mood, length, etc.) and let AI generate beautiful songs for you.
4.- www.looka.com
Design a #logo, make a #website, and create a #brand identity you'll love with the power of AI.
5.- www.copy.ai
Get great copy that sells. #Copy.ai is an AI-powered #copywriter that generates high-quality copy for your business.
This discussion is for sharing ideas and possible ways in which our understanding of the human mind could be applied to the current methods used to create this technology. Anyone is welcome to respond, but please be respectful in how it is done.
Hi all.
As part of my research work, I have segmented objects in images for classification. After segmentation, the objects have black backgrounds, and I used those images to train and test the proposed CNN model.
I want to know how the CNN processes these black surroundings in the image classification task.
Thank you
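As a hedged numerical illustration (a toy filter, not your CNN): pixels that are exactly zero contribute nothing to a convolution's weighted sum, so over pure-background regions every feature-map value collapses to the filter's bias, and the informative responses come from the object and its silhouette.
```python
# Minimal sketch (not the poster's model): how a conv filter reacts to
# zero-valued (black) background pixels. Assumes images are scaled to [0, 1].
import numpy as np
from scipy.signal import convolve2d

img = np.zeros((8, 8))          # black background
img[2:6, 2:6] = 1.0             # bright "object" patch, as after segmentation

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)   # simple vertical-edge detector
bias = 0.0

fmap = convolve2d(img, kernel, mode="valid") + bias
print(fmap)
# Positions lying entirely on the black background equal `bias` (here 0):
# zero pixels add nothing to the weighted sum, so the filter responds only
# around the object's boundary and interior.
```
One caveat: the hard object/black boundary is itself a strong edge, so the network may partly learn silhouette shape; whether that helps or hurts depends on the task.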
What we should ask, instead, is how to develop more informed and self-aware relationships with technologies that are programmed to take advantage of our liability to be deceived. It might sound paradoxical, but to better comprehend AI we first need to better comprehend ourselves. Contemporary AI technologies constantly mobilize mechanisms such as empathy, stereotyping, and social habits. To understand these technologies more deeply, and to fully appreciate the relationship we are building with them, we need to interrogate how such mechanisms work and which part deception plays in our interaction with "intelligent" machines.
Image Source: https://www.maize.io/news/into-the-unknown/
Hello
I am a PhD student looking to read some recent good papers that can help me identify a research topic in RL for control applications. I have been reading through quite a few papers and topics discussing model-free vs. model-based RL, etc., but I have not been able to find something yet; maybe I don't understand the field to that extent yet :).
Just for background: my experience is with diesel and SI engines, vehicles, and controls.
One of the areas that seems interesting to me is learning with RL in uncertain scenarios, though this might seem too broad to most people.
Another possible area would be RL for connected vehicles, self-driving, etc.
Any help/suggestions are welcome.
Since the importance of Machine Learning (ML) is increasing significantly, let's share our opinions, publications, and prospective applications of ML in Optical Wireless Communication.
Thank you,
Ezgi Ertunc
We introduce the concept of Proton-Seconds and see that it lends itself to a method of solving problems across a large range of disciplines in the natural sciences. The underpinnings seem to lie in 6-fold symmetry, which lends itself to a universal form. We find this presents the Periodic Table of the Elements as a squaring of the circle. It is rather abstract thinking, but just as truth reverses the moment we define it, I think we can treat problem solving this way: as patterns. The idea is that there is nothing we can say is the truth, but we can solve problems through pattern recognition. This manner of problem solving through pattern recognition could be employed in developing deep-learning machine intelligence and AI as its method of imitating human learning to gain knowledge.
Exploring the similarities and differences between these three powerful machine learning tools (PCA, NMF, and Autoencoder) has always been a mental challenge for me. Anyone with knowledge in this field is welcome to share it with me.
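One hedged way to make the comparison concrete is to fit all three on the same nonnegative data at equal latent dimension and compare reconstruction errors: PCA finds orthogonal directions with signed loadings, NMF finds additive nonnegative parts, and an autoencoder learns a (possibly nonlinear) code. The MLPRegressor-as-autoencoder below is only an illustrative stand-in for a real autoencoder, not a recommendation.
```python
# Hedged sketch: PCA vs. NMF vs. a tiny autoencoder on the same data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, NMF
from sklearn.neural_network import MLPRegressor

X = load_digits().data / 16.0   # pixels in [0, 1], nonnegative (NMF needs this)
k = 10                          # latent dimensionality for all three methods

pca = PCA(n_components=k).fit(X)                 # orthogonal axes, signed loadings
nmf = NMF(n_components=k, max_iter=500).fit(X)   # additive, nonnegative parts

ae = MLPRegressor(hidden_layer_sizes=(k,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X, X)                                     # reconstruct input via a bottleneck

def recon_error(Xhat):                           # mean squared reconstruction error
    return np.mean((X - Xhat) ** 2)

print("PCA :", recon_error(pca.inverse_transform(pca.transform(X))))
print("NMF :", recon_error(nmf.transform(X) @ nmf.components_))
print("AE  :", recon_error(ae.predict(X)))
```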
Recently, I was attracted by the paper "Stable learning establishes some common ground between causal inference and machine learning", published in Nature Machine Intelligence. After perusing it, I met with a problem regarding the connection between model explainability and spurious correlation.
I notice that in the paper, after introducing the three key factors (stability, explainability, and fairness) ML researchers need to address, the authors make the further judgement that spurious correlation is a key source of risk. So far I have figured out why spurious correlation can cause ML models to lack stability and fairness; however, it is still unclear to me why spurious correlation obstructs research on explainable AI. As far as I know, there have been two lines of research on XAI: explaining black-box models and building inherently interpretable models. I'm wondering whether there are concrete explanations of why spurious correlations are such a troublemaker when trying to design good XAI methods.
Can anyone suggest ensembling methods for the outputs of pre-trained models? Suppose there is a dataset containing cats and dogs, and three pre-trained models are applied, i.e., VGG16, VGG19, and ResNet50. How would you apply ensembling techniques (bagging, boosting, voting, etc.)?
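For fixed pre-trained models, the simplest route is voting on their predicted class probabilities. A hedged sketch follows; `models` and `x_test` are placeholder names, and each model is assumed to have been fine-tuned for the 2-class task and to output softmax probabilities of shape (n_samples, 2).
```python
# Hedged sketch of soft and hard voting over fixed, already-trained models.
import numpy as np

def soft_vote(models, x_test, weights=None):
    """Average (optionally weighted) class probabilities across models."""
    probs = np.stack([m.predict(x_test) for m in models])  # (n_models, n, 2)
    avg = np.average(probs, axis=0, weights=weights)
    return np.argmax(avg, axis=1)

def hard_vote(models, x_test):
    """Majority vote over each model's argmax prediction."""
    votes = np.stack([np.argmax(m.predict(x_test), axis=1) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v, minlength=2).argmax(),
                               0, votes)
```
Bagging and boosting, by contrast, presuppose retraining base learners on resampled or reweighted data; with frozen backbones, weighted soft voting or stacking a meta-classifier on the concatenated probability vectors is the more natural choice.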
Like other meta-heuristic algorithms, many algorithms tend to get trapped by low diversity, local optima, and unbalanced exploitation ability. Commonly stated improvement goals include the following (a minimal illustration of one remedy follows the list):
1- Enhance the algorithm's exploratory and exploitative performance.
2- Overcome premature convergence (i.e., converge quickly without getting trapped in a local optimum).
3- Increase the diversity of the population and alleviate the premature-convergence problem.
4- Correct the immature balance between exploitation and exploration.
5- Maintain the diversity of solutions during the search, so that stagnation at sub-optimal solutions is avoided and the convergence rate can be boosted to obtain more accurate optimal solutions.
6- Address slow convergence speed, inability to jump out of local optima, and fixed step length.
7- Improve population diversity in the search space.
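Points 2, 3, and 7 are often attacked with an explicit diversity-maintenance mechanism. The sketch below is a hedged illustration, not any specific published variant: a plain PSO on a toy objective that re-initializes the worst half of the swarm whenever the positional spread collapses. All parameter values are illustrative assumptions.
```python
# Hedged sketch: PSO with a diversity guard against premature convergence.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective to minimize
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 30, 5, 200
lo, hi = -5.0, 5.0
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = sphere(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

    # Diversity guard: if particles cluster too tightly, scatter the worst half.
    if pos.std(axis=0).mean() < 0.05:
        worst = np.argsort(pbest_f)[n // 2:]
        pos[worst] = rng.uniform(lo, hi, (len(worst), dim))
        vel[worst] = 0.0

print("best value:", pbest_f.min())
```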
What is the boundary between tasks that need human intervention and tasks that can be fully autonomous in the domain of civil and environmental engineering? What are ways of establishing a human-machine interface that combines the best parts of human intelligence and machine intelligence in different civil and environmental engineering problem-solving processes? Are there any tasks that can never be autonomous and will always need civil and environmental engineers? Coordinating international infrastructure projects? Operating future cities with many interactions between building facilities? We would love to learn from you about your existing work and thoughts in this broad area and hope we can build the future of humans and civil & environmental engineering together.
Please see this link for an article that serves as a starting point for this discussion initiated by an ASCE task force:
[Information] Special Issue - Intelligent Control and Robotics
Dear Researchers,
Does anybody here know whether having a paper accepted at the 9th Machine Intelligence and Digital Interaction Conference is a valid and valuable entry for a scientific résumé?
Thank you in advance.
How do you interpret concepts from NLP when applied to time series? For example, "self-attention" and "positional embeddings" in transformers.
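For the positional-embedding part, one concrete anchor: the sinusoidal encoding from the original Transformer carries over to time series unchanged, with the time-step index playing the role of token position, so the order-agnostic self-attention can recover temporal order. A hedged numpy sketch (dimensions are arbitrary):
```python
# Hedged sketch of sinusoidal positional encodings for time-series inputs.
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]      # time-step index
    i = np.arange(d_model)[None, :]        # embedding dimension index
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
    return pe                              # shape (seq_len, d_model)

x = np.random.randn(96, 32)                # e.g., 96 sensor readings, 32-dim embedding
x_in = x + positional_encoding(96, 32)     # added before the attention layers
```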
I am working on a project with a Generative Adversarial Network to generate pseudo (minority) samples to expand a dataset, and then using the expanded dataset to train my model for fault detection in machinery.
I have already tried my approach with two machine-temperature sensor datasets (containing a timestamp and the temperature at that timestamp).
I am looking for a dataset with a similar structure (i.e., a timestamp with one sensor reading), for example a current sensor with time or a pressure sensor with time.
I have searched Kaggle, but most datasets on Kaggle are multivariate.
Where can I find univariate time-series data for fault detection in machinery?
#machinelearning #univariatedata #timeseriesdata #machinerydata #faultanalysis #faultdetection #anomalydetection #GAN #neuralnetwork
A lot of companies are investing a great deal of energy, money, and time into digitalization, and UX is at the forefront, leading the charge at some of these companies. What are the hottest topics of research in UX these days?
#uxdesign #uxresearch #ux #research #ai #innovation #artificialintelligence #machinelearning #datascience #deeplearning #ml #science #futureofbusiness #futureofai #futureinsights #futureoftech
As far as I know, machine intelligence in the engineering field has essentially produced two research approaches for realizing data-driven modeling and analysis of complex systems and solving prediction or diagnosis problems: 1) proposing advanced, complex algorithms with good adaptive ability, and 2) employing simple, effective algorithms with good interpretability, combined with the characteristics of practical engineering problems.
I wonder what colleagues think of these two research approaches, and whether there are others. For a given research problem, how can the innovation of these two approaches be effectively evaluated?
Hello, I'm currently studying the Deriche filter. I've managed to create a Python program for derivative smoothing from this article [1]. As I understand it, Deriche used one α for smoothing in both directions, X and Y, so this filter seems isotropic to me. Is using two alphas, α_x and α_y, the correct way to add anisotropy?
Can someone please explain the relation between α in the Deriche filter and σ in the Gaussian filter? In [1] I found an equation (as a picture), but I do not understand the symbol П. Is it π? To my mind it is not, because in the article they took α = 0.14 and σ = 10, and from these values П = 625/196 ≈ 3.1887755.
Thank you.
[1]: Deriche, R. (1990). Fast algorithms for low-level vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1), 78–87. doi:10.1109/34.41386
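On the separability point: Deriche's smoothing operator is applied as a 1-D filter along X and then along Y, so using two constants, α_x and α_y, is a legitimate way to introduce anisotropy. The sketch below uses a truncated FIR approximation of the smoothing kernel s(x) = k(α|x| + 1)e^(-α|x|) rather than Deriche's fast recursive form; the kernel radius and test values are arbitrary assumptions. As for П, the arithmetic is consistent with it being π: if the relation in [1] has the form α ≈ 2.5/(σ·√π), then σ = 10 gives α ≈ 0.141, and rounding to 0.14 before back-solving yields (2.5/(0.14·10))² = 625/196 ≈ 3.189, i.e., π plus rounding error (a hedged reading of the scanned equation, not a verified quotation of [1]).
```python
# Hedged sketch: anisotropic Deriche-style smoothing via separable 1-D passes.
import numpy as np
from scipy.ndimage import convolve1d

def deriche_smoothing_kernel(alpha, radius=None):
    if radius is None:
        radius = int(np.ceil(10.0 / alpha))     # tail is negligible beyond this
    x = np.arange(-radius, radius + 1, dtype=float)
    k = (alpha * np.abs(x) + 1.0) * np.exp(-alpha * np.abs(x))
    return k / k.sum()                          # normalize to unit gain

def smooth_anisotropic(img, alpha_x, alpha_y):
    out = convolve1d(img, deriche_smoothing_kernel(alpha_x), axis=1)   # rows (X)
    return convolve1d(out, deriche_smoothing_kernel(alpha_y), axis=0)  # cols (Y)

img = np.random.rand(128, 128)
blurred = smooth_anisotropic(img, alpha_x=0.14, alpha_y=1.0)  # strong X-smoothing
```
Note the inverse roles: a smaller α gives a wider kernel and stronger smoothing, which matches the pairing of α = 0.14 with σ = 10 in the question.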
I am new to the field of neural networks. Currently, I am working on training a CNN model to classify X-ray images as Normal or Viral Pneumonia.
I am facing an issue of constant validation accuracy while training the model, even though the training accuracy improves with each epoch and both losses (loss and val loss) are decreasing.
How should I deal with this problem of constant validation accuracy?
Attached below: a screenshot of the epochs.
If machines are artificially intelligent, could we also teach them to love artificially?
Recently, several works have been published on predictive analytics:
- Prediction-based Resource Allocation using LSTM and Minimum Cost and Maximum Flow Algorithm by Gyunam Park and Minseok Song (https://ieeexplore.ieee.org/abstract/document/8786063)
- Using Convolution Neural Networks for Predictive Process Analytics by Vincenzo Pasquadibisceglie et al. (https://ieeexplore.ieee.org/document/8786066)
Besides, there is a paper on how to discover a process model using neural networks:
My questions for this discussion are:
- It seems that the field for machine learning approaches in process mining is not limited to prediction/discovery. Can we formulate the areas of possible applications?
- Can we use process mining techniques in machine learning? Can we, for example, mine how neural networks learn (in order to better understand their predictions)?
- If you believe that the subjects are completely incompatible, then, please, share your argument. Why do you think so?
- Finally, please share known papers in which process mining (PM) is applied in machine learning (ML) research, ML is applied in PM research, or both PM and ML are applied to solve a problem. I believe this will be useful for any reader of this discussion.
Hi everyone,
I am writing this to gather some suggestions for my thesis topic.
I am a student of MSc Quantitative Finance, and I need suggestions from experienced members for a research topic in portfolio management.
My expertise is in statistics and empirical analysis, and I believe I will be able to present some good work in the field of portfolio analysis. Currently, I am searching for good topics where I can apply machine learning or machine intelligence, e.g., to forecast portfolio performance or to assess portfolio optimization strategies.
I will be very grateful for your suggestions and guidance. If it suits you, you can also email me at narendarkumar306@gmail.com
Regards.
Hi all,
To work on a "predictive maintenance" problem, I need a real data set containing sensor data, so that I can train a model to predict or diagnose failures such as a high-temperature alert.
I would appreciate it if anybody could help me find such a data set.
Thanks
The article at this link https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/ discusses "A New Approach to Understanding How Machines Think". Its introduction says:
"Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a “translator for humans” so that we can understand when artificial intelligence breaks down."
Are you aware of other research and researchers doing similar work? Can you share links, resources and/or your research on this, please? Thanks!
What is the difference between artificial intelligence and machine intelligence ?
Current developments in the use of computer systems to study facial expressions highlight that facial changes in response to a person's internal emotional states, intentions, or social communications rest on complex visual data processing. Multimedia systems using machine vision and digital image processing techniques also encompass ground-based or aerial remote sensing, e.g., detection and recognition of pathological stress conditions and shape and colour characterization of fruits.
Papers:
M.Z. Uddin, M.M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, and G. Fortino, “Facial expression recognition utilizing local direction-based robust features and deep belief network,” IEEE Access, vol. 5, pp. 4525-4536, 2017.
Y. Wu, T. Hassner, K. Kim, G. Medioni, and P. Natarajan, “Facial landmark detection with tweaked convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
H. Ding, S.K. Zhou, and R. Chellappa, “Facenet2expnet: Regularizing a deep face recognition net for expression recognition,” In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, pp. 118-126, 2017.
There are a number of computational methods applied to simultaneous translation.
Papers:
M. Rusinol, D. Aldavert, R. Toledo, and J. Llados, “Browsing heterogeneous document collections by a segmentation-free word spotting method,” in Proc. International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2011, pp. 63–67.
V. Frinken, A. Fischer, R. Manmatha, and H. Bunke, “A novel word spotting method based on recurrent neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 211–224, 2012.
What do you think: when will artificial intelligence (AI) be smarter than humans, if ever? Can you predict it, and if so, when will it approximately happen, in your opinion?
You can also vote in the poll at:
I have predicted multiple features using a neural-net model and have found the error vector for them (E = P - A). Now, to determine whether a particular entry is an anomaly or not, I need to convert those multiple error values into a single one, so as to set a threshold. What would be the best technique for that?
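Two standard options, offered as a hedged sketch: collapse E with the L2 norm, or use the Mahalanobis distance fitted on error vectors from known-normal data, which accounts for per-feature scale and correlation. The data and the threshold rule below are placeholders.
```python
# Hedged sketch: turning a per-feature error vector E = P - A into one score.
import numpy as np

E_train = np.random.randn(1000, 8) * 0.1      # stand-in: errors on normal data
E_test = np.random.randn(50, 8)               # stand-in: errors to score

# Option 1: Euclidean norm of the error vector.
score_l2 = np.linalg.norm(E_test, axis=1)

# Option 2: Mahalanobis distance w.r.t. the normal-error distribution.
mu = E_train.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(E_train, rowvar=False))
d = E_test - mu
score_maha = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Example rule: flag anything above the 99th percentile of training scores.
d0 = E_train - mu
train_scores = np.sqrt(np.einsum("ij,jk,ik->i", d0, cov_inv, d0))
anomalies = score_maha > np.percentile(train_scores, 99)
```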
The term was apparently introduced by John Alan Robinson in a 1970 paper in the Proceedings of the Sixth Annual Machine Intelligence Workshop, Edinburgh, 1970, entitled "Computational Logic: The Unification Computation" (Machine Intelligence 6:63-72, Edinburgh University Press, 1971). The expression is used in the second paragraph with a footnote claiming that *computational logic* (the emphasis is in the paper) is "surely a better phrase than 'theorem proving', for the branch of artificial intelligence which deals with how to make machines do deduction efficiently". This sounds like coining the term; no reference to a previous use is mentioned. Is anybody aware of a previous use of "computational logic" by someone else?
I have a dataset which contains both normal and abnormal (counter-data) behavior.
What is the relation between Machine Intelligence and Smart Networks?
Because this is an anomaly detection problem.
So when you do the classification, do you use only one-class-labeled data in the training stage, or do you use both classes of data for the anomaly?
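Both set-ups occur in practice: if labeled anomalies are scarce or unrepresentative, train a one-class model on normal data only and score deviations; if both classes are plentiful, an ordinary binary classifier can work. A hedged sketch of the one-class route on stand-in data:
```python
# Hedged sketch: one-class training on normal samples only.
import numpy as np
from sklearn.svm import OneClassSVM

X_normal = np.random.randn(500, 4)                # stand-in for normal counters
X_mixed = np.vstack([np.random.randn(20, 4),      # normal-looking samples
                     np.random.randn(5, 4) + 6])  # shifted, hence anomalous

clf = OneClassSVM(kernel="rbf", nu=0.05).fit(X_normal)  # nu ~ outlier fraction
pred = clf.predict(X_mixed)                       # +1 = normal, -1 = anomaly
print(pred)
```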
As the number of samples used to train the neural network increases, the efficiency of the system decreases. Can anyone tell me why this is happening?
For detecting and predicting spyware that steals data from the system without the user's awareness.
Tweet interactions over the 3-day risk conference (CCCR2016 at Cambridge University), including fear of AI and robotics, show some resistance to a '10-commandments'-type ethical template for robotics.
Ignoring the religious overtones, and beyond Asimov, aren't similar rules encoded into robotics necessary to ensure future law-abiding artificial intellects?
Mathematicians and philosophers alone are not the solution to ensuring non-threatening AI. Diverse teams of anthropologists, historians, linguists, brain scientists, and more, working collaboratively, could design ethical and moral machines.
My PhD research topic is "Computer-Aided Detection of Prostate Cancer from MRI Images using Machine Intelligence Techniques". Where do I have to start with prostate segmentation? Does registration have to be done before segmentation, or is it optional? Are there any open-source codes available for learning the existing methods of prostate segmentation?
Or are they fed instructions only to follow?
Machine education and computational learning theory employ different models of learning, using smart algorithms to make machines intelligent and learn faster. But in theory, do machines actually learn "anything"? They are fed instructions which they follow; whether associative, supervised, reinforcement, or adaptive learning, or exploratory data mining, these are all information (instructions) fed into computers. There is no involvement of motivation or synthesis of learned material.
What's the precise definition of machine learning?
My team and I have been assigned to produce a layout of a production line for a manufacturing company using WITNESS 14. The layout of the production line has been done. However, we are currently having problems allocating the labour to take parts from the shelf and move them to the machines. We have been told that this requires some coding to control the actions of the labour. Enclosed is the layout of the production line we have drawn for this project.
Thanks for the reply.
Does anyone know or have the triple feature extraction function code in Matlab?
It was proposed in: A. Kadyrov and M. Petrou, "The Trace Transform and Its Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 811–828, 2001.
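I am not aware of the authors' released Matlab code, but the triple-feature pipeline itself is compact enough to sketch. Below is a hedged Python illustration (simple example functionals, not the specific T, P, Φ combinations evaluated in the paper): the trace functional T is applied along each traced line over all rotations, the diametric functional P along the distance parameter, and the circus functional Φ over the angle, yielding one scalar feature.
```python
# Hedged sketch of the trace-transform triple-feature pipeline.
import numpy as np
from scipy.ndimage import rotate

def trace_transform(img, n_angles=180, T=np.sum):
    cols = []
    for ang in np.linspace(0, 180, n_angles, endpoint=False):
        r = rotate(img, ang, reshape=False, order=1)
        cols.append(T(r, axis=0))        # apply T along each vertical scan line
    return np.array(cols)                # g(theta, rho): (n_angles, n_distances)

def triple_feature(img, T=np.sum, P=np.max, Phi=np.mean):
    g = trace_transform(img, T=T)        # trace transform
    circus = P(g, axis=1)                # diametric functional -> circus function
    return Phi(circus)                   # circus functional -> scalar feature

img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0                  # toy shape
print(triple_feature(img))
```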
Miguel Nicolelis and Ronald Cicurel claim that the brain is relativistic and cannot be simulated by a Turing machine, which is contrary to well-marketed ideas of simulating/mapping the whole brain: https://www.youtube.com/watch?v=7HYQsJUkyHQ. If it cannot be simulated on digital computers, what is the solution to understanding the brain's language?
Experiments such as the study of the relationship between weather variables and electric power demand inside a Smart Grid/Smart World framework have quantified the relationship between energy demand and climatic variables in order to make better demand forecasts: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3478798/. The study shows interesting correlations, especially between temperature and electrical demand.
My question to ANN experts is: in time-series microgrid load forecasting, would a self-training or adaptive ANN architecture inherently take into consideration the cross-correlation between weather data and the electrical demand pattern of a (rural) household/microgrid when the weather/climatic data is fed as a separate input (or inputs) into the ANN, or would it be better to try to normalize the real and historical demand profiles?
Creation of such a system is one of the main goals of artificial intelligence.
The two-machine flow shop with blocking has been studied for about 50 years. The most famous solution converts it into a TSP. Is there another approach to obtaining optimal solutions for this problem?
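For experimenting with alternatives, it helps to have the blocking recurrence explicit: under blocking, a finished job occupies machine 1 until machine 2 is vacated. A hedged sketch of the makespan evaluation plus a brute-force baseline for tiny instances follows (processing times are placeholders); the Gilmore-Gomory-based TSP reduction is deliberately not reproduced here.
```python
# Hedged sketch: makespan of a 2-machine flow shop with blocking, plus a
# brute-force optimum for small instances.
from itertools import permutations

def makespan_blocking(seq, p1, p2):
    d1 = d2 = 0.0                 # times at which machines 1 and 2 become free
    for j in seq:
        # a job leaves M1 only when it is finished AND M2 has been vacated
        d1 = max(d1 + p1[j], d2)
        d2 = d1 + p2[j]
    return d2

def brute_force(p1, p2):
    jobs = range(len(p1))
    return min((makespan_blocking(s, p1, p2), s) for s in permutations(jobs))

p1 = [3, 5, 1, 6]                 # processing times on machine 1 (placeholder)
p2 = [4, 2, 7, 3]                 # processing times on machine 2 (placeholder)
best_cmax, best_seq = brute_force(p1, p2)
print(best_cmax, best_seq)
```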
What level of intelligence do machines actually need/possess, and how can this be compared? If the community were to create a Machine Quotient (MQ), how would it compare to human cognition?
How do you measure machine intelligence, and what level of cognitive processing do machines really need to behave autonomously? Is human-level cognition required in ordinary machines, or just in those working remotely?
Supervised learning handles the classification problem with labeled training data, and semi-supervised learning algorithms aim to improve classifier performance with the help of a quantity of unlabeled samples. But is there any theory or classical framework for handling training with soft class labels? These soft labels are prior knowledge about the training samples; they may be class probabilities, class beliefs, or expert experience values.
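One common framing, offered as a hedged sketch rather than the canonical theory: keep the cross-entropy loss but replace the one-hot targets with the given class-probability vectors, as is done in label smoothing and knowledge distillation. In plain numpy:
```python
# Hedged sketch: cross-entropy against soft (probabilistic) targets.
import numpy as np

def soft_label_cross_entropy(probs, soft_targets, eps=1e-12):
    """probs, soft_targets: (n_samples, n_classes), rows summing to 1."""
    return -np.mean(np.sum(soft_targets * np.log(probs + eps), axis=1))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])        # model outputs
targets = np.array([[0.6, 0.3, 0.1],       # expert belief, not one-hot
                    [0.0, 0.9, 0.1]])
print(soft_label_cross_entropy(probs, targets))
```
Most deep-learning libraries accept such soft targets directly in their cross-entropy losses, so existing training pipelines usually need only the target representation changed.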
How can machine learning techniques play a role in HVAC systems? I am searching for techniques that are used for making HVAC systems efficient, and for the HVAC issues that are solved using these techniques. I need some reliable sources for understanding the use and working of machine learning techniques in HVAC systems.
I am working on physical activity recognition using data acquired from smartphone sensors (gyroscope, magnetometer, and accelerometer) and would like to compare the performance of different classifiers on the dataset, but I am wondering which evaluation metric would be best to use: True Positive Rate (TPR), False Positive Rate (FPR), precision, recall, F-score, or overall classification accuracy? This is a six-class problem (6 different activities).
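A hedged practical default for a six-class task: report the confusion matrix together with per-class precision, recall, and F1 (macro-averaged F1 if the activity classes are imbalanced), rather than a single accuracy number. With scikit-learn, using placeholder labels:
```python
# Hedged sketch: multi-class evaluation beyond plain accuracy.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 1, 2, 3, 4, 5, 1, 2, 0, 4]   # stand-in activity labels
y_pred = [0, 1, 2, 3, 5, 5, 1, 0, 0, 4]   # stand-in classifier output

print(confusion_matrix(y_true, y_pred))    # which activities get confused
print(classification_report(y_true, y_pred, digits=3))  # per-class P/R/F1
```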
If we are converting an ECG signal to the frequency domain using an FFT or another periodogram, and then calculating the RR interval from the signal, what will the input format for classification be (e.g., for an SVM or a neural network)? What features should we take as the input?
In the attached paper, the input features were selected in the form of NextRR (RR of the next beat), PrevRR (RR of the previous beat), and RRRatio (ratio of the previous beat's RR to the current beat's RR).
My main problem with this solution is that this way we will classify the signal on the basis of a single beat, because NextRR, PrevRR, and RRRatio exist only per beat.
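On the input format: whatever the classifier (SVM or neural network), it expects one fixed-length feature vector per sample, so the paper's scheme yields one row per beat. A hedged sketch of constructing those rows from placeholder R-peak times:
```python
# Hedged sketch: per-beat [PrevRR, NextRR, RRRatio] feature rows.
import numpy as np

r_peaks = np.array([0.80, 1.62, 2.41, 3.25, 4.02])  # R-peak times in s (placeholder)
rr = np.diff(r_peaks)                               # RR intervals between beats

rows = []
for i in range(1, len(rr)):                         # beats with both neighbours
    prev_rr, next_rr = rr[i - 1], rr[i]
    rows.append([prev_rr, next_rr, prev_rr / next_rr])  # [PrevRR, NextRR, RRRatio]

X = np.array(rows)     # shape (n_beats, 3): one fixed-length vector per beat
print(X)
```
To move beyond single-beat classification, one could widen the context (include RR values of k beats on each side) or append frequency-domain descriptors computed over a window around each beat as extra columns; the row-per-sample format stays the same.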
For an SVM trained with SMO on a linear problem, what is the relation between data dimension, data size, training time, and memory requirements? Do these grow linearly, or is the relation of higher degree? Is there a good reference (book or paper) for this question?
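On references: Platt's original SMO report (1998) and Bottou and Lin's chapter "Support Vector Machine Solvers" (2007) discuss solver complexity; kernel-SMO training time is commonly reported as growing between quadratically and cubically in the number of samples, while specialized linear solvers scale closer to linearly. A hedged sketch for measuring the scaling empirically on synthetic data:
```python
# Hedged sketch: time SVM fits over growing subsets to observe scaling.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)

for n in (1000, 2000, 4000, 8000):
    for name, clf in (("SMO/SVC", SVC(kernel="linear")),   # libsvm's SMO solver
                      ("LinearSVC", LinearSVC())):         # liblinear solver
        t0 = time.perf_counter()
        clf.fit(X[:n], y[:n])
        print(f"n={n:5d}  {name:9s}  {time.perf_counter() - t0:.2f}s")
```
This only measures the relation on one dataset; it does not prove a general complexity bound.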
In cognitive computing, humans and machines work together. How would you devise your cognitive computing methodology so that machines and humans can learn from one another?
Today, due to technological advancements, machines are becoming more intelligent than human beings. If we compare them, there are several similarities and differences.