
Machine Intelligence - Science topic

Explore the latest questions and answers in Machine Intelligence, and find Machine Intelligence experts.
Questions related to Machine Intelligence
  • asked a question related to Machine Intelligence
Question
1 answer
Hello everyone, I’m planning to submit a paper to the IEEE International Geoscience and Remote Sensing Symposium (IGARSS)/Machine Intelligence for GeoAnalytics and Remote Sensing (MIGARS) (2025). Could anyone share details about the publication and registration costs?
Additionally, as I’m based in Iran, financial constraints make it challenging to afford these fees. Are there any known funding opportunities, fee waivers, or sponsorships available for participants from developing countries? I’d appreciate any guidance or tips for managing these costs.
Thank you for your help!
Relevant answer
Answer
I cannot tell you the current cost due to recent currency fluctuations. As for funding, there may be opportunities available, but you will need to search online to find them.
  • asked a question related to Machine Intelligence
Question
4 answers
1—The organism must be mobile (James 1890), which would exclude plants and rocks and inanimate objects from this category.
2—The organism must be uni- or multi-cellular and eukaryotic, and dependent on oxygen to control its metabolism (Margulis 1970).
3—The organism must be able to replicate and be subjected to natural selection (Darwin 1859; Noble and Noble 2023).
4—Through its movements the organism must demonstrate volitional control and an ability to learn (Hebb 1949, 1960, 1968; Noble and Noble 2023). In short, does the brain of the organism respond to feedback from the environment and its internal state, e.g., the drive to procreate and homeostasis? In the vertebrate telencephalon, volition can be expressed as a readiness potential (Varela 1999ab). Organisms that have the foregoing characteristics range from the amoeba to the primates (including Homo sapiens) and other large-brained mammals, such as the Cetaceans and Elephantidae.
5—The telencephalon of mammals is continuously active during waking state. The activity remains high per neuron (based on glucose consumption) to maintain consciousness, and this activity is not related to locomotion. Whether the foregoing properties extend to all vertebrates and to specific ganglia of invertebrates needs immediate empirical attention.
6—Metrics used to quantify consciousness: total amount of information stored in an organism, declaratively (i.e., in terms of sensation) and executably (i.e., in terms of body movement), and the maximal rate of information transfer internally by way of conducting neurons. Stephen Hawking had a minimal throughput of 0.1 bits per second (because of ALS), yet he could still perform internal computations.
7—Species intelligence should be based on an organism's evolutionary longevity. Crocodiles have thus far survived for over 200 million years, through two mass extinctions.
8—AI machines will never have the consciousness of animals, because they are not biological entities, and they will always need to be maintained by a human programmer/CEO—the liability holder—even if the device becomes extremely autonomous: Bezos will always be legally responsible for his algorithms, as the New York Times is responsible for what it publishes (Harari 2024).
9—And what about AI Singularity? Depending on how one defines intelligence, machines are already smarter than the best chess or Go players, but an AI's ability to solve problems the way humans do, without resorting to brute force (namely, high-energy-consuming supercomputers for back-propagated learning and massive memory storage), is not yet available (LeCun 2023), and may never be.
10—Human intelligence (not machine intelligence as suggested by Geoffrey Hinton) will extinguish humankind by nuclear or environmental annihilation or by synthesizing an unstoppable pathogen or by some combination of these (Chomsky 2023; Ellsberg 2023; Sachs 2023).
Relevant answer
Answer
Organisms that exhibit conscious life must be mobile, eukaryotic, oxygen-dependent, and capable of reproduction and natural selection. Consciousness in such organisms is tied to their ability to respond volitionally to environmental and internal feedback, a process mediated by brain structures such as the telencephalon in vertebrates. Consciousness metrics include information storage and processing rates within neurons. Though AI can outperform humans in certain tasks, it lacks true biological consciousness and the capacity for experiential awareness. Human intelligence, not AI, poses the greater existential risk because of its potential for destructive actions such as environmental degradation or creating dangerous pathogens.
  • asked a question related to Machine Intelligence
Question
5 answers
Farmers no longer have to apply water, fertilizers, and pesticides uniformly across entire fields. Instead, they can use the minimum quantities required and target very specific areas, or even treat individual plants differently. Benefits include: Higher crop productivity.
Relevant answer
Answer
AI technology requires input data on plant growth. With the help of AI, scientists can identify the best-performing plant varieties and crossbreed them to create even better hybrids. A start-up called Crop Plant Technology uses AI to predict weather conditions and soil moisture levels, which helps farmers plan planting and irrigation. AI can significantly improve productivity by optimizing the ratio of economic output to the inputs required in production. It also helps with analyzing market demand, managing risk, seed breeding, soil health analysis, crop protection, monitoring crop maturity, insect and plant disease detection, and studies of genetically engineered food. The use of drones and satellite imagery in conjunction with AI and computer vision can provide farmers with valuable data on crop health and growth, allowing them to make more informed decisions about their farming operations.
  • asked a question related to Machine Intelligence
Question
4 answers
Currently, the AI world seems to be dividing into three directions: "AI" is now used synonymously with "Generative AI". In addition, "MI" is becoming established for machine intelligence including ML, i.e. machine learning. For classic AI = symbolic AI [Wooldridge (2020) The Road to Conscious Machines. P. 42] and its further development into "digital intelligence" including digital thinking, "DI" could be considered the third branch. What do you think? Is there "One AI" or how many different branches?
Relevant answer
Answer
The main kinds of AI are:
  1. Narrow AI – Specialized in specific tasks (e.g., virtual assistants).
  2. General AI – Hypothetical AI with human-like general intelligence.
  3. Artificial Superintelligence – Advanced AI surpassing human intelligence (theoretical)
  • asked a question related to Machine Intelligence
Question
4 answers
Assume an AI system is provided with the required data and computing resources to achieve a complex task. The task will not be achieved unless the machine is powered by a well-written algorithm. Since an algorithm is a program or set of instructions that a machine executes, applying one therefore defies the concept of machine intelligence. Do you agree?
Relevant answer
Answer
There are actually two questions presented: the title question, which I don't understand, and the closing question, "Since an algorithm is a program or set of instructions that a machine executes, therefore, applying it defies the concept of machine intelligence. Do you agree?" No, I do not agree. Machine intelligence is another term for the broad field of machine learning, and it isn't used very much because it causes confusion. It is not the same as AI. Several different machine learning algorithms, which are capable of learning and changing their behavior, can be combined, en masse, to produce the complexity needed for artificial intelligence.
  • asked a question related to Machine Intelligence
Question
4 answers
This concerns Turing's proposal to follow the example of the development of intelligence in humans and apply it to machines, specifically the example of a child who, in addition to learning discipline, needs to learn initiative and decision-making. Could this kind of intelligence be compared to the case of a simple machine, what Turing called the "child machine," which is provided with basic instructions that enable it to learn and to make its own decisions later?
Relevant answer
Answer
I consider AI to be about learning patterns and behaviours and turning them into decisions or predictions, i.e., essentially machine learning. However, some people say that more than one IF operation is enough to call a program an AI. I guess it is up to personal preference.
  • asked a question related to Machine Intelligence
Question
288 answers
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
Relevant answer
Answer
Arturo Geigel "I am one open to this dialogue because I recognize the need for philosophical contributions". Thank you for the momentum you bring to this thread. There is indeed a need for Philosophy as the means humans have to understand fundamental truths about themselves, the world in which they live, and their relationships to the world and each other. In the world of today, AI appears as a powerful transformation in how things and ideas are designed and implemented in all areas of knowledge, technology, and ways of life and thinking. In this regard, many questions should be asked: What role should Philosophy play in accompanying the predictable and almost inevitable advances and thrusts of AI? Can AI be involved in philosophical thinking? Is AI capable of philosophizing? And in any case, should we preserve philosophical thought and place it, like a safeguard, above technical advances?
  • asked a question related to Machine Intelligence
Question
2 answers
What are the analytical tools supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks available on the Internet that can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business, implementation of economic, investment, business projects, etc.?
Since OpenAI made ChatGPT available online in November 2022, interest among business entities in the possibilities of using intelligent chatbots for various aspects of their operations has grown strongly. Intelligent chatbots originally enabled mainly conversations and discussions, answering questions using data, information and knowledge drawn from a selection of websites. In the following months, OpenAI released other intelligent applications on the Internet that allow users to generate images, photos, graphics and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents based on formulated commands. In addition, in 2023 other technology companies also began to make their intelligent applications available on the Internet, through which complex tasks can be carried out to facilitate certain processes and aspects of companies, enterprises, financial institutions, etc., and thus facilitate business. The number of intelligent applications and tools available on the Internet that can support various aspects of business activities carried out in companies and enterprises is steadily increasing, and the number of new business applications of these smart tools is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the analytical tools available on the Internet supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks, which can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business activity, implementation of economic, investment, business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
There are many AI-enabled machine learning tools available on the Internet, e.g., scikit-learn, TensorFlow, Azure Machine Learning, Google Cloud AI Platform, H2O.ai, etc.
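As a concrete illustration, here is a minimal sketch of using one of the tools named above, scikit-learn, for a typical business-analytics task such as churn prediction. The data is synthetic and the column names are assumptions, not a real company schema.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy stand-in for a company's customer table (hypothetical columns).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "monthly_spend": rng.gamma(2.0, 50.0, 1000),
    "tenure_months": rng.integers(1, 60, 1000),
    "support_tickets": rng.poisson(1.5, 1000),
})
df["churned"] = (df["support_tickets"] > 2).astype(int)  # toy label

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=42)

# Fit a standard tabular classifier and report its test performance.
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The same pattern (tabular features in, fitted model and evaluation report out) transfers to sales forecasting, credit scoring, and similar enterprise use cases.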
  • asked a question related to Machine Intelligence
Question
3 answers
This is quite a topic to ponder, but I am particularly interested in the strategic placement of human-centered inquiries in the advancement of contemporary AI systems, despite the remarkable progress in machine learning and particularly in computer vision. How should we label the fusion, if any, and explicate the various intriguing topics of discourse for the mutually beneficial association of two (ostensibly) mechanistically distinct intelligences? Why might human-inspired or human-assisted AI be important? Do we really need this association, or should we let machines figure out their own way somehow through continuous exploration? What makes humans special and/or not so special? Maybe, as humans, we have biases towards our own species. For one thing, humans are not expert in "detection" missions; particularly in the absence of "attention", their performance degrades exponentially, leaving no room to excel in task completion. Humans are not good at resolving "hidden correlation" tasks either: their ability to estimate the joint probability distribution of a very large set of random events is extremely limited. However, they are good at trying, searching and finding things out, in other words, quite good at ontologies.
Since current AI research is oriented around reaching the level of human intelligence, one of the essential questions could be "What is missing to achieve human-level AI?". One argument is this: without abstraction, reasoning, compositionality and factuality, it is almost impossible to reach human-level AI in the near future. Of course, such a question brings out other interesting and relevant questions, such as "What are the most obscure unsolved problems in the development of human-level AI?" or "Can human-specific attributes (consciousness, morality, humor, causality, trust) be transferred to machines? If yes, how and to what extent?". Even if we hypothetically assume human-inspired AI is in place, we still need to answer questions like "Would this system be fully automatic, or partially automatic and human-intervenable?". Should human consciousness position itself somewhere in the design? If yes, where? I think that for futuristic AI designs we need to look into the possibility of hybrid designs and think about their implementation details along the way.
Relevant answer
Answer
Good psychology research is VERY rare (I would even say non-existent). If proper, real science truly develops in psychology, it will be very helpful, critically helpful.
  • asked a question related to Machine Intelligence
Question
5 answers
How should the learning algorithms contained in ChatGPT technology be improved so that the answers to questions generated by this form of artificial intelligence are free of factual errors and fictitious 'facts'?
How can ChatGPT technology be improved so that the answers provided by the artificial intelligence system are free of factual errors and inconsistent information?
The ChatGPT artificial intelligence technology generates answers to questions based on an outdated set of data and information downloaded from selected websites in 2021. In addition, the learning algorithms contained in ChatGPT technology are not perfect, which means that the answers generated by this form of artificial intelligence may contain factual errors and a kind of fictitious 'facts'. A serious drawback of using this type of tool is that ChatGPT-generated answers may contain serious factual errors: when people ask about something specific, they may receive an answer that is not factually correct. ChatGPT often answers questions eloquently, but its answers may not correspond to existing facts. ChatGPT can generate a kind of fictitious 'facts', i.e., answers containing stylistically and phraseologically correct sentences that describe and characterize objects presented as real but that do not actually exist. In the future, ChatGPT-type systems will be refined and improved; they will be self-learning and self-improving in the analysis of large data sets and will take into account newly emerging data and information when generating answers, without making the numerous mistakes that occur currently.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the learning algorithms contained in the ChatGPT technology be improved so that the answers generated by this form of artificial intelligence to the questions asked are free of factual errors and fictitious "facts"?
How can ChatGPT technology be improved so that the answers provided by the artificial intelligence system are free of factual errors and inconsistent information?
What do you think about this subject?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
Dear people,
“We live in a world of radical ignorance, and the marvel is that any kind of truth cuts through the noise,” says Proctor. Even though knowledge is ‘accessible’, it does not mean it is accessed, he warns…the knowledge people have often comes from faith or tradition, or propaganda, more than anywhere else.”
Is it a reality - how do you think?
The Turk, also known as the Mechanical Turk or Automaton Chess Player (German: Schachtürke, lit. 'chess Turk'; Hungarian: A Török), was a fraudulent chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854 it was exhibited by various owners as an automaton, though it was eventually revealed to be an elaborate hoax. Constructed and unveiled in 1770 by Wolfgang von Kempelen (1734–1804) to impress Empress Maria Theresa of Austria, the mechanism appeared to be able to play a strong game of chess against a human opponent, as well as perform the knight's tour, a puzzle that requires the player to move a knight to occupy every square of a chessboard exactly once.
The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin. The device was later purchased in 1804 and exhibited by Johann Nepomuk Mälzel. The chess masters who secretly operated it included Johann Allgaier, Boncourt, Aaron Alexandre, William Lewis, Jacques Mouret, and William Schlumberger, but the operators within the mechanism during Kempelen's original tour remain a mystery.
Who answers users in a large language model built on a great database of human thoughts?
Thinking?
  • asked a question related to Machine Intelligence
Question
69 answers
What do you think about ChatGPT and ways it's been using?
Few new uses of ChatGPT:
1. https://cleanup.pictures/ – Remove unwanted objects from #photos: people, text, and defects from any picture.
2. www.resumeworded.com – Online #resume and #LinkedIn grader that instantly scores your resume and LinkedIn profile and gives you detailed feedback on how to get more opportunities and interviews.
3. https://soundraw.io/ – Soundraw is a #music generator for creators. Select the type of music you want (genre, instruments, mood, length, etc.) and let AI generate beautiful songs for you.
4. www.looka.com – Design a #Logo, make a #website, and create a #brand identity you'll love with the power of AI.
5. www.copy.ai – Get great copy that sells. #Copy.ai is an AI-powered #copywriter that generates high-quality copy for your business.
Relevant answer
Answer
Academics are very likely aware of the emergence and rapid growth of accessible and user-friendly AI writing software, such as the popular ChatGPT, and its potential utility in academia. Much of the current discourse from academics highlights fears about a rise in academic misconduct, but it would be remiss to ignore some of the potential advantages of AI.
Here, we identify three potential roles that AI could play in bridging attainment gaps...
  • asked a question related to Machine Intelligence
Question
4 answers
This discussion is to discuss ideas and possible ways in which people's understanding of the human mind in any way could be applied to the current methods used to create this technology. Any and all people are able to respond, but please be respectful in how it is done.
Relevant answer
Answer
A notion that understanding is knowledge is a good starting point, but no more than that. Besides, the significance of AI is greatly overestimated, in my opinion.
  • asked a question related to Machine Intelligence
Question
3 answers
Hi all,
As part of my research work, I have segmented objects in an image for classification. After segmentation, the objects have black backgrounds, and I used those images to train and test the proposed CNN model.
I want to know how the CNN processes these black surroundings in the image classification task.
Thank you
Relevant answer
Answer
As a first guess I would agree with Aparna Sathya Murthy, given that you retain the original image size. If you segment and extract the contents from the image at a size where the dominating elements are the relevant contents for the CNN, then the noise will be less and maybe labeling will not be needed (emphasis on maybe).
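To make the effect of the black background concrete, here is a minimal PyTorch sketch (toy image and kernel, not the poster's model) showing that zero-valued background pixels contribute nothing to a convolution, so early feature maps are driven only by the segmented object (bias terms aside):

```python
import torch
import torch.nn.functional as F

img = torch.zeros(1, 1, 8, 8)      # black (zero) background
img[0, 0, 2:6, 2:6] = 1.0          # a small bright "object" in the centre

kernel = torch.ones(1, 1, 3, 3)    # toy averaging kernel
out = F.conv2d(img, kernel, bias=None, padding=1)

print(out[0, 0])                   # responses are non-zero only near the object;
                                   # pure-background regions stay exactly zero
```

In practice the background still occupies input area, so cropping the image so the object dominates the frame, as suggested above, usually helps.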
  • asked a question related to Machine Intelligence
Question
14 answers
What we should ask, instead, is how to develop more informed and self-aware relationships with technologies that are programmed to take advantage of our susceptibility to being deceived. It might sound paradoxical, but to better comprehend AI we first need to better comprehend ourselves. Contemporary AI technologies constantly mobilize mechanisms such as empathy, stereotyping, and social habits. To understand these technologies more deeply, and to fully appreciate the relationship we are building with them, we need to interrogate how such mechanisms work and what part deception plays in our interaction with "intelligent" machines.
Relevant answer
Answer
I think you have to break the phenomenon into two layers. There is the layer of people outside the field, for whom the narrative they are fed can span from scientific reporting to science fiction. Trying to reason about the narrative at this layer is too complicated due to the many factors that move people. Also, being non-technical, they see an AI accomplish a task, such as LaMDA did, and hastily jump to conclusions (even the engineer at Google was misled by the output given by the AI).
The more wearisome layer is the scientific community which is driven by scientific results. The reason for focusing on the metric is as follows:
a) metric A gives an 'intelligence measure' B
b) test results using metric A gives results C
c) results C confirm 'intelligence metric' B
Notice that the problem is in accepting a) as valid to form an argument that will sway the scientific community. Since you have a reasonable argument structure, most scientists will accept the phenomenon as valid. While I think what you want to drive at is the construction of a), it is the chain of reasoning that ends with c) that is mostly at play in the phenomenon you mention.
Regards
  • asked a question related to Machine Intelligence
Question
3 answers
Hello
I am a PhD student looking to read some recent good papers that can help me identify a research topic in RL for control applications. I have been reading quite a few papers/topics discussing model-free vs model-based RL, etc. I have not been able to find something yet; maybe I don't understand it well enough yet :).
Just for background: my experience is with diesel and SI engines, vehicles, and controls.
One of the topics/areas that seems interesting to me is learning with RL in uncertain scenarios, though this might seem too broad to most people.
Another possible area would be RL for connected vehicles, self-driving, etc.
Any help/suggestion is welcome.
Relevant answer
Answer
Combining MARL (multi-agent reinforcement learning) and safety would be an interesting area.
  • asked a question related to Machine Intelligence
Question
23 answers
Since the importance of Machine Learning (ML) is significantly increasing, let's share our opinions, publications, or future applications on Optical Wireless Communication.
Thank you,
Ezgi Ertunc
Relevant answer
  • asked a question related to Machine Intelligence
Question
4 answers
We introduce the concept of Proton-Seconds and see it lends itself to a method of solving problems across a large range of disciplines in the Natural Sciences. The underpinnings seem to be in 6-fold symmetry. This lends itself to a Universal Form. We find this presents the Periodic Table of the Elements as a squaring of the circle. It is rather abstract thinking, but just as the moment we define truth and as a result it reverses, I think we can treat problem solving this way: As Patterns…The idea is there is nothing we can say is the truth, but we can solve problems through pattern recognition. I would think this manner of problem solving through pattern recognition could be employed in developing deep learning machine intelligence and AI for its method of imitating human learning to gain knowledge.
Relevant answer
Answer
Okay, Stay, but I think that in order for a pattern to exist a theme must recur; therefore it has a characteristic by which it abides, so that is a restriction, or physics in a straitjacket as Feynman calls it. I think it is much like improvising on a musical instrument: you have to develop your ideas according to a rhythmic cycle. The proton-second is an abstract idea that can be applied to large amounts of particles over macroscopic time periods. Alas, the timescale of a second is characteristic of the proton; in this paper it determines its radius.
  • asked a question related to Machine Intelligence
Question
10 answers
Exploring the similarities and differences between these three powerful machine learning tools (PCA, NMF, and Autoencoder) has always been a mental challenge for me. Anyone with knowledge in this field is welcome to share it with me.
Relevant answer
Answer
In machine learning projects we often run into the curse of dimensionality, where the number of data records is not a substantial multiple of the number of features. This often leads to problems, since it means training a lot of parameters using a scarce data set, which can easily lead to overfitting and poor generalization. High dimensionality also means very long training times. So, dimensionality reduction techniques are commonly used to address these issues. It is often true that, despite residing in a high-dimensional space, the feature space has a low-dimensional structure.
Regards,
Shafagat
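A minimal sketch of the three methods side by side may help the comparison (toy non-negative data; the autoencoder is emulated with an MLP trained to reproduce its input, an assumption made for brevity): PCA gives an orthogonal linear projection with signed components, NMF a non-negative, parts-based linear factorisation, and the autoencoder a non-linear latent code.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.neural_network import MLPRegressor

X = np.abs(np.random.randn(200, 20))          # NMF requires non-negative data

X_pca = PCA(n_components=5).fit_transform(X)  # orthogonal axes, signed scores
X_nmf = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(X)  # non-negative, parts-based

# "Autoencoder": an MLP trained to reconstruct its own input;
# the 5-unit middle layer plays the role of the non-linear latent code.
ae = MLPRegressor(hidden_layer_sizes=(10, 5, 10), max_iter=2000)
ae.fit(X, X)

print(X_pca.shape, X_nmf.shape)
```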
  • asked a question related to Machine Intelligence
Question
4 answers
Recently, I was drawn to the paper "Stable learning establishes some common ground between causal inference and machine learning" published in Nature Machine Intelligence. After reading it, I ran into a question regarding the connection between model explainability and spurious correlation.
I notice that in the paper, after introducing the three key factors (stability, explainability and fairness) that ML researchers need to address, the authors make the further judgement that spurious correlation is a key source of risk. So far I have figured out why spurious correlation can cause ML models to lack stability and fairness; however, it is still unclear to me why spurious correlation can obstruct research on explainable AI. As far as I know, there have been two lines of research on XAI: explaining black-box models and building inherently interpretable models. I'm wondering whether there are concrete explanations of why spurious correlations are such a troublemaker when trying to design good XAI methods.
Relevant answer
Answer
Spurious correlation, or spuriousness, occurs when two factors appear causally related to one another but are not. The appearance of a causal relationship is often due to similar movement on a chart that turns out to be coincidental or caused by a third, "confounding" factor.
A more data-driven approach to diagnosing spurious correlation is to use statistical techniques to examine the residuals. If the residuals exhibit autocorrelation, this suggests that some key variable may be missing from the analysis.
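For illustration, here is a minimal statsmodels sketch of that residual check, using the classic spurious-regression setup of two independent random walks (toy data, not from any cited study):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(200))   # random walk "cause"
y = np.cumsum(rng.standard_normal(200))   # independent random walk

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.rsquared)                     # often looks impressively high...
print(durbin_watson(model.resid))         # ...but residuals are strongly autocorrelated
                                          # (statistic far below 2 flags a spurious fit)
```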
  • asked a question related to Machine Intelligence
Question
10 answers
Can anyone suggest ensembling methods for the outputs of pre-trained models? Suppose there is a dataset containing cats and dogs, and three pre-trained models are applied, i.e., VGG16, VGG19, and ResNet50. How would you apply ensembling techniques such as bagging, boosting, or voting?
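For illustration, a minimal NumPy sketch of soft and hard voting applied to the outputs of such models; the three probability arrays below are toy stand-ins for VGG16/VGG19/ResNet50 predictions on the same test set:

```python
import numpy as np

n_samples, n_classes = 10, 2               # e.g. cats vs dogs
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(3)]

# Soft voting: average the class probabilities, then take the argmax.
y_soft = np.argmax(np.mean(probs, axis=0), axis=1)

# Hard (majority) voting: each model votes with its own argmax.
votes = np.stack([p.argmax(axis=1) for p in probs])        # shape (3, n_samples)
y_hard = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(y_soft, y_hard)
```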
Relevant answer
  • asked a question related to Machine Intelligence
Question
16 answers
Like other meta-heuristic algorithms, some algorithms tend to suffer from low diversity, entrapment in local optima, and unbalanced exploitation ability.
1- Enhance its exploratory and exploitative performance.
2- Overcome premature convergence (increase the fast convergence) and ease of falling (trapped) into a local optimum.
3- Increase the diversity of population and alleviate the prematurity convergence problem
4- The algorithm suffers from an immature balance between exploitation and exploration.
5- Maintain the diversity of solutions during the search, so that the tendency of stagnation towards the sub-optimal solutions can be avoided and the convergence rate can be boosted to obtain more accurate optimal solutions.
6- Slow convergence speed, inability to jump out of local optima and fixed step length.
7- Improve its population diversity in the search space.
Relevant answer
Answer
Like Mr. Joel Chacón, I also think that this question is problem-dependent; there is no exact answer.
  • asked a question related to Machine Intelligence
Question
20 answers
What is the boundary between the tasks that need human interventions and the tasks that can be fully autonomous in the domain of civil and environmental engineering? What are ways of establishing a human-machine interface that combines the best parts of human intelligence and machine intelligence in different civil and environmental engineering problem-solving processes? Any tasks that can never be autonomous and need civil and environmental engineers? Coordinating international infrastructure projects? Operating future cities with many interactions between building facilities? We would love to learn from you about your existing work and thoughts in this broad area and hope we can build the future of humans and civil & environmental engineering together.
Please see this link for an article that serves as a starting point for this discussion initiated by an ASCE task force:
Relevant answer
Answer
You are most welcome dear Pingbo Tang .
Wish you the best always.
  • asked a question related to Machine Intelligence
Question
9 answers
[Information] Special Issue - Intelligent Control and Robotics
Relevant answer
Thanks for sharing.
  • asked a question related to Machine Intelligence
Question
5 answers
Dear Researchers,
Does anybody here know whether having a paper accepted at the 9th Machine Intelligence and Digital Interaction Conference is valid and valuable for a scientific resume?
Thank you in advance.
Relevant answer
Answer
Dear Shima,
This is a very good question: which conferences are valuable to submit to and participate in? In the Machine Learning field the top conferences include:
Neural Information Processing Systems (NIPS)
CVPR : IEEE/CVF Conference on Computer Vision and Pattern Recognition
AAAI Conference on Artificial Intelligence
If you need more technical information about the rank and quality of the ML and AI conference please let me know by email. (neshat.mehdi@gmail.com)
  • asked a question related to Machine Intelligence
Question
4 answers
How do you interpret different concepts in NLP for time-series? For example “self-attention” and “positional embeddings” in transformers.
Relevant answer
Answer
There are numerous benefits to utilizing the Transformer architecture over LSTM RNN. The two chief differences between the Transformer Architecture and the LSTM architecture are in the elimination of recurrence, thus decreasing complexity, and the enabling of parallelization, thus improving efficiency in computation.
Kind Regards
Qamar Ul Islam
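For illustration, a minimal NumPy sketch (toy sizes) of the two concepts asked about, applied to a univariate series: sinusoidal positional encodings injecting order information into the value embeddings, followed by one scaled dot-product self-attention step:

```python
import numpy as np

T, d = 50, 16                                   # sequence length, model dimension
series = np.sin(np.linspace(0, 6, T))           # toy univariate time series

x = np.tile(series[:, None], (1, d))            # naive value "embedding"

# Sinusoidal positional encoding: even dims use sin, odd dims use cos.
pos = np.arange(T)[:, None]
i = np.arange(d)[None, :]
angle = pos / np.power(10000.0, (2 * (i // 2)) / d)
pe = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
x = x + pe                                      # order information injected here

# Scaled dot-product self-attention (single head, untrained identity projections).
scores = x @ x.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time steps
context = weights @ x                           # each step attends to all others

print(context.shape)                            # (50, 16)
```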
  • asked a question related to Machine Intelligence
Question
4 answers
I am working on a project that uses a Generative Adversarial Network to generate pseudo (minority) samples to expand the dataset, and then uses that expanded dataset to train my model for fault detection in machinery.
I have already tried my project with two machine temperature sensor datasets (each containing a timestamp and the temperature at that timestamp).
I am looking for a dataset with a similar structure (i.e., a timestamp with one sensor reading), for example, a current sensor over time or a pressure sensor over time.
I have searched Kaggle but most datasets on Kaggle are multivariate.
Where can I find Univariate Time-series data for fault detection in machinery?
#machinelearning #univariatedata #timeseriesdata #machinerydata #faultanalysis #faultdetection #anomalydetection #GAN #neuralnetwork
Relevant answer
Answer
You can check the following bearing datasets (vibration signals vs. time):
PS: both of the datasets are treated in this paper:
prognostic-data-repository/
Wind turbine high speed shaft data: https://www.mathworks.com/help/predmaint/ug/wind-turbine-high-speed-bearing-prognosis.html
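To make the GAN-augmentation idea in the question concrete, here is a minimal PyTorch sketch that generates fixed-length univariate windows; the window length, noise dimension, stand-in data, and training loop are illustrative assumptions, not a tuned recipe:

```python
import torch
import torch.nn as nn

win, z_dim = 64, 16                         # window length and noise dimension (assumed)

G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, win))
D = nn.Sequential(nn.Linear(win, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for real minority-class sensor windows (replace with real windows).
real = torch.sin(torch.linspace(0, 6.28, win)).repeat(32, 1)

for step in range(200):
    # Discriminator: real windows vs generated windows.
    fake = G(torch.randn(32, z_dim)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator.
    fake = G(torch.randn(32, z_dim))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

augmented = G(torch.randn(100, z_dim)).detach()   # 100 pseudo minority windows
```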
  • asked a question related to Machine Intelligence
Question
6 answers
Many companies are investing a great deal of energy, money and time into digitalization, and UX is at the forefront, leading the charge at some of these companies. What are the hottest topics of research in UX these days?
#uxdesign #uxresearch #ux #research #ai #innovation #artificialintelligence #machinelearning #datascience #deeplearning #ml #science #futureofbusiness #futureofai #futureinsights #futureoftech
Relevant answer
Cloud-based tools like Figma enable quick, collaborative, real-time teamwork, perfect for UX/UI designers, writers, developers, and testing.
  • asked a question related to Machine Intelligence
Question
3 answers
As far as I know, machine intelligence in the engineering field has basically followed two research directions to realize data-driven modeling and analysis of complex systems and to solve prediction or diagnosis problems: 1) proposing advanced and complex algorithms with good adaptive ability, and 2) employing simple and effective algorithms with good interpretability, combined with the characteristics of practical engineering problems.
I wonder what colleagues think of these two research directions, and whether there are any others. For a given research problem, how can the innovation of these two approaches be effectively evaluated?
Relevant answer
Answer
While it is good to take a broad view of the area you are working in to gain an understanding of the field, your objective for research should be focused on a particular area. I agree with Karim Ibrahim in breaking it into the two categories and choosing one for research purposes, as you have done in your previous publications.
To continue your search, I would focus on the areas of your potential advisors and look for common ground between your interests and theirs.
Regards
  • asked a question related to Machine Intelligence
Question
9 answers
Hello, I'm currently studying the Deriche filter. I've managed to create a Python program for derivative smoothing from this article [1]. As I understand it, Deriche used one α for smoothing in both directions, X and Y, so this filter seems isotropic to me. Is it correct to use two alphas, α_x and α_y, to add anisotropy?
Can someone please explain to me the relation between α in the Deriche filter and σ in the Gaussian filter? In [1] I found an equation given as an image, but I do not understand the symbol П. Is it π? To my mind it is not, because in the article they took α = 0.14 and σ = 10; from these values П = 625/196 ≈ 3.1887755.
Thank you.
[1]: Deriche, R. (1990). Fast algorithms for low-level vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1), 78–87. doi:10.1109/34.41386
Relevant answer
Answer
Aparna Sathya Murthy Thank you, and sorry for the late reply. I will use two alphas to create an anisotropic filter!
You said that the Deriche filter is a high-pass filter, but Hale in [2] states it is a low-pass filter.
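Not the Deriche recursion itself, but a minimal SciPy sketch of the anisotropy idea raised above: gaussian_filter accepts one sigma per axis, so sigma_y != sigma_x gives directionally different smoothing, the Gaussian-domain analogue of using two alphas (α_x, α_y) in a separable filter.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
img[32, 32] = 1.0                                    # impulse: the output shows the kernel shape

iso = ndimage.gaussian_filter(img, sigma=3)          # isotropic smoothing
aniso = ndimage.gaussian_filter(img, sigma=(6, 2))   # sigma_y=6, sigma_x=2: elongated response

print(iso.shape, aniso.shape)
```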
  • asked a question related to Machine Intelligence
Question
11 answers
I am new to the field of neural networks. Currently, I am working on training a CNN model to classify X-ray images into Normal and Viral Pneumonia.
I am facing an issue of constant validation accuracy while training the model. However, with each epoch the training accuracy improves and both losses (loss and val loss) are decreasing.
How should I deal with this problem of constant validation accuracy?
Attaching below: The Screenshot of the Epochs
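A minimal sketch (assuming a Keras model and datasets named model, train_ds and val_ds, which are not shown in the post) of two callbacks commonly tried when validation accuracy plateaus, alongside checking that the validation set is correctly labelled and preprocessed:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # Halve the learning rate when validation accuracy stops improving.
    ReduceLROnPlateau(monitor="val_accuracy", factor=0.5, patience=3, verbose=1),
    # Stop training and keep the best weights if validation loss stalls.
    EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True),
]

# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```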
Relevant answer
  • asked a question related to Machine Intelligence
Question
35 answers
If machines are artificially intelligent, could we teach them to love artificially?
Relevant answer
Answer
I don't think we understand the mystery of love well enough to teach it to machines.
  • asked a question related to Machine Intelligence
Question
44 answers
Recently, several works have been published on predictive analytics:
Besides, there is a paper on how to discover a process model using neural networks:
My questions for this discussion are:
  • It seems that the field for machine learning approaches in process mining is not limited to prediction/discovery. Can we formulate the areas of possible applications?
  • Can we use process mining techniques in machine learning? Can we, for example, mine how neural networks learn (in order to better understand their predictions)?
  • If you believe that the subjects are completely incompatible, then, please, share your argument. Why do you think so?
  • Finally, please, share known papers in which: process mining (PM) is applied in machine learning (ML) research, ML is applied in PM research, both PM and ML are applied to solve a problem. I believe, this will be useful for any reader of this discussion.
Relevant answer
Answer
There are actually quite a lot of nice applications of machine learning techniques in the context of business process variant analysis, which is a fairly large subset of the process mining literature.
For example, Folino, Cuzzocrea et al. have done a series of studies on variant analysis (or deviance mining) using various machine learning methods, including ensemble learning and clustering:
We recently conducted a literature survey of methods in the field of variant analysis, many of them based on machine learning techniques:
Related to the above, there is work on Bayesian networks for delay analysis (explanatory rather than predictive):
The above is related to variant analysis and performance mining. But there is also work on anomaly detection in event logs using Bayesian networks:
And using deep learning architectures:
As well as using deep learning models to compute alignments in order to correct anomalies:
And a bit related to the above, there was quite a bit of research on using trace clustering in the context of automated process discovery (e.g. Jochen De Weerdt)
So we can say that process mining and machine learning go well together. One should not forget, though, that BPM and process mining are application-oriented disciplines: their objective is to design approaches to improve business processes. Machine learning, by contrast, is a horizontal discipline; it seeks to develop methods that can be adapted to a broad range of problems/settings. Process mining has tapped a lot into machine learning, but it surely has a lot more to exploit from it.
  • asked a question related to Machine Intelligence
Question
9 answers
Hi everyone,
I am writing this to gather some suggestions for my thesis topic.
I am a student of MSc Quantitative Finance. I am in need of some suggestions from the experienced members for a research topic in Portfolio management.
My expertise is in statistics and empirical analysis. I believe that I will be able to present some good work in the field of portfolio analysis. Currently, I am searching for some good topics where I can apply machine learning or machine intelligence, e.g., for forecasting portfolio performance, or maybe use it to assess portfolio optimization strategies.
I will be very grateful for you suggestions and guidance. If it suits you, you can also email me on narendarkumar306@gmail.com
Regards.
Relevant answer
Answer
Topics
  • Behavioral Finance ...
  • Derivatives. Options ...
  • Factors, risk premia. Analysis of individual factors/risk premia ...
  • Fixed income and structured finance. ...
  • International Investing. ...
  • Legal/regulatory/public policy. ...
  • Long-term/retirement investing. ...
  • Mutual funds/passive investing/indexing.
  • asked a question related to Machine Intelligence
Question
3 answers
Hi all,
To work on a "predictive maintenance" problem, I need a real data set that contains sensor data so that I can train a model to predict or diagnose failures, such as a high-temperature alert.
I would appreciate it if anybody could help me to get a real data set.
Thanks
Relevant answer
Answer
Ashutosh Karna Thank you for your response.
My main objective is an oil and gas (Industry 4.0) equipment maintenance use case, and I need temperature, pressure, humidity or volume flow sensor data with supervised failure labels to train a model.
  • asked a question related to Machine Intelligence
Question
13 answers
Article at this link https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/ talks about "A New Approach to Understanding How Machines Think". It says in intro:
"Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a “translator for humans” so that we can understand when artificial intelligence breaks down."
Are you aware of other research and researchers doing similar work? Can you share links, resources and/or your research on this, please? Thanks!
Relevant answer
  • asked a question related to Machine Intelligence
Question
16 answers
What is the difference between artificial intelligence and machine intelligence ?
Relevant answer
Answer
Artificial Intelligence (AI) is the broader science of intelligent (smart) machines, while Machine Intelligence, or Machine Learning, is one of its main applications.
You can see all the applications of AI covered by the International Journal of Distributed Artificial Intelligence (IJDAI), a specialized AI journal.
  • asked a question related to Machine Intelligence
Question
3 answers
Current developments in using computer systems to study facial expressions highlight that facial changes, which reflect a person's internal emotional states, intentions, or social communications, are based on complex visual data processing. Multimedia systems using machine vision and digital image processing techniques are being applied to ground-based or aerial remote sensing, detection and recognition of pathological stress conditions, and shape and color description of fruits.
Papers:
M.Z. Uddin, M.M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, and G. Fortino, “Facial expression recognition utilizing local direction-based robust features and deep belief network,” IEEE Access, vol. 5, pp. 4525-4536, 2017.
Y. Wu, T. Hassner, K. Kim, G. Medioni, and P. Natarajan, “Facial landmark detection with tweaked convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
H. Ding, S.K. Zhou, and R. Chellappa, “Facenet2expnet: Regularizing a deep face recognition net for expression recognition,” In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, pp. 118-126, 2017.
Relevant answer
Answer
  • asked a question related to Machine Intelligence
Question
2 answers
There are a number of computational methods applied to simultaneous translation.
Papers:
M. Rusinol, D. Aldavert, R. Toledo, and J. Llados, “Browsing heterogeneous document collections by a segmentation-free word spotting method,” vol. 22. in Proc. of International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2011, pp. 63–67.
V. Frinken, A. Fischer, R. Manmatha, and H. Bunke, “A novel word spotting method based on recurrent neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 211–224, 2012.
Relevant answer
Answer
Karayaneva, Y., & Hintea, D. (2018, February). Object recognition algorithms implemented on NAO robot for children's visual learning enhancement. In Proceedings of the 2018 2nd International Conference on Mechatronics Systems and Control Engineering (pp. 86-92). ACM.
Upadhyayaa, P., Farooqa, O., & Abidia, M. R. (2018). Block Energy Based Visual Features Using Histogram Of Oriented Gradient For Bimodal Hindi Speech Recognition. Procedia Computer Science, 132, 1385-1393.
Kaur, H., & Kumar, M. (2018). A comprehensive survey on word recognition for non-Indic and Indic scripts. Pattern Analysis and Applications, 1-33.
Benmoussa, M., & Mahmoudi, A. (2018, April). Machine learning for hand gesture recognition using bag-of-words. In Intelligent Systems and Computer Vision (ISCV), 2018 International Conference on (pp. 1-7). IEEE.
Junejo, I., Dexter, E., Laptev, I., & Perez, P. (2011). View-independent action recognition from temporal self-similarities. IEEE transactions on pattern analysis and machine intelligence.
  • asked a question related to Machine Intelligence
Question
24 answers
What do you think: when will artificial intelligence (AI) be smarter than humans, if ever? Can you predict it, and if yes, when will it approximately happen, in your opinion?
You can also vote in the poll at:
Relevant answer
Answer
Developing a true test of the consciousness of Artificial Intelligence would be difficult, but the truest measure is that fully self-programming AIs are virtually unknown. The self-modeling, self-generating capacities of humanity are of definite interest for creating an AI with greater adaptability in a broad set of circumstances. That, in my opinion, is the next threshold to cross in Artificial Intelligence. Another difficulty with AI is this: how do you define an Artificial Intelligence without having a fully accurate model of human intelligence? Our adaptability is our best-defined feature as humans, so adaptive artificial intelligence may be a better goal than strong artificial intelligence, at least until a clearer picture emerges of more general features such as consciousness.
  • asked a question related to Machine Intelligence
Question
3 answers
I have predicted multiple features using a neural net model and I have computed the error vector for each (E = P - A). Now, to determine whether a particular entry is an anomaly or not, I need to convert those multiple error vectors into a single value so as to set a threshold. What would be the best technique for that?
Relevant answer
Answer
You can take the average error and select the model whose error is nearest to the average error. You can also use ensemble techniques. Please try the link:
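For illustration, a minimal NumPy sketch of collapsing each error vector E = P - A into a single score, either the Euclidean norm or the Mahalanobis distance, and thresholding it; the toy data and the 99th-percentile cut-off are assumptions, not a recommendation from the thread:

```python
import numpy as np

E = np.random.randn(1000, 8)                    # stand-in error vectors (n_samples, n_features)

score_l2 = np.linalg.norm(E, axis=1)            # option 1: Euclidean norm per sample

# Option 2: Mahalanobis distance, which accounts for correlated error components.
mu, cov = E.mean(axis=0), np.cov(E, rowvar=False)
inv_cov = np.linalg.inv(cov)
d = E - mu
maha = np.sqrt(np.einsum("ij,jk,ik->i", d, inv_cov, d))

threshold = np.percentile(maha, 99)             # fit the threshold on validation data in practice
is_anomaly = maha > threshold
print(is_anomaly.sum())
```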
  • asked a question related to Machine Intelligence
Question
3 answers
The term was apparently introduced by John Alan Robinson in a 1970 paper in the Proceedings of the Sixth Annual Machine Intelligence Workshop, Edinburgh, 1970, entitled "Computational Logic: The Unification Computation" (Machine Intelligence 6:63-72, Edinburgh University Press, 1971). The expression is used in the second paragraph with a footnote claiming that *computational logic* (the emphasis is in the paper) is "surely a better phrase than 'theorem proving', for the branch of artificial intelligence which deals with how to make machines do deduction efficiently". This sounds like coining the term; no reference to a previous use is mentioned. Is anybody aware of a previous use of "computational logic" by someone else?
Relevant answer
Answer
Thanks Surender! In Section 13 of Robinson's CL 2000 paper, he confirms that he introduced the term in the 1970 paper I found.
  • asked a question related to Machine Intelligence
Question
15 answers
I have a dataset which contains both normal and abnormal (counter data) behavior.
Relevant answer
Answer
You can use a clustering algorithm for the classification issue, such as:
1. K-means/K-medoids
2. Fuzzy c-means
3. Partition-based clustering, etc.
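A minimal scikit-learn sketch of the k-means option on toy data, flagging points far from their cluster centre; the feature values and the 95th-percentile cut-off are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(500, 4)                       # stand-in counter features

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Distance from each point to its assigned cluster centre.
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

threshold = np.percentile(dist, 95)              # illustrative cut-off
abnormal = dist > threshold
print(abnormal.sum(), "points flagged as abnormal")
```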
  • asked a question related to Machine Intelligence
Question
3 answers
What is relation between Machine Intelligence and Smart Networks?
Relevant answer
Answer
Artificial Intelligence has been around for a long time – the Greek myths contain stories of mechanical men designed to mimic our own behavior. Very early European computers were conceived as “logical machines” and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.
As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on mimicking human decision making processes and carrying out tasks in ever more human ways.
Artificial Intelligences – devices designed to act intelligently – are often classified into one of two fundamental groups – applied or general. Applied AI is far more common – systems designed to intelligently trade stocks and shares, or manoeuvre an autonomous vehicle would fall into this category.
Generalized AIs – systems or devices which can in theory handle any task – are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it’s really more accurate to think of it as the current state-of-the-art.
The Rise of Machine Learning
Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has.
One of these was the realization – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.
The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.
Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.
Neural Networks
The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.
A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.
Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information, as naturally as we would with another human being. To this end, another field of AI – Natural Language Processing (NLP) – has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.
NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.
A Case Of Branding?
Artificial Intelligence – and in particular today ML certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So, it’s important to bear in mind that AI and ML are something else … they are products which are being sold – consistently, and lucratively.
Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it’s possible that it started to be seen as something that’s in some way “old hat” even before its potential has ever truly been achieved. There have been a few false starts along the road to the “AI revolution”, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here-and-now, to offer.
The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In my next piece on this subject I go deeper – literally – as I explain the theories behind another trending buzzword – Deep Learning.
  • asked a question related to Machine Intelligence
Question
5 answers
Because this is an Anomaly Detection problem.
So when you do the classification, do you use only one-class-labeled data in the training stage, or both classes of data for the anomaly?
Relevant answer
Dear Zhiliu
All classifications algorithm design for binary class classification problem then they extended to deal with multi-class classification problem. so, if your problem is binary class use binary other wise use multi class.
Regards,
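To make the two options concrete, here is a small, hedged scikit-learn sketch on synthetic data (the dataset and parameters are illustrative assumptions, not taken from the question): a one-class model trained only on normal samples versus a binary classifier trained on both classes.

# Sketch: one-class training vs. binary training for anomaly detection (illustrative).
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 2))          # assumed "normal" data
anomalies = rng.normal(4, 1, size=(15, 2))        # assumed anomalies

# Option 1: train on one-class-labelled (normal) data only.
occ = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
print("one-class flags:", int((occ.predict(anomalies) == -1).sum()), "of", len(anomalies))

# Option 2: train a binary classifier on both classes (needs labelled anomalies).
X = np.vstack([normal, anomalies])
y = np.hstack([np.zeros(len(normal)), np.ones(len(anomalies))])
clf = SVC(class_weight="balanced", gamma="scale").fit(X, y)
print("binary flags:", int(clf.predict(anomalies).sum()), "of", len(anomalies))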
  • asked a question related to Machine Intelligence
Question
6 answers
As the number of samples used to train the neural network increases, the efficiency of the system decreases. Can anyone tell me why this is happening?
Relevant answer
Answer
How can we initialise the weights? Can you explain with a small example?
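Since the follow-up asks for a small example, here is a hedged NumPy sketch of two common initialisation schemes, Xavier/Glorot and He; the layer sizes are arbitrary assumptions.

# Sketch: common weight initialisation schemes for a dense layer (illustrative sizes).
import numpy as np

rng = np.random.default_rng(42)
fan_in, fan_out = 784, 128   # assumed layer dimensions

# Xavier/Glorot: keeps activation variance roughly constant for tanh/sigmoid units.
limit = np.sqrt(6.0 / (fan_in + fan_out))
w_xavier = rng.uniform(-limit, limit, size=(fan_in, fan_out))

# He initialisation: variance scaled for ReLU units.
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

print(w_xavier.std(), w_he.std())   # small, comparable standard deviations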
  • asked a question related to Machine Intelligence
Question
3 answers
For detecting and predicting spyware that steals data from the system without the user's awareness.
  • asked a question related to Machine Intelligence
Question
15 answers
Tweet interactions over the 3-day risk conference (CCCR2016 at Cambridge University), including fears about AI and robotics, showed some resistance to a '10-commandments'-type ethical template for robotics.
Ignoring the religious overtones, and beyond Asimov, aren't similar rules encoded into robotics necessary to ensure future law-abiding artificial intellects?
Mathematicians and philosophers alone are not the solution to ensuring non-threatening AI. Diverse teams of anthropologists, historians, linguists, brain scientists and others, working in collaboration, could design ethical and moral machines.
Relevant answer
Answer
Encoding robot ethics through sets of human-designed rules may not be computationally tractable, and as Asimov shows, it's incredibly difficult to create a small set of ethical rules to follow without encountering situations in which those rules do not play out as intended. 
You may be interested in this recent paper from our laboratory in which recent approaches to automatically learning ethical principles through Inverse Reinforcement Learning are critiqued: https://hrilab.tufts.edu/publications/aaai17-alignment.pdf
A quote from that paper seems directly relevant to your question: "The idea of training a system on data (either supervised or unsupervised) has captured more and more attention as a way to understand how ethics and AI might best function in concert. Coding ethical values “by hand,” in the manner of many other traditional forms of coding – seems destined for the lesser task of lending basic “scaffolding” within which machine learning can operate (Tanz 2016)."
I also highly recommend the book "Robot Ethics" (https://www.amazon.com/Robot-Ethics-Implications-Intelligent-Autonomous/dp/026252600X) which presents several approaches to robot ethics, and discusses some of the ethical challenges faced by robots. 
  • asked a question related to Machine Intelligence
Question
2 answers
My topic of PhD research is "Computer Aided Detection of Prostate Cancer from MRI Images using Machine Intelligence Techniques". Where do I have to start with prostate segmentation? Does registration have to be done before segmentation, or is it optional? Are there any open source codes available for learning the existing methods of prostate segmentation?
Relevant answer
Answer
Hi Bejoy,
If you are able to post an example image or dataset, we may be able to put together a recipe file for you in our new extended free trial software MIPAR (http://MIPAR.us) that should at the very least make you aware of the processing techniques involved in prostate segmentation if you need to track down open source codes yourself.
Cheers,
John
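Not a substitute for a validated prostate-segmentation method, but as a rough, hedged illustration of the kind of processing steps involved (smoothing, thresholding, keeping the largest connected region), here is a scikit-image sketch on a single 2-D slice; the input file is an assumption.

# Very rough illustration of a 2-D segmentation pipeline (NOT a clinical method).
import numpy as np
from skimage import filters, measure, morphology

slice_2d = np.load("mri_slice.npy")          # assumed: one 2-D MRI slice as a float array

smoothed = filters.gaussian(slice_2d, sigma=2)        # denoise
mask = smoothed > filters.threshold_otsu(smoothed)    # global threshold
mask = morphology.binary_closing(mask, morphology.disk(3))

labels = measure.label(mask)
regions = measure.regionprops(labels)
largest = max(regions, key=lambda r: r.area)          # keep the largest blob
prostate_candidate = labels == largest.label

print("candidate region area (pixels):", largest.area)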
  • asked a question related to Machine Intelligence
Question
31 answers
Or are they fed instructions only to follow?
Machine education and computational learning theory employ different models of learning, using smart algorithms to make machines intelligent and help them learn faster. But in theory, do machines actually learn "anything"? They are fed instructions which they follow; whether it is associative, supervised, reinforcement or adaptive learning, or exploratory data mining, these are all instructions fed into computers. There is no involvement of motivation or synthesis of the learned material.
What's the precise definition of machine learning?
Relevant answer
Answer
Surely they learn. In a technical sense, learning is nothing more than the changing of a system's state in order to do something better.
What 'better' means is a matter of philosophy.
Regards,
Joachim
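A minimal, hedged illustration of that technical sense of learning: an epsilon-greedy bandit whose only "state" is a table of value estimates, updated from reward feedback so that its choices gradually get better. The reward rates and parameters are invented for the example.

# Sketch: "learning" as changing a system's state to do something better (illustrative).
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])   # assumed hidden reward rates of three actions

estimates = np.zeros(3)    # the system's state: estimated value of each action
counts = np.zeros(3)
epsilon = 0.1

for _ in range(2000):
    if rng.random() < epsilon:
        a = rng.integers(3)                 # explore
    else:
        a = int(np.argmax(estimates))       # exploit the current state
    reward = float(rng.random() < true_means[a])
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]   # state change from feedback

print("value estimates:", np.round(estimates, 2))   # should approach the true means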
  • asked a question related to Machine Intelligence
Question
3 answers
My team and I have been assigned to produce a layout of a production line of a manufacturing company using WITNESS 14. The layout of the production line has been done. However, we are currently having a problem allocating the labour to take the parts from the shelf and move them to the machines. We have been told that it requires some coding to control the actions of the labour. Enclosed is the layout of the production line we have drawn for this project.
Thanks for the reply.
Relevant answer
Answer
  • asked a question related to Machine Intelligence
Question
3 answers
Does anyone know of, or have, the triple feature extraction function code in MATLAB?
It was proposed by Alexander Kadyrov in "The Trace Transform and Its Application", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001.
Relevant answer
Answer
Hi,
I have attached a paper that explains the algorithm for extracting triple features with the trace transform; you can also email the author to obtain the code.
Regards
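While Kadyrov's original MATLAB code cannot be reproduced here, the structure of a triple feature is simple to sketch: a trace functional T applied along each scanning line, a diametric functional P across parallel lines, and a circus functional over directions. The hedged Python approximation below rotates the image so that rows play the role of trace lines and uses arbitrarily chosen functionals; it illustrates the idea only and is not the paper's implementation.

# Sketch of a trace-transform "triple feature": Phi( P( T(image along lines) ) ).
# Illustrative functionals only; not Kadyrov's reference implementation.
import numpy as np
from skimage.transform import rotate

def triple_feature(image,
                   T=lambda line: line.sum(),        # trace functional (assumed choice)
                   P=lambda col: col.max(),          # diametric functional (assumed choice)
                   Phi=lambda circ: circ.mean(),     # circus functional (assumed choice)
                   n_angles=36):
    circ = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        rot = rotate(image, angle, resize=True)      # rows of rot are parallel trace lines
        t_values = np.array([T(row) for row in rot]) # one value per line
        circ.append(P(t_values))                     # collapse over line offsets
    return Phi(np.array(circ))                       # collapse over directions

image = np.random.default_rng(0).random((64, 64))    # assumed test image
print("triple feature:", triple_feature(image))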
  • asked a question related to Machine Intelligence
Question
185 answers
Miguel Nicolelis and Ronald Cicurel claim that the brain is relativistic and cannot be simulated by a Turing machine, which is contrary to well-marketed ideas of simulating/mapping the whole brain (https://www.youtube.com/watch?v=7HYQsJUkyHQ). If it cannot be simulated on digital computers, what is the solution for understanding the brain's language?
Relevant answer
Answer
 Dear friends, Dorian, Dick, Roman, Mario, et al.,
            There is hope (Haykin and Fuster, Proc. IEEE, 102, 608-628, 2014).  But first we have to modify our computer and some of our traditional ideas about the brain. With respect to both, here I offer humbly some of my views after half a century of working in cognitive neuroscience.  I shall be brief and cautious.  For an account of empirical evidence, read my “Cortex and Mind”  (Oxford, 2003).
            1.  Alas, the computer cannot be only digital, but must also be analog.  Almost all the cognitive operations in the brain are based on analog transactions at many levels (membrane potentials, spike frequencies, firing thresholds, metabolic gradients, dendritic potentials, neurotransmitter concentrations, synaptic weights, etc., etc.). Further, the computer must be able to compute and work with probabilities, because cognition is largely probabilistic in the Bayesian sense, which means that our computer must also have a degree of plasticity.
            2.  The computer must also have distributed memory.  In the brain, especially the cortex, cognitive information is contained in a complex system of distributed, interactive and overlapping neuronal networks formed by Hebbian rules by association between temporally coincident inputs (i.e., sensory stimuli or inputs from other activated networks).  The cognitive “code” is therefore essentially relational or relativistic, and is defined by connective structure, by associations of context and temporal coincidence.  That is why, theoretically, connectionism and the connectome make some sense.
            3.  It is true that the soma of a neuron contains “memory”: in the mitochondria.  But that is genetic memory (what I call “phyletic memory,” memory of the species), some of which was acquired in evolution.  It is important for brain development and for the function of primary sensory and motor systems.  It is also important for regeneration after injury.  Further, it is the ground-base on which individual cognitive memory will be formed. But the latter consists of more or less widely distributed cortical networks or “cognits” (J. Cog. Neurosci. 21, 2047–2072, 2009).  These overlap and interact to a large extent, whereby a neuron or group of neurons practically anywhere in the cortex can be part of many networks, thus many memories or items of knowledge.  This is trouble for the connectome which, if ever comes to fruition, will be vastly more complex than the genome.
            4.  Our present tools to define the structure, let alone the dynamics, of the connectome appear rather inadequate to deal with those facts and hypotheses.   Consider DTI (diffusor tensor imaging), one of those tools presently in fashion and widely used to trace neural connections.  It is based on the analysis of the orientation of water molecules in a magnetic field.  Therefore, it can successfully visualize nerve fibers with high water content, such as myelinated fibers and some large unmyelinated ones.  But the method (I dub it “water-based paint”) is good for tractography, for visualizing large, fast conducting fibers, but not for the fine connective stuff that defines memory networks.
            5.  What’s more, those networks change all the time, even during sleep.  In sum, it is difficult to imagine a dynamic connectome that would instantiate the vicissitudes and idiosyncrasies of our thinking, remembering, perceiving, calculating and predictive brain.
            Some of this may be wrong.  But that’s the way I see it, and may be useful to model the real brain.  Cheers, Joaquín
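As a toy, hedged illustration of point 2 above (networks formed by Hebbian rules through association between temporally coincident inputs), here is a small NumPy sketch of the basic Hebbian update with invented activity patterns; it is an illustration of the rule, not a model of cortical cognits.

# Toy sketch of Hebbian association: units active together strengthen their connection.
import numpy as np

n_units = 8
W = np.zeros((n_units, n_units))     # connection weights (the "network")
eta = 0.1                            # learning rate (assumed)

# Two invented patterns of temporally coincident activity.
patterns = [np.array([1, 1, 1, 0, 0, 0, 0, 0], float),
            np.array([0, 0, 0, 0, 1, 1, 1, 1], float)]

for _ in range(50):
    for x in patterns:
        W += eta * np.outer(x, x)    # Hebbian rule: delta_w[i, j] = eta * x[i] * x[j]
np.fill_diagonal(W, 0.0)

# A partial cue re-activates the rest of its associated pattern.
cue = np.array([1, 0, 0, 0, 0, 0, 0, 0], float)
recall = (W @ cue > 0).astype(int)
print("recalled pattern:", recall)   # units 1 and 2, associates of the cued unit 0, switch on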
  • asked a question related to Machine Intelligence
Question
16 answers
Experiments such as the study of the Relationship between Weather Variables and Electric Power Demand inside a Smart Grid/Smart World Framework correlated the existing relationship between energy demand and climatic variables, in order to make better demand forecasts http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3478798/. The study shows interesting correlations, especially between temperature and electrical demand. 
My question to ANN experts is: in time-series microgrid load forecasting, would a self-training or adaptive ANN architecture inherently take into consideration the cross-correlation between weather data and the electrical demand pattern of a (rural) household/microgrid when the weather/climatic data are fed as a separate input (or inputs) into the ANN, or would it be better to try to normalize the real and historical demand profiles?
Relevant answer
Answer
Hi 
The short and definite answer: yes, neural networks can take weather variables (temperature, wind chill, humidity) into consideration, and also their lags (not everything is contemporaneous). Indeed, neural networks are particularly well suited to forecasting load demand, as the relationship between weather variables and load is non-linear, and the modelling is much more adequate than with linear regression.
BTW: the question of "normalising the input" confuses the terminology a bit. In electricity demand forecasting, "normalising" often refers to deseasonalising the demand (i.e. removing weather effects); in neural networks, normalisation refers to scaling the input variables into a suitable range that the algorithm can learn from (i.e. [0, 1] or [-1, 1]).
So yes, normalise the inputs in the neural network training sense, but do not preprocess/deseasonalise the data, as this would get rid of the nonlinear interactions between weather and load.
Hope this helps, Sven
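A hedged sketch of this advice in scikit-learn terms: scale the inputs (rather than deseasonalising them), include lagged load and weather variables as separate columns, and let the network pick up the nonlinear interactions. The file name, column names, lags and network size are assumptions for illustration only.

# Sketch: load forecasting with lagged load + weather inputs (illustrative setup).
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("microgrid.csv")     # assumed columns: load, temperature, humidity

# Lagged features: yesterday's load and weather, plus current weather (assumed lags).
for col in ["load", "temperature", "humidity"]:
    df[f"{col}_lag24"] = df[col].shift(24)
df = df.dropna()

X = df[["temperature", "humidity", "load_lag24",
        "temperature_lag24", "humidity_lag24"]]
y = df["load"]

# Normalise inputs to [0, 1] for training; do NOT deseasonalise the raw series.
model = make_pipeline(MinMaxScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))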
  • asked a question related to Machine Intelligence
Question
70 answers
Creation of such a system is one of the main goals of artificial intelligence.
Relevant answer
Answer
Dear Alexey,
Classification can be based on a prototype representing a class but in many cases the data used for learning the classifier are represented as a set of objects with assigned class labels.
If you run a clustering algorithm, you divide the input objects into a number of (typically) disjoint sets. A cluster number (e.g. originating from the k-means algorithm) can then be used as a label to train a classifier (e.g. C4.5).
I think criminal profiling takes into account historical data, at present collected in databases and mined with classical techniques. It would be very hard to assume a priori knowledge and programs. The oldest sources are probably the Bible, Greek tragedies, the Mahabharata, Hamlet, etc.
Best regards
Piotr
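A hedged sketch of that clustering-then-classification pipeline with scikit-learn, substituting a CART-style decision tree for C4.5 (which scikit-learn does not provide); the data are synthetic.

# Sketch: use k-means cluster numbers as labels to train a (tree) classifier.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # synthetic data

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# The cluster index now plays the role of a class label (C4.5 replaced by CART here).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, clusters)
print("agreement with cluster labels:", round(tree.score(X, clusters), 3))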
  • asked a question related to Machine Intelligence
Question
3 answers
The two-machine flow shop with blocking has been studied for about 50 years. The most famous solution approach converts it into a TSP. Is there another approach to obtain optimal solutions for this problem?
Relevant answer
Answer
It is NP-hard, so you won't find anything significantly better.  Johnson's rule for the case of makespan without blocking can build up a huge buffer (see reference below) so it is not going to give you a good heuristic. 
Ramudhin, A., Bartholdi, J. J. III, Calvin, J. M., Vande Vate, J. H. and Weiss, G., "A Probabilistic Analysis of 2-Machine Flowshops", Operations Research, 44 No. 6, pp. 899-908, 1996.
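For readers new to the problem, the hedged brute-force sketch below evaluates the makespan of every job permutation under the usual blocking recurrence (a job leaving machine 1 must wait until machine 2 is free). The processing times are invented, and exhaustive search is of course only feasible for tiny instances.

# Sketch: makespan of a 2-machine flow shop WITH blocking, by brute force (tiny instances only).
from itertools import permutations

p1 = [3, 5, 2, 6]   # assumed processing times on machine 1
p2 = [4, 1, 6, 3]   # assumed processing times on machine 2

def makespan_blocking(order):
    d = 0.0   # time the current job leaves machine 1 (it may be blocked there)
    c2 = 0.0  # completion time of the previous job on machine 2
    for j in order:
        d = max(d + p1[j], c2)   # blocked on machine 1 until machine 2 is free
        c2 = d + p2[j]
    return c2

best = min(permutations(range(len(p1))), key=makespan_blocking)
print("best order:", best, "makespan:", makespan_blocking(best))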
  • asked a question related to Machine Intelligence
Question
21 answers
What level of intelligence do machines actually need/possess, and how can this be compared? If the community is to create a Machine Quotient (MQ), how would this be compared to human cognition?
Relevant answer
Answer
IQ tests don't measure "Intelligence" (whatever that is), they measure IQ (whatever that is).
The interesting thing about IQ tests in the light of Turing is that they have always tried to factor out linguistic skills, to prevent cultural background and education from confounding the testing of inherent "intelligence". Therefore these tests contain many questions exercising non-verbal pattern matching/prediction. So that is where I would start if I were you and it was specifically "IQ", as opposed to "intelligence", that I wished to measure.
  • asked a question related to Machine Intelligence
Question
3 answers
How do you measure machine intelligence, and what level of cognitive processing do machines really need in order to behave autonomously? Is human-level cognition required in ordinary machines, or just in those working remotely?
Relevant answer
Answer
Hi, your question is very general so an answer can only be general also.
I would add that the needed degree of autonomy depends very much on the application you are targeting. High autonomy would be needed for autonomous transport systems in public traffic scenarios, as the environment is very demanding, whereas in industrial applications the autonomy level could be lower, since special measures or infrastructure components can be used. Second, the economic side: depending on the application, you may not be able to get to "full autonomy", which is expensive (sensors, data processing). Legal reasons also have to be considered.
Drawing on biological systems, I would say that intelligence is not needed for autonomy, but it makes an autonomous system behave in a "wise" way.
  • asked a question related to Machine Intelligence
Question
14 answers
Supervised learning handles the classification problem with definitively labelled training data, and semi-supervised learning algorithms aim to improve classifier performance with the help of a quantity of unlabelled samples. But is there any theory or classical framework for handling training with soft class labels? These soft labels are prior knowledge about the training samples, which may be class probabilities, class beliefs or expert confidence values.
Relevant answer
Answer
I disagree here. The question is not how to train a classifier in general, nor which well-known courses or software packages one can use to (learn to) do so.
The question, as far as I understood it, is how to train a classifier when the available supervision at training time consists of soft class labels in the form of a class-conditional probability value for each training example. In a binary classification case, this would be p(x|c=1) for each sample x in your training set, conditioned on the class "c" (or its complement p(x|c=0) for the other class).
The fact that a standard logistic regression outputs a model that produces such probability values, once the model is trained and can be used on independent test examples, does not answer how to use such a soft supervision at training time.
In other words, a standard logreg package would typically require as inputs a  training  set in the form of concrete examples "x" together with their hard class labels e.g. (c=1) or (c=0).
Besides, the most standard way of fitting a logreg is by iteratively reweighted least squares (IRLS), and it does not look immediate that this specific type of optimization actually "minimizes H(p, q), averaged over your training set". At least, it would deserve some math to make the (doubtful) link, or to specify under which specific distributions p and q both optimizations would happen to be equivalent.
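To make the distinction concrete, here is a hedged NumPy sketch that fits a logistic model directly against soft targets q in (0, 1) by gradient descent on the cross-entropy H(q, p), rather than against hard 0/1 labels; the data, targets and step size are invented.

# Sketch: logistic regression trained on SOFT class labels q (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5])
q = 1.0 / (1.0 + np.exp(-(X @ true_w + rng.normal(scale=0.3, size=n))))  # soft targets in (0, 1)

w = np.zeros(d)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # model's class probability
    grad = X.T @ (p - q) / n                # gradient of the mean cross-entropy H(q, p)
    w -= lr * grad

print("recovered weights:", np.round(w, 2))  # close to true_w, up to noise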
  • asked a question related to Machine Intelligence
Question
6 answers
How can machine learning techniques play a role in HVAC systems? I am searching for techniques that are used for making HVAC systems more efficient, and for which issues in HVAC are solved by using these techniques. I need some reliable sources for understanding the use and workings of machine learning techniques in building HVAC systems.
Relevant answer
Answer
Thank you for your response. I am also basically searching for uses of ANNs in HVAC, but I am unable to find any appropriate ANN model for this purpose. Can you please share some material (literature or project names) on the use of ANNs in HVAC systems?
Thanks
  • asked a question related to Machine Intelligence
Question
5 answers
I am working on physical activity recognition using data acquired from smartphone sensors (gyroscope, magnetometer and accelerometer) and would like to compare the performance of different classifiers on the dataset, but I am wondering which evaluation metric would be best to use: True Positive Rate (TPR), False Positive Rate (FPR), Precision, Recall, F-score or overall classification accuracy? This is a six-class problem (6 different activities).
Relevant answer
Answer
Rajeev.
TPR and FPR separately do not tell you anything. The same goes for Precision and Recall. F-score, Equal Error Rate and similar can give you some information, but it is limited and you have to be careful how you use those values.
Look at this previous question:
I'll copy part of my answer for that question here:
You first have to answer a simple question: do you know the costs of the decisions?
If the costs are known, you can calibrate your classifiers (set the operating point), and compare costs of the classifiers.
If you do not know the costs, you may have some insight into a reasonable range of operating points for the target application. Let's say the target application is a surveillance system, and all positive responses will have to be shown to a human operator who is able to process one event per minute. In such a case, it makes sense to compare detection rates at one false positive per minute. You can use other measures instead, but it makes sense to keep one type of error fixed and compare the other one.
If you have no knowledge about the possible operating point in the target application, you should show results in full range of possible operating points. Plot ROC, precision-recall, DET or something similar. You can also compute area under ROC or Precision-recall and get a numeric value which reflects performance over the whole range of regimes; however, such aggregation has its own problems.
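For the six-class activity problem specifically, the hedged scikit-learn sketch below reports several of these views at once (macro F1, the confusion matrix, and per-class precision/recall/F1); the feature and label files are assumptions.

# Sketch: comparing classifiers on a 6-class activity dataset with several metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X = np.load("features.npy")    # assumed: accelerometer/gyroscope/magnetometer features
y = np.load("labels.npy")      # assumed: activity labels 0..5

for name, clf in [("SVM", SVC(gamma="scale")),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    pred = cross_val_predict(clf, X, y, cv=5)
    print(name, "macro F1:", round(f1_score(y, pred, average="macro"), 3))
    print(confusion_matrix(y, pred))            # where the six classes get confused
    print(classification_report(y, pred))       # per-class precision, recall, F1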
  • asked a question related to Machine Intelligence
Question
1 answer
If we convert the ECG signal to the frequency domain using the FFT or another periodogram, and then calculate the RR intervals from the signal, what will the input format for classification (e.g. SVM, neural network) be? What features should we take as the input?
In the attached paper it was written to select input feature in form of NextRR (RR of Next Beat), PrevRR (RR of Previous Beat), and RRRatio (RR Ratio of previous to current beat).
My main problem with this solution is that we would then classify the signal on the basis of a single beat, because NextRR, PrevRR and RRRatio are defined only for a single beat.
Relevant answer
Answer
From the ECG, or by finger plethysmography, one obtains the R-R intervals, i.e. the tachogram. On the time series so obtained one may perform an FFT, obtaining three bands (VLF, LF, HF) for ANS analysis.
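A hedged sketch of the per-beat feature layout described in the question (PrevRR, NextRR and the ratio of previous to current RR), built from an RR-interval series and fed to an SVM; the input files and labels are placeholders.

# Sketch: per-beat feature matrix [PrevRR, NextRR, RRratio] from an RR-interval series.
import numpy as np
from sklearn.svm import SVC

rr = np.load("rr_intervals.npy")     # assumed: RR intervals in seconds, one per beat
labels = np.load("beat_labels.npy")  # assumed: one label per beat (e.g. normal/ectopic)

features, y = [], []
for i in range(1, len(rr) - 1):
    prev_rr, curr_rr, next_rr = rr[i - 1], rr[i], rr[i + 1]
    features.append([prev_rr, next_rr, prev_rr / curr_rr])   # PrevRR, NextRR, RRratio
    y.append(labels[i])

X = np.array(features)               # one row per beat, following the names in the question
clf = SVC(gamma="scale").fit(X, np.array(y))
print("training accuracy:", round(clf.score(X, np.array(y)), 3))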
  • asked a question related to Machine Intelligence
Question
2 answers
For an SVM trained with SMO on a linear problem, what is the relation between data dimension, data size, training time and memory requirements: do they increase linearly, or is the relation of a higher degree? Is there any good reference for this question (book or paper)?
Relevant answer
Answer
There are several aspects to this. In the case of linear SVMs, during training you must estimate the vector w and bias b and this is usually done by solving a quadratic problem. Solving the quadratic problem is not easy (at least in the general case). For instance, testing that you have an optimal solution to the SVM problem involves something in the order of n² dot products, while solving the quadratic problem directly involves inverting the kernel matrix, which has complexity in the order of n³ (where n is the size of your training set). That being said, the time required for linear SVMs to reach a certain level of generalization error actually decreases as training set size increases, e.g.
The prediction time is linear in the number of features and constant in the size of the training data. For some more details have a look also here:
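One easy way to get a feel for the scaling in practice is to time a linear solver on growing subsets of the data. The hedged sketch below uses scikit-learn's liblinear-backed LinearSVC on synthetic data; absolute times depend on hardware, so only the trend is meaningful.

# Sketch: empirical scaling of linear SVM training time with training-set size.
import time
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=40000, n_features=100, random_state=0)

for n in [5000, 10000, 20000, 40000]:
    t0 = time.perf_counter()
    LinearSVC(dual=False, max_iter=5000).fit(X[:n], y[:n])
    print(f"n = {n:6d}  training time = {time.perf_counter() - t0:.2f} s")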
  • asked a question related to Machine Intelligence
Question
12 answers
In cognitive computing, humans and machines work together. How will you devise your cognitive computing methodology so that machines and humans can learn from one another?
Relevant answer
Answer
Gokul, I think of the 'meme' [http://en.wikipedia.org/wiki/Meme] as a container.
If the idea is worthy (weighted value) then one meme holder (machine or man) may pass the meme (high in concept, low in data) to another, for consideration. When/If a positive value-exchange occurs between two or more parties which have judged the meme container content exchange to be of interest, the meme can be populated with data by all parties and its 'worthy' value is increased.
This semantic exchange model borrows heavily from the well-proven TCP/IP header implementation: (interest) "Is it for me?", (construct) size, count of pieces, point-of-origin, etc., (content) packet.
The web services metaphor is also applicable.
  • asked a question related to Machine Intelligence
Question
7 answers
Deep Learning, Machine Learning.
Relevant answer
Answer
Deep Learning is a way of learning everything you need for a classification or clustering problem with neural models. However, it requires a large amount of computational power to cope with its complexity. If you provide huge clusters and GPU facilities to your algorithm, it is reasonable to expect promising results. This is what Google does currently: they devote clusters of computers to learning visual concepts from their image search engine with deep learning algorithms. At the other end of the spectrum, a small company or institution cannot muster this much power, so other methods, such as hand-crafted features and kernel machines, also need to be considered, even with simple Bag of Words (BoW) features. There are also some papers arguing that, on some problems, especially in computer vision, the results of deep learning may be overrated relative to well-crafted BoW approaches. Despite all this criticism, deep learning is certainly the most ground-breaking learning tool of the recent era of ML, with its neuroscientific promise as well.
  • asked a question related to Machine Intelligence
Question
3 answers
Today, due to technological advancements, machines are turning out to be more intelligent than human beings in some respects. If we compare them, there are several similarities and differences.
Relevant answer
Answer
Humans and machines are quite different. Humans are self-regulating in all aspects of metabolism, surprising ourselves with every aspect of life, including the interconnection mechanisms of DNA and RNA. But do not forget that the human spirit and mind can react to other humans with empathy, and have the capability to project ever more complex ways of life, including human communities. The history of mankind, and the capability to remember and understand multiple actions and their effects on human ideas, is one of the main differences between human and machine, even when you are thinking of intelligent machines.
  • asked a question related to Machine Intelligence
Question
28 answers
Is it good to start with SVM for feature selection?
Relevant answer
Answer
As Marco pointed out, SVMs can be used in a wrapper approach for feature selection. And I fully agree that, while this is fine for small-scale problems, it quickly becomes intractable for bigger tasks. A well-known embedded feature selection method using SVMs is to use the L1-norm (or approximate L0-"norm") weight regularizer in linear SVMs, which leads to a sparse weight vector whose non-zero components are associated with the features of interest.
The trade-off parameter can be set via cross validation. This can be very fast for even large-scale problems using fast linear solvers like liblinear.
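A hedged sketch of that embedded approach in scikit-learn: an L1-penalised linear SVM whose zeroed weights drop features, with the trade-off parameter C chosen by cross-validation on synthetic data.

# Sketch: embedded feature selection with an L1-penalised linear SVM (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# L1 penalty -> sparse weight vector; liblinear handles this efficiently.
svm = LinearSVC(penalty="l1", dual=False, max_iter=10000)
grid = GridSearchCV(svm, {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5).fit(X, y)

best = grid.best_estimator_
kept = np.flatnonzero(np.abs(best.coef_).ravel() > 1e-8)   # features with non-zero weight
print("best C:", grid.best_params_["C"], "| features kept:", kept)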