Science topics: Machine Intelligence
Questions related to Machine Intelligence
  • asked a question related to Machine Intelligence
Question
16 answers
Since the importance of Machine Learning (ML) is significantly increasing, let's share our opinions, publications, or future applications on Optical Wireless Communication.
Thank you!
Relevant answer
Answer
Our paper is related to localization in underwater visible light communication (UVLC) using neural networks. I believe localization using neural networks or deep learning is a hot topic in UVLC. There are many papers on the indoor environment, but more researchers are being attracted to underwater communications, as the optical domain performs far better than RF in water, across different types of water (sea, fresh, or turbid). Introducing blockage and dynamic motion is a challenge for localization and transmission in UVLC.
  • asked a question related to Machine Intelligence
Question
3 answers
  1. In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane (simple linear regression) by choosing the median of the slopes of all lines through pairs of points. Many journals have applied the Sen slope to find the magnitude and direction of a trend.
  2. It has also been called Sen's slope estimator, slope selection, the single median method, the Kendall robust line-fit method, and the Kendall–Theil robust line.
  3. The major advantage of the Theil–Sen slope is that the estimator can be computed efficiently and is insensitive to outliers. It can be significantly more accurate than non-robust simple linear regression (least squares) for skewed and heteroskedastic data, and competes well against least squares even for normally distributed data in terms of statistical power.
My question is: are there any disadvantages/shortcomings of Sen's slope? Are there any assumptions on the time series before applying it? Is there any improved version of this method? Since the method was introduced in 1968, is there any literature where the power of the Sen slope is compared with other non-parametric methods? What inference can be made by applying the Sen slope to a hydrologic time series explicitly? And what about the performance of the Sen slope when applied to an autocorrelated time series such as rainfall or temperature?
Relevant answer
Answer
Two points. First, the approach is pretty similar to what Boscovich proposed in the 1700s, so if dating this type of procedure you can go further back (Farebrother, R. W. (1999). Fitting linear relationships: A history of the calculus of observations 1750–1900. New York, NY: Springer). Second, a disadvantage is that it will be slow for even medium-sized n. Here is a quick coding of Theil–Sen compared with the lm function in R (which is slower for n = 10 because of all the checks it does before estimating the model) and what I assume is similar to the main computational part of it. At n = 10 they are similar-ish, but at n = 100 Theil–Sen is much slower.
> theilsen <- function(x,y){
+ n <- length(x) # assuming no missing
+ slopes <- {}
+ for (i in 1:(n-1))
+ for (j in (i+1):n)
+ slopes <- c(slopes, (y[i]-y[j])/(x[i]-x[j]))
+ beta1 <- median(slopes[is.finite(slopes)])
+ beta0 <- median(y - beta1*x)
+ return(list(beta0=beta0,beta1=beta1))
+ }
> lmb <- function(x,y) solve(t(x) %*% x) %*% t(x) %*% y
> library(microbenchmark)
> x <- rnorm(10); y <- rnorm(10)
> microbenchmark(theilsen(x,y),lm(x~y),lmb(x,y))
Unit: microseconds
           expr      min        lq      mean    median        uq       max neval cld
 theilsen(x, y)  249.101  271.3515  764.2949  303.4510  373.6505 42047.800   100  ab
      lm(x ~ y) 1222.101 1293.1510 1496.4859 1419.3010 1594.7010  5597.801   100   b
     lmb(x, y)   100.001  103.3010  271.7730  120.2015  186.0010  7302.101   100   a
> x <- rnorm(100); y <- rnorm(100)
> microbenchmark(theilsen(x,y),lm(x~y),lmb(x,y))
Unit: microseconds
           expr       min        lq      mean     median         uq        max neval cld
 theilsen(x, y) 60628.902 75446.151 91017.986 76951.5510 80715.9015 543952.501   100   b
      lm(x ~ y)  1187.001  1377.951  1619.025  1659.1510  1807.2015   2262.701   100   a
     lmb(x, y)    100.600   111.702   176.185   192.4505   215.2015    303.602   100   a
>
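The same estimator can be sketched in a few lines of stdlib Python; `theil_sen` is a hypothetical helper mirroring the R function above, with the same O(n²) pairwise-slope enumeration that explains the benchmark gap.

```python
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    # Median of all pairwise slopes, then the median residual as intercept.
    # The O(n^2) pair enumeration is what makes it slow for large n.
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    beta1 = median(slopes)
    beta0 = median(yi - beta1 * xi for xi, yi in zip(x, y))
    return beta0, beta1
```

For large n, published variants subsample pairs or use randomized median-finding to bring the cost down.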
  • asked a question related to Machine Intelligence
Question
3 answers
Hello
I am a PhD student looking to read some recent, good papers that can help me identify a research topic in RL for control applications. I have been reading through quite a few papers/topics discussing model-free vs. model-based RL, etc. I have not been able to find something yet; maybe I don't understand the field well enough :) .
Just for background: my experience is with Diesel and SI engines, vehicles, and controls.
One of the topics/areas that seems interesting to me is learning with RL in uncertain scenarios, though this might seem too broad to most people.
Another possible area would be RL for connected vehicles, self-driving, etc.
Any help/suggestions are welcome.
Relevant answer
Answer
Combining MARL and safety would be an interesting area.
  • asked a question related to Machine Intelligence
Question
4 answers
We introduce the concept of Proton-Seconds and see that it lends itself to a method of solving problems across a large range of disciplines in the natural sciences. The underpinnings seem to be in 6-fold symmetry, which lends itself to a universal form. We find this presents the Periodic Table of the Elements as a squaring of the circle. It is rather abstract thinking, but just as truth reverses the moment we define it, I think we can treat problem solving this way: as patterns. The idea is that there is nothing we can say is the truth, but we can solve problems through pattern recognition. This manner of problem solving through pattern recognition could be employed in developing deep learning, machine intelligence, and AI for its method of imitating human learning to gain knowledge.
Deleted research item The research item mentioned here has been deleted
Relevant answer
Answer
Okay Stay, but I think that in order for a pattern to exist a theme must recur; therefore it has a characteristic by which it abides, so that is a restriction, or physics in a straitjacket, as Feynman calls it. I think it is much like improvising on a musical instrument: you have to develop your ideas according to a rhythmic cycle. The proton-second is an abstract idea that can be applied to large numbers of particles over macroscopic time periods. Alas, the timescale of a second is characteristic of the proton; in this paper it determines its radius.
  • asked a question related to Machine Intelligence
Question
10 answers
Exploring the similarities and differences between these three powerful machine learning tools (PCA, NMF, and Autoencoder) has always been a mental challenge for me. Anyone with knowledge in this field is welcome to share it with me.
Relevant answer
Answer
In machine learning projects we often run into the curse of dimensionality, where the number of records is not substantially larger than the number of features. This often leads to problems, since it means training a lot of parameters using a scarce data set, which can easily lead to overfitting and poor generalization. High dimensionality also means very long training times. So dimensionality reduction techniques are commonly used to address these issues. It is often true that, despite residing in a high-dimensional space, the feature space has a low-dimensional structure.
Regards,
Shafagat
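All three methods look for that low-dimensional structure. As an illustration only (production libraries use SVD, not this), here is a stdlib Python sketch of PCA's first principal axis found by power iteration on the sample covariance matrix:

```python
def first_principal_axis(X, iters=100):
    # Power iteration on the sample covariance matrix: its leading
    # eigenvector is the direction of maximum variance (PCA axis 1).
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    cov = [[sum(r[a] * r[b] for r in Xc) / (n - 1) for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v
```

A linear autoencoder with squared-error loss learns the same subspace as PCA; NMF differs by constraining factors to be non-negative.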
  • asked a question related to Machine Intelligence
Question
4 answers
Recently, I have been attracted by the paper "Stable learning establishes some common ground between causal inference and machine learning," published in the Nature Machine Intelligence journal. After perusing it, I met with a problem regarding the connection between model explainability and spurious correlation.
I notice that in the paper, after introducing the three key factors (stability, explainability, and fairness) ML researchers need to address, the authors make the further judgement that spurious correlation is a key source of risk. So far I have figured out why spurious correlation can cause ML models to lack stability and fairness; however, it is still unknown to me why spurious correlation can obstruct research on explainable AI. As far as I know, there have been two lines of research on XAI: explaining black-box models and building inherently interpretable models. But I'm wondering if there are concrete explanations of why spurious correlations are such a troublemaker when trying to design good XAI methods.
Relevant answer
Answer
Spurious correlation, or spuriousness, occurs when two factors appear causally related to one another but are not. The appearance of a causal relationship is often due to similar movement on a chart that turns out to be coincidental or caused by a third "confounding" factor.
A more data-driven approach to diagnosing spurious correlation is to use statistical techniques to examine the residuals. If the residuals exhibit autocorrelation, this suggests that some key variable may be missing from the analysis.
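The confounding case can be shown with a toy simulation: a hidden variable z drives both x and y, so any attribution method will credit x for predicting y even though neither causes the other. A stdlib Python sketch (variable names are illustrative):

```python
import random

def pearson(a, b):
    # Plain Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(0)
# Hidden confounder z drives both x and y; x never causes y.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]
r = pearson(x, y)  # strongly positive despite no causal x -> y link
```

An explanation built from such a correlation would confidently report the wrong mechanism, which is one reading of why spurious correlation troubles XAI.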
  • asked a question related to Machine Intelligence
Question
10 answers
Can anyone suggest any ensembling methods for the outputs of pre-trained models? Suppose there is a dataset containing cats and dogs, and three pre-trained models are applied, i.e., VGG16, VGG19, and ResNet50. How would you apply ensembling techniques: bagging, boosting, voting, etc.?
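Of the options named, soft voting is the simplest to bolt onto already-trained networks: average each model's class-probability vector and take the argmax. A hedged stdlib sketch (the probability vectors stand in for VGG16/VGG19/ResNet50 softmax outputs):

```python
def soft_vote(prob_vectors):
    # Average the per-class probabilities from several models,
    # then pick the class with the highest mean probability.
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Weighted averaging (weighting each model by its validation accuracy) and hard voting on the per-model argmaxes are one-line variants of the same idea.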
  • asked a question related to Machine Intelligence
Question
4 answers
I want to check the homogeneity of a rainfall time series and to apply the following techniques. Is there any R package available on CRAN for running the following tests?
  • The von Neumann Test
  • Cumulative Deviations Test
  • Bayesian Test
  • Dunnett Test
  • Bartlett Test
  • Hartley Test
  • Link-Wallace Test
  • Tukey Test for Multiple Comparisons
In the XLSTAT software, four homogeneity tests are present. Is there any other software where all of these homogeneity tests are available?
Relevant answer
Answer
Dear @Kabbilawsh Peruvazhuthi,
In R, you can use several packages:
  • climtrends package: the von Neumann test, the Cumulative Deviations Test, and other homogeneity tests such as SNHT, Buishand, and Pettitt
  • DescTools package: the Dunnett Test
  • stats package (base R): TukeyHSD() and bartlett.test(x, …)
  • multcompView package: for displaying multiple-comparison results
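For intuition, the first test in the list reduces to a one-screen computation. A stdlib Python sketch of the von Neumann ratio in its simple unnormalized form (roughly 2 for a large homogeneous series; trends or shifts push it below 2):

```python
def von_neumann_ratio(x):
    # Sum of squared successive differences over the sum of squared
    # deviations from the mean; approximately 2 for an uncorrelated
    # (homogeneous) series, well below 2 under a trend or level shift.
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den
```

The R packages above apply the properly normalized statistic with its critical values; this sketch only shows the quantity being tested.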
  • asked a question related to Machine Intelligence
Question
16 answers
Like other meta-heuristic algorithms, some algorithms tend to be trapped by low diversity, local optima, and unbalanced exploitation ability. Common improvement goals include:
1- Enhance its exploratory and exploitative performance.
2- Overcome premature convergence (increase the fast convergence) and ease of falling (trapped) into a local optimum.
3- Increase the diversity of population and alleviate the prematurity convergence problem
4- The algorithm suffers from an immature balance between exploitation and exploration.
5- Maintain the diversity of solutions during the search, so that the tendency of stagnation towards the sub-optimal solutions can be avoided and the convergence rate can be boosted to obtain more accurate optimal solutions.
6- Slow convergence speed, inability to jump out of local optima and fixed step length.
7- Improve its population diversity in the search space.
Relevant answer
Answer
Like Mr. Joel Chacón, I also think the question is problem-dependent; there is no exact answer.
  • asked a question related to Machine Intelligence
Question
19 answers
What is the boundary between the tasks that need human interventions and the tasks that can be fully autonomous in the domain of civil and environmental engineering? What are ways of establishing a human-machine interface that combines the best parts of human intelligence and machine intelligence in different civil and environmental engineering problem-solving processes? Any tasks that can never be autonomous and need civil and environmental engineers? Coordinating international infrastructure projects? Operating future cities with many interactions between building facilities? We would love to learn from you about your existing work and thoughts in this broad area and hope we can build the future of humans and civil & environmental engineering together.
Please see this link for an article that serves as a starting point for this discussion initiated by an ASCE task force:
Relevant answer
Answer
You are most welcome dear Pingbo Tang .
Wish you the best always.
  • asked a question related to Machine Intelligence
Question
9 answers
[Information] Special Issue - Intelligent Control and Robotics
Relevant answer
Answer
Thanks for sharing.
  • asked a question related to Machine Intelligence
Question
5 answers
Dear Researchers,
Does anybody here know whether having a paper accepted at the 9th Machine Intelligence and Digital Interaction Conference is valid and valuable for a scientific resume?
Thank you in advance.
Relevant answer
Answer
Dear Shima,
This is a very good question: which conferences are valuable to submit to and participate in. In the Machine Learning field, the top conferences include:
Neural Information Processing Systems (NeurIPS, formerly NIPS)
CVPR : IEEE/CVF Conference on Computer Vision and Pattern Recognition
AAAI Conference on Artificial Intelligence
If you need more technical information about the rank and quality of the ML and AI conference please let me know by email. (neshat.mehdi@gmail.com)
  • asked a question related to Machine Intelligence
Question
4 answers
How do you interpret different concepts in NLP for time-series? For example “self-attention” and “positional embeddings” in transformers.
Relevant answer
Answer
There are numerous benefits to utilizing the Transformer architecture over an LSTM RNN. The two chief differences between the Transformer architecture and the LSTM architecture are the elimination of recurrence, which decreases complexity, and the enabling of parallelization, which improves computational efficiency.
Kind Regards
Qamar Ul Islam
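For time series, positional embeddings are what restore order information once recurrence is removed: each time step gets a deterministic vector added to its input. A sketch of the sinusoidal encoding from the original Transformer paper, in stdlib Python:

```python
import math

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional embeddings ("Attention Is All You Need"):
    #   PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    #   PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Self-attention then compares every (encoded) time step against every other in one matrix operation, which is where the parallelism over the sequence comes from.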
  • asked a question related to Machine Intelligence
Question
3 answers
While working on my unsupervised learning project, I found that the widely used Davies-Bouldin (DB) and Calinski-Harabasz (CH) indexes are not working. While looking for the reason, I found that this is because the data I am using is sparse. Are there any other clustering evaluation methods (indexes or metrics) available that work for sparse data?
Relevant answer
Answer
To analyze sparse data, the best approach is to perform dimension reduction, such as principal component analysis or singular value decomposition, to shrink the data or remove the unwanted features that exist in the given database.
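Another practical option, besides reducing dimensionality, is to evaluate clusters with a distance that ignores shared zeros. A stdlib Python sketch using cosine distance on sparse vectors stored as dicts of nonzero entries, which can then be plugged into, e.g., a silhouette computation:

```python
import math

def cosine_distance(a, b):
    # Cosine distance on sparse vectors stored as {index: value} dicts;
    # only overlapping nonzero entries contribute to the dot product.
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)
```

Euclidean distance, which DB and CH are built on, is dominated by the zeros in sparse data; cosine distance depends only on the shared support, which is why it is the usual choice for text and other sparse features.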
  • asked a question related to Machine Intelligence
Question
3 answers
I am working on a project with a Generative Adversarial Network to generate pseudo (minority) samples to expand a dataset, and then using that expanded dataset to train my model for fault detection in machinery.
I have already tried my approach with 2 machine temperature sensor datasets (containing timestamps and the temperature at each timestamp).
I am looking for a dataset with a similar structure (i.e. a timestamp with one sensor reading), for example, a current sensor with time or a pressure sensor with time.
I have searched Kaggle, but most datasets on Kaggle are multivariate.
Where can I find univariate time-series data for fault detection in machinery?
#machinelearning #univariatedata #timeseriesdata #machinerydata #faultanalysis #faultdetection #anomalydetection #GAN #neuralnetwork
Relevant answer
Answer
You can check the following bearing datasets (vibration signals vs. time):
PS: both datasets are treated in this paper:
prognostic-data-repository/
Wind turbine high speed shaft data: https://www.mathworks.com/help/predmaint/ug/wind-turbine-high-speed-bearing-prognosis.html
  • asked a question related to Machine Intelligence
Question
6 answers
A lot of companies are investing a lot of their energy, money, and time into digitalization, and UX is at the forefront, leading the charge at some of these companies. What are the hottest topics of research in UX these days?
#uxdesign #uxresearch #ux #research #ai #innovation #artificialintelligence #machinelearning #datascience #deeplearning #ml #science #futureofbusiness #futureofai #futureinsights #futureoftech
Relevant answer
Cloud-based tools like Figma enable great, quick, collaborative, real-time teamwork. Perfect for UX/UI designers, writers, and developers, and for testing.
  • asked a question related to Machine Intelligence
Question
3 answers
As far as I know, machine intelligence in the engineering field has basically produced two research approaches for realizing data-driven modeling and analysis of complex systems and solving prediction or diagnosis problems: 1) proposing advanced and complex algorithms with good adaptive ability, and 2) employing simple and effective algorithms with good interpretability, combined with the characteristics of practical engineering problems.
I wonder what colleagues think of these two research approaches, and whether there are any others. For a given research problem, how can we effectively evaluate the innovation of these two approaches?
Relevant answer
Answer
While it is good to take a broad view to understand the field you are working in, your objective for research should be focused on a particular area. I agree with Karim Ibrahim about breaking this into the two categories and choosing one for research purposes, as you have done in your previous publications.
To continue your search, I would focus on the areas of your potential advisors and look for common ground between your interests and theirs.
Regards
  • asked a question related to Machine Intelligence
Question
9 answers
Hello, I'm currently studying the Deriche filter. I've managed to create a Python program for derivative smoothing from this article [1]. As I understand it, Deriche used one α for smoothing in both directions, X and Y, so this filter seems isotropic to me. Is it correct to use two alphas, α_x and α_y, to add anisotropy?
Can someone please explain the relation between α in the Deriche filter and σ in the Gaussian filter? In [1] I found an equation, but I do not understand the symbol П. Is it π? To my mind it's not, because in the article they took α = 0.14 and σ = 10. From these values П = 6.25/1.96 ≈ 3.1887755.
Thank you.
[1]: Deriche, R. (1990). Fast algorithms for low-level vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1), 78–87. doi:10.1109/34.41386
Relevant answer
Answer
Aparna Sathya Murthy Thank you, and sorry for the late reply. I will use two alphas to create an anisotropic filter!
You said that the Deriche filter is a high-pass filter, but Hale in [2] states it's a low-pass filter.
  • asked a question related to Machine Intelligence
Question
11 answers
I am new to the field of neural networks. Currently, I am working on training a CNN model to classify X-ray images into Normal and Viral Pneumonia.
I am facing an issue of constant validation accuracy while training the model. However, with each epoch the training accuracy is getting better and both losses (loss and val loss) are decreasing.
How should I deal with this problem of constant validation accuracy?
Attached below: a screenshot of the epochs.
  • asked a question related to Machine Intelligence
Question
35 answers
Machines being artificially intelligent, could we teach them to love artificially?
Relevant answer
Answer
I don't think that we fully understand the mystery of love well enough to teach it to machines.
  • asked a question related to Machine Intelligence
Question
44 answers
Recently, several works have been published on predictive analytics:
Besides, there is a paper on how to discover a process model using neural networks:
My questions for this discussion are:
  • It seems that the field for machine learning approaches in process mining is not limited to predictions/discovery. Can we formulate the areas of possible applications?
  • Can we use process mining techniques in machine learning? Can we, for example, mine how neural networks learn (in order to better understand their predictions)?
  • If you believe that the subjects are completely incompatible, then, please, share your argument. Why do you think so?
  • Finally, please, share known papers in which: process mining (PM) is applied in machine learning (ML) research, ML is applied in PM research, both PM and ML are applied to solve a problem. I believe, this will be useful for any reader of this discussion.
Relevant answer
Answer
There are actually quite a lot of nice applications of machine learning techniques in the context of business process variant analysis, which is a fairly large subset of the process mining literature.
For example, Folino, Cuzzocrea et al. have done a series of studies on variant analysis (or deviance mining) using various machine learning methods, including ensemble learning and clustering:
We recently conducted a literature survey of methods in the field of variant analysis, many of them based on machine learning techniques:
Related to the above, there is work on Bayesian networks for delay analysis (explanatory rather than predictive):
The above is related to variant analysis and performance mining. But there is also work on anomaly detection in event logs using Bayesian networks:
And using deep learning architectures:
As well as using deep learning models to compute alignments in order to correct anomalies:
And a bit related to the above, there was quite a bit of research on using trace clustering in the context of automated process discovery (e.g. Jochen De Weerdt)
So we can say that process mining and machine learning go well together. One should not forget, though, that BPM and process mining are application-oriented disciplines: their objective is to design approaches to improve business processes. Machine learning, by contrast, is a horizontal discipline; it seeks to develop methods that can be adapted to a broad range of problems/settings. Process mining has tapped a lot into machine learning, but it surely has a lot more to exploit from it.
  • asked a question related to Machine Intelligence
Question
9 answers
Hi everyone,
I am writing this to gather some suggestions for my thesis topic.
I am a student of MSc Quantitative Finance. I am in need of some suggestions from the experienced members for a research topic in Portfolio management.
My expertise is in statistics and empirical analysis. I believe that I will be able to present some good work in the field of portfolio analysis. Currently, I am searching for some good topics where I can apply machine learning or machine intelligence, e.g. for forecasting portfolio performance, or perhaps to assess portfolio optimization strategies.
I will be very grateful for your suggestions and guidance. If it suits you, you can also email me at narendarkumar306@gmail.com
Regards.
Relevant answer
Answer
Topics
  • Behavioral finance
  • Derivatives and options
  • Factors and risk premia; analysis of individual factors/risk premia
  • Fixed income and structured finance
  • International investing
  • Legal/regulatory/public policy
  • Long-term/retirement investing
  • Mutual funds/passive investing/indexing
  • asked a question related to Machine Intelligence
Question
3 answers
Hi all,
To work on a predictive maintenance problem, I need a real data set containing sensor data so that I can train a model to predict or diagnose failures, such as a high-temperature alert.
I would appreciate it if anybody could help me find a real data set.
Thanks
Relevant answer
Answer
Ashutosh Karna Thank you for your response.
My main objective is an oil and gas (Industry 4.0) equipment maintenance use case, and I need temperature, pressure, humidity, or volume-flow sensor data with supervised failures to train a model.
  • asked a question related to Machine Intelligence
Question
13 answers
The article at this link https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/ talks about "A New Approach to Understanding How Machines Think". It says in the intro:
"Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a “translator for humans” so that we can understand when artificial intelligence breaks down."
Are you aware of other research and researchers doing similar work? Can you share links, resources and/or your research on this, please? Thanks!
  • asked a question related to Machine Intelligence
Question
16 answers
What is the difference between artificial intelligence and machine intelligence?
Relevant answer
Answer
Artificial Intelligence (AI) is the broader science of intelligent (smart) machines, while Machine Intelligence, or Machine Learning, is one of the main applications of AI.
You can see all the applications of AI covered by the International Journal of Distributed Artificial Intelligence (IJDAI), a specialized AI journal.
  • asked a question related to Machine Intelligence
Question
3 answers
Current developments in using computer systems to study facial expressions highlight that facial changes reflecting a person's internal emotional states, intentions, or social communications rest on complex visual data processing. Multimedia systems using machine vision and digital image processing techniques are being applied to ground-based or aerial remote detection and recognition of pathological stress conditions, and to shape and color characterization of fruits.
Papers:
M.Z. Uddin, M.M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, and G. Fortino, “Facial expression recognition utilizing local direction-based robust features and deep belief network,” IEEE Access, vol. 5, pp. 4525-4536, 2017.
Y. Wu, T. Hassner, K. Kim, G. Medioni, and P. Natarajan, “Facial landmark detection with tweaked convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
H. Ding, S.K. Zhou, and R. Chellappa, “Facenet2expnet: Regularizing a deep face recognition net for expression recognition,” In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, pp. 118-126, 2017.
  • asked a question related to Machine Intelligence
Question
2 answers
There are a number of computational methods applied to simultaneous translation.
Papers:
M. Rusinol, D. Aldavert, R. Toledo, and J. Llados, “Browsing heterogeneous document collections by a segmentation-free word spotting method,” vol. 22. in Proc. of International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2011, pp. 63–67.
V. Frinken, A. Fischer, R. Manmatha, and H. Bunke, “A novel word spotting method based on recurrent neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 211–224, 2012.
Relevant answer
Answer
Karayaneva, Y., & Hintea, D. (2018, February). Object recognition algorithms implemented on NAO robot for children's visual learning enhancement. In Proceedings of the 2018 2nd International Conference on Mechatronics Systems and Control Engineering (pp. 86-92). ACM.
Upadhyayaa, P., Farooqa, O., & Abidia, M. R. (2018). Block Energy Based Visual Features Using Histogram Of Oriented Gradient For Bimodal Hindi Speech Recognition. Procedia Computer Science, 132, 1385-1393.
Kaur, H., & Kumar, M. (2018). A comprehensive survey on word recognition for non-Indic and Indic scripts. Pattern Analysis and Applications, 1-33.
Benmoussa, M., & Mahmoudi, A. (2018, April). Machine learning for hand gesture recognition using bag-of-words. In Intelligent Systems and Computer Vision (ISCV), 2018 International Conference on (pp. 1-7). IEEE.
Junejo, I., Dexter, E., Laptev, I., & Perez, P. (2011). View-independent action recognition from temporal self-similarities. IEEE transactions on pattern analysis and machine intelligence.
  • asked a question related to Machine Intelligence
Question
24 answers
What do you think: when will artificial intelligence (AI) be smarter than humans, if ever? Can you predict it, and if yes, when will it approximately happen, in your opinion?
You can also vote in poll at:
Relevant answer
Answer
Developing a true test of the consciousness of Artificial Intelligence would be difficult, but the truest measure is that fully self-programming AIs are virtually unknown. The self-modeling, self-generating capacities of humanity are of definite interest for generating an AI with greater adaptability across a broad set of circumstances. That, in my opinion, is the next threshold to cross in Artificial Intelligence. Another difficulty with AI is: how do you define an Artificial Intelligence without having a fully accurate model of human intelligence? Our adaptability is our best-defined feature as humans, so adaptive artificial intelligence may be a better goal than strong artificial intelligence, at least until a clearer picture emerges of more general features such as consciousness.
  • asked a question related to Machine Intelligence
Question
3 answers
I have predicted multiple features using a neural net model and have found the error vector for each (E = P - A). Now, to determine whether a particular entry is an anomaly or not, I need to combine those multiple error vectors into a single one so as to set a threshold. What would be the best technique for that?
Relevant answer
Answer
You can take the average error and select the model whose error is nearest the average. You can also use ensemble techniques. Please try the link:
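One common way to collapse a per-feature error vector into a single anomaly score, sketched in stdlib Python: z-normalise each feature's error over the dataset, then take the per-sample L2 norm (an assumption-laden simplification; Mahalanobis distance is the full-covariance version, and the threshold is then set on this one score):

```python
import math

def anomaly_scores(errors):
    # errors: list of per-sample error vectors (one entry per feature).
    # Z-normalise each feature's error, then take the L2 norm per sample.
    n, d = len(errors), len(errors[0])
    means = [sum(e[j] for e in errors) / n for j in range(d)]
    stds = [max(1e-12,
                (sum((e[j] - means[j]) ** 2 for e in errors) / n) ** 0.5)
            for j in range(d)]
    return [math.sqrt(sum(((e[j] - means[j]) / stds[j]) ** 2
                          for j in range(d)))
            for e in errors]
```

Normalising first keeps one large-scale feature from drowning out the others when the errors are combined.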
  • asked a question related to Machine Intelligence
Question
3 answers
The term was apparently introduced by John Alan Robinson in a 1970 paper in the Proceedings of the Sixth Annual Machine Intelligence Workshop, Edinburgh, 1970, entitled "Computational Logic: The Unification Computation" (Machine Intelligence 6:63-72, Edinburgh University Press, 1971). The expression is used in the second paragraph with a footnote claiming that *computational logic* (the emphasis is in the paper) is "surely a better phrase than 'theorem proving', for the branch of artificial intelligence which deals with how to make machines do deduction efficiently". This sounds like coining the term; no reference to a previous use is mentioned. Is anybody aware of a previous use of "computational logic" by someone else?
Relevant answer
Answer
Thanks, Surender! In Section 13 of Robinson's CL 2000 paper, he confirms that he introduced the term in the 1970 paper I found.
  • asked a question related to Machine Intelligence
Question
15 answers
I have a dataset which contains both normal and abnormal (counter data) behavior.
Relevant answer
Answer
You can use clustering algorithms such as
1. K-means/K-medoids
2. Fuzzy c-means
3. Partition-based clustering, etc., for the classification issue.
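The first option in the list can be sketched in stdlib Python; this is Lloyd's algorithm (plain k-means), with the idea that normal and abnormal counter behavior should fall into separate clusters:

```python
import random

def kmeans(points, k, iters=25, seed=0):
    # Lloyd's algorithm on tuples; stdlib-only sketch.
    rng = random.Random(seed)
    centers = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl))
                   if cl else centers[idx]
                   for idx, cl in enumerate(clusters)]
    return centers
```

After fitting, points far from every center (or points in the smaller cluster) can be flagged as the abnormal class.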
  • asked a question related to Machine Intelligence
Question
3 answers
What is relation between Machine Intelligence and Smart Networks?
Relevant answer
Answer
Artificial Intelligence has been around for a long time – the Greek myths contain stories of mechanical men designed to mimic our own behavior. Very early European computers were conceived as “logical machines” and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.
As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on mimicking human decision making processes and carrying out tasks in ever more human ways.
Artificial Intelligences – devices designed to act intelligently – are often classified into one of two fundamental groups – applied or general. Applied AI is far more common – systems designed to intelligently trade stocks and shares, or manoeuvre an autonomous vehicle would fall into this category.
Generalized AIs – systems or devices which can in theory handle any task – are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it’s really more accurate to think of it as the current state-of-the-art.
The Rise of Machine Learning
Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has.
One of these was the realization – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.
The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.
Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.
Neural Networks
The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.
A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain.
Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
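That feedback loop can be made concrete with the simplest trainable unit, a perceptron whose weights are nudged only when it is told its decision was wrong. This toy sketch (learning the logical OR function) is added for illustration and is not part of the original article:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Single-layer perceptron: feedback (right/wrong) adjusts the weights."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # feedback loop: change the weights only when the decision was wrong
            update = lr * (target - pred)
            w += update * xi
            b += update
    return w, b

# learn logical OR from four labelled examples
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```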
Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information, as naturally as we would with another human being. To this end, another field of AI – Natural Language Processing (NLP) – has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.
NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.
A Case Of Branding?
Artificial Intelligence – and in particular today ML certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So, it’s important to bear in mind that AI and ML are something else … they are products which are being sold – consistently, and lucratively.
Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it’s possible that it started to be seen as something that’s in some way “old hat” even before its potential has ever truly been achieved. There have been a few false starts along the road to the “AI revolution”, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here-and-now, to offer.
The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In my next piece on this subject I go deeper – literally – as I explain the theories behind another trending buzzword – Deep Learning.
Bernard Marr is a best-selling author & keynote speaker on business, technology and big data. His new book is Data Strategy.
  • asked a question related to Machine Intelligence
Question
5 answers
Because this is an Anomaly Detection problem.
So when you do the classification, in the training stage, do you use only one-class-labeled data, or both classes of data for the anomaly?
Relevant answer
Dear Zhiliu
Most classification algorithms were designed for the binary-class classification problem and were then extended to deal with multi-class classification. So, if your problem is binary-class, use a binary classifier; otherwise use a multi-class one.
Regards,
  • asked a question related to Machine Intelligence
Question
6 answers
As the number of samples used to train the neural network increases, the efficiency of the system decreases. Can anyone tell me why this happens?
Relevant answer
Answer
How can we initialise the weights? Can you explain with a small example?
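As a concrete example, one common scheme is Glorot/Xavier uniform initialisation, which draws weights from a range chosen so the activation variance stays roughly constant across layers (other schemes exist; the layer sizes below are made up):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform init: draw weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# weight matrix for a layer with 4 inputs and 3 units
W = glorot_uniform(4, 3)
```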
  • asked a question related to Machine Intelligence
Question
3 answers
For detecting and predicting spyware that steals data from the system without the user's awareness.
  • asked a question related to Machine Intelligence
Question
15 answers
Tweet interactions over the 3-day risk conference (CCCR2016 at Cambridge University), including fear of AI and robotics, show some resistance to a '10-commandments'-type ethical template for robotics.
Ignoring the religious overtones, and beyond Asimov, aren't similar rules encoded into robotics necessary to ensure future law-abiding artificial intellects?
Mathematicians and philosophers alone are not the solution to ensuring non-threatening AI. Diverse teams of anthropologists, historians, linguists, brain scientists, and others, working collaboratively, could design ethical and moral machines.
Relevant answer
Answer
Encoding robot ethics through sets of human-designed rules may not be computationally tractable, and as Asimov shows, it's incredibly difficult to create a small set of ethical rules to follow without encountering situations in which those rules do not play out as intended. 
You may be interested in this recent paper from our laboratory in which recent approaches to automatically learning ethical principles through Inverse Reinforcement Learning are critiqued: https://hrilab.tufts.edu/publications/aaai17-alignment.pdf
A quote from that paper seems directly relevant to your question: "The idea of training a system on data (either supervised or unsupervised) has captured more and more attention as a way to understand how ethics and AI might best function in concert. Coding ethical values 'by hand,' in the manner of many other traditional forms of coding – seems destined for the lesser task of lending basic 'scaffolding' within which machine learning can operate (Tanz 2016)."
I also highly recommend the book "Robot Ethics" (https://www.amazon.com/Robot-Ethics-Implications-Intelligent-Autonomous/dp/026252600X) which presents several approaches to robot ethics, and discusses some of the ethical challenges faced by robots. 
  • asked a question related to Machine Intelligence
Question
2 answers
My topic of PhD research is "Computer Aided Detection of Prostate Cancer from MRI Images using Machine Intelligence Techniques". From where do I have to start prostate segmentation? Does registration has to be done before segmentation or is it optional? Are there any open source codes available for learning the existing methods of prostate segmentation?
Relevant answer
Answer
Hi Bejoy,
If you are able to post an example image or dataset, we may be able to put together a recipe file for you in our new extended free trial software MIPAR (http://MIPAR.us) that should at the very least make you aware of the processing techniques involved in prostate segmentation if you need to track down open source codes yourself.
Cheers,
John
  • asked a question related to Machine Intelligence
Question
31 answers
Or are they fed instructions only to follow?
Machine education and computational learning theory employ different models of learning, using smart algorithms to make machines intelligent and learn faster. But in theory, do machines actually learn "anything"? They are fed instructions which they follow; whether associative, supervised, reinforcement, or adaptive learning, or exploratory data mining, these are all information (instructions) fed into computers. There is no involvement of motivation or synthesis of learned materials.
What's the precise definition of machine learning?
Relevant answer
Answer
Surely they learn. In a technical sense, learning is nothing more than the changing of a system's state in order to do something better.
What 'better' means is a matter of philosophy.
Regards,
Joachim
  • asked a question related to Machine Intelligence
Question
3 answers
My team and I have been assigned to produce a layout of a production line of a manufacturing company using WITNESS 14. The layout of the production line has been done. However, we are currently having a problem allocating the labour to take the parts from the shelf and move them to the machines. We have been told that some coding is required to control the actions of the labour. Enclosed is the layout of the production line we drew for this project.
Thanks for the reply.
Relevant answer
Answer
  • asked a question related to Machine Intelligence
Question
3 answers
Does anyone know or have the triple feature extraction function code in Matlab?
It was proposed by Alexander Kadyrov, "The Trace Transform and Its Application", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001.
Relevant answer
Answer
Hi,
I have attached a paper that explains the algorithm for extracting triple features with the trace transform; you can also email the author to obtain the code.
Regards
  • asked a question related to Machine Intelligence
Question
185 answers
Miguel Nicolelis and Ronald Cicurel claim that the brain is relativistic and cannot be simulated by a Turing machine, which is contrary to well-marketed ideas of simulating/mapping the whole brain: https://www.youtube.com/watch?v=7HYQsJUkyHQ  If it cannot be simulated on digital computers, what is the solution to understanding the brain's language?
Relevant answer
Answer
 Dear friends, Dorian, Dick, Roman, Mario, et al.,
            There is hope (Haykin and Fuster, Proc. IEEE, 102, 608-628, 2014).  But first we have to modify our computer and some of our traditional ideas about the brain. With respect to both, here I offer humbly some of my views after half a century of working in cognitive neuroscience.  I shall be brief and cautious.  For an account of empirical evidence, read my “Cortex and Mind”  (Oxford, 2003).
            1.  Alas, the computer cannot be only digital; it must also be analog.  Almost all cognitive operations in the brain are based on analog transactions at many levels (membrane potentials, spike frequencies, firing thresholds, metabolic gradients, dendritic potentials, neurotransmitter concentrations, synaptic weights, etc., etc.). Further, the computer must be able to compute and work with probabilities, because cognition is largely probabilistic in the Bayesian sense, which means that our computer must also have a degree of plasticity.
            2.  The computer must also have distributed memory.  In the brain, especially the cortex, cognitive information is contained in a complex system of distributed, interactive and overlapping neuronal networks formed by Hebbian rules by association between temporally coincident inputs (i.e., sensory stimuli or inputs from other activated networks).  The cognitive “code” is therefore essentially relational or relativistic, and is defined by connective structure, by associations of context and temporal coincidence.  That is why, theoretically, connectionism and the connectome make some sense.
            3.  It is true that the soma of a neuron contains “memory”: in the mitochondria.  But that is genetic memory (what I call “phyletic memory,” memory of the species), some of which was acquired in evolution.  It is important for brain development and for the function of primary sensory and motor systems.  It is also important for regeneration after injury.  Further, it is the ground-base on which individual cognitive memory will be formed. But the latter consists of more or less widely distributed cortical networks or “cognits” (J. Cog. Neurosci. 21, 2047–2072, 2009).  These overlap and interact to a large extent, whereby a neuron or group of neurons practically anywhere in the cortex can be part of many networks, thus many memories or items of knowledge.  This is trouble for the connectome which, if ever comes to fruition, will be vastly more complex than the genome.
            4.  Our present tools to define the structure, let alone the dynamics, of the connectome appear rather inadequate to deal with those facts and hypotheses.   Consider DTI (diffusor tensor imaging), one of those tools presently in fashion and widely used to trace neural connections.  It is based on the analysis of the orientation of water molecules in a magnetic field.  Therefore, it can successfully visualize nerve fibers with high water content, such as myelinated fibers and some large unmyelinated ones.  But the method (I dub it “water-based paint”) is good for tractography, for visualizing large, fast conducting fibers, but not for the fine connective stuff that defines memory networks.
            5.  What’s more, those networks change all the time, even during sleep.  In sum, it is difficult to imagine a dynamic connectome that would instantiate the vicissitudes and idiosyncrasies of our thinking, remembering, perceiving, calculating and predictive brain.
            Some of this may be wrong.  But that’s the way I see it, and may be useful to model the real brain.  Cheers, Joaquín
  • asked a question related to Machine Intelligence
Question
16 answers
Experiments such as the study of the Relationship between Weather Variables and Electric Power Demand inside a Smart Grid/Smart World Framework correlated the existing relationship between energy demand and climatic variables, in order to make better demand forecasts http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3478798/. The study shows interesting correlations, especially between temperature and electrical demand. 
My question to ANN experts is: in time-series microgrid load forecasting, would a self-training or adaptive ANN architecture inherently take into consideration the cross-correlation between weather data and the electrical demand pattern of a (rural) household/microgrid when the weather/climatic data is fed as a separate input (or inputs) into the ANN (or would it be better to try to normalize the real and historical demand profiles)?
Relevant answer
Answer
Hi,
The short and definite answer: yes, neural networks can take weather variables (temperature, wind chill, humidity) into consideration, and also their lags (not everything is contemporaneous). Indeed, neural networks are particularly well suited to forecasting load demand, as the relationship between weather variables and load is non-linear and the modelling is much more adequate than with linear regression.
BTW: the question of "normalising the input" confuses the terminology a bit: in electricity demand forecasting, you often normalise the demand by deseasonalising it (i.e. removing weather effects); in neural networks, normalisation refers to scaling the input variables into a range that the algorithm can learn from (i.e. [0, 1] or [-1, 1]).
So yes, normalise the inputs in the neural-network training sense, but do not preprocess/deseasonalise the data, as this would get rid of the nonlinear interactions between weather and load.
Hope this helps, Sven
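The neural-network sense of normalisation mentioned in the answer (scaling each input into a fixed range) is just a min-max transform. A minimal sketch, with made-up demand values:

```python
import numpy as np

def minmax_scale(x, lo=0.0, hi=1.0):
    """Scale a 1-D series linearly into [lo, hi] for network training."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

load = [310.0, 450.0, 520.0, 380.0]   # hypothetical demand values (kW)
scaled = minmax_scale(load)           # now in [0, 1]
```

The inverse transform is needed at forecast time to map the network output back to physical units.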
  • asked a question related to Machine Intelligence
Question
70 answers
Creation of such a system is one of the main goals of artificial intelligence.
Relevant answer
Answer
Dear Alexey,
Classification can be based on a prototype representing a class but in many cases the data used for learning the classifier are represented as a set of objects with assigned class labels.
If you run a clustering algorithm, you divide the number of input objects into a number of disjoint (typically) sets. So a cluster number (e.g. originating from k-means algorithm) can be used as a data label used to learn classifier (e.g. C4.5).
I think criminal profiling takes into account historical data, at present collected in databases and mined with classical techniques. It would be very hard to assume a priori knowledge and programs. The oldest sources are probably the Bible, Greek tragedies, the Mahabharata, Hamlet, etc.
Best regards
Piotr
  • asked a question related to Machine Intelligence
Question
3 answers
The two-machine flow shop with blocking has been studied for about 50 years. The most famous solution converts it into a TSP problem. Is there another approach to obtaining optimal solutions for this problem?
Relevant answer
Answer
It is NP-hard, so you won't find anything significantly better.  Johnson's rule for the case of makespan without blocking can build up a huge buffer (see reference below) so it is not going to give you a good heuristic. 
Ramudhin, A., Bartholdi, J. J. III, Calvin, J. M., Vande Vate, J. H. and Weiss, G., "A Probabilistic Analysis of 2-Machine Flowshops", Operations Research, 44 No. 6, pp. 899-908, 1996.
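For reference, Johnson's rule for the two-machine flow shop *without* blocking (the makespan case mentioned in the answer) is simple to state and code; the job data below is made up:

```python
def johnsons_rule(jobs):
    """Johnson's rule for the 2-machine flow shop (no blocking):
    jobs is a list of (p1, p2) processing-time pairs; returns a
    makespan-optimal job order as a list of indices."""
    front, back = [], []
    # visit jobs in increasing order of their smaller processing time
    for i, (p1, p2) in sorted(enumerate(jobs), key=lambda t: min(t[1])):
        if p1 <= p2:
            front.append(i)    # schedule as early as possible
        else:
            back.append(i)     # schedule as late as possible
    return front + back[::-1]

order = johnsons_rule([(3, 6), (5, 2), (1, 2), (6, 6)])
```

As the answer notes, the buffer this order builds up can be huge, which is exactly why it fails as a heuristic for the blocking variant.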
  • asked a question related to Machine Intelligence
Question
21 answers
What level of intelligence do machines actually need/possess, and how can this be compared? If the community is to create a Machine Quotient (MQ), how would this be compared to human cognition?
Relevant answer
Answer
IQ tests don't measure "intelligence" (whatever that is); they measure IQ (whatever that is).
The interesting thing about IQ tests in the light of Turing is that they have always tried to factor out linguistic skills, to prevent cultural background and education from confounding the testing of inherent "intelligence." Therefore these tests contain many questions exercising non-verbal pattern matching/prediction. So that is where I would start if I were you and it was specifically "IQ" as opposed to "intelligence" that I wished to measure.
  • asked a question related to Machine Intelligence
Question
3 answers
How do you measure machine intelligence and what level of cognitive processing do machines really need to behave autonomously. Is human level cognition required in ordinary machines or just those working remotely?
Relevant answer
Answer
Hi, your question is very general, so an answer can only be general as well.
I would add that the needed degree of autonomy depends heavily on the application you are targeting. High autonomy would be needed for autonomous transport systems in a public traffic scenario, as the environment is very demanding, whereas in industrial applications the autonomy level can be lower, since special measures or infrastructure components can be used. Second, the economic side: depending on the application, you may not be able to reach "full autonomy", which is expensive (sensors, data processing). Legal constraints also have to be considered.
Drawing on biological systems, I would say that intelligence isn't needed for autonomy, but it makes an autonomous system behave in a "wiser" way.
  • asked a question related to Machine Intelligence
Question
14 answers
Supervised learning handles the classification problem with labeled training data, and semi-supervised learning algorithms aim to improve classifier performance with the help of unlabeled samples. But is there any theory or classical framework for handling training with soft class labels? These soft labels are prior knowledge about the training samples, which may be class probabilities, class beliefs or expert experience values.
Relevant answer
Answer
I disagree here. The question is not how to train a classifier in general and which are the well known courses/softwares to (learn to) do so.
The question,  as far as I understood,  is how to train a classifier when the available supervision at training time are soft class labels in the form of a class conditional probability value for each training example. In a binary classification case, this would be p(x|c=1) for each sample x in your training set conditioned to the class "c" (or its complement p(x|c=0) for the other class).
The fact that a standard logistic regression outputs a model that produces such probability values, once the model is trained and can be used on independent test examples, does not answer how to use such a soft supervision at training time.
In other words, a standard logreg package would typically require as inputs a  training  set in the form of concrete examples "x" together with their hard class labels e.g. (c=1) or (c=0).
Besides, the most standard way of fitting a logreg is by  iteratively-reweighted least squares (IRLS) and it does not look immediate that this specific type of optimization actually "minimize H(p, q), averaged over your training set."  At least, it would deserve some math to make the (doubtful) link or specify under which type of specific distributions p and q, both optimizations would happen to be equivalent.
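One concrete way to use soft labels at training time, distinct from the standard hard-label packages discussed above, is to minimise the soft-target cross-entropy H(p, q) directly by gradient descent, exploiting the fact that the gradient of the mean cross-entropy with respect to the logits is simply q − p. A numpy sketch with made-up data and probabilities:

```python
import numpy as np

def fit_logreg_soft(X, p, epochs=500, lr=0.5):
    """Logistic regression trained on soft labels p = P(c=1 | x):
    minimise the cross-entropy H(p, q) with q = sigmoid(Xw + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        q = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = q - p                      # gradient of mean CE wrt logits
        w -= lr * X.T @ g / len(X)
        b -= lr * g.mean()
    return w, b

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
p = np.array([0.1, 0.3, 0.7, 0.9])    # soft class-1 probabilities
w, b = fit_logreg_soft(X, p)
q = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```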
  • asked a question related to Machine Intelligence
Question
6 answers
How can machine learning techniques play a role in HVAC systems? I am searching for techniques that are used for making efficient HVAC systems. And which issues of HVAC are solved by using these techniques. Need some reliable sources for understanding use and working of machine learning techniques in making HVAC systems.
Relevant answer
Answer
Thank you for your response. I am also basically searching on the use of ANNs in HVAC, but I am unable to find any appropriate ANN model for this purpose. Can you please share some material (literature or project names) on the use of ANNs in HVAC systems?
Thanks
  • asked a question related to Machine Intelligence
Question
5 answers
I am working on Physical Activity Recognition using data acquired from smartphone sensors (gyroscope, magnetometer and accelerometer) and would like to compare different classifiers' performance on the dataset, but I am wondering which evaluation metric would be best to use: True Positive Rate (TPR), False Positive Rate (FPR), Precision, Recall, F-score or overall classification accuracy? This is a six-class problem (6 different activities).
Relevant answer
Answer
Rajeev.
TPR and FPR separately do not tell you anything. The same goes for Precision and Recall. F-score, Equal Error Rate and similar can give you some information, but it is limited and you have to be careful how you use those values.
Look at this previous question:
I'll copy part of my answer for that question here:
You have to first answer a simple question: Do You know what are the costs of the decisions?
If the costs are known, you can calibrate your classifiers (set the operating point), and compare costs of the classifiers.
If you do not know the costs, you may have some insight about a reasonable range of operating points of the target application. Let's say the target application is a surveillance system, and all positive responses have to be shown to a human operator who is able to process one event per minute. In such a case, it makes sense to compare detection rates at one false positive per minute. You can use other measures instead, but it makes sense to keep one type of error fixed and compare the other one.
If you have no knowledge about the possible operating point in the target application, you should show results in full range of possible operating points. Plot ROC, precision-recall, DET or something similar. You can also compute area under ROC or Precision-recall and get a numeric value which reflects performance over the whole range of regimes; however, such aggregation has its own problems.
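For the six-class activity problem in the question, the per-class precision and recall values combine naturally through the confusion matrix; the macro-averaged F-score is one common single-number summary. A small sketch with toy labels (not real activity data):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Per-class precision/recall from the confusion matrix,
    macro-averaged into a single F1 score."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                 # rows = true class, cols = predicted
    f1s = []
    for c in range(n_classes):
        tp = cm[c, c]
        prec = tp / cm[:, c].sum() if cm[:, c].sum() else 0.0
        rec = tp / cm[c, :].sum() if cm[c, :].sum() else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / n_classes

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
score = macro_f1(y_true, y_pred, 3)
```

Macro averaging weights every class equally, which matters when the six activities are imbalanced; micro averaging (pooling all decisions) is the alternative.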
  • asked a question related to Machine Intelligence
Question
1 answer
If we are converting the ECG signal to the frequency domain using an FFT or another periodogram, and then calculating the RR interval from the signal, what will the input format for classification (e.g. SVM, neural network) be? What features should we take as the input?
In the attached paper, the input features were selected as NextRR (RR of the next beat), PrevRR (RR of the previous beat), and RRRatio (RR ratio of the previous to the current beat).
My main problem with this solution is that this way we will classify the signal on the basis of a single beat, because NextRR, PrevRR, and RRRatio exist only for a single beat.
Relevant answer
Answer
From the ECG, or by finger plethysmography, one obtains the R-R intervals, i.e. the tachogram. On this time series one may perform an FFT, obtaining three bands (VLF, LF, HF) for ANS analysis.
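The three-band decomposition mentioned above can be sketched as follows. This assumes the tachogram has already been evenly resampled (here at a hypothetical 4 Hz); the band edges are the standard HRV conventions:

```python
import numpy as np

def band_power(tachogram, fs=4.0):
    """FFT power in the standard HRV bands of an evenly sampled tachogram."""
    x = np.asarray(tachogram, dtype=float) - np.mean(tachogram)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.4)}
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# synthetic check: a 0.25 Hz oscillation should land in the HF band
t = np.arange(256) / 4.0
res = band_power(np.sin(2 * np.pi * 0.25 * t))
```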
  • asked a question related to Machine Intelligence
Question
2 answers
For an SVM trained with SMO on a linear problem, what is the relation between data dimension, data size, training time and memory requirements? Do they increase linearly, or is the relation of a higher degree? Is there any good reference (book or paper) for this question?
Relevant answer
Answer
There are several aspects to this. In the case of linear SVMs, during training you must estimate the vector w and bias b, and this is usually done by solving a quadratic problem. Solving the quadratic problem is not easy (at least in the general case). For instance, testing that you have an optimal solution to the SVM problem involves something in the order of n² dot products, while solving the quadratic problem directly involves inverting the kernel matrix, which has complexity in the order of n³ (where n is the size of your training set). That being said, the time required for linear SVMs to reach a certain level of generalization error actually decreases as training set size increases.
The prediction time is linear in the number of features and constant in the size of the training data.
  • asked a question related to Machine Intelligence
Question
14 answers
.
Relevant answer
Answer
A generic answer as to what is degree of freedom is the minimum number of independent variables required to completely describe a body.
When it comes to mechanism, generally, we are always referring to a rigid body, unless you mean a compliant mechanism.
For a rigid body, it would mean the number of independent coordinates needed to define the position of the body.
In a planar mechanism, a body (typically a link) can translate along the x axis or the y axis, or it can rotate about the z axis, in the xy plane. Hence, to completely define the position of the link, we need to specify the x coordinate, the y coordinate and the angle by which the link is inclined to either the x or y axis. Hence we have three independent variables, hence 3 DOF.
In a spatial mechanism, we would have 6 dof, translation along the three axes and the rotation about them.
So much about the degrees of freedom of an independent body.
Now, when it comes to a mechanism, there is more than one body, and the bodies are no longer independent, since they are joined to each other in a particular fashion. Such a body is called a constrained body, since it no longer has as many degrees of freedom as an independent body.
The constraints are physically applied by connecting two bodies to each other through what is called a joint. Thus a joint allows certain degrees of freedom between the links and constrains some of them. This results in the formation of what is called a kinematic pair.
The kinematic pair may allow only one degree of freedom, called a lower pair, or may allow more degrees of freedom, called a higher pair.
Examples of lower pairs are the pin joint, which allows only planar rotation, and the slider joint, which allows translation in only one direction.
The degree of freedom of a mechanism is the total number of independent variables required to completely define the particular mechanism. This can be found very easily using Grubler's equation.
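For reference, Grubler's equation for a planar mechanism can be written as:

```latex
M = 3(n - 1) - 2 j_1 - j_2
```

where M is the mobility (degrees of freedom) of the mechanism, n is the number of links (including the fixed link), j_1 is the number of lower pairs (one DOF each) and j_2 is the number of higher pairs (two DOF each). For example, a four-bar linkage has n = 4, j_1 = 4, j_2 = 0, giving M = 3·3 − 2·4 = 1.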
  • asked a question related to Machine Intelligence
Question
12 answers
In cognitive computing, humans and machines work together. How will you devise your cognitive computing methodology so that machines and humans can learn from one another?
Relevant answer
Answer
Gokul, I think of the 'meme' [http://en.wikipedia.org/wiki/Meme] as a container.
If the idea is worthy (weighted value) then one meme holder (machine or man) may pass the meme (high in concept, low in data) to another, for consideration. When/If a positive value-exchange occurs between two or more parties which have judged the meme container content exchange to be of interest, the meme can be populated with data by all parties and its 'worthy' value is increased.
This semantic exchange model borrows heavily from the well-proven TCP/IP header implementation: (interest) "Is it for me?", (construct) size, count of pieces, point-of-origin, etc., (content) packet.
The web services metaphor is also applicable.
  • asked a question related to Machine Intelligence
Question
7 answers
Deep Learning, Machine Learning.
Relevant answer
Answer
Deep Learning is a way of learning everything you need for a classification or clustering problem with neural models, but it requires a great deal of computational power to cope with its complexity. If you can provide huge clusters and GPU facilities to your algorithm, it is reasonable to obtain promising results. This is what Google currently does: they devote clusters of computers to learning visual concepts from their image search engine with Deep Learning algorithms. At the other end of the spectrum, a small company or institution cannot muster that much power. Therefore, other methods such as hand-crafted features and kernel machines also need to be considered, even with simple Bag-of-Words features. There are also papers arguing that well-crafted BoW approaches can match deep learning methods on some problems, especially in computer vision. Despite all this criticism, it is certainly the most ground-breaking learning tool of the current era of ML, with its neuroscientific promise as well.
  • asked a question related to Machine Intelligence
Question
3 answers
Today, due to technological advancements, machines are turning out to be more intelligent than human beings. If we compare them, there are several similarities and differences.
Relevant answer
Answer
Humans and machines are quite different. Humans are self-regulated in all aspects of metabolism, surprising us in every aspect of life, including the interconnection mechanisms of DNA and RNA. But do not forget that the human spirit and mind can react to other humans with empathy, with the capability to project ever more complex ways of life, including human communities. The history of mankind, and the capability to remember and understand multiple actions and effects of human ideas, is one of the main differences between humans and machines, even if you think of intelligent machines.
  • asked a question related to Machine Intelligence
Question
28 answers
Is it good to start with SVM for feature selection?
Relevant answer
Answer
As Marco pointed out, SVM can be used in a wrapper approach for feature selection. And I fully agree that, while this is fine for small-scale problems, it quickly becomes intractable for bigger tasks. A well-known embedded feature selection method using SVMs is to use the L1-norm (or approximate L0-"norm") weight regularizer in linear SVMs, which leads to a sparse weight vector whose non-zero components are associated with the features of interest.
The trade-off parameter can be set via cross validation. This can be very fast for even large-scale problems using fast linear solvers like liblinear.
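The embedded L1 approach described above can be imitated in a few lines: hinge loss plus an L1 penalty, trained by subgradient descent with soft-thresholding. This is only a sketch on synthetic data (not the liblinear solver itself); the data generator and all parameter values are made up:

```python
import numpy as np

def l1_linear_svm(X, y, lam=0.1, lr=0.01, epochs=2000):
    """Linear SVM (hinge loss) with an L1 penalty on w, trained by
    subgradient descent; zero weights mark discarded features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        mask = margins < 1
        # hinge-loss subgradient, averaged over the violating samples
        grad = -(X[mask] * y[mask, None]).sum(axis=0) / n
        w -= lr * grad
        # proximal step for the L1 penalty (soft-thresholding)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])   # only features 0 and 1 matter
w = l1_linear_svm(X, y)
selected = np.nonzero(np.abs(w) > 1e-6)[0]
```

The surviving non-zero components of w are the selected features; in practice one would use a fast solver such as liblinear and tune `lam` by cross validation, as the answer suggests.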