Questions related to Machine Intelligence
Since the importance of Machine Learning (ML) is increasing significantly, let's share our opinions, publications, and ideas for future applications of ML in Optical Wireless Communication.
- In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane (simple linear regression) by choosing the median of the slopes of all lines through pairs of points. Many published studies have applied the Sen slope to estimate the magnitude and direction of a trend.
- It has also been called Sen's slope estimator, slope selection, the single median method, the Kendall robust line-fit method, and the Kendall–Theil robust line.
- The major advantage of the Theil–Sen slope is that the estimator can be computed efficiently and is insensitive to outliers. It can be significantly more accurate than non-robust simple linear regression (least squares) for skewed and heteroskedastic data, and competes well against least squares even for normally distributed data in terms of statistical power.
My questions are: are there any disadvantages or shortcomings of Sen's slope? Are there any assumptions on the time series before applying it? Is there any improved version of this method? Since the method was introduced in 1968, is there any literature comparing the power of the Sen slope with other non-parametric methods? What inference can be made by applying the Sen slope to a hydrologic time series specifically? And how does the Sen slope perform when applied to an autocorrelated time series such as rainfall or temperature?
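As context for the questions above, the estimator itself is only a few lines of code. A minimal sketch with made-up data (for real work, `scipy.stats.theilslopes` provides a production version that also returns confidence bounds on the slope):

```python
import numpy as np
from itertools import combinations

def theil_sen_slope(t, y):
    """Theil–Sen estimator: median of the slopes over all pairs of points."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)
              if t[j] != t[i]]
    return np.median(slopes)

# Example: a clean linear trend (slope 2) with one gross outlier
t = np.arange(10, dtype=float)
y = 2.0 * t + 1.0
y[5] = 100.0  # outlier
print(theil_sen_slope(t, y))  # → 2.0 (the outlier does not move the median)
```

Note how the single outlier leaves the estimate untouched, whereas ordinary least squares would be pulled toward it. The all-pairs computation is O(n²); for long series, faster slope-selection algorithms exist.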
I am a PhD student looking for recent good papers that can help me identify a research topic in RL for control applications. I have been reading quite a few papers discussing model-free vs. model-based RL, etc., but have not been able to settle on something, maybe because I don't understand the field deeply enough yet :).
Just for background: my experience is with Diesel and SI engines, vehicles, and controls.
One area that seems interesting to me is learning with RL in uncertain scenarios, though this may seem too broad to most people.
Another possible area would be RL for connected vehicles, self-driving, etc.
Any help or suggestion is welcome.
We introduce the concept of Proton-Seconds and show that it lends itself to a method of solving problems across a large range of disciplines in the natural sciences. The underpinnings seem to be in 6-fold symmetry, which lends itself to a universal form. We find this presents the Periodic Table of the Elements as a squaring of the circle. It is rather abstract thinking, but just as truth reverses the moment we define it, I think we can treat problem solving this way: as patterns. The idea is that there is nothing we can say is the truth, but we can solve problems through pattern recognition. This manner of problem solving through pattern recognition could be employed in developing deep learning, machine intelligence, and AI as a method of imitating human learning to gain knowledge.
Recently, I have been attracted by the paper "Stable learning establishes some common ground between causal inference and machine learning" published in Nature Machine Intelligence journal. After perusing it, I met with a problem regarding the connection between model explainability and spurious correlation.
I notice that in the paper, after introducing the three key factors (stability, explainability, and fairness) that ML researchers need to address, the authors make the further judgement that spurious correlation is a key source of risk. So far I have figured out why spurious correlation can cause ML models to lack stability and fairness; however, it is still unclear to me why spurious correlation can obstruct research on explainable AI. As far as I know, there have been two lines of research on XAI: explaining black-box models and building inherently interpretable models. I'm wondering if there are concrete explanations of why spurious correlations are such a troublemaker when trying to design good XAI methods?
Can anyone suggest ensembling methods for the outputs of pre-trained models? Suppose there is a dataset containing cats and dogs, and three pre-trained models are applied, e.g., VGG16, VGG19, and ResNet50. How would you apply ensembling techniques such as bagging, boosting, or voting?
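One simple option that needs no retraining is soft voting: average the class-probability outputs of the already-trained models (optionally with per-model weights). A minimal sketch, where the probability arrays are made-up stand-ins for the softmax outputs of VGG16, VGG19, and ResNet50:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft voting: average the class-probability outputs of several models,
    then predict the class with the highest averaged probability."""
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    avg = np.average(probs, axis=0, weights=weights)
    return avg.argmax(axis=1), avg

# Hypothetical per-model probabilities for 3 samples, 2 classes (cat=0, dog=1)
p_vgg16    = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_vgg19    = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
p_resnet50 = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9]])

labels, avg = soft_vote([p_vgg16, p_vgg19, p_resnet50])
print(labels)  # → [0 1 1]
```

Hard (majority) voting on the per-model argmax labels, or stacking (training a small meta-classifier on the concatenated probabilities), are the usual next steps if soft voting is not enough.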
I want to check the Homogeneity of a rainfall time series. I want to apply the following techniques. Is there any R package available in CRAN for running the following test?
- The von Neumann Test
- Cumulative Deviations Test
- Bayesian Test
- Dunnett Test
- Bartlett Test
- Hartley Test
- Link-Wallace Test
- Tukey Test for Multiple Comparisons
In the XLSTAT software, four homogeneity tests are available; is there any other software that implements all of the homogeneity tests listed above?
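Several of the listed tests are simple enough to compute directly if no single package covers them all. As an illustration (shown in Python, though the same few lines translate directly to base R), the von Neumann ratio is just the sum of squared successive differences over the sum of squared deviations from the mean:

```python
import numpy as np

def von_neumann_ratio(x):
    """von Neumann ratio N = sum((x[i+1]-x[i])^2) / sum((x[i]-mean)^2).
    For a homogeneous (random) series N is close to 2; values well
    below 2 suggest inhomogeneity/persistence in the series."""
    x = np.asarray(x, dtype=float)
    num = np.sum(np.diff(x) ** 2)
    den = np.sum((x - x.mean()) ** 2)
    return num / den

# Sanity check on synthetic white noise: the ratio should be near 2
rng = np.random.default_rng(0)
r = von_neumann_ratio(rng.normal(size=1000))
print(r)
```

The critical values for a formal test at a given significance level still have to be taken from published tables or obtained by simulation; the snippet only computes the test statistic.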
Like other meta-heuristic algorithms, some algorithms tend to be trapped in low diversity, local optima and unbalanced exploitation ability.
1- Enhance its exploratory and exploitative performance.
2- Overcome premature convergence (increase the fast convergence) and ease of falling (trapped) into a local optimum.
3- Increase the diversity of population and alleviate the prematurity convergence problem
4- The algorithm suffers from an immature balance between exploitation and exploration.
5- Maintain the diversity of solutions during the search, so that the tendency of stagnation towards the sub-optimal solutions can be avoided and the convergence rate can be boosted to obtain more accurate optimal solutions.
6- Slow convergence speed, inability to jump out of local optima and fixed step length.
7- Improve its population diversity in the search space.
What is the boundary between the tasks that need human interventions and the tasks that can be fully autonomous in the domain of civil and environmental engineering? What are ways of establishing a human-machine interface that combines the best parts of human intelligence and machine intelligence in different civil and environmental engineering problem-solving processes? Any tasks that can never be autonomous and need civil and environmental engineers? Coordinating international infrastructure projects? Operating future cities with many interactions between building facilities? We would love to learn from you about your existing work and thoughts in this broad area and hope we can build the future of humans and civil & environmental engineering together.
Please see this link for an article that serves as a starting point for this discussion initiated by an ASCE task force:
Does anybody here know if having accepted a paper in the 9th Machine Intelligence and Digital Interaction Conference is valid and valuable for a scientific resume?
Thank you in advance.
While working on my unsupervised learning project, I found that the widely used Davies–Bouldin (DB) and Calinski–Harabasz (CH) indexes were not working well. Investigating the reason, I found that this is because the data I am using are sparse. Are there any other clustering evaluation methods (indexes or metrics) that work for sparse data?
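One option worth trying is the silhouette score with cosine distance, which scikit-learn can evaluate directly on a sparse matrix (cosine similarity is generally more informative than Euclidean distance for sparse, high-dimensional data). A minimal sketch on random sparse data; the shapes, density, and cluster count are placeholders:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical sparse data: 200 samples, 1000 mostly-zero features (CSR format)
X = sparse_random(200, 1000, density=0.02, random_state=42, format="csr")

# KMeans accepts sparse input directly
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# Silhouette in [-1, 1]; higher is better. Cosine distance handles sparsity well.
score = silhouette_score(X, labels, metric="cosine")
print(score)
```

On purely random data the score will be near zero, as expected; on real clustered data it should rise toward 1 for good partitions.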
I am working on a project with a Generative Adversarial Network to generate pseudo (minority-class) samples to expand the dataset, and then using that expanded dataset to train my model for fault detection in machinery.
I have already tried my project with 2 machine temperature sensor datasets (containing timestamp and temperature data at that timestamp).
I am looking for a dataset with a similar structure (i.e., a timestamp with one sensor reading), for example, a current sensor with time or a pressure sensor with time.
I have searched Kaggle but most datasets on Kaggle are multivariate.
Where can I find Univariate Time-series data for fault detection in machinery?
#machinelearning #univariatedata #timeseriesdata #machinerydata #faultanalysis #faultdetection #anomalydetection #GAN #neuralnetwork
A lot of companies are investing much of their energy, money, and time into digitalization, and UX is at the forefront, leading the charge at some of these companies. What are the hottest topics of research in UX these days?
#uxdesign #uxresearch #ux #research #ai #innovation #artificialintelligence #machinelearning #datascience #deeplearning #ml #science #futureofbusiness #futureofai #futureinsights #futureoftech
As far as I know, machine intelligence in engineering has basically followed two research ideas to realize data-driven modeling and analysis of complex systems and to solve prediction or diagnosis problems: 1) proposing advanced, complex algorithms with good adaptive ability, and 2) employing simple, effective algorithms with good interpretability, combined with the characteristics of practical engineering problems.
I wonder what colleagues think of these two research ideas, and whether there are any others. For a given research problem, how can the innovation of these two ideas be effectively evaluated?
Hello, I'm currently studying the Deriche filter. I've managed to create a Python program for derivative smoothing from this article. As I understand it, Deriche used one α for smoothing in both directions, X and Y, so this filter seems isotropic to me. Is it correct to use two alphas, α_x and α_y, to add anisotropy?
Can someone please explain the relation between α in the Deriche filter and σ in the Gaussian filter? In the referenced article I found an equation picture, but I do not understand the symbol П. Is it π? To my mind it is not, because in the article they took α = 0.14 and σ = 10. From these values П = 625/196 ≈ 3.1887755.
Reference: Deriche, R. (1990). Fast algorithms for low-level vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1), 78–87. doi:10.1109/34.41386
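On the two-alphas question: since the Deriche smoother is applied separably (one 1-D recursive pass per axis), using a different α per axis does yield anisotropic smoothing. The sketch below uses a simplified first-order recursive exponential smoother, not Deriche's full recursion, purely to illustrate the separable per-axis idea; smaller α means wider smoothing, analogous to larger σ:

```python
import numpy as np

def exp_smooth_1d(x, alpha):
    """First-order causal + anticausal recursive exponential smoother
    (a simplified stand-in for Deriche's recursive smoother).
    Impulse response decays roughly like exp(-alpha * |d|)."""
    a = np.exp(-alpha)
    y = np.empty(len(x), dtype=float)
    acc = x[0]
    for i, v in enumerate(x):          # causal (left-to-right) pass
        acc = (1 - a) * v + a * acc
        y[i] = acc
    acc = y[-1]
    for i in range(len(x) - 1, -1, -1):  # anticausal (right-to-left) pass
        acc = (1 - a) * y[i] + a * acc
        y[i] = acc
    return y

def smooth_anisotropic(img, alpha_x, alpha_y):
    """Separable smoothing with a different alpha per axis -> anisotropic."""
    out = np.apply_along_axis(exp_smooth_1d, 1, img, alpha_x)  # along x (rows)
    out = np.apply_along_axis(exp_smooth_1d, 0, out, alpha_y)  # along y (cols)
    return out

# Impulse test: the response is elongated along x (small alpha_x = wide)
img = np.zeros((21, 21)); img[10, 10] = 1.0
out = smooth_anisotropic(img, alpha_x=0.5, alpha_y=2.0)
```

Running this on the impulse image shows a response much wider along x than along y, confirming that per-axis alphas produce anisotropy.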
I am new to the field of neural networks. Currently, I am working on training a CNN model to classify X-ray images as Normal or Viral Pneumonia.
I am facing an issue of constant validation accuracy while training the model. However, with each epoch the training accuracy improves, and both losses (training loss and validation loss) are decreasing.
How should I deal with this problem of constant validation accuracy?
Attached below: a screenshot of the training epochs.
Recently, several works have been published on predictive analytics:
- Prediction-based Resource Allocation using LSTM and Minimum Cost and Maximum Flow Algorithm by Gyunam Park and Minseok Song (https://ieeexplore.ieee.org/abstract/document/8786063)
- Using Convolution Neural Networks for Predictive Process Analytics by Vincenzo Pasquadibisceglie et al. (https://ieeexplore.ieee.org/document/8786066)
Besides, there is a paper on how to discover a process model using neural networks:
My questions for this discussion are:
- It seems that the field for machine learning approaches in process mining is not limited to prediction/discovery. Can we formulate the areas of possible applications?
- Can we use process mining techniques in machine learning? Can we, for example, mine how neural networks learn (in order to better understand their predictions)?
- If you believe that the subjects are completely incompatible, then, please, share your argument. Why do you think so?
- Finally, please, share known papers in which: process mining (PM) is applied in machine learning (ML) research, ML is applied in PM research, both PM and ML are applied to solve a problem. I believe, this will be useful for any reader of this discussion.
I am writing this to gather some suggestions for my thesis topic.
I am a student of MSc Quantitative Finance. I am in need of some suggestions from the experienced members for a research topic in Portfolio management.
My expertise is in statistics and empirical analysis. I believe that I will be able to present some good work in the field of portfolio analysis. Currently, I am searching for good topics where I can apply machine learning or machine intelligence, e.g., for forecasting portfolio performance, or perhaps to assess portfolio optimization strategies.
I will be very grateful for your suggestions and guidance. If it suits you, you can also email me at email@example.com
To work on a predictive-maintenance problem, I need a real dataset containing sensor data so that I can train a model to predict or diagnose failures, such as a high-temperature alert.
I would appreciate it if anybody could help me to get a real data set.
Article at this link https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/ talks about "A New Approach to Understanding How Machines Think". It says in intro:
"Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a “translator for humans” so that we can understand when artificial intelligence breaks down."
Are you aware of other research and researchers doing similar work? Can you share links, resources and/or your research on this, please? Thanks!
Current developments in using computer systems to study facial expressions emphasize that facial changes reflecting a person's internal emotional states, intentions, or social communications rest on complex visual data processing. Multimedia systems using machine vision and digital image processing techniques are being applied to ground-based or aerial remote sensing, detection and recognition of pathological stress conditions, and shape and color characterization of fruits.
M.Z. Uddin, M.M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, and G. Fortino, “Facial expression recognition utilizing local direction-based robust features and deep belief network,” IEEE Access, vol. 5, pp. 4525-4536, 2017.
Y. Wu, T. Hassner, K. Kim, G. Medioni, and P. Natarajan, “Facial landmark detection with tweaked convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
H. Ding, S.K. Zhou, and R. Chellappa, “Facenet2expnet: Regularizing a deep face recognition net for expression recognition,” In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, pp. 118-126, 2017.
There are a number of computational methods applied to simultaneous translation.
M. Rusinol, D. Aldavert, R. Toledo, and J. Llados, “Browsing heterogeneous document collections by a segmentation-free word spotting method,” in Proc. of the International Conference on Document Analysis and Recognition (ICDAR), IEEE, 2011, pp. 63–67.
V. Frinken, A. Fischer, R. Manmatha, and H. Bunke, “A novel word spotting method based on recurrent neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 211–224, 2012.
What do you think: when will artificial intelligence (AI) be smarter than humans, if ever? Can you predict it, and if yes, when do you think it will approximately happen?
You can also vote in poll at:
I have predicted multiple features using a neural net model and have computed the error vector for each entry (E = P − A, prediction minus actual). Now, to determine whether a particular entry is an anomaly or not, I need to collapse these multi-dimensional error vectors into a single value, so as to set a threshold. What would be the best technique for that?
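A common way to collapse a per-feature error vector into one scalar is either a simple norm of the errors or, better, the (squared) Mahalanobis distance, which accounts for the differing scales and correlations of the per-feature errors before thresholding. A minimal sketch on synthetic error vectors:

```python
import numpy as np

def anomaly_scores(E):
    """Collapse (n_samples, n_features) error vectors into one score per
    sample via the squared Mahalanobis distance of the errors.
    A simpler alternative is the L2 norm: np.linalg.norm(E, axis=1)."""
    mu = E.mean(axis=0)
    cov = np.cov(E, rowvar=False) + 1e-6 * np.eye(E.shape[1])  # regularized
    inv = np.linalg.inv(cov)
    d = E - mu
    return np.einsum("ij,jk,ik->i", d, inv, d)  # squared Mahalanobis

# Synthetic errors: mostly small, one clearly anomalous entry at index 0
rng = np.random.default_rng(1)
E = rng.normal(scale=0.1, size=(500, 4))
E[0] = [2.0, -2.0, 2.0, -2.0]
scores = anomaly_scores(E)
print(scores.argmax())  # → 0 (the anomalous entry gets the top score)
```

A threshold can then be set on the scalar scores, e.g. as a high percentile of scores computed on known-normal data, or via a chi-squared quantile if the errors are roughly Gaussian.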
The term was apparently introduced by John Alan Robinson in a 1970 paper in the Proceedings of the Sixth Annual Machine Intelligence Workshop, Edinburgh, 1970, entitled "Computational Logic: The Unification Computation" (Machine Intelligence 6:63-72, Edinburgh University Press, 1971). The expression is used in the second paragraph with a footnote claiming that *computational logic* (the emphasis is in the paper) is "surely a better phrase than 'theorem proving', for the branch of artificial intelligence which deals with how to make machines do deduction efficiently". This sounds like coining the term; no reference to a previous use is mentioned. Is anybody aware of a previous use of "computational logic" by someone else?
I have a dataset which contains both normal and abnormal (counter) data.
Since this is an anomaly detection problem: when you do the classification, do you use only one-class-labeled data in the training stage, or do you use both classes of data?
As the number of samples used to train the neural network increases, the efficiency of the system decreases. Can anyone tell me why this happens?
Tweet interactions over the 3-day risk conference (CCCR2016 at Cambridge University), including fear of AI and robotics, show some resistance to a '10-commandments'-type ethical template for robotics.
Ignoring the religious overtones, and beyond Asimov, aren't similar rules encoded into robotics necessary to ensure future law-abiding artificial intellects?
Mathematicians and philosophers alone are not the solution to ensuring non-threatening AI. Diverse teams of anthropologists, historians, linguists, brain scientists, and others could collaborate to design ethical and moral machines.
My PhD research topic is "Computer Aided Detection of Prostate Cancer from MRI Images using Machine Intelligence Techniques". Where should I start with prostate segmentation? Does registration have to be done before segmentation, or is it optional? Are there any open-source codes available for learning the existing methods of prostate segmentation?
Or are they fed instructions only to follow?
Machine education and computational learning theory employ different models of learning, using smart algorithms to make machines intelligent and learn faster. But in theory, do machines actually learn "anything"? They are fed instructions which they follow; whether associative, supervised, reinforcement, or adaptive learning, or exploratory data mining, these are all information (instructions) fed into computers. There is no involvement of motivation or synthesis of the learned material.
What's the precise definition of machine learning?
My team and I have been assigned to produce a layout of a production line of a manufacturing company using WITNESS 14. The layout of the production line has been done. However, we are currently having a problem allocating the labour to take the parts from the shelf and move them to the machines. We have been told that some coding is required to control the actions of the labour. Enclosed is the layout of the production line we have drawn for this project.
Thanks in advance for any replies.
Does anyone know or have the Matlab code for the triple feature extraction function?
It was proposed in A. Kadyrov and M. Petrou, "The Trace Transform and Its Applications", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001.
Miguel Nicolelis and Ronald Cicurel claim that the brain is relativistic and cannot be simulated by a Turing machine, which is contrary to the well-marketed idea of simulating/mapping the whole brain: https://www.youtube.com/watch?v=7HYQsJUkyHQ. If it cannot be simulated on digital computers, what is the solution to understanding the brain's language?
Experiments such as the study of the relationship between weather variables and electric power demand inside a Smart Grid/Smart World framework have correlated energy demand with climatic variables in order to make better demand forecasts: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3478798/. The study shows interesting correlations, especially between temperature and electrical demand.
My question to ANN experts is: in time-series microgrid load forecasting, would a self-training or adaptive ANN architecture inherently take into consideration the cross-correlation between weather data and the electrical demand pattern of a (rural) household/microgrid when the weather/climatic data are fed as separate inputs into the ANN, or would it be better to try to normalize the real and historical demand profiles?
The two-machine flow shop with blocking has been studied for about 50 years. The most famous solution converts it into a TSP. Is there another approach to obtaining optimal solutions for this problem?
What level of intelligence do machines actually need or possess, and how can this be compared? If the community were to create a Machine Quotient (MQ), how would it compare to human cognition?
How do you measure machine intelligence, and what level of cognitive processing do machines really need to behave autonomously? Is human-level cognition required in ordinary machines, or just in those working remotely?
Supervised learning handles the classification problem with fully labeled training data, and semi-supervised learning algorithms aim to improve classifier performance with the help of a large amount of unlabeled samples. But is there any theory or classical framework for handling training with soft class labels? These soft labels are prior knowledge about the training samples; they may be class probabilities, class beliefs, or expert experience values.
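One standard device for this setting is to train against a cross-entropy (equivalently, KL-divergence) loss computed with the soft class probabilities as targets instead of one-hot labels; minimizing it fits the model's predicted distribution to the expert-given class beliefs. A minimal numpy sketch with made-up predictions and soft targets:

```python
import numpy as np

def soft_label_cross_entropy(p_pred, p_soft):
    """Cross-entropy against soft (probabilistic) targets: the one-hot
    case is recovered when each row of p_soft puts mass 1 on one class."""
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.sum(p_soft * np.log(p_pred + eps), axis=1))

# Hypothetical expert-assigned soft labels: 2 samples, 3 classes
p_soft = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.1, 0.8]])
# Predictions that roughly match the soft labels vs. predictions that don't
good = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.1, 0.7]])
bad  = np.array([[0.1, 0.1, 0.8],
                 [0.7, 0.2, 0.1]])
assert soft_label_cross_entropy(good, p_soft) < soft_label_cross_entropy(bad, p_soft)
```

This same loss is what label smoothing and knowledge distillation optimize, so any framework supporting those (e.g. probability targets in a standard cross-entropy layer) can be reused for expert-provided soft labels.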
How can machine learning techniques play a role in HVAC systems? I am searching for techniques used to make HVAC systems more efficient, and for the HVAC issues that are solved by these techniques. I need some reliable sources for understanding the use and working of machine learning techniques in HVAC systems.
I am working on physical activity recognition using data acquired from smartphone sensors (gyroscope, magnetometer, and accelerometer) and would like to compare the performance of different classifiers on the dataset, but I am wondering which evaluation metric would be best: True Positive Rate (TPR), False Positive Rate (FPR), precision, recall, F-score, or overall classification accuracy? This is a six-class problem (6 different activities).
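For a multi-class problem like this, per-class precision/recall/F-score with macro averaging is a common choice, since overall accuracy can hide poor performance on under-represented activities (macro averaging weights all six classes equally). A sketch with hypothetical labels, using scikit-learn:

```python
from sklearn.metrics import classification_report, f1_score

# Hypothetical ground truth and predictions for a 6-activity problem
y_true = [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]
y_pred = [0, 1, 2, 3, 4, 5, 0, 2, 2, 3, 4, 0]

# classification_report gives per-class precision/recall/F1 plus averages
print(classification_report(y_true, y_pred, digits=3))

# Macro F1 treats all six activities equally, regardless of class frequency
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(macro_f1)
```

The per-class breakdown (together with a confusion matrix) also shows which activities get confused with each other, which a single accuracy number cannot.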
If we convert an ECG signal to the frequency domain using the FFT or another periodogram, and then calculate the RR interval from the signal, what will the input format for classification (e.g., SVM, neural network) be? What features should we take as input?
In the attached paper, the suggested input features are NextRR (RR of the next beat), PrevRR (RR of the previous beat), and RRRatio (RR ratio of the previous to the current beat).
My main problem with this solution is that this way we will classify the signal on the basis of a single beat, because NextRR, PrevRR, and RRRatio are defined for a single beat only.
For an SVM trained with SMO on a linear problem, what is the relation between data dimension, dataset size, training time, and required memory? Do training time and memory increase linearly, or with a degree greater than 1? Is there a good reference (book or paper) for this question?
In cognitive computing, humans and machines work together. How will you devise your cognitive computing methodology so that machines and humans can learn from one another?
Today, due to technological advancements, machines are becoming more intelligent than human beings. If we compare them, there are several similarities and differences.