Science topic

Deep Learning - Science topic

Explore the latest questions and answers in Deep Learning, and find Deep Learning experts.
Questions related to Deep Learning
  • asked a question related to Deep Learning
Question
6 answers
I'm an undergraduate doing a Software Engineering degree, and I'm looking for a research topic for my final-year project. If anyone has any ideas, research topics, or advice on how or where to find one, please post them.
Thanks in advance ✌
Relevant answer
Answer
Most SE work is based on design and cost functions. Concentrate on those.
  • asked a question related to Deep Learning
Question
4 answers
Is it a good idea to extract features from a pre-trained U-Net/convolutional autoencoder with the last 1x1 convolution removed? The data will be similar, and the model will be trained for image segmentation. I know everybody suggests that freezing the encoder is the best option, but I think there is feature extraction in the decoder part too (in both the convolutional autoencoder and the U-Net). Decoder layers are high-level feature extractors; if my data were different, freezing the decoder would not be a good idea. But what if my data is very similar?
Relevant answer
Answer
An autoencoder is a neural network that seeks to learn a compressed representation of an input: it is trained to attempt to copy its input to its output.
Regards,
Shafagat
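To make the question's setup concrete, here is a minimal PyTorch sketch (with a toy stand-in model, not the asker's actual U-Net) of dropping the final 1x1 convolution and freezing only the encoder:
import torch.nn as nn

class TinyUNet(nn.Module):  # stand-in for your pretrained model
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)  # the final 1x1 conv
    def forward(self, x):
        return self.head(self.decoder(self.encoder(x)))

model = TinyUNet()                      # imagine its weights are pretrained
for p in model.encoder.parameters():
    p.requires_grad = False             # freeze the encoder only
model.head = nn.Conv2d(16, 2, 1)        # replace the 1x1 conv for the new task
The decoder and the new head stay trainable, so the high-level decoder features can still adapt to the (similar) new data.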
  • asked a question related to Deep Learning
Question
1 answer
Recently, I have been studying the use of deep learning technology to explore extraocular diseases that can be reflected in fundus photography. I think it is difficult, because the human body is a complex system with too many confounding factors. What should I consider when designing and implementing my research?
Relevant answer
Pay attention to intraocular pressure
  • asked a question related to Deep Learning
Question
9 answers
Hi, I have been working on some Natural Language Processing research, and my dataset has several duplicate records. I wonder whether I should delete those duplicate records to increase the performance of the algorithms on the test data.
I'm not sure whether duplication has a positive or negative impact on the test or training data. I found some conflicting answers online, which has confused me!
For reference, I'm using ML algorithms such as Decision Tree, KNN, Random Forest, Logistic Regression, and MNB, as well as DL algorithms such as CNN and RNN.
Relevant answer
Answer
Hi Abdus,
I would suggest you check the performance of your model with and without duplicated records. Generally, duplication may increase bias in the data, which may lead to a biased model. To solve this, you can use a data augmentation approach.
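One practical note: if you do deduplicate, do it before the train/test split, so copies of one record cannot land in both sets and inflate the test score. A minimal pandas sketch (the data.csv file and its 'text' column are hypothetical):
import pandas as pd

df = pd.read_csv('data.csv')            # hypothetical file with a 'text' column
n_before = len(df)
df = df.drop_duplicates(subset='text')  # keep one copy of each record
print(f'removed {n_before - len(df)} duplicates')
# Split into train/test only AFTER this step to avoid leakage.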
  • asked a question related to Deep Learning
Question
3 answers
Hi,
I am working with UAV image data; my study area is a dense forest.
How can I extract the dieback of trees with deep learning techniques? Do you know of any package for this purpose?
I want to extract the stages of tree dieback.
Thank you so much.
Relevant answer
Answer
Hi dear Shafagat
Thank you for sharing the information and article.
I downloaded the article.
Good luck
  • asked a question related to Deep Learning
Question
7 answers
What are the approaches to classifying patent data using deep learning (document, text, word, labels)?
How can patents be classified using a CNN, DNN, or RNN?
Is transfer learning effective in patent classification?
Relevant answer
Answer
Patent classification is a way of defining the technical field of an invention. At present, the two most popular patent classification systems are the International Patent Classification (IPC) and the Cooperative Patent Classification (CPC). Patent classification helps you conduct searches in patent databases.
  • asked a question related to Deep Learning
Question
3 answers
I wanted to ask this on Quora, but they no longer allow you to add background/elaboration to a question.
I'm looking to build or purchase a workstation to use from home for research/data analysis (and probably some recreation, but anything able to handle my work needs will run a game or two with ease). The most demanding applications it will be used for are image processing (specifically, 3D and/or Z projections of high-res, 16-bit Z stacks, deconvolution, and deep learning / training DNNs and GANs). I'm looking at NVIDIA graphics cards, since it's so much easier to push parallel processing tasks to the GPU with something like CUDA than it is to jury-rig a workaround for an AMD card. Specifically, I'm trying to decide between a GTX and an RTX series card.
I know the primary difference is that RTX cards can do ray tracing while GTX cards cannot, but it's not immediately apparent to me whether I should care, given that I won't be using the GPU for its ability to render realistic real-time action scenes.
Given that the GTX line is considerably more affordable, I'd like to know whether the RTX equivalents will outperform their GTX counterparts in tasks like neural network training, or even whether I should bite the bullet and spring for a Quadro (though I sincerely hope that won't be the case).
Relevant answer
Answer
Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. But what features are important if you want to buy a new GPU? GPU RAM, cores, tensor cores? How to make a cost-efficient choice? This blog post will delve into these questions, tackle common misconceptions, give you an intuitive understanding of how to think about GPUs, and will lend you advice, which will help you to make a choice that is right for you.
This blog post is designed to give you different levels of understanding of GPUs and the new Ampere series GPUs from NVIDIA. You have the choice: (1) If you are not interested in the details of how GPUs work, what makes a GPU fast, and what is unique about the new NVIDIA RTX 30 Ampere series, you can skip right to the performance and performance per dollar charts and the recommendation section. These form the core of the blog post and the most valuable content.
(2) If you worry about specific questions, I have answered and addressed the most common questions and misconceptions in the later part of the blog post.
(3) If you want to get an in-depth understanding of how GPUs and Tensor Cores work, the best is to read the blog post from start to finish. You might want to skip a section or two based on your understanding of the presented topics.
  • asked a question related to Deep Learning
Question
1 answer
I was wondering whether anyone has had success using AMD Radeon GPUs for deep learning, because NVIDIA GPUs are preferred in the majority of online tutorials.
Relevant answer
Answer
Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. But what features are important if you want to buy a new GPU? GPU RAM, cores, tensor cores? How to make a cost-efficient choice? This blog post will delve into these questions, tackle common misconceptions, give you an intuitive understanding of how to think about GPUs, and will lend you advice, which will help you to make a choice that is right for you.
  • asked a question related to Deep Learning
Question
1 answer
I have long-term rainfall data and have calculated Mann-Kendall test statistics using the XLSTAT trial version (an add-on in MS Word). There are options for asymptotic and continuity correction in the XLSTAT drop-down menu.
  • What do the terms "asymptotic" and "continuity correction" mean?
  • When and under what circumstances should we apply it?
  • Is there any assumption on time series before applying it?
  • What are the advantages and limitations of these two processes?
Relevant answer
Answer
I am not specifically an expert in the Mann-Kendall trend test, but it is related to classical non-parametric tests, like the Kendall correlation test, which I know better. Be careful with XLSTAT (which works in Excel, not in Word): in the procedure I used a few years ago, I had many problems and had to contact support. I think you should read more about the test and, more generally, about non-parametric tests. "Asymptotic" means as the number of observations n grows to infinity. Otherwise, these tests are based on tables of critical values depending on n. When n is too large, use the asymptotic distribution, often normal with a given mean and a given variance (depending on n, of course). As for the continuity correction, it is needed because the test statistic takes discrete values whereas the asymptotic distribution is continuous. The same kind of correction appears with a binomial distribution. Look in your statistics course.
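For readers who prefer scripting the test instead of XLSTAT, here is a minimal Python sketch, assuming the third-party pymannkendall package (pip install pymannkendall); the rainfall series is random stand-in data:
import numpy as np
import pymannkendall as mk

rain = np.random.rand(100)       # stand-in for your annual rainfall series
result = mk.original_test(rain)  # Mann-Kendall test, normal-approximation p-value
print(result.trend, result.p, result.z)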
  • asked a question related to Deep Learning
Question
2 answers
I want to find the length of construction zones appearing in drone videos. The drone moves along the road.
Please refer me to code that can:
find the length of the construction zones appearing in the drone videos;
calculate the number and types of construction zones.
  • asked a question related to Deep Learning
Question
7 answers
Do serial correlation, autocorrelation, and seasonality mean the same thing, or are they different terms? If they differ, what are the exact differences with respect to the field of statistical hydrology? What are the different statistical tests to determine (quantify) the serial correlation, autocorrelation, and seasonality of a time series?
Relevant answer
Answer
Kabbilawsh Peruvazhuthi, serial correlation and autocorrelation are the same thing, but seasonality is different.
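To quantify both in practice, a common route is the autocorrelation function plus a seasonal decomposition. A minimal statsmodels sketch (the monthly series here is random stand-in data):
import numpy as np
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.seasonal import seasonal_decompose

ts = np.random.rand(120)                  # stand-in monthly series, 10 years
r = acf(ts, nlags=24)                     # autocorrelation at lags 0..24
print(r[1])                               # lag-1 serial correlation
dec = seasonal_decompose(ts, period=12)   # splits trend/seasonal/residual
print(dec.seasonal[:12])                  # the estimated seasonal cycle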
  • asked a question related to Deep Learning
Question
4 answers
I am trying to make generalizations about which layers to freeze. I know that I must freeze feature extraction layers, but some feature extraction layers should not be frozen (for example, in the transformer architecture, the encoder and the multi-head attention part of the decoder, which are feature extraction layers, should not be frozen). Which layers should I call "feature extraction layers" in that sense? What kinds of feature extraction layers should I freeze?
Relevant answer
Answer
No problem Muhammedcan Pirinççi, I am glad it helped you.
In my humble opinion, first we should consider the difference between transfer learning and fine-tuning and then decide which one better fits our problem. In this regard, I found this link very informative and useful: https://stats.stackexchange.com/questions/343763/fine-tuning-vs-transferlearning-vs-learning-from-scratch#:~:text=Transfer%20learning%20is%20when%20a,the%20model%20with%20a%20dataset.
Afterward, once you decide which approach to use, there are tons of built-in functions and frameworks to do this for you. I am not sure I understood your question completely; however, I have tried to address it a little. If something is still vague, please don't hesitate to ask me.
Regards
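As a concrete illustration of the freeze-then-fine-tune distinction, here is a minimal Keras sketch (MobileNetV2 is only an example backbone, not something from the question):
from tensorflow import keras

base = keras.applications.MobileNetV2(include_top=False, weights='imagenet')
base.trainable = False                  # transfer learning: freeze all features
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation='softmax'),  # new task head
])
# For fine-tuning instead, unfreeze only the last few blocks:
for layer in base.layers[-20:]:
    layer.trainable = True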
  • asked a question related to Deep Learning
Question
15 answers
Hello dear researchers
I used a metaheuristic algorithm for binary prediction. The input data of the model is text and is ordered.
Which deep learning model do you recommend, and why?
My goal is to compare a traditional machine learning model with deep learning models in solving a binary problem.
I have already run CNN and LSTM models.
Thank you for your support
Relevant answer
Answer
Dear Osman Ali Sadek Ibrahim, thanks for sharing. Can you suggest an ensemble-based method in deep learning? Thanks a lot
  • asked a question related to Deep Learning
Question
3 answers
We are looking for a dataset that could be useful for our project.
Relevant answer
Answer
You can try to get similar datasets from the UCI Machine Learning Repository or from Kaggle.
  • asked a question related to Deep Learning
Question
4 answers
Hi,
Thank you for your help.
How can I make the scheduling process in CloudSim an environment for my reinforcement learning model?
Relevant answer
Answer
Thank you for sharing the links and papers; I will use them to learn.
I appreciate your time and efforts
Best Regards,
Bashar
  • asked a question related to Deep Learning
Question
4 answers
We need large datasets to work on malware detection in Android APKs using deep learning.
Relevant answer
Answer
You can access Android security-related datasets from this link.
  • asked a question related to Deep Learning
Question
2 answers
I am trying to develop an automatic segmentation system for T1 and T2 MRI (via Deep Learning) whose goal is to segment different areas of interest:
- Scalp
- Bones
- Blood vessels
- Cerebrospinal fluid
- White/gray matter
In order to be able to extract surfaces and make calculations with them.
Initially, I based my approach on an unsupervised segmentation system inspired by the W-Net model (https://arxiv.org/pdf/1711.08506.pdf).
But this system seems complicated to set up for this type of image, so I turned to other (supervised) models such as U-Net or V-Net. This kind of model, however, requires the segmented mask as ground truth.
I would like to know whether you know of any dataset where T1 and T2 brain MRI have already been segmented manually.
I found the MRBrainS dataset (https://github.com/looooongChen/MRBrainS-Brain-Segmentation), but only the brain is segmented, not the whole head.
Thanks for your help!
  • asked a question related to Deep Learning
Question
8 answers
As a generative model, a GAN is usually used for generating fake samples, not for classification.
Relevant answer
Answer
A GAN has a discriminator which can be used for classification. I am not sure why a semi-supervised approach is needed here Muhammad Ali
However, the discriminator is just trained to classify between generated and real data. If this is what you want Mohammed Abdallah Bakr Mahmoud then this should work fine.
Normally I would rather train a dedicated classifier if enough labeled data is available.
  • asked a question related to Deep Learning
Question
2 answers
I am working with NIfTI files that I have normalized to the T1 MNI space. All the NIfTI files have been converted to 2D images, and for each NIfTI file I am getting 256 2D JPG images, some of which contain almost no information.
How do I determine which of these 2D images I should use for training my DL model?
If I use all the images, wouldn't the images containing little or no information decrease the performance of my model?
Relevant answer
Answer
It depends on the task you are trying to achieve and the model that you are using. For example, if you want to segment or detect a specific organ, firstly train your 3D model on only the images that contain this specific information. After you have achieved it, you can add extra neighboring slices for false-positive reduction and feature analysis. If you are using a deep network like 3D UNet, it requires a certain number of slices for the input, in that case, make sure that at least half of the slices you input contain the required specific organ/ information.
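A simple heuristic for the slice-selection step described above is to keep only slices whose foreground fraction exceeds a threshold. A minimal nibabel sketch (the file name and the 5% threshold are assumptions to tune for your data):
import nibabel as nib
import numpy as np

vol = nib.load('subject01.nii.gz').get_fdata()  # hypothetical file, shape (X, Y, Z)
keep = []
for z in range(vol.shape[2]):
    sl = vol[:, :, z]
    if (sl > sl.mean()).mean() > 0.05:          # >5% "bright" voxels: tune this
        keep.append(z)
print(f'keeping {len(keep)} of {vol.shape[2]} slices')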
  • asked a question related to Deep Learning
Question
3 answers
Hi,
I'm still new to deep learning.
I'm currently reading papers in my specific area of application: deep learning for text classification.
When I read those papers, I cannot figure out how the authors came up with the proposed methods.
1) So if I am just getting started, how can I find ideas for a new deep learning method?
2) And what aspects do researchers usually work on in deep learning?
Relevant answer
Answer
You can easily get an idea of research in deep learning by utilizing the following steps:
1. Pick a topic you are interested in
First, select a topic that is really interesting for you. It will help you stay motivated and involved in the learning process. Focus on a certain problem and look for a solution, instead of just passively reading about everything you can find on the internet.
2. Find a quick solution
The point is to find any basic solution that covers the problem as much as possible. You need an algorithm that will process data into a form that is understandable for machine learning, train a simple model, give a result, and evaluate its performance.
3. Improve your simple solution
Once you have a simple basis, it's time for creativity. Try to improve all the components and evaluate the changes in order to determine whether these improvements are worth your time and effort. For example, sometimes improving preprocessing and data cleaning gives a higher return on investment than improving the learning model itself.
4. Share your solution
Write up your solution and share it in order to get feedback. Not only will you get valuable advice from other people, but it will also be the first record in your portfolio.
5. Repeat steps 1-4 for different problems
Choose different problems and follow the same steps for each task. If you’ve started with tabular data, choose a problem that involves working with images or unstructured text. It’s also important to learn how to formulate problems for machine learning properly. Developers often need to turn some abstract business objectives into concrete problems that fit the specifics of machine learning.
6. Complete a Kaggle competition
Competitions let you test your skills by solving the same problems many other engineers are working on. You will be forced to try different approaches and to choose the most effective solutions. Competitions can also teach you collaboration, as you can join a big community, communicate with people on the forum, share your ideas, and learn from others.
7. Use Deep learning professionally
You need to determine what your career goals are and to create your own portfolio. If you are not ready to apply for machine learning jobs, look for more projects that will make your portfolio impressive. Join civic hackathons and look for data-related positions in community service.
Regards;
Ehtisham
  • asked a question related to Deep Learning
Question
6 answers
How can I optimize the accuracy of a deep learning model for image segmentation by using optimization methods in Python?
Can anyone help me?
Thank you in advance.
  • asked a question related to Deep Learning
Question
6 answers
Hello Friends,
I am applying ML algorithms (DT, RF, ANN, SVM, KNN, etc.) in Python to my dataset, which has continuous features and target variables. For example, when I use DecisionTreeRegressor I get an R² of 0.977. However, I am interested in deploying classification metrics such as the confusion matrix and accuracy score. For this, I converted the continuous target values into categorical ones. Now, when I apply DecisionTreeClassifier, I get an accuracy of 1.0, which I think indicates overfitting. I then applied normality checks and correlation techniques (Spearman), but the accuracy remained the same.
My question is: am I right to convert the numeric data into categorical data?
Secondly, if both a regressor and a classifier are used on the same dataset, will the accuracy change?
I need your valuable suggestions, please.
For details, please see the attached files.
Thanks for your time.
Relevant answer
Answer
I think there are two misconceptions here.
1) There is no reason to expect similar accuracies for regression and classification on the same dataset. Turning a regression problem into a classification problem is tricky, and essentially pointless.
2) R² is definitely not a valid index of the quality of a regression model. Imagine a model that systematically predicts 10 times the observed value: its R² will be equal to 1, although the model is obviously very poor. For regression, the most useful quality index is the root mean squared error, computed on a test set, i.e. on data that have never been used for designing the model, neither for training nor for model selection.
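A minimal scikit-learn sketch of that protocol, computing RMSE on a held-out test set (random stand-in data):
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = np.random.rand(500, 8), np.random.rand(500)   # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeRegressor().fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))  # error on unseen data
print(rmse)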
  • asked a question related to Deep Learning
Question
3 answers
Hello, I am interested in the task of converting word numerals to numbers, e.g.
- 'twenty two' -> 22
- 'hundred five fifteen eleven' -> 105 1511 etc.
And the problem I can't understand at all currently is that, for a number like 1234567890, there are many ways to write it in words:
=> 12-34-56-78-90 is 'twelve thirty four fifty six seventy eight ninety'
=> 12-34-576-890 is 'twelve thirty four five hundred seventy six eight hundred ninety'
=> 123-456-78-90 is '(one)hundred twenty three four hundred fifty six seventy eight ninety'
=> 12-345-768-90 is 'twelve three hundred forty five seven hundred sixty eight ninety'
and so on (here I am using dashes to indicate that 1234567890 is said in several parts).
Hence, all of the above word sequences should be converted into 1234567890.
I am reading the following papers in the hope of tackling this task:
But so far I still can't understand how one would go about solving it.
Thank you
  • asked a question related to Deep Learning
Question
4 answers
How will the test cases be generated with deep learning?
  • asked a question related to Deep Learning
Question
2 answers
Cross-Validation for Deep Learning Models
Relevant answer
Answer
Train with different learning rates (higher ones first), and do some shuffling. The k-fold validation process sometimes helps to sort out overfitting issues on its own, but carrying values over from past iterations may create difficulty. You may be able to reach 95% accuracy or more with a generalized validation process, but you will need to experiment with the hyperparameters to achieve the desired outcome.
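A minimal sketch of plain k-fold cross-validation around a deep model (build_model is a hypothetical factory that returns a freshly compiled Keras model with an accuracy metric):
import numpy as np
from sklearn.model_selection import KFold

X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)  # stand-in data
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = build_model()   # hypothetical: fresh compiled model per fold (no leakage)
    model.fit(X[train_idx], y[train_idx], epochs=10, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    scores.append(acc)
print(np.mean(scores), np.std(scores))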
  • asked a question related to Deep Learning
Question
4 answers
The concept of the Circular Economy (CE) in the Construction Industry (CI) is mainly about the R-principles: Rethink, Reduce, Reuse, Repair, and Recycle. Thus, if the design stage, followed by effective job-site management, included consideration of the whole lifecycle of the building together with possible further uses of the structural elements, the amount of waste could be decreased or eliminated. Analysis of the current literature shows that CE opportunities in CI are mostly linked to materials reuse. Other heavily researched areas include the development of different circularity measures, especially during the construction period.
In the last decade, AI emerged as a powerful method. It has solved many problems in various domains, such as object detection in visual data, automatic speech recognition, neural translation, and tumor segmentation in computed tomography scans.
Despite the broad range of work on the circular economy, AI has not been widely utilized in this field. I would therefore like to ask whether you have an opinion or idea on how Artificial Intelligence (AI) can be useful in developing or applying circular construction activities.
  • asked a question related to Deep Learning
Question
1 answer
Four homogeneity tests, namely the Standard Normal Homogeneity Test (SNHT), the Buishand Range (BR) test, the Pettitt test, and the Von Neumann Ratio (VNR) test, are applied for finding the break point. Of these, SNHT, BR, and Pettitt give the timestamp at which the break occurs, whereas VNR measures the amount of inhomogeneity. Multiple papers have claimed that "SNHT finds break points at the beginning and end of the series, whereas the BR and Pettitt tests find break points in the middle of the series."
Is there any mathematical proof behind that claim? Is there any peer-reviewed journal article that has proved the claim, or any paper that has cross-checked it?
Say I have 100 years of data; does the start of the time series mean the first 10 years, the first 15 years, or the first 20 years? How does one come to a conclusion?
Relevant answer
Answer
Well, I do not know much about the tests you are doing. What you say in the last sentence is closer to my experience. I prepared a paper on day-temperature series and used split-line models with a plateau phase followed by a linear or nonlinear phase (the same can be done when various trends are followed by a plateau phase, as happens in Mitscherlich's diminishing-returns exponential curve). There is also a test that helps to decide whether the linear trends in two sub-periods are equal or not.
  • asked a question related to Deep Learning
Question
4 answers
I am going to recognize both static and dynamic sign language; how can I use both static-image and video datasets for recognition?
  • asked a question related to Deep Learning
Question
5 answers
How can I combine three deep learning classifiers in Python?
Relevant answer
Answer
Can anyone tell me how to ensemble deep learning models that have different input shapes?
The DNN uses a 2D input array shape, while the CNN and RNN use 3D input arrays.
All of the models are already fitted.
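On that follow-up: since each fitted model can consume its own view of the data, one simple late-fusion option is to reshape per model and average the predicted class probabilities. A sketch, assuming three already-fitted Keras classifiers named dnn, cnn, and rnn (these names are hypothetical):
import numpy as np

X = np.random.rand(32, 100)              # stand-in batch of 2D samples
# dnn, cnn, rnn are assumed to be already-fitted Keras models
p_dnn = dnn.predict(X)                   # 2D input: (batch, features)
p_cnn = cnn.predict(X[..., np.newaxis])  # 3D input: (batch, steps, channels)
p_rnn = rnn.predict(X[..., np.newaxis])
p_ens = (p_dnn + p_cnn + p_rnn) / 3      # soft-voting ensemble
labels = p_ens.argmax(axis=1)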
  • asked a question related to Deep Learning
Question
6 answers
For a research project we are looking for users who apply Artificial Intelligence/Machine Learning/Deep Learning in daily clinical practice. The participants will be asked about their experiences in a 30-minute interview.
Best,
Beat Hofer
Relevant answer
Answer
The primary aim of health-related AI applications is to analyze relationships between clinical techniques and patient outcomes. AI programs are applied to practices such as diagnostics, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.
  • asked a question related to Deep Learning
Question
2 answers
I am trying to find datasets with CBCT dental imaging for detecting periapical lesions.
Relevant answer
Answer
Usually this data requires ethics approval, which is why it is mostly not available online. I have seen a few free sources, though, with full-arch images. If you find some, please share! Thank you!
  • asked a question related to Deep Learning
Question
4 answers
Dear everyone:
I cannot find websites or books explaining how to backpropagate through attention layers in deep learning.
Could any of you please teach me how to backpropagate through them?
Thank you in advance and have a nice day
Relevant answer
Answer
Dear Artificial Intelligence
Thank you so much for your kindness
I want a lecture that will teach me how to implement backward().
I will use the site you recommended
Thank you again and see you again
Best regards,
Kyoungmun Chang
  • asked a question related to Deep Learning
Question
9 answers
Most often, training in computer vision tasks uses 2D or 3D data. Why can't the images be represented in 1D to assess how DL models would perform?
Relevant answer
Answer
Yiping Gao There are several ways to convert 2D images into 1D. One way I do this is by reading the image as a NumPy array and then reshaping it. This isn't an efficient approach, but it does convert the image, so it depends on what you are actually doing; for me, the feature of interest is always the focus.
I recently came across vision transformers and how that architecture reads images by splitting them into patches and indexing them; this is another efficient way to do it.
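A minimal NumPy sketch of that flattening (the 64x64 image is stand-in data):
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale image
flat = img.reshape(-1)           # 1D vector of length 4096, row-major order
restored = flat.reshape(64, 64)  # lossless: reshape back at any time
assert (restored == img).all()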
  • asked a question related to Deep Learning
Question
5 answers
Hello All,
I need to identify the heart region and thorax region in an automated manner in an ultrasound scan image of the heart. Could anyone give me a step-by-step guide to identifying the regions once an input image is given? Any assistance would be a great help.
Relevant answer
Answer
The training data need bounding boxes assigned to the area of interest. Object detection will then try to find the best-overlapping bounding box. Some algorithms work with arbitrary shapes. To start, follow the link below; YOLOv3 and Faster R-CNN are very good:
  • asked a question related to Deep Learning
Question
1 answer
I am experimenting with retinal optical coherence tomography. The region of interest, named 'IRF', has a very small area, and I am using Dice loss for the segmentation in U-Nets.
However, I am not getting satisfactory results, as the input images are noisy and the ROI is very small. Can anyone suggest a suitable loss for this kind of challenge?
  • asked a question related to Deep Learning
Question
3 answers
How can the COCOMO model be used with deep learning?
Relevant answer
Answer
Based on my experience building very large information systems, and personally writing much of the software, I recommend that you first focus on Object Oriented Software Development and Design (The Booch Method) that will enable you to identify re-usable components and new (needs new development) components. But this requires that you have a library of designed, developed, and integrated components. In other words, you can't Learn without a well-managed library of re-usable software. I generally write 20,000 new lines of code per year that becomes part of several hundred thousand lines of well tested re-usable code. Good Luck, Barry.
  • asked a question related to Deep Learning
Question
3 answers
  1. In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane (simple linear regression) by choosing the median of the slopes of all lines through pairs of points. Many journals have applied the Sen slope to find the magnitude and direction of a trend.
  2. It has also been called Sen's slope estimator, slope selection, the single median method, the Kendall robust line-fit method, and the Kendall–Theil robust line.
  3. The major advantage of the Theil–Sen slope is that the estimator can be computed efficiently and is insensitive to outliers. It can be significantly more accurate than non-robust simple linear regression (least squares) for skewed and heteroskedastic data, and it competes well against least squares even for normally distributed data in terms of statistical power.
My question is: are there any disadvantages/shortcomings of Sen's slope? Are there any assumptions on the time series before applying it? Is there an improved version of the method? Since the method was introduced in 1968, is there any literature in which the power of the Sen slope is compared with other non-parametric estimators? What inference can be drawn by applying the Sen slope to a hydrologic time series? And how does the Sen slope perform when applied to an autocorrelated time series such as rainfall or temperature?
Relevant answer
Answer
Two points. First, the approach is pretty similar to what Boscovich proposed in the 1700s, so if dating this type of procedure you can go further back (Farebrother, R. W. (1999). Fitting Linear Relationships: A History of the Calculus of Observations 1750-1900. New York, NY: Springer.).
Second, a disadvantage is that it is slow for even medium-sized n. Here is a quick coding of Theil–Sen compared with the lm function in R (which is slower for n = 10 because of all the checks it does before estimating the model) and what I assume is similar to the main computation inside it. At n = 10 they are similar-ish, but at n = 100 Theil–Sen is much slower (note the different units).
theilsen <- function(x, y){
  n <- length(x)  # assuming no missing values
  slopes <- c()
  for (i in 1:(n-1))
    for (j in (i+1):n)
      slopes <- c(slopes, (y[i] - y[j]) / (x[i] - x[j]))
  beta1 <- median(slopes[is.finite(slopes)])
  beta0 <- median(y - beta1 * x)
  return(list(beta0 = beta0, beta1 = beta1))
}
lmb <- function(x, y) solve(t(x) %*% x) %*% t(x) %*% y
library(microbenchmark)
x <- rnorm(10); y <- rnorm(10)
microbenchmark(theilsen(x, y), lm(x ~ y), lmb(x, y))
Unit: microseconds
           expr       min        lq      mean    median        uq       max neval cld
 theilsen(x, y)   249.101  271.3515  764.2949  303.4510  373.6505 42047.800   100  ab
      lm(x ~ y)  1222.101 1293.1510 1496.4859 1419.3010 1594.7010  5597.801   100   b
      lmb(x, y)   100.001  103.3010  271.7730  120.2015  186.0010  7302.101   100   a
x <- rnorm(100); y <- rnorm(100)
microbenchmark(theilsen(x, y), lm(x ~ y), lmb(x, y))
Unit: microseconds
           expr       min        lq      mean     median         uq        max neval cld
 theilsen(x, y) 60628.902 75446.151 91017.986 76951.5510 80715.9015 543952.501   100   b
      lm(x ~ y)  1187.001  1377.951  1619.025  1659.1510  1807.2015   2262.701   100   a
      lmb(x, y)   100.600   111.702   176.185   192.4505   215.2015    303.602   100   a
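For completeness, SciPy ships a ready-made Theil–Sen estimator, so in Python the fit is a single call (random stand-in data):
import numpy as np
from scipy.stats import theilslopes

x = np.arange(100)
y = 0.3 * x + np.random.randn(100)             # stand-in series with a known trend
slope, intercept, lo, hi = theilslopes(y, x)   # lo/hi: confidence bounds on the slope
print(slope, (lo, hi))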
  • asked a question related to Deep Learning
Question
4 answers
The costs and time commitments associated with data collection and labeling can be prohibitive. A huge dataset alone is insufficient, since the success of deep learning models depends strongly on the quality of the training data. Cost, time, and obtaining appropriate training data are all challenges. Biases, incorrect labels, and missing values are some of the difficulties that impair the quality of deep learning training datasets.
Relevant answer
Answer
I agree, but there are other metrics that can be deployed to appraise these algorithms apart from accuracy.
  • asked a question related to Deep Learning
Question
3 answers
Deep learning models require large volumes of data, and this massive data increases the need for systems that are trained continuously.
Relevant answer
Answer
One solution could be to alternate between subdivision (adding detail) and merging (simplifying detail) at a local scale, according to some prerequisite performance criteria...
Regards
  • asked a question related to Deep Learning
Question
4 answers
I have come across many research articles stating the RMSE values of deep learning models such as LSTMs. Some results are between 0 and 1, while others are between 0 and 50. I want to know what an ideal RMSE value is for such neural network models.
I have been trying to reduce the RMSE values from between 3 and 7 to less than 1.
Note: the error values are computed after unscaling the dataset back to its original form. The errors are, however, between 0 and 1 when computed on the scaled data. But I think computing them on the unscaled data makes more sense.
Relevant answer
Answer
You're welcome Hayatullahi Adeyemo, and thanks for the recommendation!
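On the unscaling point in the question, here is a minimal sketch of computing RMSE in original units after inverting a MinMaxScaler (the predictions are stand-ins):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

y_true = np.random.rand(100, 1) * 50             # stand-in series in original units
scaler = MinMaxScaler().fit(y_true)
y_scaled_pred = np.clip(scaler.transform(y_true) + 0.01, 0, 1)  # stand-in predictions
y_pred = scaler.inverse_transform(y_scaled_pred) # back to original units
print(np.sqrt(mean_squared_error(y_true, y_pred)))  # RMSE in original units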
  • asked a question related to Deep Learning
Question
1 answer
How can I request to download the DERMNET dataset?
  • asked a question related to Deep Learning
Question
6 answers
I want some free resources from which I can learn deep learning easily. I am interested in the text mining field.
Thank You
Relevant answer
Answer
This might be helpful: MIT Introduction to Deep Learning | 6.S191 https://www.youtube.com/watch?v=7sB052Pz0sQ
  • asked a question related to Deep Learning
Question
3 answers
Hi all, I am looking for GAN-generated face datasets. Please share any links. Also, if there are any pre-trained networks for this purpose, that would be helpful.
Thank you.
Relevant answer
Answer
Hey, absolutely: there are a lot of GitHub repositories that you can check out online, especially for deepfakes. There is a huge dataset containing StyleGAN2-generated deepfake images as well. Recently, while working on a project, I came across an amazing repository by Professor Jeff Heaton that manipulates latent vectors (512 in size) to automatically generate glasses on human faces. It is a very interesting approach: by understanding the latent-vector features of an image, we can manipulate human faces. The link to that video: https://www.youtube.com/watch?v=5XX4uy9Mk9I&t=652s . The GitHub repo is in the description box.
  • asked a question related to Deep Learning
Question
3 answers
Deep Learning Training
Relevant answer
Answer
My idea: Adam is an adaptive optimizer, so the learning rate is managed by Adam's own mechanism. Based on this, there is no advantage in using any other LR scheduler or early-stopping mechanism.
  • asked a question related to Deep Learning
Question
2 answers
I am about to start working on 6D pose estimation for point-cloud-based object grasping. In our lab we have the following:
- an AUBO i5 industrial manipulator;
- a COBOT 3D camera that gives us a point cloud of the scene; the camera will be attached to the manipulator in the eye-in-hand configuration (mounted on the manipulator's gripper/end effector).
A deep learning based method will be used for 6D pose estimation of the target object.
The 6D pose will be calculated on my laptop. How can I send the final result, the pose estimate, to the robot in order to control it and eventually pick and place the target object?
Relevant answer
Answer
Hello Abdulrahman Abdo Ali Alsumeri, if you want to control an industrial robotic arm from your own laptop, check whether the model you are using has a ROS connection. If it does, you can simply subscribe to the topics that are useful for your task and send the required information to each of the robot's joints.
If there is no ROS interface, you can try another controller created specifically for industrial robots, for example one that works with a 7-step language, but those programs are normally controlled by the company that owns the robot.
In this case, your model does have a ROS interface that you can use. Here is the link to the ROS wiki for your model: http://wiki.ros.org/aubo_robot
I hope my comment has helped you.
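If the ROS route is taken, here is a minimal rospy sketch of publishing an estimated pose from the laptop (the /target_pose topic name is hypothetical; use whatever your motion-planning stack actually listens on):
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('pose_sender')
pub = rospy.Publisher('/target_pose', PoseStamped, queue_size=1)  # hypothetical topic
rospy.sleep(1.0)                        # let the connection establish

msg = PoseStamped()
msg.header.frame_id = 'base_link'
msg.header.stamp = rospy.Time.now()
msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = 0.4, 0.0, 0.2
msg.pose.orientation.w = 1.0            # identity orientation; fill in your estimate
pub.publish(msg)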
  • asked a question related to Deep Learning
Question
3 answers
I need to classify patent data using deep learning. What word embeddings are used in patent classification?
How can patents be categorized using transfer learning?
Which patent datasets are freely available?
Relevant answer
Answer
Raw patent data is available in PATSTAT for working with big datasets. Have a look at the EPO website for how to get access to the dataset. The IPC is available via WIPO.
  • asked a question related to Deep Learning
Question
11 answers
The monkeypox virus has recently been spreading very fast, which is very alarming. Awareness can help reduce the panic it has caused all over the world.
To that end, is there any image dataset for monkeypox?
Relevant answer
Answer
Medical Datasets: please consider the links above.
  • asked a question related to Deep Learning
Question
1 answer
I'm using SPSS to model my variables statistically. I'm used to modeling variables in MATLAB, R, and Python; this is my first experience with SPSS. I've created a model on my observed dataset; however, the model results reveal some miscalibration against the observed data. How can I calibrate an SPSS time-series model using a programming language?
I actually want to rerun the model on my dataset repeatedly, and when certain parameters (e.g., the RMSE between the observed and predicted data) fall under my desired limit, I want to save those model parameters. Is there any way to drive SPSS with some programming?
A timely reply will be appreciated.
Relevant answer
Answer
Please respond if anyone has an idea.
  • asked a question related to Deep Learning
Question
11 answers
Is there any good course on machine learning/deep learning online, for example on Udemy or Coursera?
Relevant answer
Answer
You can check the following lecture which is very good.
  • asked a question related to Deep Learning
Question
4 answers
Which deep learning algorithm is best for infilling data gaps in river streamflow time series?
Relevant answer
Answer
very good
  • asked a question related to Deep Learning
Question
3 answers
I need to define an objective function that has several decision variables, and it needs to be minimized using the Adam optimizer from TensorFlow. There are a lot of deep learning examples on the internet; however, I am finding it difficult to get one that only optimizes an objective function.
Relevant answer
Answer
Dear Shirsendu,
Take a look at the Python notebook shared by yacineMahdid; it contains several gradient descent optimization algorithms.
Good luck.
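A minimal TensorFlow 2 sketch of minimizing a plain objective (no network) with Adam; the quadratic objective is only a placeholder for your own function:
import tensorflow as tf

x = tf.Variable([0.0, 0.0])                 # the decision variables
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

def objective():                            # placeholder: replace with your function
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

for _ in range(500):
    opt.minimize(objective, var_list=[x])   # one Adam step per call
print(x.numpy())                            # converges near [3, -1]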
  • asked a question related to Deep Learning
Question
2 answers
Among the neural network structures that have been introduced, the RNN has received noticeable attention because of the specialized techniques developed for computing its gradient with backpropagation (through time). On the other hand, we have autograd as a gradient computation tool in frameworks like PyTorch and TensorFlow, which takes care of the analytic computation of a network's gradient with the chain rule and backpropagation. But even autograd does not remove the value of these specialized methods: the computer computes gradients with no regard for the network structure, which can result in complicated and unnecessary computations. Applying the chain rule without exploiting structure can therefore still be difficult, and the code can crash during gradient computation. For example, consider the RNN structure: if you introduce it to autograd directly, with no hint of the specialized method available for it, autograd will have a hard time computing its gradient; but if you hint at the specific method that exists for RNNs, the gradient is computed easily. So my question is:
I am going to introduce a network structure to autograd that includes an RNN or LSTM. If I introduce the network in the standard way, it will be hard for autograd to compute its gradient, but if I tell autograd that a specific part of my network is an RNN, it will compute it easily, thanks to the tools autograd provides for well-known structures like RNNs and LSTMs. How can I do that? How should I inform autograd that my network has an LSTM embedded inside it?
Relevant answer
Answer
Navid Hashemi, if I understand your question correctly, you can just use the LSTM functionality in TensorFlow (https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM) or PyTorch (https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html), which specifies that part of your system as an LSTM.
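A minimal PyTorch sketch of that point: wrapping the recurrent part in nn.LSTM lets autograd use the framework's optimized recurrent backward pass automatically (shapes here are arbitrary):
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, in_dim=16, hidden=32, out_dim=2):
        super().__init__()
        # nn.LSTM tells the framework to use its optimized recurrent
        # kernels instead of an unrolled custom graph.
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):             # x: (batch, seq_len, in_dim)
        out, _ = self.lstm(x)         # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # prediction from the last time step

x = torch.randn(4, 10, 16)
loss = MyModel()(x).sum()
loss.backward()  # autograd handles the LSTM part automatically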
  • asked a question related to Deep Learning
Question
4 answers
Hello,
Is there any point cloud dataset for railway asset classification? Or are there network weights pre-trained on such a dataset? To date, there are no datasets available in open access for this purpose.
Thank you.
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
9 answers
We have a paper with four names on it in the field of deep learning for medical image analysis. Two authors had almost equal contributions: a researcher with a master's degree (called A) and a final-year Ph.D. student (called B). There is also a first-year Ph.D. student (called C) who made a minor contribution by participating in some meetings, and finally the senior professor (called D).
Since the supervision was done by B, B's name goes just before the professor's, as supervisor. Is it okay to mark an equal contribution between A and B as first and third authors? Is the following author list fine?
A* C B* D
(* means equal contribution)
Thanks
Relevant answer
Answer
Hello Ma Gh
I agree with Yves Daoust that this is “playing with the order” and, in truth, there is too much worry.
My personal approach would be main writer first, senior/supervisor last and the rest (second onwards) is by order of contribution. If the post-doc has done next to nothing, then they should not be included at all; that is what I think. They should earn it.
  • asked a question related to Deep Learning
Question
6 answers
I have 18 rainfall time series. On calculating the variance, I found an appreciable change in the variance from one rainfall station to another. Parametric statistical tests are sensitive to variance; does this mean we need to apply robust statistical tests instead of parametric tests?
Relevant answer
Answer
Kabbilawsh Peruvazhuthi, generally, parametric tests assume equal variances across groups. You could potentially resolve the issue either with some data transformation or by switching to a non-parametric equivalent test.
Best !!
AN
  • asked a question related to Deep Learning
Question
8 answers
Hello,
In neural network pruning, we first train the network. Then we identify redundant parts and remove them. Usually, network pruning deteriorates the model's performance, so we need to fine-tune the network. If we identify the redundant parts correctly, the performance will not deteriorate.
What happens if we train a smaller network from scratch (instead of removing redundant parts of a large network)?
Relevant answer
Answer
Thank you so much for your response.
Say I have a model that has achieved an accuracy of 80%. I then prune a fourth of the network and fine-tune it, and the pruned model achieves an accuracy of 79%. Now, my question is: if I initialize the pruned network randomly and train it from scratch, will it reach 79% again?
  • asked a question related to Deep Learning
Question
6 answers
I am looking for programming techniques to use in stock investment recommendation.
Relevant answer
Answer
Iuri Velasco Stock price prediction with machine learning assists you in determining the future worth of a company's stock and other financial assets traded on an exchange. The main point of forecasting stock prices is to make large gains, but it is difficult to predict how the stock market will fare.
Machines are not only incapable of anticipating a black-swan event, they are also more likely to trigger one, as traders discovered the hard way during the 2010 flash crash, when an algorithmic computer failure sparked a sudden market meltdown. Finally, AI is bound to fail at stock market forecasting.
  • asked a question related to Deep Learning
Question
3 answers
I am
Relevant answer
Answer
Siddharth Kamila The loss value indicates how poorly or how well a model performs after each optimization iteration. An accuracy metric is used to quantify the algorithm's performance in a meaningful way. The accuracy of a model is usually computed after the model parameters have been learned and fixed, and it is expressed as a percentage.
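A minimal Keras sketch showing where the two quantities enter: the loss is what the optimizer minimizes at each iteration, while accuracy is only a reported metric (the layer sizes are arbitrary):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
# The cross-entropy loss drives the gradient updates; 'accuracy' is
# computed alongside purely for human-readable evaluation.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])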
  • asked a question related to Deep Learning
Question
5 answers
Papers using machine learning, particularly deep learning models, in hydrological prediction (runoff, soil moisture, evapotranspiration, etc.) have increased dramatically in recent years. In my view, these data-driven methods require substantial data to derive solid predictions. I am not sure what the advantage of these models is over process-based models in predicting hydrological processes.
Relevant answer
Answer
Hi All and Prof. Gao,
I came across this discussion. I think this is a really interesting topic and a good question. First of all, I'd like to say that your point that "these data-driven approaches require substantial data" is correct. To answer this question, I think we need to understand the background of the emergence and widespread application of AI and "big data" techniques, and we should think about why we need machine learning or other "big data" techniques in hydrology/earth system science. What are the pitfalls of our current process-based models? Can machine learning methods fill these gaps or improve the predictions?
In recent decades, we have accumulated vast amounts of spatiotemporal data from in-situ observations, remote sensing, reanalysis data, and model outputs. In my opinion, data-driven methods can help us gain new knowledge from these data, and their greatest advantage is that they do not need to rely on parametric assumptions and thus can dynamically capture the effects of non-stationary surface processes. For the advantages of deep learning in Earth system science, I recommend the review paper "Deep learning and process understanding for data-driven Earth system science" by Reichstein et al., which discusses this very well.
Frankly, there are some challenges with distributed hydrological models, especially when tuning parameters. In the calibration process, why do different parameter sets produce the same result? And when we use historical data to set the parameters, will the parameter values still hold in the future?
In the future, we hope to see coupling between these approaches so that we can represent the real world more realistically. Hope this information is helpful to you. Good luck!
  • asked a question related to Deep Learning
Question
3 answers
How effective would endometrial cancer detection from ultrasound images using deep learning be? Any kind of suggestion would be highly appreciated.
Relevant answer
Answer
Hello Mrs. Mosarrat Rumman,
Improving endometrial cancer screening is indeed a matter of better pelvic ultrasound practice. Nevertheless, some scoring systems seem extremely promising for improving our approach.
I share here the REM score, with the following links, which combines four criteria: age, symptoms, HE4 level, and endometrial thickness. It reaches a sensitivity of 93.9% and a specificity of 95.4%.
It is an extremely simple and fast tool: it is enough to draw a line linking the different criteria of the system to obtain a score. The REM score has proven its effectiveness and seems to accelerate patient care.
1. Roberto Angioli, Stella Capriglione, Alessia Aloisi, Daniela Luvero, Ester Valentina Cafà, Nella Dugo, Roberto Montera, Carlo De Cicco Nardone, Corrado Terranova, Francesco Plotti; REM (Risk of Endometrial Malignancy): A Proposal for a New Scoring System to Evaluate Risk of Endometrial Malignancy. Clin Cancer Res 15 October 2013; 19 (20): 5733–5739.
2. Plotti F, Capriglione S, Terranova C, Montera R, Scaletta G, Lopez S, Luvero D, Gianina A, Aloisi A, Benedetti Panici P, Angioli R. Validation of REM score to predict endometrial cancer in patients with ultrasound endometrial abnormalities: results of a new independent dataset. Med Oncol. 2017 May;34(5):82. doi: 10.1007/s12032-017-0945-y. Epub 2017 Apr 7. PMID: 28389908.
3. Plotti F, Capriglione S, Scaletta G, Luvero D, Lopez S, Nastro FF, Terranova C, De Cicco Nardone C, Montera R, Angioli R. Implementing the Risk of Endometrial Malignancy Algorithm (REM) adding obesity as a predictive factor: Results of REM-B in a single-center survey. Eur J Obstet Gynecol Reprod Biol. 2018 Jun;225:51-56. doi: 10.1016/j.ejogrb.2018.03.044. Epub 2018 Apr 7. PMID: 29660578.
  • asked a question related to Deep Learning
Question
1 answer
At present, deep learning is widely used in various scientific research fields. What are the application prospects of deep learning in InSAR, and in which directions could it be applied to InSAR in the future?
Relevant answer
Answer
Dear Sun Min ,
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
8 answers
Three papers to start learning about the science behind graph neural networks and how they work. These papers will be easy to read if you are familiar with, though not necessarily expert in, neural networks and machine learning. There are many more papers on graph neural networks. Feel free to share more with your network in a message on this post; I will do the same. A Comprehensive Survey on Graph Neural Networks ( ) Kernel Graph Convolutional Neural Networks ( ) Geom-GCN: Geometric Graph Convolutional Networks ( )
Relevant answer
Answer
Thanks to dear Dr. Madani
Perhaps these two articles are also helpful
  • asked a question related to Deep Learning
Question
6 answers
Hello,
I've pruned my CNN layer by layer in two steps.
First, I removed a fourth of the filters of some selected layers, which led to a performance degradation of around 1%. Then I selected the two layers that had the highest number of filters and caused the least performance deterioration, and removed half of their filters. The second model performed even better than the original model. What is the reason?
Relevant answer
Answer
First of all, you need to quantify the uncertainty of your predictions and find out whether the difference is meaningful at all or just the result of stochastic processes inside the model. Any metric is not a single point estimate; it is a distribution, and it may vary widely depending on the task, model, data split, hyperparameters, etc.
A great article on uncertainty in ML: https://arxiv.org/abs/2103.03098
Pruning should not improve performance, so my bet is that your result lies inside the distribution of possible model accuracies.
  • asked a question related to Deep Learning
Question
13 answers
If a model consists of CNN/LSTM layers and a deep neural network (fully connected layers), can we call it a hybrid model?
Relevant answer
The convolutional layer is the main building block of a convolutional neural network. The convolution layer includes its own filter for each channel, whose convolution kernel processes the previous layer fragment by fragment (summing the results of the element-wise product for each fragment). The weights of the convolution kernel (a small matrix) are unknown and are set during training.
A feature of the convolutional layer is the relatively small number of parameters set during training. For example, if the original image has a dimension of 100x100 pixels in three channels (which means 30,000 input neurons), and the convolutional layer uses filters with a 3x3-pixel kernel and an output of 6 channels, then only 9 kernel weights are determined per channel combination in the learning process; across all combinations of channels, that is 9×3×6 = 162 parameters, significantly fewer than the number required by a fully connected network.
The scalar result of each convolution is passed to the activation function, a non-linear function. The activation layer is usually logically combined with the convolution layer (the activation function is considered built into it). The non-linearity can be any function of the researcher's choice; traditionally, the hyperbolic tangent or the sigmoid was used. In the 2000s, however, a new activation function, ReLU (rectified linear unit), was proposed and studied, which made it possible to speed up the learning process significantly and simplify calculations (due to the simplicity of the function itself): ReLU computes max(0, x), in essence cutting off the negative part of a scalar value. There are methods for determining the optimal number of rectified linear units.
The input of a CNN is traditionally two-dimensional, a field or matrix, but it can also be changed to one-dimensional, allowing an internal representation of a one-dimensional sequence to be created.
This allows CNNs to be used more generally on other types of data that have spatial relationships. For example, there is an order relationship between words in a text document, and an ordered relationship in the time steps of a time series.
The Long Short-Term Memory network, or LSTM, is arguably the most successful RNN, as it overcomes the training problems of recurrent networks and is in turn used in a wide variety of applications.
CNN and RNN models are rarely used in isolation.
These types of networks are used as layers in a larger model that also has one or more MLP layers. Technically, this is a hybrid type of neural network architecture.
Perhaps the most interesting work comes from combining different types of networks into hybrid models.
For example, consider a model that uses a stack of layers with a CNN at the input, an LSTM in the middle, and an MLP at the output. Such a model can read a sequence of input images, such as a video, and generate a prediction. This is called the CNN-LSTM architecture.
Network types can also be combined in specific architectures to open up new possibilities, such as reusable image recognition models that use very deep CNNs and MLPs and can be added to a new LSTM model and used for photo captioning. In addition, encoder-decoder LSTM networks can be used for input and output sequences of different lengths.
It is important to first think clearly about what you and your stakeholders need from a project, and then choose (or design) a network architecture that meets the specific needs of your project.
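As an illustration of the CNN-LSTM hybrid described above, here is a minimal Keras sketch (the clip shape of 8 frames at 64x64x3 and the 5 output classes are arbitrary assumptions):
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(8, 64, 64, 3)),
    # the CNN is applied to every frame independently
    layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu')),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                         # temporal modelling across frames
    layers.Dense(5, activation='softmax'),   # MLP head
])
model.summary()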
  • asked a question related to Deep Learning
Question
11 answers
Dear Researchers.
These days, machine learning applications in cancer detection have grown alongside new image processing and deep learning methods. In this regard, what are your thoughts on new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
I am assuming your data are images, since you mentioned image processing; in that case, deep CNN models are the state of the art and can produce good results if and only if you have a large amount of training data. If your dataset is small, go with a regular neural network such as a multilayer perceptron (MLP) with one or at most two hidden layers. If your data are simply tabular (a CSV file), I do not recommend neural networks such as CNNs or MLPs at all; you can simply use traditional machine learning algorithms such as random forests, support vector machines, or k-nearest neighbors.
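For the tabular case, a hedged scikit-learn sketch (using the library's built-in breast-cancer dataset purely as a stand-in for your own CSV data) could look like this:

```python
# Minimal sketch: a random forest on tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # stand-in for your tabular data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```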
  • asked a question related to Deep Learning
Question
10 answers
I was trying to get an insight into quantum computing (QC) for research purposes. However, every source I found was filled with technical terms and offered little explanation, which makes it very difficult for those of us who want to learn QC without any prior knowledge.
Is there any source where beginners can learn QC with zero background knowledge?
Thanks in advance.
  • asked a question related to Deep Learning
Question
5 answers
Why is a plain ANN often used for fault detection instead of a CNN or RNN?
Relevant answer
Answer
CNN and RNN are deep learning algorithms and are themselves examples of ANNs, while the ANN is a type of machine learning model.
ANNs come in different architectures such as FNNs, CNNs, RNNs, GANs, autoencoders, SOMs, and so on. These architectures suit different kinds of tasks as well as different natures and volumes of data.
I think that in your question you used ANN to mean an FNN (with F standing for feedforward). An FNN works as a supervised learning algorithm for mapping numerical input vectors to output vectors (either continuous or categorical outputs). A CNN is suitable for learning a model on data of at least two dimensions, such as images, while an RNN is for sequence modelling.
It all depends on the task. I have not worked on fault detection before, but I suspect it involves classifying variables that are likely to be numerical. The volume of data can also explain why an FNN is used, because it is less data-hungry. Even in small-data problems involving 2D or higher-dimensional inputs, an FNN can be applied to the flattened data, since it works better than a CNN on a small dataset; a sketch follows below.
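As a hedged sketch of that last point, a small FNN in Keras can consume 2D inputs once they are flattened (the 28×28 input shape and layer sizes are illustrative assumptions):

```python
# Minimal sketch: an FNN (MLP) applied to flattened 2D inputs.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),     # flatten the 2D input to a vector
    layers.Dense(64, activation="relu"),      # a single hidden layer
    layers.Dense(10, activation="softmax"),   # e.g., a 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```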
  • asked a question related to Deep Learning
Question
7 answers
I am comparing two deep learning models on the same dataset, but the comparison changes from run to run: in one run model 1 is more accurate than model 2, and in another run the opposite holds, even though I set the same seeds before running the models.
What may be the reason? How can I get a stable comparison result?
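One well-known reason is that setting a Python-level seed alone does not make GPU training deterministic; a hedged PyTorch sketch of fuller seeding (assuming that framework; others have analogous switches) is:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    # Seed every RNG the training loop may touch
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Ask cuDNN for deterministic kernels (this can slow training down)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```

Averaging each model's metric over several independent runs and comparing the means (with standard deviations) also gives a more stable comparison than any single run.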
  • asked a question related to Deep Learning
Question
2 answers
After CLIP and DALL-E, are there any recent advances in learning joint deep embeddings of images and their text descriptions?
  • asked a question related to Deep Learning
Question
7 answers
In the oil and gas industry, for technical, economic, and similar reasons, well logs are run only over selected intervals. Therefore, to build comprehensive models for field development, we need more information at different depths. Today, advances in numerical methods, especially machine learning and deep learning, can help us fill these data gaps. There are, of course, very practical methods such as rock physics; but according to my results, part of which is described below, it is better to combine the rock physics method with deep learning methods, in which case the results are remarkable. I selected wells from the Poseidon Basin in Australia for testing and obtained good results. In this study, by combining rock physics and deep learning (CNN + GRU), the values of density, porosity, and shear wave slowness were predicted. A comprehensive database of PEF, RHOB, LLD, GR, CGR, NPHI, DTC, DTS, and water saturation logs was prepared and used as training data for the wells. The figure below shows the result of a blind well test for the Torosa well in the Poseidon Basin, Australia. As you can see, the predictions are very close to the measured values of shear wave slowness in this well.
Relevant answer
Answer
You can use a machine learning algorithm to learn the trends in the data where they are available and the parameters the data depend on, then use the relationships learned during training to predict the data where they are missing.
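The CNN + GRU combination described in the question could be sketched in Keras roughly as follows; the window length, number of input logs, and layer sizes are illustrative assumptions, not the questioner's exact setup:

```python
# Minimal sketch: 1D CNN feature extractor + GRU over depth, regressing one log.
from tensorflow.keras import layers, models

n_logs, window_len = 8, 64    # e.g., PEF, RHOB, LLD, GR, ... over a depth window

model = models.Sequential([
    layers.Conv1D(32, kernel_size=5, activation="relu",
                  input_shape=(window_len, n_logs)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GRU(64),           # summarize the depth sequence
    layers.Dense(1),          # regress the target, e.g., shear wave slowness
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```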
  • asked a question related to Deep Learning
Question
2 answers
Related to early diagnosis of Alzheimer's disease by deep learning:
I'm trying to implement a deep learning model to classify stable and converter MCI (mild cognitive impairment) patients.
  • asked a question related to Deep Learning
Question
3 answers
I was exploring differential privacy (DP), which is an excellent technique for preserving data privacy. However, I am wondering which performance metrics could demonstrate the difference between schemes with DP and schemes without it.
Are there any performance metrics by which a scheme with DP can be compared to a scheme without DP?
Thanks in advance.
Relevant answer
Answer
Dear Anik Islam Abhi,
You may want to review the data below:
What is differential data privacy?
Differential privacy (DP) is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset.
Why is differential privacy so important?
Preventing attackers from accessing perfect data: this deniability aspect of differential privacy is important in cases such as linkage attacks, where attackers leverage multiple sources to identify the personal information of a target.
What is privacy budget in differential privacy?
Also known as the privacy parameter. When ε is small, (ε, 0)-differential privacy asserts that for all pairs of adjacent databases x, y and all outputs of the mechanism M, an adversary cannot distinguish which is the true database on the basis of observing the output.
What is differential privacy in machine learning?
Differential privacy is a notion that allows quantifying the degree of privacy protection provided by an algorithm on the underlying (sensitive) data set it operates on. Through the lens of differential privacy, we can design machine learning algorithms that responsibly train models on private data.
How much is enough choosing Epsilon for differential privacy?
The recommended values for ε vary over a wide interval, from as small as 0.01 or 0.1 to as large as 7.
Who uses differential privacy?
Apple first launched differential privacy in macOS Sierra and iOS 10, and has since expanded its use to other cases such as Safari and Health data types.
Differential Privacy: General Survey and Analysis of Practicability in the Context of Machine Learning
Franziska Boenisch
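To connect the ε discussion above to code, the classical Laplace mechanism achieves ε-DP for a numeric query by adding noise scaled to sensitivity/ε; here is a minimal NumPy sketch (the query value and sensitivity are illustrative assumptions):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Add Laplace(sensitivity / epsilon) noise: smaller epsilon means more
    # noise and stronger privacy; larger epsilon means less noise.
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) released with epsilon = 0.1
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.1))
```

As for metrics, a common way to compare a scheme with DP against one without it is utility loss: report the same accuracy or error metric for both, across several values of ε.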
  • asked a question related to Deep Learning
Question
21 answers
How do we determine the appropriate number of hidden layers so that it positively affects the solution of the problem at hand?
How do we understand the impact of adding one more hidden layer on the workflow: where exactly will the improvement appear?
  • asked a question related to Deep Learning
Question
7 answers
Hello everyone,
I am looking for links to audio datasets that can be used for classification tasks in machine learning. Preferably, the datasets will have appeared in scientific journals.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
In our work we used UrbanSound8K.
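For reference, here is a hedged sketch of loading UrbanSound8K clips with pandas and librosa, assuming the dataset has been downloaded and extracted locally (the paths follow the dataset's published layout):

```python
# Minimal sketch: read the UrbanSound8K metadata and extract MFCC features.
import pandas as pd
import librosa

meta = pd.read_csv("UrbanSound8K/metadata/UrbanSound8K.csv")
row = meta.iloc[0]
path = f"UrbanSound8K/audio/fold{row.fold}/{row.slice_file_name}"

y, sr = librosa.load(path, sr=22050)                 # waveform and sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # common classifier features
print(mfcc.shape, "label:", row["class"])
```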