Q&A

ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.

Browse by research topic to find out what others in your field are discussing.

  • Kamesh Natarajan added an answer in Water Treatment:
    What kind of process is carried out in a water treatment plant using bioreactors?

    I need a protocol for treating a wastewater sample in an airlift bioreactor using natural adsorbent materials like banana peel, algae, and other materials.

    Kamesh Natarajan · Bannari Amman Institute of Technology

    Do adsorbent materials reduce the parameters or properties of wastewater?

  • Sajedah Ayesh asked a question in Tramadol:
    Case discussion: is there any relationship between ulcerative colitis and Tramadol? Does it improve the clinical signs and symptoms?

    Case: a patient with ulcerative colitis of many years has been addicted to Tramadol for 1 month and now has no clinical symptoms of ulcerative colitis.

  • Hani Antoun added an answer in Phosphates:
    Can anybody elaborate how to prepare buffered media for elucidating phosphate solubilizing activity?

    In research articles I have noticed one RP medium; if anybody has used it, please share the details.

  • What do "offset" and "quantile normalization" mean in LIMMA?

    In LIMMA, for gene expression data normalization, an offset is used in background correction and quantile normalization is used for between-array normalization. How does this work, and what does setting different offset values such as 16 or 50 mean? It would be easier for me to understand this in words rather than in equations.

    Ahmed ElFatih Amin Eldoliefy · North Dakota State University

    Thanks... and sorry... I may regret my answer...

  • Phil Geis added an answer in Melanins:
    Are there any commercial bacterial melanin products or medicines ?

    I would like to know whether any bacterial melanin products are available commercially. How much does bacterial melanin cost worldwide, and how much is required?

    Phil Geis · GMQ

    Sigma sells a DOPA synthetic melanin http://www.sigmaaldrich.com/catalog/product/sigma/m8631?lang=en&region=US

    Please be aware that there are different melanins.   Not all are products of tyrosinase.

  • Mikhail Saltychev added an answer in ANCOVA:
    How do I use ANCOVA for meta-analysis?

    I am trying to conduct a meta-analysis using post-test values from several studies with two independent groups (cases/controls). There are pre-/post-test means and SDs. What I'm looking for are tips on how to employ ANCOVA to adjust for differences between baseline data. What software should I use? I'm quite familiar with CMA and MIX. I'm not sure, but I think that CMA converts pre-/post-values into a change difference. I understand that this works too, but, in this particular case, I'd like to try ANCOVA. Please, do not suggest R; I just don't want to spend too much time learning a new language. Unfortunately, I also do not have Stata at my disposal. If it is too complex, then I'll just stick with the change difference. Sorry if I couldn't express myself more clearly.

    Mikhail Saltychev · Turku University Hospital

    Dear Souvik, would it be possible to get the full text of the Miller paper you suggested?

  • I used 100 ppm and 500 ppm sodium hypochlorite solutions to decontaminate lettuce but could not do so completely. Why?

    I want to decontaminate lettuce completely before inoculating my target bacteria. To do this, I first removed the outer leaves and core of the lettuce, cut it into 3 to 4 inch pieces, and added 25 g of lettuce to 500 ml of either 100 ppm or 500 ppm sodium hypochlorite solution. To prepare the hypochlorite solutions from the 100,000 ppm original stock, I added 0.5 ml to 500 ml DW for the 100 ppm solution and 2.5 ml to 500 ml DW for the 500 ppm solution. After that, I took the lettuce out of the solution, added 25 ml PBS, and stomached it. I spread 100 microlitres of the stomached product and found around 100 cfu/100 microlitres in the 100 ppm treatment and 150 cfu/100 microlitres in the 500 ppm treatment, so decontamination was not complete. I repeated the same procedure 3 times but could not succeed. If anybody knows where the problem is, please answer me.

    K. R. Sridhar · Mangalore university

    Importantly, sodium hypochlorite is photosensitive: it should be stored in a black bottle and kept in the dark. If it is exposed to light, its potency is lost, and however high a concentration you use, it will not be effective. Usually, after surface sterilization, no growth of fungi from live tissue for 7 days or more testifies that sterilization is effective; growth within 2-3 days means those are nothing but weedy fungi and sterilization is incomplete!
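    As a quick check of the dilution arithmetic described in the question (assuming, as stated, a 100,000 ppm original stock), the standard relation C1·V1 = C2·V2 gives

    \[
    C_2 = \frac{C_1 V_1}{V_2} = \frac{100{,}000\,\text{ppm}\times 0.5\,\text{ml}}{500\,\text{ml}} = 100\,\text{ppm},
    \qquad
    \frac{100{,}000\,\text{ppm}\times 2.5\,\text{ml}}{500\,\text{ml}} = 500\,\text{ppm},
    \]

    so the dilutions themselves appear to be as intended, which is consistent with the answer above that stock potency or handling, rather than the dilution, is the more likely problem.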

  • Jean Vaillant added an answer in Data Analysis:
    Can anyone guide me and explain how to transform continuous data into instantaneous data (with a 3-minute interval)?

    I'm studying spinner dolphins' behavior. Is it by dividing the duration period of one behavioral state? Thanks in advance.

    Jean Vaillant · Université des Antilles et de la Guyane

    I think you are dealing with a marked point process. This is the case if your data consist of consecutive dates of events, each associated with the population behaviour. You can use techniques based on inter-event waiting times, and there is no need to divide your observation periods into sub-intervals. But if you want to use count-based techniques, then you need to choose a bin length. The R package PtProcess can help you: http://www.jstatsoft.org/v35/i08/paper.
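    If the count-based route mentioned above is taken, a minimal MATLAB sketch of binning event times into 3-minute intervals might look like the following (the variable names and example data are illustrative assumptions, not taken from the question):

    % Hypothetical illustration: count behavioural events per 3-minute bin.
    eventTimes = [12 95 180 410 650 700 1150];   % event times in seconds (example data)
    binLength  = 3*60;                           % 3-minute bins, in seconds
    edges      = 0:binLength:ceil(max(eventTimes)/binLength)*binLength;
    counts     = histcounts(eventTimes, edges);  % number of events per interval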

  • Aysha Bey added an answer in Colonialism:
    Is Ernest Hemingway's The Sun Also Rises a good sample for the study of the colonial and cultural journey?

    Can we investigate Edward Said's Orientalism and any colonial influences, as well as study Homi Bhabha's theory of the location of culture, in The Sun Also Rises?

    Aysha Bey · University of Alabama at Birmingham

    There may be some aspects of colonial or post-colonial theory that can be applied to Spain--but because Spain is not the object of "conventional colonization" (as Ludmilla points out), it may be difficult to apply post-colonial theory, especially Bhabha's.  But if you look at the "exoticizing" (perfect term) of Spain in light of Said's "Orientalism," you might have more success.

    If you are willing to look at other texts, I have found Wole Soyinka's drama "Death and the King's Horseman" particularly good for the study of colonial and post-colonial theories. There are a number of other writers whose works could provide superb sources for the application of both Said's and Bhabha's theories.

  • Tarik Ömer Oğurtani added an answer in Matrix:
    What is the driving force for growth of the particles with the same size in the matrix?


    Tarik Ömer Oğurtani · Middle East Technical University

    Dear Dr. Sayyedan, in the ordinary sense there are two main driving forces for the growth of a given particle in a matrix: 1) the bulk free energy of transformation (Gibbs for isobaric systems or Helmholtz for isovolumic systems), where the rate of change in free energy should be negative; 2) the rate of surface free energy increase should be compensated by the bulk free energy variation.

    In multi-component systems (alloys), the particle has a different composition than the matrix; therefore, the solute rejected at the interface should be taken away by diffusion in solid-state transformations.

    There is a very sophisticated theory using irreversible thermodynamics, advocated by Ogurtani, which takes care of applied elastostatic (deformation) as well as electrostatic forces. The problem treated in this theory is a nonequilibrium one, and it does not assume any shape for the particle that would be adjusted by capillary forces.
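    For reference, the balance between the two contributions described above is the classical free energy change for forming a spherical particle of radius r (standard nucleation/growth notation, not taken from this thread):

    \[
    \Delta G(r) = \tfrac{4}{3}\pi r^{3}\,\Delta G_{v} + 4\pi r^{2}\gamma,
    \qquad
    r^{*} = -\frac{2\gamma}{\Delta G_{v}},
    \]

    where \Delta G_{v} < 0 is the bulk free energy change per unit volume and \gamma is the interfacial energy; the volume term favours growth once r exceeds the critical radius r^{*}, while the surface term opposes it.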

  • In Schottky diodes, how can we explain, physically, the barrier inhomogeneities?

    In Schottky diodes, some inhomogeneities in the barrier appear at the metal/semiconductor interface.

    Amipara Manilal D · Balaji Institute of Engineering & technology, Junagadh

    The inhomogeneity in a material (due to metal-semiconductor mixing) creates uneven conductivity; in turn, the availability of free electrons/holes is not uniform. As the barrier is created by the recombination of free electrons and holes, this results in inhomogeneity in the barrier.

  • What are the best conceptual model simulation tools based on your experience?

    Is it an open source tool?

    What modeling language/method/diagrams does it use?

    What type of simulation is that, e.g. symbolic animation, prototyping, ...?

    Research methodologies/publications behind?

    James W. Richardson · Texas A&M University

    Use Excel with Simetar. Get a 30-day trial copy at www.simetar.com. I designed it after 30 years of developing simulation models. It has been tested by several thousand students in graduate courses in 10 countries and 10 universities in the United States.

  • John Schloendorn added an answer in Venture Capital:
    How can I approach venture capitalists?

    I am planning to conduct a very short survey of venture capitalists around the world. Random sampling is not necessary.  Any suggestions?

    They are super busy, super secretive (protective of proprietary processes), and super hard to access. I tried trade groups, but that was not successful.

    John Schloendorn · Gene And Cell Technologies

    Might the results of your survey be useful to LPs, the folks who give money to the VCs? These are often government-employed pension fund managers, or private endowment managers with a natural interest in asking questions about VCs. If you can partner with such an institution, the VC will become your lap dog overnight.

  • Angela Vasaturo added an answer in Dyes:
    Are there any good dyes for 3-color immunofluorescence imaging?

    I would like to perform 3-color immunofluorescence imaging on three proteins in cells. One of them is EGFP-tagged. The red dyes (Texas Red or Alexa Fluor 568) worked very well. However, the blue dyes (Alexa Fluor 350 or 405) worked poorly. Any recommendation for another color dye?

    Angela Vasaturo · Radboud University Nijmegen

    I would also suggest to use Alexa 647!

  • Francisco Olivos added an answer in Youth:
    What books or articles would you recommend to create a survey about idle youth?

    I'm working on a survey of idle youth in Chile. I need some validated questions or similar studies. We are interested in comparing this population with others.

    Francisco Olivos · Pontifical Catholic University of Chile

    We will compare with other youth. The focus is family, the labour market, and education; mainly attitudes, expectations, and integration.

  • Murlidhar Meghwal added an answer in Network:
    Can anyone help regarding the NARX network in the neural network time series analysis tool?

    How do I set only the feedback delays, and not the input delays, in the network?

    Murlidhar Meghwal · Jain University

    Dynamic neural networks are good at time series prediction.

    Suppose, for instance, that you have data from a pH neutralization process. You want to design a network that can predict the pH of a solution in a tank from past values of the pH and past values of the acid and base flow rate into the tank. You have a total of 2001 time steps for which you have those series.

    You can solve this problem in two ways:

    Use a graphical user interface, ntstool, as described in Using the Neural Network Time Series Tool.

    Use command-line functions, as described in Using Command-Line Functions.

    It is generally best to start with the GUI, and then to use the GUI to automatically generate command-line scripts. Before using either method, the first step is to define the problem by selecting a data set. Each GUI has access to many sample data sets that you can use to experiment with the toolbox. If you have a specific problem that you want to solve, you can load your own data into the workspace. The next section describes the data format.
    Defining a Problem

    To define a time series problem for the toolbox, arrange a set of TS input vectors as columns in a cell array. Then, arrange another set of TS target vectors (the correct output vectors for each of the input vectors) into a second cell array (see "Data Structures" for a detailed description of data formatting for static and time series data). However, there are cases in which you only need to have a target data set. For example, you can define the following time series problem, in which you want to use previous values of a series to predict the next value:

    targets = {1 2 3 4 5};
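    For example, a target-only (NAR) problem like this can also be set up directly at the command line. The following is a minimal sketch assuming the toolbox functions narnet and preparets, with an illustrative sine series standing in for real data (it is not part of the pH example used below):

    % Hypothetical NAR sketch: predict a series from its own past values.
    T = num2cell(sin(0.1*(1:100)));          % illustrative target-only time series
    net = narnet(1:2,10);                    % feedback delays 1:2, 10 hidden neurons
    [Xs,Xi,Ai,Ts] = preparets(net,{},{},T);  % fill the tapped delay lines
    net = train(net,Xs,Ts,Xi,Ai);            % open-loop (one-step-ahead) training
    Y = net(Xs,Xi,Ai);                       % one-step-ahead predictions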

    The next section shows how to train a network to fit a time series data set, using the neural network time series tool GUI, ntstool. This example uses the pH neutralization data set provided with the toolbox.
    Using the Neural Network Time Series Tool

    If needed, open the Neural Network Start GUI with this command:

    nnstart

    Notice that this opening pane is different than the opening panes for the other GUIs. This is because ntstool can be used to solve three different kinds of time series problems.

    In the first type of time series problem, you would like to predict future values of a time series y(t) from past values of that time series and past values of a second time series x(t). This form of prediction is called nonlinear autoregressive with exogenous (external) input, or NARX (see "NARX Network" (narxnet, closeloop)), and can be written as follows:

    y(t) = f(y(t – 1), ..., y(t – d), x(t – 1), ..., x(t – d))

    This model could be used to predict future values of a stock or bond, based on such economic variables as unemployment rates, GDP, etc. It could also be used for system identification, in which models are developed to represent dynamic systems, such as chemical processes, manufacturing systems, robotics, aerospace vehicles, etc.

    In the second type of time series problem, there is only one series involved. The future values of a time series y(t) are predicted only from past values of that series. This form of prediction is called nonlinear autoregressive, or NAR, and can be written as follows:

    y(t) = f(y(t – 1), ..., y(t – d))

    This model could also be used to predict financial instruments, but without the use of a companion series.

    The third time series problem is similar to the first type, in that two series are involved, an input series x(t) and an output/target series y(t). Here you want to predict values of y(t) from previous values of x(t), but without knowledge of previous values of y(t). This input/output model can be written as follows:

    y(t) = f(x(t – 1), ..., x(t – d))

    The NARX model will provide better predictions than this input-output model, because it uses the additional information contained in the previous values of y(t). However, there may be some applications in which the previous values of y(t) would not be available. Those are the only cases where you would want to use the input-output model instead of the NARX model.

    For this example, select the NARX model and click Next to proceed.

    Click Load Example Data Set in the Select Data window. The Time Series Data Set Chooser window opens.

    Note Use the Inputs and Targets options in the Select Data window when you need to load data from the MATLAB® workspace.

    Select pH Neutralization Process, and click Import. This returns you to the Select Data window.

    Click Next to open the Validation and Test Data window, shown in the following figure.

    The validation and test data sets are each set to 15% of the original data.

    With these settings, the input vectors and target vectors will be randomly divided into three sets as follows:

    70% will be used for training.

    15% will be used to validate that the network is generalizing and to stop training before overfitting.

    The last 15% will be used as a completely independent test of network generalization.

    (See "Dividing the Data" for more discussion of the data division process.)

    Click Next.

    The standard NARX network is a two-layer feedforward network, with a sigmoid transfer function in the hidden layer and a linear transfer function in the output layer. This network also uses tapped delay lines to store previous values of the x(t) and y(t) sequences. Note that the output of the NARX network, y(t), is fed back to the input of the network (through delays), since y(t) is a function of y(t – 1), y(t – 2), ..., y(t – d). However, for efficient training this feedback loop can be opened.

    Because the true output is available during the training of the network, you can use the open-loop architecture shown above, in which the true output is used instead of feeding back the estimated output. This has two advantages. The first is that the input to the feedforward network is more accurate. The second is that the resulting network has a purely feedforward architecture, and therefore a more efficient algorithm can be used for training. This network is discussed in more detail in "NARX Network" (narxnet, closeloop).

    The default number of hidden neurons is set to 10. The default number of delays is 2. Change this value to 4. You might want to adjust these numbers if the network training performance is poor.

    Click Next.

    Select a training algorithm, then click Train. Levenberg-Marquardt (trainlm) is recommended for most problems, but for some noisy and small problems Bayesian Regularization (trainbr) can take longer but obtain a better solution. For large problems, however, Scaled Conjugate Gradient (trainscg) is recommended as it uses gradient calculations which are more memory efficient than the Jacobian calculations the other two algorithms use. This example uses the default Levenberg-Marquardt.

    The training continued until the validation error failed to decrease for six iterations (validation stop).

    Under Plots, click Error Autocorrelation. This is used to validate the network performance.

    The following plot displays the error autocorrelation function. It describes how the prediction errors are related in time. For a perfect prediction model, there should only be one nonzero value of the autocorrelation function, and it should occur at zero lag. (This is the mean square error.) This would mean that the prediction errors were completely uncorrelated with each other (white noise). If there was significant correlation in the prediction errors, then it should be possible to improve the prediction - perhaps by increasing the number of delays in the tapped delay lines. In this case, the correlations, except for the one at zero lag, fall approximately within the 95% confidence limits around zero, so the model seems to be adequate. If even more accurate results were required, you could retrain the network by clicking Retrain in ntstool. This will change the initial weights and biases of the network, and may produce an improved network after retraining.

    View the input-error cross-correlation function to obtain additional verification of network performance. Under the Plots pane, click Input-Error Cross-correlation.

    This input-error cross-correlation function illustrates how the errors are correlated with the input sequence x(t). For a perfect prediction model, all of the correlations should be zero. If the input is correlated with the error, then it should be possible to improve the prediction, perhaps by increasing the number of delays in the tapped delay lines. In this case, all of the correlations fall within the confidence bounds around zero.

    Under Plots, click Time Series Response. This displays the inputs, targets and errors versus time. It also indicates which time points were selected for training, testing and validation.

    Click Next in the Neural Network Time Series Tool to evaluate the network.

    At this point, you can test the network against new data.

    If you are dissatisfied with the network's performance on the original or new data, you can do any of the following:

    Train it again.

    Increase the number of neurons and/or the number of delays.

    Get a larger training data set.

    If the performance on the training set is good, but the test set performance is significantly worse, which could indicate overfitting, then reducing the number of neurons can improve your results.

    If you are satisfied with the network performance, click Next.

    Use this panel to generate a MATLAB function or Simulink® diagram for simulating your neural network. You can use the generated code or diagram to better understand how your neural network computes outputs from inputs, or deploy the network with MATLAB Compiler™ tools and other MATLAB and Simulink code generation tools.

    Use the buttons on this screen to generate scripts or to save your results.

    You can click Simple Script or Advanced Script to create MATLAB code that can be used to reproduce all of the previous steps from the command line. Creating MATLAB code can be helpful if you want to learn how to use the command-line functionality of the toolbox to customize the training process. In Using Command-Line Functions, you will investigate the generated scripts in more detail.

    You can also have the network saved as net in the workspace. You can perform additional tests on it or put it to work on new inputs.

    After creating MATLAB code and saving your results, click Finish.

    Using Command-Line Functions

    The easiest way to learn how to use the command-line functionality of the toolbox is to generate scripts from the GUIs, and then modify them to customize the network training. As an example, look at the simple script that was created in the previous section.

    % Solve an Autoregression Problem with External
    % Input with a NARX Neural Network
    % Script generated by NTSTOOL
    %
    % This script assumes the variables on the right of
    % these equalities are defined:
    %
    % phInputs - input time series.
    % phTargets - feedback time series.

    inputSeries = phInputs;
    targetSeries = phTargets;

    % Create a Nonlinear Autoregressive Network with External Input
    inputDelays = 1:4;
    feedbackDelays = 1:4;
    hiddenLayerSize = 10;
    net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

    % Prepare the Data for Training and Simulation
    % The function PREPARETS prepares time series data
    % for a particular network, shifting time by the minimum
    % amount to fill input states and layer states.
    % Using PREPARETS allows you to keep your original
    % time series data unchanged, while easily customizing it
    % for networks with differing numbers of delays, with
    % open loop or closed loop feedback modes.
    [inputs,inputStates,layerStates,targets] = ...
    preparets(net,inputSeries,{},targetSeries);

    % Set up Division of Data for Training, Validation, Testing
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Train the Network
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);

    % Test the Network
    outputs = net(inputs,inputStates,layerStates);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

    % View the Network
    view(net)

    % Plots
    % Uncomment these lines to enable various plots.
    % figure, plotperform(tr)
    % figure, plottrainstate(tr)
    % figure, plotregression(targets,outputs)
    % figure, plotresponse(targets,outputs)
    % figure, ploterrcorr(errors)
    % figure, plotinerrcorr(inputs,errors)

    % Closed Loop Network
    % Use this network to do multi-step prediction.
    % The function CLOSELOOP replaces the feedback input with a direct
    % connection from the output layer.
    netc = closeloop(net);
    netc.name = [net.name ' - Closed Loop'];
    view(netc)
    [xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
    yc = netc(xc,xic,aic);
    closedLoopPerformance = perform(netc,tc,yc)

    % Early Prediction Network
    % For some applications it helps to get the prediction a
    % timestep early.
    % The original network returns predicted y(t+1) at the same
    % time it is given y(t+1).
    % For some applications such as decision making, it would
    % help to have predicted y(t+1) once y(t) is available, but
    % before the actual y(t+1) occurs.
    % The network can be made to return its output a timestep early
    % by removing one delay so that its minimal tap delay is now
    % 0 instead of 1. The new network returns the same outputs as
    % the original network, but outputs are shifted left one timestep.
    nets = removedelay(net);
    nets.name = [net.name ' - Predict One Step Ahead'];
    view(nets)
    [xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
    ys = nets(xs,xis,ais);
    earlyPredictPerformance = perform(nets,ts,ys)

    You can save the script, and then run it from the command line to reproduce the results of the previous GUI session. You can also edit the script to customize the training process. In this case, follow each of the steps in the script.

    The script assumes that the input vectors and target vectors are already loaded into the workspace. If the data are not loaded, you can load them as follows:

    load ph_dataset
    inputSeries = phInputs;
    targetSeries = phTargets;

    Create a network. The NARX network, narxnet, is a feedforward network with the default tan-sigmoid transfer function in the hidden layer and linear transfer function in the output layer. This network has two inputs. One is an external input, and the other is a feedback connection from the network output. (After the network has been trained, this feedback connection can be closed, as you will see at a later step.) For each of these inputs, there is a tapped delay line to store previous values. To assign the network architecture for a NARX network, you must select the delays associated with each tapped delay line, and also the number of hidden layer neurons. In the following steps, you assign the input delays and the feedback delays to range from 1 to 4 and the number of hidden neurons to be 10.

    inputDelays = 1:4;
    feedbackDelays = 1:4;
    hiddenLayerSize = 10;
    net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

    Note Increasing the number of neurons and the number of delays requires more computation, and this has a tendency to overfit the data when the numbers are set too high, but it allows the network to solve more complicated problems. More layers require more computation, but their use might result in the network solving complex problems more efficiently. To use more than one hidden layer, enter the hidden layer sizes as elements of an array in the narxnet command.
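    As an illustration of that note, a NARX network with two hidden layers (an assumed configuration of 10 and 8 neurons, not used elsewhere in this example) could be created by passing a row vector of layer sizes:

    % Hypothetical: same delays as above, but two hidden layers of 10 and 8 neurons
    net2 = narxnet(1:4,1:4,[10 8]);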

    Prepare the data for training. When training a network containing tapped delay lines, it is necessary to fill the delays with initial values of the inputs and outputs of the network. There is a toolbox command that facilitates this process - preparets. This function has three input arguments: the network, the input sequence and the target sequence. The function returns the initial conditions that are needed to fill the tapped delay lines in the network, and modified input and target sequences, where the initial conditions have been removed. You can call the function as follows:

    [inputs,inputStates,layerStates,targets] = ...
    preparets(net,inputSeries,{},targetSeries);

    Set up the division of data.

    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    With these settings, the input vectors and target vectors will be randomly divided, with 70% used for training, 15% for validation and 15% for testing.

    Train the network. The network uses the default Levenberg-Marquardt algorithm (trainlm) for training. For problems in which Levenberg-Marquardt does not produce as accurate results as desired, or for large data problems, consider setting the network training function to Bayesian Regularization (trainbr) or Scaled Conjugate Gradient (trainscg), respectively, with either

    net.trainFcn = 'trainbr';
    net.trainFcn = 'trainscg';

    To train the network, enter:

    [net,tr] = train(net,inputs,targets,inputStates,layerStates);

    During training, the following training window opens. This window displays training progress and allows you to interrupt training at any point by clicking Stop Training.

    This training stopped when the validation error increased for six iterations, which occurred at iteration 70.

    Test the network. After the network has been trained, you can use it to compute the network outputs. The following code calculates the network outputs, errors and overall performance. Note that to simulate a network with tapped delay lines, you need to assign the initial values for these delayed signals. This is done with inputStates and layerStates provided by preparets at an earlier stage.

    outputs = net(inputs,inputStates,layerStates);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

    performance =

    0.0042

    View the network diagram.

    view(net)

    Plot the performance training record to check for potential overfitting.

    figure, plotperform(tr)

    This figure shows that training, validation and testing errors all decreased until iteration 64. It does not appear that any overfitting has occurred, because neither testing nor validation error increased before iteration 64.

    All of the training is done in open loop (also called series-parallel architecture), including the validation and testing steps. The typical workflow is to fully create the network in open loop, and only when it has been trained (which includes validation and testing steps) is it transformed to closed loop for multistep-ahead prediction. Likewise, the R values in the GUI are computed based on the open-loop training results.

    Close the loop on the NARX network. When the feedback loop is open on the NARX network, it is performing a one-step-ahead prediction. It is predicting the next value of y(t) from previous values of y(t) and x(t). With the feedback loop closed, it can be used to perform multi-step-ahead predictions. This is because predictions of y(t) will be used in place of actual future values of y(t). The following commands can be used to close the loop and calculate closed-loop performance

    netc = closeloop(net);
    netc.name = [net.name ' - Closed Loop'];
    view(netc)
    [xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
    yc = netc(xc,xic,aic);
    perfc = perform(netc,tc,yc)

    perfc =

    2.8744

    Remove a delay from the network, to get the prediction one time step early.

    nets = removedelay(net);
    nets.name = [net.name ' - Predict One Step Ahead'];
    view(nets)
    [xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
    ys = nets(xs,xis,ais);
    earlyPredictPerformance = perform(nets,ts,ys)

    earlyPredictPerformance =

    0.0042

    From this figure, you can see that the network is identical to the previous open-loop network, except that one delay has been removed from each of the tapped delay lines. The output of the network is then y(t + 1) instead of y(t). This may sometimes be helpful when a network is deployed for certain applications.

    If the network performance is not satisfactory, you could try any of these approaches:

    Reset the initial network weights and biases to new values with init and train again (see "Initializing Weights" (init)).

    Increase the number of hidden neurons or the number of delays.

    Increase the number of training vectors.

    Increase the number of input values, if more relevant information is available.

    Try a different training algorithm (see "Training Algorithms").

    To get more experience in command-line operations, try some of these tasks:

    During training, open a plot window (such as the error correlation plot), and watch it animate.

    Plot from the command line with functions such as plotresponse, ploterrcorr and plotperform. (For more information on using these functions, see their reference pages.)

    Also, see the advanced script for more options, when training from the command line.

    Each time a neural network is trained, it can result in a different solution due to different initial weight and bias values and different divisions of data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.

    There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Neural Network Generalization and Avoid Overfitting.

  • Aliasger Haiderali added an answer in Abaqus:
    Relative displacement in Abaqus?

    Dear all;

    How can I define the relative displacement between two surfaces in Abaqus?

    Thanks

  • John Jeglum added an answer in Plant Behavior:
    Are plants growing in wetlands mostly light-dependent?

    Does anybody have any information about the behaviour of hygrophilous plants in relation to the amount of light available? I'd like to know whether most plants in wetlands are not shade-tolerant. I'm searching for articles on this subject. Thanks.

    John Jeglum · Swedish University of Agricultural Sciences

    This depends on what type of wetland, for instance marsh, fen, bog, or swamp forest. Hakan Rydin and I have listed some of the plants that are light-demanding and shade-tolerant. The shade-tolerant ones are found beneath well-canopied thicket swamps or forested swamps. Ours was a completely subjective rating. See Rydin and Jeglum, 2013. The Biology of Peatlands, 2nd ed. Oxford University Press.

  • Hengky S H added an answer in Marketing:
    Is there a useful method to measure the contribution of marketing mix for a product in different markets?

    I would like to find some similar studies, about this topic.

    Hengky S H · Universiti Utara Malaysia

    Hi Ms. Zahra,

    You may benchmark it with the modeling (attached) of Wolfe and Crotts (2011).

    Regards

    Hengky

  • Why is Tubular Solar Still better than other solar stills?

    Other than the accumulation of salts in basin-type stills, I could not find any advantage.

    Philippe Mimeault · Université du Québec à Montréal

    A few years back, on top of lifetime, grey energy was a big difference, but I am not sure if this is still the case.

  • Can we compare the Burr III distribution with Weibull-type distributions?

    Distribution theory

    Mauricio Jerez · University of the Andes (Venezuela)

    I think so. If you are comparing the fit to empirical data, the general form and the special cases would give different values for the goodness-of-fit tests (i.e. the Weibull would be a Burr form with fewer parameters). Also, different fitting methods would give different goodness-of-fit values. I assume that by limiting case you mean a special case of the Burr III distribution, don't you?

  • Aysha Bey added an answer in Research Papers:
    Can we define an expiration date or useful age for our papers?
    Is it reasonable to use these terms?
    A number of papers were published a long time ago but still have many citations.
    If it is possible, then is it predictable?
    Can citations be a suitable measure to judge the useful age of a paper?
    Which papers have a longer useful lifetime or a later expiration date?

    Thanks for your inputs.
    Aysha Bey · University of Alabama at Birmingham

    Alireza makes a hugely important point here about research.  And Michael points out the other essential matter--literature reviews usually demand more current research, at least, something new coming out of an older idea.  One can certainly cite a landmark work--one that sets a standard for the field.  I like to remind my literature students that we professors of literature are still reading Gilgamesh (2740 BCE), so we may not be so involved with the date (we don't cite the date in parenthetical citations as the APA format does).  But I also tell them I don't want to see literary research with no citations within the last 10 years. For instance, in ancient Germanic studies, Friedrich Klaeber is always cited as THE landmark study of Beowulf.  But the literature review must move on, since Klaeber studied in the 1890s and the early decades of the 20th century.  There was no Sutton Hoo finding to back up the weapons descriptions in the text of Beowulf.  There was no longboat dug up in Scandinavia.  If there is not much current research available on a piece of literature, it may simply indicate that the text is not currently taught or studied.  When I was in graduate school in the 1980s, one of my favorite "reads" was Gothic fiction, usually dominated in the West by women.  My professors considered that writing "trash" and not worthy of academic ranking. Nowadays, the Gothic is everywhere (good and bad), but women like Ann Radcliffe and Jane Austen ("Northanger Abbey") are regularly studied and appreciated.

     The short story "The Yellow Wallpaper" written in the 1890s was always interpreted by the critics (all male, of course) as a piece of Gothic fiction--until the 1970s when the feminists began to write critical reviews (and found a publishing outlet).  I like to have my students do comparative papers of the critical viewpoints just to get them acquainted with how dramatically views can change from one period to another.  A paper citing only those males from 1920 would fail in any literature class--because it is not taking into account the social changes that have altered our viewpoints, our world view even.

    I think in science dates are even more essential. I also teach academic writing, with many of my students in the sciences. For instance, if one reads medical journals about heart attacks in women in the 1950s, the data basically says women do not suffer much from this situation.  There seemed to be no Alpha Females in the 1950s.  If you read medical journals now, rates of heart attacks in women have been on the rise--quickly.  And the presence of Alpha Females is an accepted fact.  Similar situations exist in the treatment of diseases--like systemic lupus, a chronic illness whose recent flare-up would have cost me my life 30-40 years ago.  But now, with CT scans, ultrasounds, and fMRIs, doctors do not have to engage in invasive procedures just to see what's going on inside the body (not subject to X-rays). Reading a journal from 1980 on auto-immune disease would give a totally different view than the one present now. As Alireza makes clear, the rules of citation and the preparation of literature reviews are highly dependent on the discipline.

  • Macho Anani added an answer in Lye:
    Why does the best band gap for solar cells lie close to 1.5 eV?

    In several papers I have found that the optimized band gap for solar cells is close to 1.5 eV. This value corresponds to a wavelength of about 830 nm, in the infrared. Is it due to the fact that we mostly use silicon or silicon-like devices?

    Macho Anani · University of Sidi-Bel-Abbes

    Thank you for your answer; nevertheless, I can say that the solar spectrum peaks at a wavelength of about 530 nm, according to AM1.5. This corresponds to a band gap of 2.33 eV and a yellow colour, I think. It seems to be far from 1.5 eV. Could you explain a bit more?
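    For reference, both wavelengths quoted in this exchange follow from the photon energy-wavelength relation E = hc/\lambda (about 1240 eV·nm divided by the wavelength):

    \[
    E \approx \frac{1240\,\text{eV·nm}}{830\,\text{nm}} \approx 1.49\,\text{eV},
    \qquad
    E \approx \frac{1240\,\text{eV·nm}}{530\,\text{nm}} \approx 2.34\,\text{eV}.
    \]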

  • Hani Antoun added an answer in Organic Acids:
    What would be the ideal HPLC conditions for organic acid detection? Which mobile phase and column are suitable, given that organic acids are hydrophilic?

    Analytical method for organic acid detection using HPLC

  • Hongzhen Yang added an answer in Chronic Disease:
    What drugs or diet supplements could shift the TH1/TH2 balance to TH1 response?

    The shift of the TH1/TH2 balance to TH2 is a major cause of most chronic diseases, such as fibrosis and cancer. What could one take (drugs or diet supplements) that could help re-shift the TH1/TH2 balance to a TH1-type response?

    Hongzhen Yang · University of Southern California

    Maybe vitamin C and vitamin E, which have almost no side effects.

  • What will be the best method to measure school satisfaction towards pre-service teachers or practicum teachers?

    I am in the process of gathering information on how effective our Diploma TESL is to primary schools.

    If the schools are satisfied with our practicum teachers, does it mean our curriculum is effective?

    Hairunnida Mansor · Kolej Poly-Tech Mara

    Thank you everyone for the opinions and recommendations. In fact, we practise the following: 1. classroom observations (3 times), 2. feedback given right after the observations, 3. discussing matters with the school mentors.

    We have never actually interviewed the students; we'll definitely consider that now.

    Thank you everyone again and Happy New Year!!

  • Jenkins Macedo added an answer in Carbon Footprint:
    Could anyone provide details on the carbon footprint of staple food production in developed countries?

    Any papers or details concerning the life cycle CO2 emissions of rice, wheat and maize in developed countries will help me. Thanks!

    Jenkins Macedo · Clark University

    Hello,

    Check out available data provided freely by the UN Geodata portal at: http://www.theanalysisfactor.com/can-likert-scale-data-ever-be-continuous/

  • Jenkins Macedo added an answer in Likert Scale:
    How can I do data analysis of the Likert scale?

    I have prepared a questionnaire which contains a Likert scale. Respondents are in four different age groups and there are 20 questions asked using the Likert scale. Can I use one-way ANOVA?

    Jenkins Macedo · Clark University

    Here is some information I think could give you some insights:

    http://www.theanalysisfactor.com/can-likert-scale-data-ever-be-continuous/ 
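    If the 20 Likert items are summed into a total score per respondent and treated as approximately continuous (one of the views discussed at the link above), a one-way ANOVA across the four age groups can be run in MATLAB roughly as follows. This is only an illustrative sketch with made-up variable names and placeholder data, and a non-parametric alternative is included since Likert data are ordinal:

    % Hypothetical illustration: compare total Likert scores across four age groups.
    scores   = randi([20 100],1,40);                         % placeholder total scores (20 items, 1-5 scale)
    ageGroup = repmat({'18-25','26-35','36-45','46+'},1,10); % placeholder group labels
    pAnova   = anova1(scores,ageGroup);                      % one-way ANOVA
    pKW      = kruskalwallis(scores,ageGroup);               % non-parametric (Kruskal-Wallis) alternative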

  • Girish Bm added an answer in Wear:
    How is it possible that the wear rate of composites with higher hardness is increased?


    Girish Bm · East Point College of Engineering and Technology

    Yes. Wear depends on several factors like you mentioned.