I cannot find the well-known value of the VMS traffic-per-user parameter. Some sources say it is 2 milli-erlang, others say 0.2 erlang. Although it is an adjustable parameter, I would like to know the standard and empirical value.
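For context, the two figures quoted above correspond to very different amounts of busy-hour usage per subscriber:

$$0.002\,\mathrm{E}\times 3600\,\mathrm{s/h}=7.2\ \text{s per user per busy hour},\qquad 0.2\,\mathrm{E}\times 3600\,\mathrm{s/h}=720\ \text{s}\approx 12\ \text{min per user per busy hour},$$

i.e. the two commonly quoted values differ by a factor of 100.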
I want to predict 3 microstructural properties by training a neural network (backpropagation) with the cooling rate of an alloy solidifying from a liquid as the input and those 3 properties as the targets. I have 50 samples of the cooling rate and the corresponding 50 x 3 values of the properties.
I have attached the above-mentioned data; the first column is the input and the other three columns are the targets.
So far I have simply called nftool in MATLAB, which uses trainlm and lets me select the hidden-layer size as well as partition the data. I fed the input with the cooling-rate data as a [1 x 50] matrix, and likewise provided the targets as a [3 x 50] matrix representing the 50 values of my 3 properties.
The MSE I obtain is mostly in the range of 8-30. I tried hidden-layer sizes from 4 to 16, but that did not help much either.
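For reference, here is roughly the pipeline described above, expressed as a minimal scikit-learn sketch instead of nftool (purely illustrative; the file name and column layout are assumptions based on the description):

```python
# Illustrative sketch only (Python/scikit-learn instead of MATLAB nftool).
# The file "cooling_data.txt" and its column layout are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

data = np.loadtxt("cooling_data.txt")   # hypothetical file: 50 rows x 4 columns
X = data[:, :1]                         # column 1: cooling rate (input)
Y = data[:, 1:4]                        # columns 2-4: the 3 microstructural properties

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Scale both inputs and targets before training a small network on 50 samples.
x_scaler, y_scaler = StandardScaler().fit(X_train), StandardScaler().fit(Y_train)

net = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x_scaler.transform(X_train), y_scaler.transform(Y_train))

Y_pred = y_scaler.inverse_transform(net.predict(x_scaler.transform(X_test)))
print("test MSE:", mean_squared_error(Y_test, Y_pred))
```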
For simplicity, let me change the context of the question. I am trying to rank students based on the scores they obtained in two tests. Say test 1 is a math test measured in marks obtained out of 100, and test 2 is a 100 m sprint measured in the seconds taken to complete it. The z-score allows me to linearly transform them, and I may then add them up to obtain a cumulative score.
Now, let's say the math teacher decides one needs 75 marks to pass math, and the gym teacher says one should complete the 100 m in under 15 s to pass. Can I modify the z-score to (x - 75)/s.d. for test 1 and (x - 15)/s.d. for test 2 and still linearly add them up?
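In symbols, the comparison is between the standard z-score and a version centred on the pass threshold $t$ (75 marks for the math test, 15 s for the sprint) rather than on the sample mean:

$$z=\frac{x-\bar{x}}{s}\qquad\text{versus}\qquad z'=\frac{x-t}{s},$$

where $s$ is the standard deviation of the respective test (and for the sprint, where a lower time is better, the sign would presumably have to be flipped before summing).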
There are numerous social networks, and many organizations find value in collaborating as a means of problem solving. Informal conversations with people in the problem space lean on their social network for short answers, but I haven't found examples or cases of organizations that have used virtual tools to work out a problem together. What tools do they use to engage, and how is the information tracked, let alone synthesized? Is it still a function of online connection and offline work that then gets reposted? I'm interested in the process and its impact on the problem, as well as participants' experience of the conjoint activity in the virtual world.
A spar platform is a type of floating structure used in deep and ultra-deep water regions for oil and gas exploration. It is a cylindrical, deep-draft floating hull held in place by mooring lines anchored to the sea floor. Wave and current effects displace the spar platform along the x, y and z axes and rotate it about the same axes. In total, 6 types of response (3 displacements and 3 rotations) have been obtained from analysis of the spar platform with Finite Element Method (FEM) software. FEM is computationally very expensive and highly time-consuming; it normally takes 18-20 hours to obtain each response.
A total of 23 inputs, such as wave height, current speed, water depth, spar diameter and spar length, have been used for the FEM analysis of the spar platform.
In total, 6 types of response (3 displacements and 3 rotations) are obtained as the output of the FEM analysis, and more than 2000 values of each response can be obtained in 1000 s by FEM.
I want to train an Artificial Neural Network (ANN) in which the FEM analysis inputs are used as the ANN inputs and the FEM outputs are used as the targets. Once the ANN is trained, it will be used to predict the responses of the spar platform.
Is it possible to solve this type of problem with an ANN?
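To make the setup concrete, here is a minimal surrogate-model sketch in Python/scikit-learn (the file names, array names and network size are illustrative assumptions; the real data would come from the completed FEM runs):

```python
# Minimal sketch of an ANN surrogate for the FEM model (illustrative only).
# X holds the 23 FEM inputs per run, Y the 6 responses per run.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("fem_inputs.npy")     # shape (n_runs, 23): wave height, current speed, ...
Y = np.load("fem_responses.npy")  # shape (n_runs, 6): 3 displacements + 3 rotations

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# One network mapping all 23 inputs to the 6 responses at once;
# training a separate network per response is an equally valid design choice.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                 max_iter=10000, random_state=0),
)
surrogate.fit(X_tr, Y_tr)
print("R^2 on held-out FEM runs:", surrogate.score(X_te, Y_te))
```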
We tried to implement an Extreme Learning Machine (artificial neural network) algorithm in C# within the .NET framework. To avoid any confusion: I am referring to a particular neural network architecture, similar to a multilayer perceptron, in which the connection weights from the input to the hidden layer are randomly assigned at initialisation and only the weights to the output layer are trained.
We ran into a problem: during the computation of a large dataset of many input vectors (i.e. training examples, in our case a very long electricity load time series), the implementation throws an exception stating that one of the arrays has exceeded the maximum number of elements a .NET array can hold. This happens during the Singular Value Decomposition used for the Moore-Penrose pseudoinverse.
Is there another computationally feasible way to calculate the pseudoinverse without using SVD (and without producing matrices whose size is the square of the input sample count)? Or is there some way to split the H matrix of the ELM algorithm, calculate the pseudoinverse for these smaller parts, and then reassemble them before computing the output weights? Or how do you tackle solving this for long time series?
Your help would be much appreciated!
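For illustration, one possibility along the lines of the "split the H matrix" idea above is to accumulate the L x L normal equations chunk by chunk, so that no N x N matrix (N = number of samples, L = number of hidden neurons) is ever formed. A minimal sketch in NumPy rather than C#; the function name and ridge parameter are illustrative assumptions:

```python
# Compute ELM output weights without an SVD of the full H matrix:
# accumulate H'H (L x L) and H'T over chunks of training rows, then
# solve the small regularised system (H'H + lam*I) beta = H'T.
import numpy as np

def elm_output_weights(chunks, lam=1e-6):
    """chunks yields (H_chunk, T_chunk) pairs: hidden-layer activations and
    2-D targets for a block of training rows.  Returns beta (L x n_outputs)."""
    HtH, HtT = None, None
    for H_c, T_c in chunks:
        if HtH is None:
            L, m = H_c.shape[1], T_c.shape[1]
            HtH, HtT = np.zeros((L, L)), np.zeros((L, m))
        HtH += H_c.T @ H_c   # L x L, independent of the chunk length
        HtT += H_c.T @ T_c   # L x m
    # Small ridge term keeps the system well conditioned (regularised ELM).
    return np.linalg.solve(HtH + lam * np.eye(HtH.shape[0]), HtT)
```

Each H_c block would be built on the fly from a window of input vectors using the fixed random input weights, so memory stays bounded by the chunk size and L rather than by the full length of the time series.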
Just as we have an acceptable BER as a reference value for optical systems, are there reference values for crosstalk, insertion loss and propagation delay?
Supply chains are now often described as supply networks (or, in many cases, as value chains). I am looking for specific literature that explains the transition from traditional supply chains to supply networks, then to value chains, and eventually to value networks.
I am currently using backpropagation to train an ANN model that simulates a swinging pendulum. The input of the system becomes saturated as it reaches a maximum limit value. What method should I use to train the ANN model so that the output does not also become saturated?
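For illustration, one common way to keep the network output away from saturation is to rescale inputs and targets well inside the activation range and use a linear output layer. A minimal sketch in Python/scikit-learn (the data files and network size are hypothetical assumptions):

```python
# Scale inputs and targets into (-0.8, 0.8) and keep an identity (linear) output
# layer -- the default for MLPRegressor -- so the network output cannot saturate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

X = np.load("pendulum_inputs.npy")   # e.g. angle, angular velocity, drive signal
Y = np.load("pendulum_outputs.npy")  # e.g. next-step state to be predicted

x_scaler = MinMaxScaler(feature_range=(-0.8, 0.8)).fit(X)
y_scaler = MinMaxScaler(feature_range=(-0.8, 0.8)).fit(Y)

net = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(x_scaler.transform(X), y_scaler.transform(Y))

# Predictions are mapped back to the original units afterwards.
Y_pred = y_scaler.inverse_transform(net.predict(x_scaler.transform(X)))
```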