Science topic
Matrices - Science topic
Explore the latest questions and answers in Matrices, and find Matrices experts.
Questions related to Matrices
The answer in Google search is,
If the determinant of a matrix is zero, then it has no inverse; hence the matrix is said to be singular. Only non-singular matrices have inverses.
Assume, contrary to Google's answer, that a singular matrix can have an inverse which is another singular matrix. For example,
-1 2 0 2 2 0 -4 0
2 -1 2 0 0 2 0 -4
0 2 -1 2 -4 0 2 0
2 0 2 -1 0 -4 0 2
2 0 -4 0 -1 2 0 2
0 2 0 -4 2 -1 2 0
-4 0 2 0 0 2 -1 2
0 -4 0 2 2 0 2 -1
And,
11/105 22/105 8/105 22/105 22/105 8/105 4/105 8/105
22/105 11/105 22/105 8/105 8/105 22/105 8/105 4/105
8/105 22/105 11/105 22/105 4/105 8/105 22/105 8/105
22/105 8/105 22/105 11/105 8/105 4/105 8/105 22/105
22/105 8/105 4/105 8/105 11/105 22/105 8/105 22/105
8/105 22/105 8/105 4/105 22/105 11/105 22/105 8/105
4/105 8/105 22/105 8/105 8/105 22/105 11/105 22/105
8/105 4/105 8/105 22/105 22/105 8/105 22/105 11/105
So what?
Dear all
Actually, I do not remember the name of the product of two matrices obtained by multiplying the first row of A by the first row of B, and so on.
With my regards.
According to the ICH Q2(R1) guidelines, the acceptance criteria for precision and accuracy are critical in evaluating the performance of analytical methods. These criteria differ for analytical methods (used primarily for chemical substances) and bioanalytical methods (applied in biological matrices), with analytical methods generally requiring stricter thresholds than bioanalytical methods. Is it correct that the relative standard deviation (RSD) should generally be ≤2% for analytical methods, whereas for bioanalytical methods it should be within ≤15% (or ≤20% at the lower limit of quantitation, LLOQ)? And for accuracy, is acceptable recovery generally within 98-102%?
Let \( A \) and \( B \) be two square matrices. It is well-known that the equation \( AB = I \) is equivalent to \( BA = I \). This equivalence holds even for matrices whose entries lie in a commutative ring. However, I am curious if there is a counterexample to this claim in a non-commutative ring, whether straightforward or complex.
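For context, a minimal sketch of the usual argument in the square, commutative case (this only restates the known direction, it does not address the non-commutative question):
\[
AB = I \;\Rightarrow\; \det(A)\det(B) = 1 \;\Rightarrow\; \det(A) \text{ is a unit} \;\Rightarrow\; A^{-1} = \det(A)^{-1}\,\operatorname{adj}(A) \text{ exists},
\]
\[
B = (A^{-1}A)B = A^{-1}(AB) = A^{-1} \;\Rightarrow\; BA = A^{-1}A = I.
\]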
Thank you!
I've created spatial weights matrices using GeoDa, but when I imported this .gal file into Stata with the spmat command, the error I got is: "Error in row 1 of spatial-weighting matrix." How do I deal with this problem?
Dear all,
I’m performing an untargeted analysis of carotenoids in different matrices by means of HPLC-DAD, but I do not have all the cis isomer standards for comparing my UV-visible spectra, so I wonder if there is any available library that I could use with my ChemStation software.
Thank you very much in advance.
A classic example of a semisimple group is the special linear group SL(2, ℝ), which consists of all 2x2 matrices with real entries and determinant 1:
SL(2, ℝ) = { A ∈ M(2, ℝ) | det(A) = 1 }
where M(2, ℝ) is the set of all 2x2 matrices with real entries.
This group is semisimple because it has no non-trivial connected normal abelian subgroups. In other words, there are no non-trivial connected subgroups that are simultaneously normal (invariant under conjugation) and abelian (commutative).
Another example is the special unitary group SU(3), which consists of all 3x3 matrices with complex entries and determinant 1 that satisfy the unitarity condition (i.e., the inverse of the matrix equals its conjugate transpose):
SU(3) = { U ∈ M(3, ℂ) | U^(-1) = U^†, det(U) = 1 }
where M(3, ℂ) is the set of all 3x3 matrices with complex entries.
This group is semisimple because it has no non-trivial connected normal abelian subgroups, and it is important in particle physics because it describes the symmetries of quantum chromodynamics (QCD).
Other examples of semisimple groups include:
- SL(n, ℝ) for n ≥ 2
- SU(n) for n ≥ 2
- SO(n) for n ≥ 3 (special orthogonal group)
- Sp(n) for n ≥ 1 (symplectic group)
These groups are fundamental in representation theory, particle physics, and algebraic geometry.
I am solving a large system of nonlinear equations. The Jacobian for this system is a block tridiagonal matrix. When the system is solved using Newton's method, the equation residuals may keep oscillating around low values. In this case I have found that a rank-one correction to the Jacobian, i.e. Broyden's method, converges more quickly. The problem is that the traditional Broyden update destroys the Jacobian's sparsity pattern. Is there a way to update the Jacobian while maintaining the (block) tridiagonal structure?
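For what it's worth, here is a minimal MATLAB sketch of the standard Broyden rank-one update, which makes the fill-in problem visible, followed by a Schubert-type sparse variant that restricts each row's correction to that row's existing nonzero pattern. The variable names (x_old, x_new, F_old, F_new, J) are assumptions about what is available in your solver loop, not a tested implementation:
% Quantities from the current step (assumed available):
%   x_old, x_new : previous and current iterates
%   F_old, F_new : residual vectors at those iterates
%   J            : current (block-tridiagonal) Jacobian approximation
s = x_new - x_old;
y = F_new - F_old;
% (a) Standard Broyden update: the outer product (y - J*s)*s' is generally
%     dense, which is what destroys the block-tridiagonal pattern.
J_dense = J + ((y - J*s) * s') / (s' * s);
% (b) Schubert-type sparse update: correct each row using only the
%     components of s that match that row's nonzero pattern, so the
%     block-tridiagonal structure is preserved.
J_sparse = J;
for i = 1:size(J, 1)
    cols = find(J(i, :));                        % existing pattern of row i
    si   = s(cols);                              % s restricted to that pattern
    if si' * si > eps
        J_sparse(i, cols) = J(i, cols) + ((y(i) - J(i, :)*s) / (si'*si)) * si';
    end
end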
We tried both cryo and paraffin embedding before cutting, and in both we had issues with autofluorescence in our COL1 scaffolds. Is it possible to avoid counterstaining problems/autofluorescence during the preparation of the stainings (especially IHC with COL2, ACAN, DAPI), and do you use particular software/certain fluorescence markers to avoid this? Thank you for your help!
I simulate the properties of composite materials. I have drawn the structure of this material using VESTA software; however, these two materials have different structures.
There is a way to use rotation matrices to convert one structure into another, for example, converting a hexagonal cell to an orthorhombic cell as shown in the video below.
VESTA Software - 𝛂-CsPbI3 / MoS2 Monolayer Heterostructure (youtube.com)
How can I find the rotation matrices for different systems?
I am looking for resources in the Netherlands regarding cultural tourism, i.e. which cultural activities correlate and which lifestyles can be distinguished. Any resource is welcome: articles, factor analyses, correlation matrices, SPSS files, etc.
If Box's Test yields a significant p-value (p < .001), indicating unequal covariance matrices of X (across 3 variables) between the 2 groups, it raises concerns about the assumption of homogeneity of covariance matrices (the ratio between the 2 groups is 1.21).
In such cases, should we still rely on the results of the multivariate test, or should we consider applying Mauchly's test of sphericity?
Hello! I am trying to characterize communities of eukaryotes living in biofilms using V4 region 18s amplicon sequencing data. Working with this type of data is new to me, and very interesting!
The SIMPER analysis from vegan package in R intrigues me, as it would be amazing to know if certain organisms are found in one of my sample groups and not the others, or if any are found in only one location, for example. Because of eukaryotic gene duplication, I am considering ASV's, presence/absence rather than reads (though sometimes also relative abundance in certain situations), and doing my diversity measures with a binary Jaccard index. I know that SIMPER is based on the more statistically robust Bray-Curtis index, which I can't use with the type of 18s amplicon data I have. However, I have also read that Jaccard and Bray-Curtis are equivalent when only presence/absence is concerned. My question, ultimately, is whether it is possible to use SIMPER with binary distance matrices, and if it would be possible to use SIMPER with ASV data.
Thank you so much for your time!
I am fighting my way through Axelrod and Hamilton (1981) on the Prisoner's Dilemma.
This is the payoff matrix they present for the PD, but they only present the payoffs for player A. Normally, these matrices present the payoffs for both A and B. How do I modify this to present both? I'd like to really understand the math later in the paper.
![](profile/Simon-Kiss/post/How_do_I_understand_the_payoff_matrix_in_Axelrod_and_Hamilton/attachment/65ccd4a71d0f563db303894f/AS%3A11431281223777988%401707922599382/image/httpspublic.websites.umich.edu%7EaxeresearchAxelrod+and+Hamilton+EC+1981.pdf.png)
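For reference, with the payoff values used in that paper (T = 5, R = 3, P = 1, S = 0, so T > R > P > S), the usual two-player presentation simply writes the pair (A's payoff, B's payoff) in each cell; because the game is symmetric, B's matrix is the transpose of A's:
                 B cooperates        B defects
A cooperates     (R, R) = (3, 3)     (S, T) = (0, 5)
A defects        (T, S) = (5, 0)     (P, P) = (1, 1)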
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue with precision, recall, and F1-score all showing 100%, while the accuracy is reported as 99.35% ~ 99.9%. I have tested the model on various videos and images (related to the experiment data, including separate data), and it seems to be performing well. Nevertheless, I am confused about whether this level of accuracy is acceptable. In my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
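For what it's worth, in the plain binary case the definitions force exactly the implication described above:
\[
\text{precision} = \frac{TP}{TP+FP} = 1 \Rightarrow FP = 0,
\qquad
\text{recall} = \frac{TP}{TP+FN} = 1 \Rightarrow FN = 0,
\]
\[
\text{accuracy} = \frac{TP+TN}{TP+TN+FP+FN} = 1,
\]
so a reported 100% precision/recall next to 99.35% accuracy usually points to rounding of the displayed metrics, to a per-class averaging scheme, or to the numbers being computed on different data subsets, rather than to a genuine property of the model.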
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Noted: An ablation study was conducted based on different combinations. In the model where I am confused, without these additional combinations, accuracy, precision, recall, and F1 score are very low. Also, the loss and validation accuracy are very high on other's combinations.
Thank you.
to know the volume fraction
Hello,
I am doing reduced-order modelling for nonlinear analysis and I have to use POD and Galerkin projection to reduce the size of my matrices. The problem is that, since it is a nonlinear analysis, the matrices have to be updated for each increment, and for commercial FEA software I do not have access to the stiffness matrices at each time step.
Does anyone have any suggestions (using Abaqus subroutines, for example)?
Thank you in advance.
I am trying to do a single-point polarizability calculation with TDDFT with input :
#p polar td=(nstates=8) M062X/6-31+g(d,p) geom=connectivity
I am getting an error like:
The selected state is a singlet.
CISGrX: IGrad=3 NXY=2 DFT=T
CISAX will form 3 AO SS matrices at one time.
Can anyone suggest any solution?
I have attached the output below.
I see a lot of mathematics but few interpretations in time (how it evolves, step by step with its maths).
I am currently doing geometric morphometric analysis and I need to know whether I can use the covariance matrices generated by MorphoJ to do modularity and integration analyses in geomorph.
Suppose we have the FRF data (a vector of frequencies and a vector of complex responses): how does one build up a state-space model with a predefined structure? I know the MATLAB function "ssest" can do the job in principle.
I did it for 2-by-2 matrices by fixing some variables, and it seems to work, though I got the feeling that the initial guess of the free variables is tricky to set. The main concern is that for large-dimension matrices it is cumbersome to perform the "Structured Estimation". Can anyone provide some code for achieving this, or suggest another way to get the same result?
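In case it helps, below is a rough MATLAB sketch of the workflow I have in mind with the System Identification Toolbox: fit a structured idss model to FRF data by marking entries as fixed or free. The matrices, masks, and state dimension are illustrative assumptions, and the constructor/property names are from memory, so please check them against your toolbox version:
% freq : vector of frequencies (rad/s); resp : ny-by-nu-by-numel(freq) complex FRF values
data = idfrd(resp, freq, 0);                      % frequency-response data object
% Initial guess with the desired structure (example: 4 states, continuous time)
A0 = [0 1 0 0; -1 -0.1 0 0; 0 0 0 1; 0 0 -4 -0.2];
B0 = [0; 1; 0; 1];  C0 = [1 0 1 0];  D0 = 0;
init_sys = idss(A0, B0, C0, D0, zeros(4,1), zeros(4,1), 0);   % idss(A,B,C,D,K,x0,Ts)
% Logical masks: true = entry is estimated, false = entry stays fixed
init_sys.Structure.A.Free = [false true false false; ...
                             true  true false false; ...
                             false false false true; ...
                             false false true  true];
init_sys.Structure.B.Free = true(4, 1);
init_sys.Structure.C.Free = false(1, 4);
init_sys.Structure.D.Free = false;
opt = ssestOptions('Display', 'on');
sys = ssest(data, init_sys, opt);                 % structured estimation against the FRF data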
Cheers
I have a large sparse matrix A which is column rank-deficient. The typical size of A is over 100000x100000. In my computation, I need the matrix W whose columns span the null space of A, but I do not know how to compute all the columns of W quickly.
If A is small, I know there are several numerical methods based on matrix factorization, such as LU, QR, and SVD, but for large-scale matrices I cannot find an efficient iterative method to do this.
Could you please give me some help?
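In case a sketch helps, one iterative option in MATLAB is to request the smallest singular triplets and keep the right singular vectors whose singular values are numerically zero. This assumes the null-space dimension is modest and can be bounded above by some k, which may not hold in your application; the tolerance below is an illustrative choice:
% A : large sparse matrix (assumed already in the workspace)
k   = 20;                                   % upper bound on expected null-space dimension
tol = 1e-10 * normest(A);                   % rough numerical-zero threshold
[~, S, V] = svds(A, k, 'smallest');         % k smallest singular triplets (iterative)
sv = diag(S);
W  = V(:, sv < tol);                        % columns spanning (an estimate of) null(A)
% Sanity check: the column residuals of A*W should be tiny
fprintf('max column residual: %g\n', max(vecnorm(A * W)));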
I need to understand how monitoring can affect the pass rate of matric learners.
Hello everyone,
I need to extract the mode shape vectors of a cantilever plate to compute a correlation between some of them analytically.
I used Workbench to simulate the problem and added the following APDL commands to extract both the mass and stiffness matrices in MMF format:
/AUX2
COMBINE, FULL
/POST1
*SMAT, MatKS, D, IMPORT, FULL, file.full, STIFF
*SMAT, MatMS, D, IMPORT, FULL, file.full, MASS
*Export, MatKS, MMF, matK_MMF.txt
*Export, MatMS, MMF, matM_MMF.txt
Then I did a modal analysis, and I got the two files of the mass and stiffness matrices in MMF format.
I used the following MATLAB code to solve the eigenproblem in order to extract the mode shapes.
clc;
clear all;
format shortG;
format loose;
load matK_MMF.txt;
K = zeros(462,462);
for r = 2:5515
K(matK_MMF(r,1), matK_MMF(r,2)) = matK_MMF(r,3);
end
disp (K)
load matM_MMF.txt;
M = zeros(462,462);
for r = 2:1999
M(matM_MMF(r,1), matM_MMF(r,2)) = matM_MMF(r,3);
end
disp(M)
cheq=linsolve(M,K)
[Mode,Lamda]=eig(cheq);
lamda=diag(sort(diag(Lamda),'ascend')); % make diagonal matrix out of sorted diagonal values of input 'Lamda'
[c, ind]=sort(diag(Lamda),'ascend'); % store the indices of which columns the sorted eigenvalues come from 'lamda'
omegarad=sqrt(lamda);
omegaHz=omegarad/pi/2
mode=Mode(:,ind)
This code ran without any syntax errors.
I checked the first natural frequency, omegaHz(1,1), which is supposed to be 208.43 Hz as shown in the Workbench analysis, but unfortunately it came out as 64023 Hz.
Would you please show me what is wrong with that problem?
Or is there any possible way to extract the mode shape vectors or modal matrix directly from ANSYS ?
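For reference, once K and M are assembled (and made fully symmetric, since the MMF export of a symmetric matrix typically lists only one triangle of the entries), the frequencies I would expect come from the generalized eigenvalue problem rather than from M\K followed by a standard eig. A minimal sketch, under those assumptions:
% Restore full symmetry in case only one triangle was read from the MMF files
K = K + K' - diag(diag(K));
M = M + M' - diag(diag(M));
% Generalized eigenvalue problem K*phi = lambda*M*phi
[Phi, Lambda]  = eig(K, M);
[lambda, ind]  = sort(diag(Lambda), 'ascend');
Phi            = Phi(:, ind);              % mode shapes, sorted by frequency
freqHz         = sqrt(lambda) / (2*pi);    % natural frequencies in Hz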
Regards
Hello,
Please refer to the figure. I have prepared my standard solutions in the range of 10-200 ppb, in which the concentration of the internal standard was set to 50 ppb, whose response can be seen in the figure. But in the case of the samples, the same amount and concentration of internal standard gives me double the response compared to the standard. I am looking for the root cause of the problem; could somebody kindly share their experience?
Note:
- The same problem was observed on two systems, both the Waters MS system and the Agilent MS system.
- There is a negligible chance that a higher amount of internal standard was spiked.
- Standard and sample solution matrices are same.
I have used a 5-point Likert scale for my research on 'catalysing spiritual transformation'. The scale has 20 items divided equally across 4 domains (factors). It is a dual response scale: the first response rates the goal while the second response rates the accomplishment. It is a proven scale which has been validated for content and construct across continents. However, since I am using a translation for the first time, the author of the instrument, who approved my translation, suggested that it is proper to do a fresh 'construct' validation for the translation. Accordingly I prepared to do CFA and found that my sample size after joining pretest and posttest data was only 174. I would like to join the two sets of responses of each questionnaire and double the sample size to 348, considering the fact that both sets of responses have identical structures though with different foci. I also noticed from the correlation matrices for the two sets of responses and the combination that the correlation coefficients are significantly better for the combination and are all positive and > 0.5. Will it be scientifically sound to join the data of the two sets of responses and double my sample size as above?
Look forward to your valuable thoughts.
Thankfully
Lawrence F Vincent
Since multiplication is defined for matrices and division (by an invertible matrix) is also defined, how does one simplify and expand rational expressions in matrices?
An adjacency matrix represents the functional connectivity patterns of the human brain. In my opinion, thresholding of the correlation matrix is one of the most important and ambiguous steps in obtaining adjacency matrices. The reason for my opinion is that thresholding is user dependent: any value above the 5% significance level can be chosen (e.g., from 0.051 to 0.999), because above this level there is no significant difference between the two signals, or there is coherence between them. The user is free to select the connectivity strength on their own.
I would like to know your opinion: is this a fair way to move from correlation matrices to adjacency matrices? If yes, how can the results of two researchers be compared when they use different threshold values? If not, what would be a reasonable threshold value for correlation matrices?
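Just to fix notation, the step I understand you are describing is something like the following, where the threshold value is the user's choice (which is exactly the ambiguity in question):
% C : N-by-N correlation matrix
thr = 0.5;                          % user-chosen threshold (the ambiguous part)
Adj = abs(C) >= thr;                % binary adjacency matrix
Adj(1:size(C,1)+1:end) = 0;         % remove self-connections on the diagonal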
Thanks in advance!
In topology optimization with binary matrices, where 1 corresponds to a density of 1 and 0 corresponds to a density of 0, how can you ensure in MATLAB that the number of connected components of the 0 entries is 1?
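A minimal sketch of one way to check this condition in MATLAB, assuming the Image Processing Toolbox is available and X is the binary density matrix; note this only checks the condition, while enforcing it inside the optimization loop (e.g., as a constraint or penalty) is a separate question:
X  = round(rand(40, 60));              % example binary density matrix (placeholder)
cc = bwconncomp(~X, 4);                % connected components of the 0 entries (4-connectivity)
isVoidConnected = (cc.NumObjects == 1);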
I have looked at database management and applications, data-sets and their use in different contexts. I have looked at digital technology in general, and I have noticed that there seems to be a single split:
-binary computers, performing number crunching (basically), and behind this you find Machine Learning, ML, DL, RL, etc., at the root of the current AI
-quantum computing, still with numbers as key objects, with added probability distributions, randomisation, etc. This deviates from deterministic binary computing but only to a certain extent.
Then, WHAT ABOUT computing "DIRECTLY ON SETS", instead of "speaking of sets" and actually only "extracting vectors of numbers from them"? We can program and operate with non-numerical objects; old languages like LISP and LELISP, where the basic objects are lists of characters of any length and shape, did just that decades ago.
So, to every desktop user of spreadsheets (the degree-zero of data-set analytics) I am saying: you work with matrices, the mathematical name for tables of numbers, you know about data-sets, and about analytics. Why would not YOU put the two together? Sets are flexible. Sets are sometimes incorrectly named "bags" because it sounds fashionable (but bags have holes, they may be of plastic and not reusable; sets are more sustainable, math is clean - joking). It's cool to speak of "bags of words"; I don't do that. Sets, why? Sets handle heterogeneity, and they can be formed with anything you need them to contain, in the same way a vehicle can carry people, dogs, potatoes, water, diamonds, paper, sand, computers. Matrices? Matrices nicely "vector-multiply", and are efficient in any area of work, from engineering to accounting to any science or humanities domain. They can be simplified in many cases: along some geometric directions (eigenvectors, eigenvalues) operations get simple, and sometimes a change of reference vectors (a geometric transformation, a simple change of coordinates) gives a diagonal matrix with zeros everywhere except on the diagonal.
HOW DO WE DO THAT IN PRACTICE? Compute on SETS, NOT ON NUMBERS? One can imagine the huge efficiencies gained in some domains, potentially (new: yet to be explored, maybe BY YOU? IN YOUR AREA). Here is the math, simple: it combines knowledge of 11-year-olds (basic set theory) and knowledge of 15-year-olds (basic matrix theory). SEE FOR YOURSELF, and please POST YOUR VIEW on where and how to apply...
The Pauli group is a representation of the gamma group (higher-dimensional matrices) in three-dimensional Euclidean space.
![](profile/Sergio-Perez-Felipe/post/Are_hypercubes_a_derivation_from_Pauli_groups/attachment/649c9f8e97e2867d50985e62/AS%3A11431281171000248%401687986061915/image/paulia.png)
Could anyone provide me with MATLAB code for a fixed-fixed beam that calculates the mass and stiffness matrices, natural frequencies, and mode shapes?
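Not a polished code, but here is a minimal sketch of an Euler-Bernoulli beam with both ends fixed, using the standard element stiffness and consistent mass matrices; the property values at the top are placeholders to be replaced with your own:
% --- Beam and discretization data (placeholder values) ------------------
L   = 1.0;        % total length [m]
E   = 210e9;      % Young's modulus [Pa]
rho = 7800;       % density [kg/m^3]
b   = 0.02; h = 0.02;
A   = b*h;        % cross-section area [m^2]
I   = b*h^3/12;   % second moment of area [m^4]
ne  = 20;         % number of elements
le  = L/ne;       % element length
ndof = 2*(ne+1);  % 2 DOFs (deflection, rotation) per node
% --- Element matrices (2-node Euler-Bernoulli beam) ----------------------
ke = E*I/le^3 * [ 12    6*le   -12    6*le
                   6*le 4*le^2  -6*le 2*le^2
                 -12   -6*le    12   -6*le
                   6*le 2*le^2  -6*le 4*le^2];
me = rho*A*le/420 * [156    22*le    54   -13*le
                      22*le  4*le^2  13*le -3*le^2
                      54     13*le  156   -22*le
                     -13*le -3*le^2 -22*le  4*le^2];
% --- Assembly -------------------------------------------------------------
K = zeros(ndof); M = zeros(ndof);
for e = 1:ne
    idx = 2*e-1 : 2*e+2;                 % global DOFs of element e
    K(idx, idx) = K(idx, idx) + ke;
    M(idx, idx) = M(idx, idx) + me;
end
% --- Fixed-fixed boundary conditions: remove the DOFs of both end nodes ---
free = 3:ndof-2;
Kff = K(free, free); Mff = M(free, free);
% --- Natural frequencies and mode shapes ----------------------------------
[Phi, Lam] = eig(Kff, Mff);
[lam, ord] = sort(diag(Lam));
Phi        = Phi(:, ord);                % mode shapes of the free DOFs
fHz        = sqrt(lam)/(2*pi);           % natural frequencies [Hz]
disp(fHz(1:5))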
Hello everyone!
I extracted betweenness centrality values from more than 250 data sets using number-of-streamlines-weighted connectivity matrices. In order to calculate the betweenness centrality, I converted the connectivity matrices into connection-length matrices, as suggested on the Brain Connectivity Toolbox website. However, my betweenness centrality values vary between 0 and 1960. When I checked the related articles, the indices are between 0 and 1. Since I am planning to submit a paper including betweenness centrality, is it okay to use the betweenness centrality as I acquired it (between 0 and 1960), or do I need to have values between 0 and 1?
If I need to have values between 0 and 1, what would you suggest I do to scale my values to that range?
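If I recall the convention in the Brain Connectivity Toolbox documentation correctly, the raw values can be rescaled to [0, 1] simply by dividing by the number of possible source-target pairs that exclude the node itself:
BC_norm = BC ./ ((N-1)*(N-2));    % N = number of nodes in the connectivity matrix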
Thank you for your help!
Hello! I need to extract the mass and stiffness matrices for a model with the following problem size:
P R O B L E M S I Z E
NUMBER OF ELEMENTS IS 249191
153326 linear line elements of type T3D2
84141 linear hexahedral elements of type C3D8R
102 linear line elements of type B31
11613 linear quadrilateral elements of type S4R
NUMBER OF NODES IS 267444
NUMBER OF NODES DEFINED BY THE USER 267240
NUMBER OF INTERNAL NODES GENERATED BY THE PROGRAM 204
TOTAL NUMBER OF VARIABLES IN THE MODEL 837207 (DEGREES OF FREEDOM PLUS MAX NO. OF ANY LAGRANGE MULTIPLIER VARIABLES. INCLUDE *PRINT,SOLVE=YES TO GET THE ACTUAL NUMBER.)
The properties are input as mass density, and I believe they will be used to generate a consistent mass matrix.
Here's the input file code I used:
** Global Mass and Stiffness matrix
*Step, name=Export matrix
*MATRIX GENERATE, STIFFNESS, MASS, VISCOUS DAMPING, STRUCTURAL DAMPING
*MATRIX OUTPUT, STIFFNESS, MASS, VISCOUS DAMPING, STRUCTURAL DAMPING, FORMAT=coordinate
I have the following questions regarding my problem:
- Dimensions of the M and K matrices: As indicated above, the number of degrees of freedom is 837,207, but the matrix dimensions are reduced to 354,231 x 354,231. Shouldn't the number of degrees of freedom match the matrix dimensions?
- Node numbering: The model consists of 8 parts, and the node numbering starts from 1 for each part. However, when I extract the matrices using the FORMAT=matrix input option, a different node numbering system (1 to 241,751) is applied, making it difficult to match the entries to the actual model locations. How can I find the correspondence between the entries in the M and K matrices and the nodes in the model?
- In the coordinate format, I get 5,620,189 rows of data, while in the matrix input format, I get 2,987,210 rows of data. Shouldn't the number of data entries be the same in both cases?
- When using the matrix input format, the entries are extracted in the following form: 241751,3, 241751,3, 9.038200770026704e+00. Can I interpret the DOF labels as follows? 1: X (translational), 2: Y (translational), 3: Z (translational), 4: RX (rotational), 5: RY (rotational), 6: RZ (rotational)
- The modes obtained from modal analysis in ABAQUS CAE GUI and the eigenanalysis results obtained from extracting the M and K matrices and performing the Lanczos method in MATLAB do not match. Is there any way to reconcile them?
While saying hello to the professors and those interested in mathematical sciences: I wanted to know, from the perspective of the history of mathematics, what factor or what problem-solving process led to the definition of the determinant of a matrix? Were determinants created only to determine the independence or dependence of vectors, or, by knowing the determinant of a matrix, can one answer other questions about the matrix in question? If the answer is yes, what other things can be guessed or obtained from the determinant value of a matrix? Thanks
I'm working with clinical trial data and would like to see whether any cognitive functions (measured by several neuropsychological tasks) changed as a result of the treatment. I'm hoping to derive some composite scores that would represent greater or lesser improvement on any identified principal components.
I have two treatment groups (control and intervention), and each variable is measured twice (once before and once after treatment). Just a regular PCA would involve 4 different covariance matrices (one for each combination of treatment condition and timepoint), but I need a way to pool those covariance matrices. Is this possible/is there freely available R code that would allow me to do this?
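One common option (a sketch, assuming that pooling by within-group degrees of freedom is acceptable for your design) is the weighted average
\[
S_{\text{pooled}} = \frac{\sum_{g} (n_g - 1)\, S_g}{\sum_{g} (n_g - 1)},
\]
where \( S_g \) and \( n_g \) are the covariance matrix and sample size of group-by-timepoint cell \( g \), and the PCA is then run on \( S_{\text{pooled}} \). Note that this treats the two timepoints of the same subjects as if they were independent samples, which may or may not be acceptable for your purpose.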
Which is the best reference for studying the basic notions of
"block operator matrices"
I am trying to start a new project but I am not familiar with machine learning algorithms. I want to build a supervised predictive model that is able to classify samples into clusters. These clusters are defined by a gene signature. I basically have gene expression matrices.
I would like to know which type of machine learning is the best in performance and prediction for this type of data and query. I've been looking at Deep Learning but I still can't find which one would fit better.
I want to use R to work with simple complex matrices, such as the Pauli matrices and Dirac gamma matrices. The problem I have is that the way R prints complex numbers by default makes such matrices hard to read. For example, the complex number i is usually printed 0 + 1 i. When a matrix contains both real and complex numbers, the real numbers are printed, for example, 1 + 0 i. Consequently, the second Pauli matrix would be printed
0 + 0 i 0 - 1 i
0 + 1 i 0 + 0 i
What can I do to get it to print out instead as
0 - i
i 0
please? This is clearly much easier to understand at a quick glance.
What are the most important properties of pairwise comparison matrices (pc matrices)?
Also if you can provide related applications.
Hello,
I would like suggestions for the differentiation medium and the number of differentiation days, and, in addition, on the possible use of matrices and coatings.
Thank you.
My team and I are in the middle of a prioritization problem that involves 350 alternatives (see figure for context about alternatives) or so. I have used the AHP to support the decision-making process in the past with only 7 or 8 alternatives and it has worked perfectly.
I would like to know if the AHP has a limit on the number of alternatives, because consistency may become a problem as Dr. Saaty's method provides Random consistency Indexes for matrix sizes of up to 10.
I was thinking of distributing the 350 alternatives into groups of 10, according to an attribute or classification criterion, to be able to use the RI chart proposed by Dr. Saaty.
If there are other more adequate multi-criteria analysis tools, or different approaches to calculate the RI for larger matrices, please let me know.
Greetings and thank you,
![](profile/Jose-De-La-Garza/post/Is_there_a_limit_on_the_number_of_alternatives_in_the_Analytic_Hierarchy_Process/attachment/5d965d603843b093838a6724/AS%3A810020698599424%401570135392133/image/Schematic-representation-of-the-AHP-model.png)
I am looking for a way to connect a classic 4-step transport model (macro) with a micro-level model. The purpose is to capture a behavioural response (change in travel behaviour) of people to some specific policy change (road charges…) and feed this information into another micro-level model (microsimulation model with individuals grouped into households). The difficulty is that the 4-step transport model is a macro model, where we have aggregate flows of people (# of trips), not individuals. The output is in form of OD matrices between different geographical zones (of a studied area) before and after reform. They show the number of trips between zones and the total matrix is subdivided into OD matrices for different travel modes and some combinations of socio-economic profile and trip purpose.
My question is what would be the best approach to extract information from aggregate OD matrices to feed into micro-level dataset? I wish to capture how modal choice will change (e.g. if we introduce road tax, how each individual in micro dataset will adapt, maybe he will choose public transport?…)
What would be your suggestions? Maybe someone already tried to do something similar? I couldn’t find anything. Your suggestions would be very appreciated!
Respected RG members
What are the pros and cons of modeling dynamics using pseudo-stochastic matrices as the transition matrix units?
For Ns = 1, where Ns is the number of streams per user, the beam steering vectors are calculated by finding the array response vectors corresponding to the largest effective channel gain, i.e, finding the path that maximizes abs(abs(Ar(:,r)'*H*At(:,t))) where Ar and At are functions of the receive and transmit antenna array responses, respectively.
My question is, how do I calculate the beam steering matrices for the case in which Ns > 1?
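Just to make the Ns = 1 recipe concrete, and as a hedged guess at the natural extension (this is an assumption on my part, not taken from a specific paper): one commonly selects the Ns strongest (receive, transmit) pairs of array-response vectors, e.g.
% Ar : Nr-by-Gr candidate receive steering vectors, At : Nt-by-Gt transmit ones
% H  : Nr-by-Nt channel matrix, Ns : number of streams
G = abs(Ar' * H * At);                   % effective gain of every (receive, transmit) pair
Wrf = []; Frf = [];
for s = 1:Ns
    [~, k] = max(G(:));                  % strongest remaining pair
    [r, t] = ind2sub(size(G), k);
    Wrf = [Wrf, Ar(:, r)];               % receive steering vector for stream s
    Frf = [Frf, At(:, t)];               % transmit steering vector for stream s
    G(r, :) = -inf;  G(:, t) = -inf;     % avoid reusing the same beams
end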
Dear colleagues, I would like to create a MATLAB script that allows the calculation of the (finite element) matrices while varying the number of elements of a rectangular mesh, but I don't know how to proceed. I need help.
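As a starting point, here is a minimal sketch that only generates the node coordinates and element connectivity of an nx-by-ny rectangular mesh of 4-node elements as a function of the number of elements; the element matrices and assembly would then loop over conn (all names and sizes below are placeholders to adapt):
Lx = 2; Ly = 1;                      % domain size (placeholders)
nx = 8; ny = 4;                      % number of elements in each direction (to be varied)
[X, Y] = meshgrid(linspace(0, Lx, nx+1), linspace(0, Ly, ny+1));
nodes  = [X(:), Y(:)];               % (nx+1)*(ny+1) node coordinates, column-wise numbering
conn   = zeros(nx*ny, 4);            % 4-node quad connectivity
e = 0;
for j = 1:ny
    for i = 1:nx
        e  = e + 1;
        n1 = (i-1)*(ny+1) + j;       % lower-left node of element (i, j)
        conn(e, :) = [n1, n1+(ny+1), n1+(ny+1)+1, n1+1];   % counter-clockwise
    end
end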
Hi,
I have exported the mass and stiffness matrices for a 2D euler-bernoulli beam element (B23 in ABAQUS). These elements have three degrees of freedom for each node (two translation, one rotation), therefore, for a single element I would've expected a 6x6 matrix for both matrices, however, I have an 8x8 matrix for both.
Can anyone tell me where these extra DOF are coming from?
I have attached the input file I used to extract the matrices as well as the mass and stiffness matrices. The beam material properties are:
L = 1m
b = 0.01 m
h = 0.01 m
E = 70e9 Pa
rho = 2700 kg/m^3
v = 0.3
![](profile/Kyle_Dubber/post/ABAQUS_mass_and_stiffness_matrices_incorrect_size/attachment/66ab71c26e715e59922e2ab3/AS%3A11431281115740902%401675093060054/image/stiffness+matrix.png)
K, M, and C of a shear building with distributed TMDs
I need the mass and stiffness matrices of a specific element of a structure in global coordinates.
I am busy setting up an LC-MS/MS method for quantifying various analytes in treated wastewater effluent. For method validation, I require a matrix blank but all of the matrices I have evaluated are not blank for the analytes (I have evaluated at least 30 matrices). Any suggestions on what to use as a matrix blank for method validation?
I would like to know the fundamental difference between eigenvalues and singular values when applied to the spectral analysis of a graph's adjacency and Laplacian matrices. As far as I know, the SVD can be applied to non-square matrices, but adjacency and Laplacian matrices are square, and they are symmetric if the graph is undirected.
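One standard relation that may help frame the comparison, stated for the symmetric case only: for a real symmetric matrix the singular values are the absolute values of the eigenvalues,
\[
A = A^{\mathsf{T}} \;\Rightarrow\; \sigma_i(A) = |\lambda_i(A)|,
\]
so for an undirected graph the Laplacian (which is positive semidefinite) has identical eigenvalues and singular values, while the adjacency matrix differs only in the signs of negative eigenvalues; for directed graphs the two spectra are genuinely different objects.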
Hello,
I am working on an experiment analyzing rhizosphere and mycorrhizal fungal communities belonging to different tundra plant roots. Using two DNA extraction protocols on the same plant roots, I have generated both bacterial and ectomycorrhizal species matrices.
I want to compare the two matrices to see if the communities are correlated with each other, but am running into problems executing this. The bacterial matrix is obviously much larger than the ectomycorrhizal matrix, which is causing problems when trying to do a Mantel test in R.
I know that in theory Mantel tests do not require symmetrical matrices; however, the packages I have tried in R all do require symmetrical matrices (vegan, ape, ade4).
I was hoping somebody might have an idea how I can go about comparing these two matrices, either by modification so that the matrices are symmetrical, or perhaps another piece of code/ software package.
Thanks
I am looking for an automated method for the diagonalization of multidimensional matrices (Cubic 3D matrices, for example). Any suggestion would be much appreciated.
if the W matrix is a Square matrix and symmetric
A=[-2+3i -1+4i; -0.5+2i -0.9+4i];
B=[-1.5+3i; 0];
C=[-0.25+5i 0];
D=0;
How do I plot the step/impulse response of this system?
Hello,
I am working on a pre-analysis plan for an experiment that I want to conduct. The experiment is about productivity behaviour. Participants solve matrices, and the number of matrices solved is my primary outcome variable (i.e. productivity).
There are 2 groups:
1.) Treatment Group: Working under stimulus
2.) Control Group: Working without stimulus
In the beginning of the experiment, I conduct a baseline-phase to check whether potential differences in productivity may stem from differing baseline abilities.
My hypothesis states that productivity in both groups is the same.
What is the best way to investigate the hypothesis?
a) First check for differences in baseline ability and then conduct a nonparametric / parametric test?
b) Use a linear regression model, use a dummy on the treatment group and include baseline productivity as a control variable?
c) Is it even better to conduct both (e.g. Mann-Whitney U-Test and a subsequent linear regression) to arrive at more compelling results? Or would that approach even be counterproductive?
Kind regards and thanks for your help!
Dear community,
I'm trying to analyze a document based on multiple terms. I have 80 terms and 137 documents. I would like to reduce the number of terms and try to cluster different words into words that reflect the same concept. For example, the terms reduce cost, reduce expense, cut cost, technology, etc can be grouped under one concept which is cost minimization.
I have computed the U, d, V terms of the singular value decomposition (SVD) of the term-document matrix, and I have chosen to reduce the terms to 5 concepts instead of 80. So I have the following dimensions for these matrices: U (80x5), d (5x5), and V^(t) (5x137).
I would like to know: what is the next step?
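One common next step (a hedged sketch, not the only option, and assuming the term-document matrix X and the Statistics and Machine Learning Toolbox are available) is to place each term in the k-dimensional latent space and cluster the terms there:
% X : 80-by-137 term-document matrix (assumed already built)
k = 5;
[U, S, V] = svds(X, k);              % rank-k truncated SVD (latent semantic space)
termCoords = U * S;                  % each of the 80 terms as a point in k dimensions
nConcepts  = 10;                     % number of concept clusters (a choice to make)
idx = kmeans(termCoords, nConcepts, 'Replicates', 20);
% Terms sharing a cluster index in idx can then be reviewed and, if they make
% sense together, merged into one concept such as "cost minimization".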
Many thanks for your great help
Hi
Some authors use either correlation matrices or VIF to identify collinearity between variables, while others apply both to improve model performance and interpretability. Therefore, I would be happy to get statistical explanations from anyone about using these tools separately or together, and I would like to know whether other robust mechanisms for checking collinearity exist.
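For reference, the variance inflation factor for predictor \( j \) is built from the multiple (not pairwise) correlation of that predictor with all the others,
\[
\mathrm{VIF}_j = \frac{1}{1 - R_j^2},
\]
where \( R_j^2 \) comes from regressing predictor \( j \) on the remaining predictors; this is why VIF can flag collinearity that a pairwise correlation matrix misses (e.g., when one variable is nearly a linear combination of several others), while the correlation matrix remains useful for spotting the specific offending pairs.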
Thank you in advance!
I am doing sMRI analysis with CAT12 and FreeSurfer. Acquisition of the T1-weighted images: echo time/repetition time (TE/TR) = 2.5 ms/1900 ms, field of view = 270 × 270 mm, slice thickness = 1 mm, 176 slices, 256 × 246 voxel matrices. But 2 subjects have a different number of slices (160 slices) from the others. I want to know the reason for this difference, and should I exclude those subjects from my analysis?
A(A^T) = (A^T)A = I (identity matrix).
Then A always has real entries.
Is this true?
I’ve linearised about my current state estimate with my equations that relate my measured variables to the states. But I think I must be missing an equation or setting it up wrong because the dimensions of my matrices being concatenated are not consistent
Lower Matrix to represent the approximate final stages of payment and upper representing the early payments
I am specifically trying to analyze:
x_dot = a*x*(1-x)*(1-y)
y_dot = y*(1-y)*(c*(1-x)-b)
where {0<= x, y <=1}. Stability of fixed point at (1-b/c, 1)?
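If I read the system correctly, a quick check of the Jacobian at that fixed point suggests linearization alone will not settle stability:
\[
J(x,y) =
\begin{pmatrix}
a(1-2x)(1-y) & -a\,x(1-x) \\
-c\,y(1-y) & (1-2y)\bigl(c(1-x)-b\bigr)
\end{pmatrix},
\qquad
J\!\left(1-\tfrac{b}{c},\,1\right) =
\begin{pmatrix}
0 & -a\bigl(1-\tfrac{b}{c}\bigr)\tfrac{b}{c} \\
0 & 0
\end{pmatrix},
\]
so both eigenvalues are zero and the fixed point is non-hyperbolic. In fact, every point on the line y = 1 is an equilibrium of this system (both right-hand sides vanish there), which explains the degeneracy; a center-manifold or direct phase-plane argument seems to be needed rather than linear stability.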
I want to compare two theorems and see which one has the larger feasibility domain, like the attached picture.
For example, I have the following matrices:
A1=[-0.5 5; a -1];
A2=[-0.5 5; a+b -1];
they are a function of 'a' and 'b'
I want to plot according to 'a' and 'b' the points where an LMI is feasible
for example the following LMI
Ai'*P+P*Ai<0
then I want to plot the points where another LMI is feasible, for example:
Ai'*Pi+Pi*Ai<0
I have seen similar maps in many articles where the authors demonstrated that one LMI is better than another because it is feasible for more pairs (a, b).
![](profile/Wail-Hamdi-2/post/How_to_plot_the_feasibility_domain_of_LMI/attachment/6280d8887c442f28bda844bb/AS%3A1155949343580161%401652611208360/image/Feasible-domain-provided-by-Theorem-1-and-Theorem-94-from-4-with.png)
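A rough sketch of one way to produce such a map in MATLAB using YALMIP plus an SDP solver (e.g., SeDuMi or SDPT3), shown here for the first LMI with a single common P; the second LMI with one Pi per vertex would be coded analogously. The grid range and the feasibility margin are placeholder choices:
aRange = linspace(-2, 2, 41);  bRange = linspace(-2, 2, 41);
feas = false(numel(aRange), numel(bRange));
opts = sdpsettings('verbose', 0);
for i = 1:numel(aRange)
    for j = 1:numel(bRange)
        a = aRange(i);  b = bRange(j);
        A1 = [-0.5 5; a   -1];
        A2 = [-0.5 5; a+b -1];
        P  = sdpvar(2, 2, 'symmetric');
        F  = [P >= 1e-6*eye(2), ...
              A1'*P + P*A1 <= -1e-6*eye(2), ...
              A2'*P + P*A2 <= -1e-6*eye(2)];
        sol = optimize(F, [], opts);
        feas(i, j) = (sol.problem == 0);   % 0 means the LMI was found feasible
    end
end
[Bgrid, Agrid] = meshgrid(bRange, aRange);
plot(Agrid(feas), Bgrid(feas), 'b.'); xlabel('a'); ylabel('b');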
I tried to use the following commands with ANSYS Workbench 2019.2 to export the mass and stiffness matrices:
!Stiffness
*DMAT,MatKD,D,IMPORT,FULL,file.full,STIFF
*PRINT,MatKD,Kdense.matrix
!Mass
*DMAT,MatMD,D,IMPORT,FULL,file.full,MASS
*PRINT,MatMD,Mdense.matrix
The commands shown above worked with Workbench R15.
When I tried them with Workbench 2019.2, I got the following error:
(*DMA Command : Fails to open the file file.full.)
Some friends told me to use the following code for a sparse matrix:
*SMAT,matk,D,IMPORT,file.full,FULL,STIFF
*PRINT,matk,matk,txt
I received an error:
( *SMA Command : The File Format (FILE.FULL) is not recognized. This command is ignored.)
Has anyone faced such a problem, and how can I manage this error?
A is an nxn matrix, five of whose eigenvalues are zero while the other n-5 are non-zero. Is the rank of the matrix then n-5?
Hello,
I am trying to export the mass and stiffness matrices using Ansys Workbench via the modal analysis module, with APDL commands inserted. However, no .matrix file is generated. Could anyone please tell me what I'm doing wrong? A figure illustrating the procedure as well as the commands is attached.
Thank you.
Rui Wang
![](profile/Rui-Wang-366/post/How_to_export_Mass_and_Stiffness_matrices_using_Ansys_Workbench/attachment/5fdce9b83b21a2000162f221/AS%3A970150333411333%401608313272886/image/Workbench_M_K.png)
Hi All,
I have had lots of experience with computing RV and covariance matrices with equity data. I wanted to compute RV for US Treasury bonds and realized covariance/correlation with the S&P500. I downloaded 5 min continuous TY futures from Reuters Datascope. There are lots of missing observations around the rolling of contracts at maturity. This makes the data nearly impossible to use to construct RV.
Any suggestions on intraday data or different series to compute bond RV and covariance with equities?
Thanks
Adam
I am working on a community detection problem based on time-series correlation data. The principal literature reference is this:
Random Matrix Theory (RMT) is used to identify non-random components in correlation matrices. The paper states: "A correlation matrix constructed from N completely random time series of duration T has (in the limits N → +∞ and T → +∞ with 1 < T /N < +∞) a very specific distribution of its eigenvalues, known as the Marcenko-Pastur or SenguptaMitra distribution".
Now, in my case I have N >> T, which would violate the 1 < T /N < +∞ condition.
Does anybody know how N >> T affects the Marcenko-Pastur distribution and the validity of RMT in the context of correlation matrices? Would it change anything if I resampled the time series to get N < T?
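For what it's worth, if I remember the parametrization correctly, with \( q = N/T \) the Marcenko-Pastur bulk edges are
\[
\lambda_{\pm} = \sigma^2\left(1 \pm \sqrt{q}\right)^2,
\]
and when \( N > T \) (i.e. \( q > 1 \)) the empirical correlation matrix has rank at most \( T \), so the spectrum acquires a point mass of weight \( 1 - 1/q \) at zero in addition to the continuous bulk between \( \lambda_{-} \) and \( \lambda_{+} \). The usual bulk-edge comparison for separating "signal" eigenvalues can in principle still be applied, but the matrix itself is singular, which is worth keeping in mind when N >> T.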
Thanks a lot for the help.