Science topic

Matrices - Science topic

Explore the latest questions and answers in Matrices, and find Matrices experts.
Questions related to Matrices
  • asked a question related to Matrices
Question
4 answers
The answer in Google search is,
If the determinant of a matrix is zero, then it has no inverse; hence the matrix is said to be singular. Only non-singular matrices have inverses.
Assume, contrary to Google's answer, that a singular matrix can have an inverse which is another singular matrix; for example,
-1  2  0  2  2  0 -4  0
 2 -1  2  0  0  2  0 -4
 0  2 -1  2 -4  0  2  0
 2  0  2 -1  0 -4  0  2
 2  0 -4  0 -1  2  0  2
 0  2  0 -4  2 -1  2  0
-4  0  2  0  0  2 -1  2
 0 -4  0  2  2  0  2 -1
And,
11/105 22/105 8/105 22/105 22/105 8/105 4/105 8/105
22/105 11/105 22/105 8/105 8/105 22/105 8/105 4/105
8/105 22/105 11/105 22/105 4/105 8/105 22/105 8/105
22/105 8/105 22/105 11/105 8/105 4/105 8/105 22/105
22/105 8/105 4/105 8/105 11/105 22/105 8/105 22/105
8/105 22/105 8/105 4/105 22/105 11/105 22/105 8/105
4/105 8/105 22/105 8/105 8/105 22/105 11/105 22/105
8/105 4/105 8/105 22/105 22/105 8/105 22/105 11/105
So what?
Relevant answer
Answer
Any matrix, square or strictly rectangular, always has a unique pseudoinverse (the Moore-Penrose inverse).
For the determinant to be defined, a matrix has to be square. A square singular matrix has a unique pseudoinverse with which it satisfies the Moore-Penrose conditions 1, 2, 3 and 4.
For a square invertible matrix, its inverse is unique and is equal to its pseudoinverse.
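As a small illustration, here is a minimal MATLAB sketch (the matrix A is just an assumed rank-deficient example) showing that the pseudoinverse returned by pinv satisfies the four Moore-Penrose conditions:
```matlab
% Pseudoinverse of a singular square matrix and a check of the four
% Moore-Penrose conditions (pinv is in base MATLAB).
A  = [1 2; 2 4];                 % rank-1, hence singular (det(A) = 0)
Ap = pinv(A);                    % unique Moore-Penrose pseudoinverse
c1 = norm(A*Ap*A - A);           % condition 1: A*Ap*A = A
c2 = norm(Ap*A*Ap - Ap);         % condition 2: Ap*A*Ap = Ap
c3 = norm((A*Ap)' - A*Ap);       % condition 3: A*Ap is Hermitian
c4 = norm((Ap*A)' - Ap*A);       % condition 4: Ap*A is Hermitian
[c1 c2 c3 c4]                    % all four should be ~0 (up to round-off)
```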
  • asked a question related to Matrices
Question
6 answers
Dear all
Actually, I do not remember the name of the product of two matrices formed by multiplying the first row of A by the first row of B, and so on.
With my regards.
Relevant answer
Answer
"Entrywise product" that the term l have seen many mathematicians using
  • asked a question related to Matrices
Question
3 answers
According to the ICH Q2(R1) guidelines, the acceptability criteria for precision and accuracy are critical in evaluating the performance of analytical methods. These criteria differ for analytical methods (used primarily for chemical substances) and bioanalytical methods (applied in biological matrices), with analytical methods generally requiring stricter thresholds than bioanalytical methods. Is the relative standard deviation (RSD) generally required to be ≤2% for analytical methods, whereas for bioanalytical methods it should be within ≤15% (or ≤20% at the lower limit of quantitation, LLOQ)? And for accuracy, is acceptable recovery generally within 98-102%?
Relevant answer
Answer
Just an addition to what others have already commented:
If there is no strong reason to use the Q2(R1) guideline, do not use it. There is an updated Q2(R2) guideline, which came into effect in June 2024.
  • asked a question related to Matrices
Question
3 answers
Let \( A \) and \( B \) be two square matrices. It is well-known that the equation \( AB = I \) is equivalent to \( BA = I \). This equivalence holds even for matrices whose entries lie in a commutative ring. However, I am curious if there is a counterexample to this claim in a non-commutative ring, whether straightforward or complex.
Thank you!
Relevant answer
Answer
Mohammad,
A ring R is said to be n-finite if, in the matrix ring M_n(R), AB=I implies BA=I.
In his response, James Tuite provided you with a simple example of a ring that is not 1-finite, and therefore also not n-finite for all n>1. However, it is important to realize that there are also rings that are 1-finite but fail to be n-finite for some n>1. An example of a domain enjoying this property for n=2 is described in the paper "Inverses and zero divisors in matrix rings" by J. C. Shepherdson (Proc. London Math. Soc. 1 (1951), 71-85).
In his paper "Some remarks on the invariant basis property" (Topology 5 (1966), 215-228), Paul Cohn generalizes Shepherdson's construction to produce, for each integer n>1, an example of a domain that is not n-finite but is k-finite for k < n.
Regards,
Karl
  • asked a question related to Matrices
Question
2 answers
I've created spatial weight matrices using GeoDa, but when I imported the .gal file into Stata with the spmat command, the error I got was: "Error in row 1 of spatial-weighting matrix." How can I deal with this problem?
Relevant answer
Answer
The error message "Error in row 1 of spatial weighting matrix" indicates that Stata encountered an issue with the format or content of the first row in your `.gal` file during the import process. To resolve this, consider the following steps:
1. Verify the `.gal` File Format: Ensure that the `.gal` file adheres to the expected format for spatial weight matrices. The first line should specify the number of observations, and subsequent lines should detail the spatial relationships. Any deviation can lead to import errors (a minimal sketch of the expected layout is shown after this list).
2. Check for Non-Numeric Entries: Open the `.gal` file in a text editor and inspect the first row for any non-numeric characters or unexpected symbols. The presence of such characters can cause Stata to misinterpret the file structure.
3. Use Alternative Import Methods: If the issue persists, consider converting the `.gal` file into a different format that Stata can handle more effectively. For instance, you can convert the `.gal` file into a CSV or TXT file and then use the `spmat import` command to import it into Stata. Ensure that the converted file maintains the correct structure, with the first row indicating the number of observations and subsequent rows detailing the spatial weights.
4. Consult Stata Documentation: Refer to the official Stata documentation on importing spatial weighting matrices for detailed guidelines and examples. The [spmatrix import](https://www.stata.com/manuals/spspmatriximport.pdf) manual provides comprehensive instructions on the expected file formats and import procedures.
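For reference, a minimal sketch of the neighbour-list layout GeoDa typically writes to a `.gal` file (this is only an illustration: the exact header can vary by GeoDa version, and `myshape` and `POLY_ID` are placeholder names). The header line gives a flag, the number of observations, the shapefile name and the ID variable; then each observation gets one line with its ID and neighbour count, followed by a line listing its neighbours:
```
0 3 myshape POLY_ID
1 2
2 3
2 2
1 3
3 2
1 2
```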
  • asked a question related to Matrices
Question
3 answers
Dear all,
I'm performing an untargeted analysis of carotenoids in different matrices by means of HPLC-DAD, but I do not have all the cis isomer standards for comparing my UV-visible spectra, so I wonder if there is any available library that I could use with my ChemStation software.
Thank you very much in advance.
Relevant answer
Answer
Collected UV/VIS Spectra from an HPLC analysis run are not applicable for any type of "Library Comparisons". This is fundamental to the technique of HPLC and these types of comparisons do not apply to HPLC-DAD spectra unless they are from the same analysis run. No HPLC spectra libraries are available or published for this reason (unlike NMR, FTIR, EI MSD etc). The UV/VIS spectra obtained from any HPLC analysis will vary from instrument to instrument and method to method. They are not comparable separately, however you can learn some basic information from the data obtained (e.g. A general idea of what the spectra looks like when the sample is dissolved in a specific solution, or the mobile phase in this example). Because HPLC methods are fully customized to each instrument, the data collected for a sample will vary and can only be compared to data obtained on the same instrument using the same analysis method and settings. Different tubing dimensions, column type and dimensions, flow cell dimensions (volume and path-length), sampling rate, detection settings (wavelength/bandwidth), mobile phase composition, flow rate, temperature etc. all change the data.
In your example, if you wanted to know what the various UV/VIS spectra looked like for various versions of the same compound (i.e. isomers), then you could start by developing an HPLC method of analysis for all forms OR obtain pure standards. If you develop a valid method which retains and resolves the forms apart using good chromatography principles, then you could view each sample's spectra and compare them directly. Alternatively, if you can obtain pure standards of each compound, then you could dissolve them in the mobile phase and analyze each one separately using a UV/VIS spectrophotometer to obtain and store their spectra for comparison (If you acquire standards of each, you could also use these standards with a full HPLC analysis method to compare and qualitatively identify them too).
  • asked a question related to Matrices
Question
1 answer
A classic example of a semisimple group is the special linear group SL(2, ℝ), which consists of all 2x2 matrices with real entries and determinant 1:
SL(2, ℝ) = { A ∈ M(2, ℝ) | det(A) = 1 }
where M(2, ℝ) is the set of all 2x2 matrices with real entries.
This group is semisimple because it has no nontrivial connected normal abelian subgroups. In other words, there are no nontrivial connected subgroups that are simultaneously normal (invariant under conjugation) and abelian (commutative).
Another example is the special unitary group SU(3), which consists of all 3x3 matrices with complex entries and determinant 1 that satisfy the unitarity condition (i.e., the inverse of the matrix equals its conjugate transpose):
SU(3) = { U ∈ M(3, ℂ) | U^(-1) = U^†, det(U) = 1 }
where M(3, ℂ) is the set of all 3x3 matrices with complex entries.
This group is semisimple because it has no nontrivial connected normal abelian subgroups, and it is important in particle physics because it describes the symmetries of quantum chromodynamics (QCD).
Other examples of semisimple groups include:
- SL(n, ℝ) for n ≥ 2
- SU(n) for n ≥ 2
- SO(n) for n ≥ 3 (the special orthogonal group)
- Sp(n) for n ≥ 1 (the symplectic group)
These groups are fundamental in representation theory, particle physics, and algebraic geometry.
Relevant answer
I would like you to elaborate a bit more on the "local, or locally compact, topological group". There are certain types of brackets frequently used in associative algebra and in the well-known Lie groups, for example the Poisson bracket, which defines algebraic structures and their manifolds. Such manifolds preserve a topology. When you refer to topological groups, which ones do you mean?
  • asked a question related to Matrices
Question
4 answers
I am solving a large system of nonlinear equations. The Jacobian for this system of equations is a block tridiagonal matrix. When solved using Newton's method, the equation residuals may keep oscillating around lower values. In this case I have found that a rank-one correction to the Jacobian, i.e. Broyden's method, converges more quickly. The problem is that the traditional Broyden update of the Jacobian destroys its sparsity pattern. Is there a way to update the Jacobian while maintaining its (block) tridiagonal structure?
Relevant answer
Answer
If no correction is made to the Jacobian at each iteration of Newton's method, then the search direction may no longer be a descent direction for the residuals of the equations. While this approach of not updating the Jacobian exists, it does not work for my system of nonlinear equations.
  • asked a question related to Matrices
Question
1 answer
We tried both cryo- and paraffin embedding before cutting, and in both we had issues with autofluorescence in our COL1 scaffolds. Is it possible to avoid counterstaining/autofluorescence issues during the preparation of the stainings (especially IHC with COL2, ACAN, DAPI), and do you use some software or certain fluorescence markers to avoid this? Thank you for your help!
Relevant answer
Answer
Collagen is weakly fluorescent at lower wavelengths and in the UV. If you have collagen matrices this fluorescence can be fairly significant, especially if you are using a widefield microscope. The easiest way to avoid this is to pick probes that fluoresce at longer wavelengths. Use something like TO-PRO-3 instead of DAPI, and for antibodies use fluorophores like Alexa 565. If you need 3 fluorophores and have a limited range of excitation wavelengths, whatever channel you expect to be the brightest should be stained with a 488 fluorophore. If there is a large enough gap between background intensity and staining intensity you can subtract out the background without also removing the staining you are interested in.
  • asked a question related to Matrices
Question
1 answer
I simulate the properties of composite materials. I have drawn the structure of this material using the VESTA software; however, these two materials have different structures.
There is a way to use rotation matrices to convert one structure to another, for example converting a hexagonal cell to an orthorhombic cell as shown in the video below.
VESTA Software - 𝛂-CsPbI3 / MoS2 Monolayer Heterostructure (youtube.com)
How can I find the rotation matrices between different systems?
  • asked a question related to Matrices
Question
2 answers
I am looking for resources in the Netherlands regarding cultural tourism: which cultural activities correlate, and which lifestyles can be distinguished? Any resource is welcome: articles, factor analyses, correlation matrices, SPSS files, etc.
Relevant answer
Answer
Thank you very much!!
  • asked a question related to Matrices
Question
4 answers
If Box's test yields a significant p-value (p < .001), indicating unequal covariance matrices of X across 3 variables between 2 groups (the ratio between the 2 groups is 1.21), it raises concerns about the assumption of homogeneity of covariance matrices. In such cases, should we still rely on the results of the multivariate test, or should we consider applying Mauchly's sphericity test?
Relevant answer
Answer
Neither. If there are more than 2 levels on a repeated measures factor it's sensible to assume the sphericity assumption is violated. The question is the severity of the violation. Using a correction based on epsilon is a sensible choice and will result in no correction if epsilon = 1 and the assumption is met.
  • asked a question related to Matrices
Question
1 answer
Hello! I am trying to characterize communities of eukaryotes living in biofilms using V4 region 18S amplicon sequencing data. Working with this type of data is new to me, and very interesting!
The SIMPER analysis from the vegan package in R intrigues me, as it would be amazing to know if certain organisms are found in one of my sample groups and not the others, or if any are found in only one location, for example. Because of eukaryotic gene duplication, I am considering ASVs as presence/absence rather than read counts (though sometimes also relative abundance in certain situations), and doing my diversity measures with a binary Jaccard index. I know that SIMPER is based on the more statistically robust Bray-Curtis index, which I can't use with the type of 18S amplicon data I have. However, I have also read that Jaccard and Bray-Curtis are equivalent when only presence/absence is concerned. My question, ultimately, is whether it is possible to use SIMPER with binary distance matrices, and whether it would be possible to use SIMPER with ASV data.
Thank you so much for your time!
Relevant answer
Answer
SIMPER reads in a community matrix (presence/absence or abundance), not a distance matrix; the latter is computed during the procedure. vegan's help file on the simper function is pretty clear regarding this. Because I am not a geneticist, I cannot comment on the use of ASV data vs. reads.
I hope this is of any help.
  • asked a question related to Matrices
Question
1 answer
I am fighting my way through Axelrod and Hamilton (1981) on the Prisoner's Dilemma.
This is the payoff matrix they present for the PD, but they only present the payoffs for player A. Normally, these matrices present the payoffs for both A and B. How do I modify this to present both? I'd like to really understand the math later in the paper.
Relevant answer
Answer
Just exchange the roles of A and B (since the players, not their decisions, are independent and equally probable), and you are good to go.
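Concretely, since the game is symmetric, B's payoff in each cell is A's payoff in the mirrored cell. With the standard values used by Axelrod and Hamilton (T = 5, R = 3, P = 1, S = 0), the bimatrix listing (A's payoff, B's payoff) reads:
              B cooperates   B defects
A cooperates     (3, 3)        (0, 5)
A defects        (5, 0)        (1, 1)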
  • asked a question related to Matrices
Question
19 answers
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue with precision, recall, and F1-score all showing 100%, while the accuracy is reported as 99.35% ~ 99.9%. I have tested the model on various videos and images (related to the experiment data, including separate data), and it seems to be performing well. Nevertheless, I am confused about whether this level of accuracy is acceptable. In my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Note: an ablation study was conducted based on different combinations. In the model I am confused about, without these additional combinations, the accuracy, precision, recall, and F1 score are very low. Also, the loss and validation accuracy are very high for the other combinations.
Thank you.
Relevant answer
Answer
Results after some modifications in the code where I made mistakes before.
  • asked a question related to Matrices
Question
3 answers
to know the volume fraction
Relevant answer
Answer
Dear Sarah NADIAH Binti Nordin, what are the components of the nanocomposite? You can do:
- dissolution, extraction, titration
- thermal analysis (TGA)
  • asked a question related to Matrices
Question
2 answers
Hello,
I am doing reduced-order modelling for nonlinear analysis and I have to use POD and Galerkin projection to reduce the size of my matrices. The problem is that, since it is a nonlinear analysis, the matrices have to be updated at each increment, and for commercial FEA software I do not have access to the stiffness matrices at each time step.
Does someone have any suggestions (using Abaqus subroutines, for example)?
Thank you in advance.
Relevant answer
Answer
Lam Vu Tuong Nguyen, thank you for your response. But what about the tangent stiffness matrix (in the Newton-Raphson method)?
To reduce my model, I also need to project this matrix onto the POD reduced basis. Another point is how to give the reduced matrices to Abaqus to solve the reduced equations.
Thank you !
  • asked a question related to Matrices
Question
4 answers
I am trying to do a single-point polarizability calculation with TDDFT with the input:
#p polar td=(nstates=8) M062X/6-31+g(d,p) geom=connectivity
I am getting an error like:
The selected state is a singlet.
CISGrX: IGrad=3 NXY=2 DFT=T
CISAX will form 3 AO SS matrices at one time.
Can anyone suggest any solution?
I have attached the output below.
Relevant answer
Answer
OKay. Thank you so much for your help!
  • asked a question related to Matrices
Question
2 answers
I see a lot of mathematics but few interpretations in time (how it evolves, step by step with its maths).
Relevant answer
Answer
Thank you. I have not yet found exactly what I wanted, but this is a much better way to do the search.
  • asked a question related to Matrices
Question
3 answers
I am currently doing geometric morphometric analysis and I need to know if I can use the covariance matrices generated by MorphoJ to do modularity and integration analysis in geomorph.
Relevant answer
Answer
I think that the only real issue is to write a version of the "partition.gp" that geomorph needs for figuring out which variables belong to which module in the form that is used for partitioning the covariance matrix. But what might be fastest is to ask the geomorph google group. Dean Adams and Michael Collyer answer questions there.
  • asked a question related to Matrices
Question
4 answers
Suppose we have the FRF data (a vector of frequencies and a vector of complex responses); how do we build a state-space model with a predefined structure? I know the MATLAB function "ssest" can do the job in principle.
I did it for 2-by-2 matrices by fixing some variables, and it seems good, though I got the feeling that the initial guess of the free variables is tricky to set. The main concern is that for large-dimension matrices it is cumbersome to perform the "Structured Estimation". Can anyone provide some code for achieving this, or suggest another way to get the same thing?
Cheers
Relevant answer
Answer
You can use the System Identification Toolbox (ident) in MATLAB or some other freeware codes.
What I recommend is that you plot the FRF (or your output data in the frequency domain) and count the resonances. For each of these resonances the state-space model order should be increased (by 2 for a complex resonance pair, the default case, or by 1 for a real-valued resonance, which is atypical). This can give you a good idea about the model complexity.
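A minimal MATLAB sketch of that workflow, assuming the System Identification Toolbox and a SISO FRF stored in vectors freq (rad/s) and resp (complex values):
```matlab
% Fit a state-space model to frequency response data.
data = idfrd(reshape(resp, 1, 1, []), freq, 0);  % Ts = 0 -> continuous-time FRF data
sys  = ssest(data, 4);                           % fit a 4th-order state-space model
bode(data, sys);                                 % visual check of the fit against the data
```
For structured estimation, the usual route is to build an idss initial model with the known entries fixed and pass that model to ssest instead of a plain order; see the ssest documentation for the exact syntax.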
  • asked a question related to Matrices
Question
3 answers
I have a large sparse matrix A which is column rank-deficient. The typical size of A is over 100000x100000. In my computation, I need the matrix W whose columns span the null space of A, but I do not know how to compute all the columns of W quickly.
If A is small-scale, I know there are several numerical methods based on matrix factorization, such as LU, QR and SVD. But for large-scale matrices I cannot find an efficient iterative method to do this.
Could you please give me some help?
Relevant answer
Answer
If the matrix is sparse then a sparse LU will enable you to compute the null space. (Row &/or column swaps (= pivoting) may be necessary or useful to keep memory and time costs low.) But otherwise this will become very expensive. Even just storing such a matrix would take (in double precision floating point) over 80GB. Any solver would be an out-of-core type of solver, but would be extremely expensive.
An alternative, if the matrix is real symmetric, is to use the Lanczos method or a variant such as the Lehoucq & Sorensen ARPACK method. More precisely, ARPACK is a restarted Arnoldi method. Then you look for when you have the zero eigenvalue (or simply a very small eigenvalue). The corresponding eigenvector(s) gives a basis for the null space. ARPACK is a semi-iterative method, and so there is a trade-off between the accuracy of the eigenvalues (and eigenvectors) and the number of iterations performed.
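A minimal MATLAB sketch of the eigenvalue-based approach described above, assuming A is sparse and real symmetric (the shift and the tolerance are only illustrative choices):
```matlab
% Approximate null-space basis via shift-inverted eigs (ARPACK-style).
k     = 10;                          % upper bound on the expected null-space dimension
sigma = 1e-8;                        % small shift so the factorized matrix is nonsingular
[V, D] = eigs(A, k, sigma);          % eigenpairs with eigenvalues closest to sigma (~0)
d   = abs(diag(D));
tol = 1e-8 * normest(A);             % illustrative threshold for "numerically zero"
W   = V(:, d < tol);                 % columns spanning (an approximation of) the null space
```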
  • asked a question related to Matrices
Question
2 answers
I need to understand how monitoring can affect the pass rate of matric learners.
Relevant answer
Answer
The monitoring and assessment of education, including monitoring in schools, is the subject of numerous academic journals and publications. Numerous subjects pertaining to educational assessment, evaluation, and data-driven decision-making are frequently covered in these periodicals. Several renowned periodicals in this area are listed below:
1. Educational Assessment, Evaluation, and Accountability (formerly known as the Journal of Personnel Evaluation in Education): This journal publishes research articles, reviews, and reports related to educational assessment, program evaluation, and accountability in education.
2. Assessment in Education: Principles, Policy & Practice: It focuses on all aspects of assessment and evaluation in education, including classroom assessment, standardized testing, and educational policy related to assessment.
3. Journal of Educational Measurement: This journal is dedicated to the theory and practice of educational measurement and assessment. It covers psychometrics, test development, and validation.
4. Educational Measurement: Issues and Practice: This journal publishes articles on a wide range of topics in educational measurement, assessment, and evaluation, including practical applications in schools.
5. Educational Assessment: This journal explores various aspects of educational assessment, including formative and summative assessment, assessment design, and the impact of assessment on teaching and learning.
6. Educational Evaluation and Policy Analysis: It covers research on educational policy, program evaluation, and assessment in education, with a focus on policy implications.
7. Studies in Educational Evaluation: This journal publishes articles on various aspects of educational evaluation, including methods, models, and the impact of evaluation on educational practices.
8. International Journal of Assessment Tools in Education: It focuses on the development and application of assessment tools and techniques in educational settings.
9. Journal of Research Practice: While not exclusively focused on education, this interdisciplinary journal often includes articles related to educational research, assessment, and monitoring practices.
10. Journal of Classroom Interaction: This journal examines classroom interaction and communication, including assessment and monitoring strategies used by teachers.
These peer-reviewed journals provide insightful information about monitoring and evaluation in educational contexts, including research findings and best practices. The precise topic and scope of these publications can differ, so it's important to read through their articles and choose the ones that best suit your interests and study requirements.
  • asked a question related to Matrices
Question
9 answers
Hello everyone,
I need to extract the mode shape vectors of some cantilever plate to make a correlation between some of them analytically.
I used the Workbench to simulate the problem and added the following APDL commands to extract both the mass and stiffness matrices in MMF:
/AUX2
COMBINE, FULL
/POST1
*SMAT, MatKS, D, IMPORT, FULL, file.full, STIFF
*SMAT, MatMS, D, IMPORT, FULL, file.full, MASS
*Export, MatKS, MMF, matK_MMF.txt 
*Export, MatMS, MMF, matM_MMF.txt 
Then I did a modal analysis, and I got the two files of mass and stiffness matrices in MMF.
I used the following MATLAB code to solve the eigenproblem in order to extract the mode shapes.
clc;
clear all;
format shortG;
format loose;
load matK_MMF.txt;
K = zeros(462,462);
for r = 2:5515
    K(matK_MMF(r,1), matK_MMF(r,2)) = matK_MMF(r,3);
end
disp (K)
load matM_MMF.txt;
M = zeros(462,462);
for r = 2:1999
    M(matM_MMF(r,1), matM_MMF(r,2)) = matM_MMF(r,3);
end
disp(M)
cheq=linsolve(M,K)
[Mode,Lamda]=eig(cheq);
lamda=diag(sort(diag(Lamda),'ascend')); % make diagonal matrix out of sorted diagonal values of input 'Lamda'
[c, ind]=sort(diag(Lamda),'ascend'); % store the indices of which columns the sorted eigenvalues come from 'lamda'
omegarad=sqrt(lamda);
omegaHz=omegarad/pi/2
mode=Mode(:,ind)
This code ran without any syntax errors.
I checked the first natural frequency, omegaHz(1,1), which is supposed to be 208.43 Hz as shown in the Workbench analysis, but unfortunately it was 64023 Hz.
Would you please show me what is wrong with that problem?
Or is there any possible way to extract the mode shape vectors or the modal matrix directly from ANSYS?
Regards
Relevant answer
Answer
Dear Mohamed,
try the command lines provided in the attached file. They work for a full 3D model, but you can easily modify them for other kind of elements.
Best regards,
Marco Montemurro
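In case the attached file is not at hand, a minimal MATLAB sketch of the eigen-solution step once K and M are assembled. Note that MMF/Matrix Market exports of symmetric matrices usually store only one triangle, so reading the file into a full matrix without mirroring it is a possible cause of the frequency mismatch reported above (the sketch assumes the upper triangle was stored; adjust if yours stores the lower one):
```matlab
% Mirror the stored triangle to recover the full symmetric matrices.
K = K + triu(K, 1)';
M = M + triu(M, 1)';
[Phi, Lam] = eig(K, M);                   % generalized eigenproblem K*phi = lambda*M*phi
[lam, idx] = sort(diag(Lam), 'ascend');
freqHz = sqrt(lam) / (2*pi);              % natural frequencies in Hz
Phi = Phi(:, idx);                        % corresponding mode shapes
```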
  • asked a question related to Matrices
Question
1 answer
Hello,
Please refer to the figure. I have prepared my standard solutions in the range of 10-200 ppb, in which the concentration of the internal standard was set to 50 ppb, whose response can be seen in the figure. But in the case of the samples, the same amount and concentration of internal standard gives me double the response compared to the standard. I am looking for the root cause of the problem; can somebody kindly share their experience?
Note:
  1. The same problem was observed on two systems, both a Waters MS system and an Agilent MS system.
  2. There is a negligible chance that a higher amount of internal standard was spiked.
  3. The standard and sample solution matrices are the same.
Relevant answer
Answer
Hi, in our analysis we face the same problem, as we get double the response when analyzing the samples. Could you find the reason for this problem, or a solution? Thanks
  • asked a question related to Matrices
Question
2 answers
I have used a 5-point Likert scale for my research on 'catalysing spiritual transformation'. The scale has 20 items divided equally across 4 domains (factors). It is a dual-response scale: the first response rates the goal while the second response rates the accomplishment. It is a proven scale which has been validated for content and construct across continents. However, since I am using a translation for the first time, the author of the instrument, who approved my translation, suggested that it is proper to do a fresh 'construct' validation for the translation. Accordingly I prepared to do CFA and found that my sample size after joining pretest and posttest data was only 174. I would like to join the two sets of responses of each questionnaire and double the sample size to 348, considering the fact that both sets of responses have identical structures though with different foci. I also noticed from the correlation matrices for the two sets of responses and the combination that the correlation coefficients are significantly better for the combination and are all positive and > 0.5. Will it be scientifically sound to join the data of the two sets of responses and double my sample size as above?
Look forward to your valuable thoughts.
Thankfully
Lawrence F Vincent
Relevant answer
Answer
Certainly! Combining data from a double-response questionnaire for Confirmatory Factor Analysis (CFA) involves preprocessing the responses, matching participants' data, integrating the two sets of responses into a single dataset while maintaining proper labeling, and then performing CFA to validate the hypothesized factor structure. This unified analysis allows you to assess how well the latent factors correspond to the observed variables from both sets of questions, providing insights into the underlying relationships between the constructs being measured. It's important to ensure that the data are compatible and that the assumptions of CFA are met during this process.
  • asked a question related to Matrices
Question
7 answers
Since multiplication is defined for matrices and division is also defined, how can one simplify and expand rational matrices?
Relevant answer
Dear Prof. Hasan Keleş,
Rational matrices, like rational numbers, can be simplified and expanded using similar principles. A rational matrix is a matrix whose elements are rational numbers (fractions). Simplifying and expanding rational matrices involve performing operations analogous to those with rational numbers, such as reducing fractions and finding common denominators.
  1. Simplifying Rational Matrices: To simplify a rational matrix, you aim to reduce its elements to their simplest form. This involves dividing both the numerator and denominator of each element by their greatest common divisor (GCD). For example: Consider the matrix A with rational elements: A=[[2/4,3/6],[5/10,4/8]]. To simplify A, divide all elements by their respective GCDs: A=[[1/2,1/2],[1/2,1/2]]
  2. Expanding Rational Matrices: Expanding a rational matrix is not as straightforward as with scalar numbers, but you can apply similar concepts when performing matrix operations. For example: Consider the matrix B with rational elements and a scalar factor of 2: B=[[2/3,1/4],[3/5,1/6]]. To expand B by a scalar factor of two, you would multiply each element by two: 2B=[[4/3,1/2],[6/5,1/3]]. Expanding by scalar multiplication is a common operation, but note that other types of matrix operations, such as matrix addition and multiplication, involve more complex calculations.
Keep in mind that not all operations that are valid for scalar rational numbers have direct analogs for matrices. Division, for example, is not directly defined for matrices in the same way it is for scalar numbers. You might use techniques like matrix inversion instead of division when dealing with matrices.
Additionally, some operations in matrix algebra, like matrix inversion, determinant calculation, and matrix multiplication, can lead to rational numbers becoming irrational or even undefined if not handled properly. Always ensure you're following established rules and guidelines for matrix operations to maintain accuracy and validity.
  • asked a question related to Matrices
Question
3 answers
The adjacency matrix represents the functional connectivity patterns of the human brain. In my opinion, thresholding of the correlation matrix is one of the most important and ambiguous steps in obtaining the adjacency matrices. The reason behind my opinion is that thresholding is user-dependent and can be set to any value (i.e., from 0.051 to 0.999) above the 5% level, because above this level there is no significant difference between two signals, or there is coherence between both. The user is free to select the strength of connectivity on their own.
I want to know your opinion: is this a fair way to move from correlation matrices to adjacency matrices? If yes, how can the results of two researchers be compared when they use different thresholding values? If no, what should be a reasonable threshold value for correlation matrices?
Thanks in advance!
  • asked a question related to Matrices
Question
3 answers
In topological optimization of binary matrices, where 1 corresponds to a density of 1 and 0 corresponds to a density of 0, how can you ensure that the number of connected components for 0 is 1 in MATLAB?
Relevant answer
Answer
The approach:
1. Convert the binary matrix to a binary image: Assuming you have a binary matrix `A`, you can display it as a binary image using the `imshow` function:
```matlab
imshow(A)
```
2. Perform morphological operations: Use morphological operations to manipulate the binary image and ensure that the number of connected components for 0 is 1. Specifically, you can use the following operations:
a. Dilation: Dilate the image using the `imdilate` function with an appropriate structuring element. The dilation operation expands the regions of 1s in the image.
```matlab
se = strel('disk', 1); % Adjust the structuring element as needed
dilated = imdilate(A, se);
```
b. Erosion: Erode the dilated image using the `imerode` function. The erosion operation shrinks the regions of 1s in the image.
```matlab
eroded = imerode(dilated, se);
```
c. Invert the eroded image: Invert the eroded image using the logical NOT (`~`) operator.
```matlab
inverted = ~eroded;
```
Now, the `inverted` image should have a single connected component for 0.
3. Visualize the result: You can use `imshow` again to visualize the resulting binary image:
```matlab
imshow(inverted)
```
The displayed image should show a single connected component for 0.
Good luck!!
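To check the result programmatically rather than only visually, a minimal sketch using bwconncomp from the Image Processing Toolbox (A is assumed to be the binary density matrix) could be:
```matlab
% Count connected components of the zero-valued phase (4-connectivity).
cc = bwconncomp(~logical(A), 4);
isSingleVoid = (cc.NumObjects == 1)   % true if the 0 densities form one connected region
```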
  • asked a question related to Matrices
Question
3 answers
I have looked at database management and applications, data sets and their use in different contexts. I have looked at digital in general, and I have noticed that there seems to be a single split:
- binary computers, performing number crunching (basically), and behind this you find Machine Learning, ML, DL, RL, etc., at the root of the current AI
- quantum computing, still with numbers as key objects, with added probability distributions, randomisation, etc. This deviates from deterministic binary computing, but only to a certain extent.
Then, WHAT ABOUT computing "DIRECTLY ON SETS", instead of "speaking of sets" and actually only "extracting vectors of numbers from them"? We can program and operate with non-numerical objects; old languages like LISP and LELISP, where the basic objects are lists of characters of any length and shape, did just that decades ago.
So, to every desktop user of spreadsheets (the degree zero of data-set analytics) I am saying: you work with matrices, the mathematical name for tables of numbers, you know about data sets, and about analytics. Why would not YOU put the two together? Sets are flexible. Sets are sometimes incorrectly named "bags" because it sounds fashionable (but bags have holes, they may be of plastic, not reusable; sets are more sustainable, math is clean - joking). It's cool to speak of "bags of words"; I don't do that. Sets, why? Sets handle heterogeneity, and they can be formed with anything you need them to contain, in the same way a vehicle can carry people, dogs, potatoes, water, diamonds, paper, sand, computers. Matrices? Matrices nicely "vector-multiply", and are efficient in any area of work, from engineering to accounting to any science or humanities domain. They can be simplified in many cases (eigenvectors, eigenvalues; along some geometric directions operations get simple, and sometimes a change of reference vectors gives a diagonal matrix with zeros everywhere except on the diagonal, by a simple change of coordinates, i.e. a geometric transformation).
HOW DO WE DO THAT IN PRACTICE? Compute on SETS, NOT ON NUMBERS? One can imagine the huge efficiencies gained in some domains, potentially (new: yet to be explored, maybe BY YOU? IN YOUR AREA). Here is the math, simple, combining knowledge of 11-year-olds (basic set theory) and knowledge of 15-year-olds (basic matrix theory). SEE FOR YOURSELF, and please POST YOUR VIEW on where and how to apply...
Relevant answer
Answer
I am in line with Aparna Sathya Murthy. There are different levels of computing or computational methods. Number crunching is helpful for, and used in, any industry. Data crunching commonly involves stripping out unwanted information and formatting, as well as cleaning and restructuring the data. Analyzing large amounts of information can be invaluable for decision-making, but companies often underestimate the amount of effort required to transform data into a form that can be analyzed. Even accounting is much more than number crunching.
Computers are like humans - they do everything except think.
John von Neumann
  • asked a question related to Matrices
Question
10 answers
The Pauli group is a representation of the gamma group (higher-dimensional matrices) in three-dimensional Euclidean space.
Relevant answer
Answer
In reality, this image is a Pauli representation from this website:
  • asked a question related to Matrices
Question
2 answers
Could anyone provide me with MATLAB code for a fixed-fixed beam that calculates the mass and stiffness matrices, natural frequencies, and mode shapes?
Relevant answer
Answer
Please download the iVABS code from wenbinyugroup.github.io, which includes codes for cross-sectional analysis and for general-purpose linear/nonlinear analysis of beams made of arbitrary cross-sections and arbitrary materials.
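Alternatively, a minimal MATLAB sketch of the standard finite element approach (consistent Euler-Bernoulli element matrices assembled along the beam, with both end nodes clamped; the material and section values below are only example assumptions):
```matlab
% Fixed-fixed Euler-Bernoulli beam: natural frequencies and mode shapes.
E = 70e9; rho = 2700;            % Young's modulus [Pa], density [kg/m^3] (assumed)
b = 0.01; h = 0.01; Lb = 1;      % cross-section [m] and beam length [m] (assumed)
A = b*h;  I = b*h^3/12;          % area and second moment of area
ne = 20;  le = Lb/ne;            % number of elements and element length
ke = E*I/le^3 * [ 12    6*le   -12    6*le;
                  6*le  4*le^2 -6*le  2*le^2;
                 -12   -6*le    12   -6*le;
                  6*le  2*le^2 -6*le  4*le^2];        % element stiffness
me = rho*A*le/420 * [ 156    22*le   54    -13*le;
                      22*le  4*le^2  13*le -3*le^2;
                      54     13*le   156   -22*le;
                     -13*le -3*le^2 -22*le  4*le^2];  % consistent element mass
ndof = 2*(ne+1);
K = zeros(ndof); M = zeros(ndof);
for e = 1:ne
    idx = 2*e-1 : 2*e+2;                 % DOFs [w_i, theta_i, w_i+1, theta_i+1]
    K(idx,idx) = K(idx,idx) + ke;
    M(idx,idx) = M(idx,idx) + me;
end
free = 3:ndof-2;                         % fixed-fixed: clamp both end nodes
[V, D] = eig(K(free,free), M(free,free));
[lam, order] = sort(diag(D));
fHz = sqrt(lam)/(2*pi);                  % natural frequencies [Hz]
modes = zeros(ndof, numel(free));
modes(free,:) = V(:,order);              % mode shapes with clamped DOFs restored
```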
  • asked a question related to Matrices
Question
3 answers
Hello everyone!
I extracted betweenness centrality values for more than 250 data sets using streamline-count-weighted connectivity matrices. In order to calculate the betweenness centrality, I converted the connectivity matrices into connection-length matrices, as suggested on the Brain Connectivity Toolbox website. However, my betweenness centrality values vary between 0 and 1960. When I checked the related articles, the indices are between 0 and 1. Since I am planning to submit a paper including betweenness centrality, is it okay to use the betweenness centrality values as acquired (between 0 and 1965), or do I need to have values between 0 and 1?
If I need to have values between 0 and 1, what would you suggest I do to bring my values between 0 and 1?
Thank you for your help!
Relevant answer
Answer
Thank you very much for your help!!
Alireza Falakdin, I will do as you suggested!
Best
Seda
  • asked a question related to Matrices
Question
1 answer
Hello! I need to extract the mass and stiffness matrices for a model with the following problem size:
P R O B L E M S I Z E
NUMBER OF ELEMENTS IS 249191
153326 linear line elements of type T3D2
84141 linear hexahedral elements of type C3D8R
102 linear line elements of type B31
11613 linear quadrilateral elements of type S4R
NUMBER OF NODES IS 267444
NUMBER OF NODES DEFINED BY THE USER 267240
NUMBER OF INTERNAL NODES GENERATED BY THE PROGRAM 204
TOTAL NUMBER OF VARIABLES IN THE MODEL 837207 (DEGREES OF FREEDOM PLUS MAX NO. OF ANY LAGRANGE MULTIPLIER VARIABLES. INCLUDE *PRINT,SOLVE=YES TO GET THE ACTUAL NUMBER.)
The properties are input as mass density, and I believe they will be used to generate a consistent mass matrix.
Here's the input file code I used:
** Global Mass and Stiffness matrix
*Step, name=Export matrix
*MATRIX GENERATE, STIFFNESS, MASS, VISCOUS DAMPING, STRUCTURAL DAMPING
*MATRIX OUTPUT, STIFFNESS, MASS, VISCOUS DAMPING, STRUCTURAL DAMPING, FORMAT=coordinate
I have the following questions regarding my problem:
  1. Dimensions of M and K matrices: As indicated above, the number of degrees of freedom is 837,207, but the matrix dimensions are reduced to 354,231*354,231. Shouldn't the number of degrees of freedom match the matrix dimensions?
  2. Node numbering: The model consists of 8 parts, and the nodes start from 1 for each part. However, when I extract the matrices using the FORMAT=matrix input option, a different node numbering system (1 to 241,751) is applied, making it difficult to match the entries to the actual model locations. How can I find the correspondence between the entries in the M and K matrices and the nodes in the model?
  3. In the coordinate format, I get 5,620,189 rows of data, while in the matrix input format, I get 2,987,210 rows of data. Shouldn't the number of data entries be the same in both cases?
  4. When using the matrix input format, the entries are extracted in the following format: 241751,3, 241751,3, 9.038200770026704e+00 Can I interpret the corresponding data as follows? 1: X (translational) 2: Y (translational) 3: Z (translational) 4: RX (rotational) 5: RY (rotational) 6: RZ (rotational)
  5. The modes obtained from modal analysis in ABAQUS CAE GUI and the eigenanalysis results obtained from extracting the M and K matrices and performing the Lanczos method in MATLAB do not match. Is there any way to reconcile them?
Relevant answer
Answer
Chanwoo Lee, can you share your Abaqus model (.inp)?
  • asked a question related to Matrices
Question
3 answers
While saying hello to the professors and those interested in the mathematical sciences, I wanted to know, from the perspective of the history of mathematics, what factor, or the process of solving what problem, led to the definition of determinants of matrices? Were determinants created only to understand the independence or dependence of vectors, or, by knowing the determinant of a matrix, can one understand other questions related to the matrix in question? If the answer is yes, what other things can be guessed or obtained from the determinant value of a matrix? Thanks
Relevant answer
Answer
The English mathematician James Sylvester (1814-1897) coined the term "matrix". In his 1850 paper, Sylvester stated, “For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number p, and selecting at will p lines and p columns, the squares corresponding of pth order.” This is where Sylvester first used the term matrix.
  • asked a question related to Matrices
Question
2 answers
I'm working with clinical trial data and would like to see whether any cognitive functions (measured by several neuropsychological tasks) changed as a result of the treatment. I'm hoping to derive some composite scores that would represent greater or lesser improvement on any identified principal components.
I have two treatment groups (control and intervention), and each variable is measured twice (once before and once after treatment). Just a regular PCA would involve 4 different covariance matrices (one for each combination of treatment condition and timepoint), but I need a way to pool those covariance matrices. Is this possible/is there freely available R code that would allow me to do this?
Relevant answer
I am adding a paper from our research so you can see how we did it:
  • asked a question related to Matrices
Question
2 answers
Which is the best reference to study basic notions of
"block operator matrices"
Relevant answer
Answer
Greetings. When the matrix of a linear operator is written in the form of several blocks, it is called a block matrix of the linear operator.
  • asked a question related to Matrices
Question
3 answers
I am trying to start a new project but I am not familiar with machine learning algorithms. I want to build a supervised predictive model that is able to classify samples into clusters. These clusters are defined by a gene signature. I basically have gene expression matrices.
I would like to know which type of machine learning is the best in performance and prediction for this type of data and query. I've been looking at Deep Learning but I still can't find which one would fit better.
Relevant answer
Answer
There are several machine learning algorithms that can be used for clustering and classification of gene expression data, including deep learning algorithms such as Convolutional Neural Networks (CNNs), as I mentioned in my previous answer.
Another popular deep learning algorithm for clustering and classification tasks is the Deep Belief Network (DBN). DBNs have been used for gene expression analysis and have shown good performance in identifying gene clusters associated with diseases and other phenotypes.
Other commonly used machine learning algorithms for clustering and classification of gene expression data include:
  1. Support Vector Machines (SVMs): SVMs are widely used in bioinformatics and have been shown to be effective in gene expression analysis.
  2. Random Forests: Random Forests are an ensemble learning algorithm that can be used for classification and feature selection tasks in gene expression analysis.
  3. K-means clustering: K-means clustering is a popular unsupervised clustering algorithm that can be used to identify gene clusters based on similarities in their expression profiles.
Ultimately, the choice of machine learning algorithm will depend on the specific characteristics of your dataset and the problem you are trying to solve. It's recommended to experiment with different algorithms and compare their performance to identify the best approach for your specific problem.
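As one concrete starting point, a minimal MATLAB sketch of a supervised baseline (assuming the Statistics and Machine Learning Toolbox; X and y are placeholders for your expression matrix and cluster labels, and equivalent pipelines exist in R or Python):
```matlab
% X: samples-by-genes expression matrix, y: known cluster labels (categorical).
mdl = fitcecoc(X, y);               % multiclass SVM via error-correcting output codes
cv  = crossval(mdl, 'KFold', 5);    % 5-fold cross-validation of the classifier
err = kfoldLoss(cv)                 % estimated misclassification rate
```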
  • asked a question related to Matrices
Question
2 answers
I want to use R to work with simple complex matrices, such as the Pauli matrices and Dirac gamma matrices. The problem I have is that the way R prints complex numbers by default makes such matrices hard to read. For example, the complex number i is usually printed 0 + 1 i. When a matrix contains both real and complex numbers, the real numbers are printed, for example, 1 + 0 i. Consequently, the second Pauli matrix would be printed
0 + 0 i 0 - 1 i
0 + 1 i 0 + 0 i
What can I do to get it to print out instead as
0 - i
i 0
please? This is clearly much easier to understand at a quick glance.
Relevant answer
Answer
Debopam, I've noted your suggestion, but R does every calculation I want, easily and efficiently. It does everything I need (at least, so far, and I don't doubt that will continue) except for this matter of formatting the output. I don't want to spend time on other software. I'll use R without getting this issue sorted if I have to, but things will be far easier if the formatting gets sorted.
  • asked a question related to Matrices
Question
6 answers
What are the most important properties of pairwise comparison matrices (PC matrices)?
Also, please provide related applications if you can.
Relevant answer
Answer
Let A = (a_ij) be an n × n square matrix such that a_ij > 0 for every i, j ∈ {1, 2, ..., n}. Then A is said to be a pairwise comparison matrix (PC matrix) if it has the form A = [1, a_12, ..., a_1n; 1/a_12, 1, ..., a_2n; ...; 1/a_1n, 1/a_2n, ..., 1].
That is, A has 1s on the diagonal, and each entry below the diagonal is the reciprocal of the corresponding entry above the diagonal (a_ji = 1/a_ij).
  • asked a question related to Matrices
Question
1 answer
Hello,
I am asking for suggestions on the differentiation medium and the number of differentiation days, and in addition on the possible use of matrices and coatings.
Thank you.
Relevant answer
Answer
My suggestion is low glucose media with 2% heat inactivated horse serum. The coating can be done to taste and purse. Collagen works, so does laminin, and dilute matrigel but the latter are more expensive. All this works best if you are plating myoblasts with either very small or no fibroblast contamination. I would also suggest if you want to plate them for a long time to use a confluent layer of irradiated fibroblasts as the coating. With the other coatings you should start to see myotubes in 3-4 days and they will begin to die off in 5-8 days. On fibroblasts you can keep them for 20+ days and achieve striations (look at David Allen's paper for mouse)
  • asked a question related to Matrices
Question
13 answers
My team and I are in the middle of a prioritization problem that involves 350 alternatives (see figure for context about alternatives) or so. I have used the AHP to support the decision-making process in the past with only 7 or 8 alternatives and it has worked perfectly.
I would like to know if the AHP has a limit on the number of alternatives, because consistency may become a problem as Dr. Saaty's method provides Random consistency Indexes for matrix sizes of up to 10.
I was thinking in distributing the 350 alternatives in groups of 10, according to an attribute or classification criteria, to be able to use the RI chart proposed by Dr. Saaty.
If there are other more adequate multi-criteria analysis tools, or different approaches to calculate the RI for larger matrices, please let me know.
Greetings and thank you,
Relevant answer
Answer
Dear José de la Garza
I don't think that AHP has a limit on the number of alternatives; however, in your case, dealing with 350 alternatives involves a tremendous workload, and if, for whatever reason, after you finish you add or delete an alternative or a criterion, as usually happens, you have to start all over again.
I would suggest not making pair-wise comparisons of criteria; instead, the group may evaluate each criterion separately and then find the average. Consistency, or lack of it, is a property of AHP and, in my opinion, useless, since the DM may be forced to adjust something that he/she believed, assuming that there must be transitivity with a 10% tolerance. And all of this trouble to gain what?
Nothing, because they can't assume that the scenario in the real world is transitive. Maybe it is, or maybe it is not.
I believe that your group criteria are OK but short.
For instance, don't you think that a criterion qualifying each supplier's compliance history, in time and in quantities, is important?
You can get a hint of it by researching the history of each supplier and asking your competition. What about the type and age of machinery? Are your potential suppliers metal foundries for aluminium, iron, precision casting? (I guess so, since you talk about casting.)
If so, it appears that the level-2 criteria may be related. For instance, I don't think that you can address manufacturing capability independently of production capacity and cost. A foundry with small capacity will most probably have higher production costs than a large one.
If this is the case, you are not allowed to use AHP, because in this method all criteria must be independent. This was specifically established by its creator, Dr. Thomas Saaty.
What I would suggest is computing weights independently and applying them to a decision matrix that responds to real issues, for instance production capacity in kg/day, cost per unit, expertise, financial capacity, size of the technical department, etc.
Once you have that, you can apply methods like PROMETHEE, TOPSIS, ELECTRE, VIKOR, etc., to find the best supplier, or else a method that does not use weights, like SIMUS.
I hope it helps
  • asked a question related to Matrices
Question
1 answer
I am looking for a way to connect a classic 4-step transport model (macro) with a micro-level model. The purpose is to capture a behavioural response (change in travel behaviour) of people to some specific policy change (road charges…) and feed this information into another micro-level model (a microsimulation model with individuals grouped into households). The difficulty is that the 4-step transport model is a macro model, where we have aggregate flows of people (numbers of trips), not individuals. The output is in the form of OD matrices between different geographical zones (of the studied area) before and after the reform. They show the number of trips between zones, and the total matrix is subdivided into OD matrices for different travel modes and some combinations of socio-economic profile and trip purpose.
My question is: what would be the best approach to extract information from aggregate OD matrices to feed into a micro-level dataset? I wish to capture how modal choice will change (e.g., if we introduce a road tax, how each individual in the micro dataset will adapt; maybe they will choose public transport?…).
What would be your suggestions? Maybe someone has already tried to do something similar? I couldn't find anything. Your suggestions would be very appreciated!
Relevant answer
Answer
To extract information from aggregate OD matrices and feed it into a micro-level dataset, you could use a technique called disaggregation. Disaggregation involves breaking down aggregate data into smaller units, such as individual households or trips, to capture more detailed information about travel behavior. Here are a few possible approaches to consider:
  1. Gravity model: The gravity model is a common method for disaggregating OD matrices. The model uses factors such as distance, population, and economic activity to estimate the flow of people or goods between origin-destination pairs. You could use a gravity model to estimate the number of trips between each pair of zones in your OD matrix and then allocate those trips to individuals in your micro dataset based on their characteristics (e.g. income, age, gender, etc.). You could then simulate the effect of a road tax on travel behavior by adjusting the cost or attractiveness of different modes in the model (a standard functional form is sketched after this list).
  2. Synthetic population: Another approach is to create a synthetic population that represents the characteristics and travel patterns of individuals in your study area. A synthetic population is generated by combining data from various sources (e.g. census, surveys, etc.) to create a representative sample of the population. Once you have a synthetic population, you could use it to simulate the effect of different policy scenarios on travel behavior. For example, you could introduce a road tax and see how individuals in the synthetic population respond by choosing different modes of transportation.
  3. Activity-based modeling: Activity-based modeling is a more detailed approach to disaggregation that takes into account the various activities that individuals engage in throughout the day (e.g. work, school, shopping, leisure, etc.) and how those activities influence their travel behavior. Activity-based models typically use data from travel surveys and other sources to estimate the frequency, duration, and timing of different activities for individuals in the study area. You could use an activity-based model to simulate the effect of a road tax on travel behavior by adjusting the cost or availability of different modes for each activity.
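As a concrete reference for the gravity model mentioned in item 1 (a sketch using the usual notation, not taken from the thread), the standard doubly constrained form is:
T_ij = A_i * O_i * B_j * D_j * f(c_ij)
with balancing factors
A_i = 1 / ( Σ_j B_j * D_j * f(c_ij) ) and B_j = 1 / ( Σ_i A_i * O_i * f(c_ij) ),
where O_i is the number of trips produced in zone i, D_j the number of trips attracted to zone j, c_ij the travel cost between the zones, and f a deterrence function (for example exp(-beta * c_ij)). The factors A_i and B_j are obtained by iterating the two formulas until the row and column sums of T match O and D.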
  • asked a question related to Matrices
Question
2 answers
Respected RG members
What are the pros and cons of modeling dynamics using pseudo-stochastic matrices as the transition matrix units?
Relevant answer
Answer
@ Amer Dababneh
Thank you for sharing the link.
Regards
D. Ghosh
  • asked a question related to Matrices
Question
4 answers
For Ns = 1, where Ns is the number of streams per user, the beam steering vectors are calculated by finding the array response vectors corresponding to the largest effective channel gain, i.e., finding the path that maximizes abs(abs(Ar(:,r)'*H*At(:,t))), where Ar and At are functions of the receive and transmit antenna array responses, respectively.
My question is, how to calculate the beam steering matrices for the case in which Ns> 1?
Relevant answer
Answer
In general, when Ns > 1, beamforming can be achieved by using a linear combination of beam steering vectors. The optimal linear combination can be found using a technique called singular value decomposition (SVD).
Specifically, if the channel matrix H is an Nt x Nr matrix, where Nt is the number of transmit antennas and Nr is the number of receive antennas, we can perform SVD on H as follows:
H = U * S * V'
where U is an Nt x Nr unitary matrix, S is a Nr x Nr diagonal matrix, and V is an Nr x Nr unitary matrix.
The diagonal entries of S represent the singular values of the channel matrix, which can be interpreted as the "strength" of the channel. The columns of U and V are the left and right singular vectors, respectively, which represent the directions of maximum channel strength.
To perform beamforming for Ns streams, we can select the first Ns columns of U to form a matrix U_Ns. We can then define the beam steering matrix as:
W = U_Ns'
This means that the transmit signal is multiplied by the conjugate transpose of the first Ns columns of U. The resulting signal is transmitted using the antenna array.
At the receiver, we can perform maximum ratio combining (MRC) to combine the received signals from different antennas. The MRC weight vector can be found by taking the conjugate transpose of the first Ns columns of V, as follows:
w_MRC = (V(:,1:Ns))'
This means that the received signals are multiplied by the conjugate transpose of the first Ns columns of V, and the resulting signals are added together to form the final output signal.
In practice, hybrid beamforming is often used in massive MIMO systems, which involves using a combination of analog and digital beamforming to reduce the complexity of the system. The analog beamforming is implemented using a set of phase shifters, which can be adjusted to steer the beam in different directions. The digital beamforming is implemented using the above-described linear combination of beam steering vectors.
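As a minimal sketch of the SVD-based construction above (placeholder channel, following the Nt x Nr convention used in this answer):
Nt = 8; Nr = 4; Ns = 2;
H = (randn(Nt, Nr) + 1i*randn(Nt, Nr)) / sqrt(2);   % placeholder channel matrix
[U, S, V] = svd(H);
F_tx = U(:, 1:Ns);            % transmit beamforming matrix (Nt x Ns)
W_rx = V(:, 1:Ns);            % receive combining matrix (Nr x Ns)
Heff = F_tx' * H * W_rx       % diagonal, containing the Ns largest singular values
In a hybrid architecture, F_tx would then be factored into an analog (phase-only) part and a small digital part, e.g. by matching it against the array response dictionary At.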
  • asked a question related to Matrices
Question
6 answers
Dear colleagues, I would like to create a MATLAB script that computes the (finite element) matrices while varying the number of elements of a rectangular mesh, but I don't know how to proceed. Any help would be appreciated.
Relevant answer
Answer
you should define the rectangular mesh by setting its dimensions and number of elements, then divide the mesh into smaller elements, for example:
L = 1;
numberElementsX=8;
numberElementsY=8;
numberElements=numberElementsX*numberElementsY;
Calculate the stiffness matrix using numerical integration techniques and specify the boundary conditions. Repeat the calculation for different numbers of elements by including a loop, as in the sketch below.
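A minimal, self-contained sketch of such a loop (scalar Laplace problem on a unit square with 4-node square elements; replace Ke with your own element routine obtained from numerical integration for other element types or physics):
Ke = (1/6) * [ 4 -1 -2 -1;
              -1  4 -1 -2;
              -2 -1  4 -1;
              -1 -2 -1  4];                  % Q4 Laplace stiffness of a square element
for nEl = [4 8 16 32]                        % elements per direction
    nNodX = nEl + 1;
    nNodes = nNodX^2;
    K = sparse(nNodes, nNodes);              % global stiffness (1 dof per node)
    for ey = 1:nEl
        for ex = 1:nEl
            n1 = (ey-1)*nNodX + ex;          % lower-left node of the element
            conn = [n1, n1+1, n1+nNodX+1, n1+nNodX];   % Q4 connectivity
            K(conn, conn) = K(conn, conn) + Ke;
        end
    end
    fprintf('%d x %d elements -> K is %d x %d with %d nonzeros\n', ...
            nEl, nEl, nNodes, nNodes, nnz(K));
end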
  • asked a question related to Matrices
Question
2 answers
Hi,
I have exported the mass and stiffness matrices for a 2D Euler-Bernoulli beam element (B23 in ABAQUS). These elements have three degrees of freedom per node (two translations, one rotation); therefore, for a single element I would have expected a 6x6 matrix for both matrices. However, I get an 8x8 matrix for both.
Can anyone tell me where these extra DOF are coming from?
I have attached the input file I used to extract the matrices as well as the mass and stiffness matrices. The beam material properties are:
L = 1m
b = 0.01 m
h = 0.01 m
E = 70e9 Pa
rho = 2700 kg/m^3
v = 0.3
Relevant answer
Answer
The .dat file reports that:
NUMBER OF ELEMENTS IS 1
NUMBER OF NODES IS 3
NUMBER OF NODES DEFINED BY THE USER 2
NUMBER OF INTERNAL NODES GENERATED BY THE PROGRAM 1
TOTAL NUMBER OF VARIABLES IN THE MODEL 8
From Abaqus theory manual (sect.3,5,3. Euler-Bernoulli beam elements, Interpolation): “To eliminate the unwanted axial strain constraint, in Abaqus the stretch at the node of each such element is taken as an internal variable, local to the element (a third internal node is created for this purpose, and so it is not shared with neighboring elements.)”
  • asked a question related to Matrices
Question
1 answer
K, M, and C of shear building with Distributed TMDs
Relevant answer
Answer
To obtain the mass, stiffness, and damping matrices of a shear structure with TMDs (tuned mass dampers) distributed in the structure's floors, you can use a combination of analytical and experimental methods. Here are some general steps you can follow:
  1. Analyze the structure's natural frequencies and modes of vibration using a finite element analysis (FEA) software or other analytical method.
  2. Measure the structure's response to an excitation, such as ambient vibrations or a controlled input, using sensors such as accelerometers.
  3. Use the measured response data to identify the modal properties of the structure, such as natural frequencies and mode shapes.
  4. Determine the mass, stiffness, and damping matrices for the TMDs based on the design and properties of the TMDs.
  5. Incorporate the TMD matrices into the overall mass, stiffness, and damping matrices of the structure using a modal combination method, such as the Craig-Bampton method or the component mode synthesis method.
  6. Validate the model by comparing the predicted response of the structure with TMDs to the measured response.
Note: each TMD can be modeled as a point mass-spring-damper system and added to the main structure matrices using a substructuring approach such as the Craig-Bampton method; this will change the natural frequencies and damping ratios of the main structure. A minimal assembly sketch is given below.
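A minimal MATLAB assembly sketch with assumed values (3-storey shear building with one TMD on the top floor; distributed TMDs are handled the same way, one extra degree of freedom per damper, coupled to its host floor):
m  = 1e5 * ones(3,1);  k = 2e8 * ones(3,1);  c = 1e5 * ones(3,1);   % storey properties (assumed)
mt = 0.02*sum(m);      kt = 1e6;             ct = 5e3;              % TMD properties (assumed)
M = diag(m);                                                        % lumped floor masses
K = diag([k(1)+k(2); k(2)+k(3); k(3)]) - diag(k(2:3),1) - diag(k(2:3),-1);   % shear-building stiffness
C = diag([c(1)+c(2); c(2)+c(3); c(3)]) - diag(c(2:3),1) - diag(c(2:3),-1);   % shear-building damping
M = blkdiag(M, mt);                                                 % append the TMD dof
K = blkdiag(K, 0); K(3,3) = K(3,3)+kt; K(3,4) = -kt; K(4,3) = -kt; K(4,4) = kt;
C = blkdiag(C, 0); C(3,3) = C(3,3)+ct; C(3,4) = -ct; C(4,3) = -ct; C(4,4) = ct;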
  • asked a question related to Matrices
Question
1 answer
I need the mass and stiffness matrices of a specific element of a structure in global coordinates.
Relevant answer
Answer
Greetings,
It seems to me that such information is limited in SAP2000, as it works with different calculation processes, but you could use ROBOT instead.
  • asked a question related to Matrices
Question
5 answers
I am busy setting up an LC-MS/MS method for quantifying various analytes in treated wastewater effluent. For method validation, I require a matrix blank but all of the matrices I have evaluated are not blank for the analytes (I have evaluated at least 30 matrices). Any suggestions on what to use as a matrix blank for method validation?
Relevant answer
Answer
They are different, but they are water, and that is the closest matrix you will have. Sometimes you need to take whatever is most similar to your sample as the blank, since it is impossible to have a truly blank sample. To assess matrix effects, you can do standard addition in your wastewater or use deuterated compounds.
  • asked a question related to Matrices
Question
1 answer
I would like to know the fundamental difference between eigenvalues and singular values when applied to spectral analysis of graph adjacency and Laplacian matrices. As far as I know, the SVD works on non-square matrices, but adjacency and Laplacian matrices are square, and they are symmetric if the graph is undirected.
Relevant answer
Answer
Please just check Wikipedia: the singular value decomposition ... generalizes the eigendecomposition of a square normal matrix ... to any m-by-n matrix.
For a symmetric matrix such as the Laplacian of an undirected graph, the singular values are simply the absolute values of the eigenvalues (and the Laplacian eigenvalues are non-negative, so the two sets coincide); see the small illustration below.
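A small MATLAB illustration of this point:
A = [0 1 1 0; 1 0 1 0; 1 1 0 1; 0 0 1 0];   % undirected graph (symmetric adjacency)
Lap = diag(sum(A, 2)) - A;                   % graph Laplacian
sort(eig(Lap))                               % real, non-negative eigenvalues
sort(svd(Lap))                               % the same values (up to ordering)
Ad = [0 1 0; 0 0 1; 1 0 0];                  % directed 3-cycle (non-symmetric adjacency)
eig(Ad)                                      % complex eigenvalues on the unit circle
svd(Ad)                                      % singular values are all equal to 1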
  • asked a question related to Matrices
Question
2 answers
Hello,
I am working on an experiment analyzing rhizosphere and mycorrhizal fungal communities belonging to different tundra plant roots. Using two DNA extraction protocols on the same plant roots, I have generated both bacterial and ectomycorrhizal species matrices.
I want to compare the two matrices to see if the communities are correlated to each other but am running into problems executing. The bacterial matrix is obviously much larger than the ectomycorrhizal matrix which is causing problems when trying to do a mantel test in R.
I know that, in theory, Mantel tests do not require symmetrical matrices; however, the packages I have tried in R (vegan, ape, ade4) all do require symmetrical matrices.
I was hoping somebody might have an idea how I can go about comparing these two matrices, either by modification so that the matrices are symmetrical, or perhaps another piece of code/ software package.
Thanks
Relevant answer
Answer
Take a look at this search; the download is the one that I thought would be most useful to you. Best wishes, David Booth
  • asked a question related to Matrices
Question
2 answers
I am looking for an automated method for the diagonalization of multidimensional matrices (Cubic 3D matrices, for example). Any suggestion would be much appreciated.
Relevant answer
Answer
I'm not sure if it will work in more than 3 dimensions, but in 3 it seems like you can diagonalize the 2D matrices along one axis. You end up with a 2D matrix diagonally across the "volume" of the original matrix. You can then diagonalize this matrix. Keeping track of all the transformations would be an issue if you need to do anything with that information.
  • asked a question related to Matrices
Question
3 answers
if the W matrix is a Square matrix and symmetric
Relevant answer
Answer
The distance matrix has in position (i, j) the distance between vertices vi and vj . The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains boolean values), it gives the exact distance between them.
  • asked a question related to Matrices
Question
2 answers
A=[-2+3i -1+4i; -0.5+2i -0.9+4i];
B=[-1.5+3i; 0];
C=[-0.25+5i 0];
D=0;
How to plot step/impulse response of this system
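For reference, a minimal MATLAB sketch that does not rely on the Control System Toolbox: it simulates the unit-step response of x' = Ax + Bu, y = Cx + Du by zero-order-hold discretization (the complex entries are handled, although a physical state-space model would normally have real matrices):
A = [-2+3i -1+4i; -0.5+2i -0.9+4i];
B = [-1.5+3i; 0];
C = [-0.25+5i 0];
D = 0;
dt = 1e-3; t = 0:dt:5;                 % time grid
Ad = expm(A*dt);                       % discrete-time state matrix
Bd = A \ (Ad - eye(2)) * B;            % discrete-time input matrix (A is invertible here)
x = zeros(2, numel(t));
y = zeros(1, numel(t));
for kk = 1:numel(t)-1
    y(kk)     = C*x(:,kk) + D*1;       % unit step input u = 1
    x(:,kk+1) = Ad*x(:,kk) + Bd*1;
end
y(end) = C*x(:,end) + D*1;
plot(t, real(y), t, imag(y)); xlabel('t'); ylabel('y(t)');
legend('real part', 'imaginary part')
For an impulse response, set the initial state to x(:,1) = B and use u = 0 instead.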
  • asked a question related to Matrices
Question
15 answers
Hello,
I am working on a pre-analysis plan for an experiment that I want to conduct. The experiment is about productivity behaviour. Participants solve matrices, and the number of matrices solved is my primary outcome variable (i.e. productivity).
There are 2 groups:
1.) Treatment Group: Working under stimulus
2.) Control Group: Working without stimulus
In the beginning of the experiment, I conduct a baseline-phase to check whether potential differences in productivity may stem from differing baseline abilities.
My hypothesis states that productivity in both groups is the same.
What is the best way to investigate the hypothesis?
a) First check for differences in baseline ability and then conduct a nonparametric / parametric test?
b) Use a linear regression model, use a dummy on the treatment group and include baseline productivity as a control variable?
c) Is it even better to conduct both (e.g. Mann-Whitney U-Test and a subsequent linear regression) to arrive at more compelling results? Or would that approach even be counterproductive?
Kind regards and thanks for your help!
Relevant answer
Answer
We proposed a new dependence measure between categorical and numerical variables; consequently, a new test of distribution equality among groups can be performed by testing whether the new correlation is zero. The new test has many nice properties, such as consistency and high power, especially in unbalanced cases. Ronán Michael Conroy
Dang, X., Nguyen, D., Chen, Y. and Zhang, J. (2021). A New Gini Correlation between Quantitative and Qualitative Variables. Scandinavian Journal of Statistics, 48 (4), 1314-1343. https://doi.org/10.1111/sjos.12490
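For option (b) from the question, here is a minimal sketch with placeholder data (assumes the Statistics and Machine Learning Toolbox): an ANCOVA-style regression of productivity on a treatment dummy while controlling for baseline productivity.
n = 100;
treatment    = [zeros(n/2,1); ones(n/2,1)];                  % 0 = control, 1 = stimulus
baseline     = poissrnd(10, n, 1);                           % matrices solved in the baseline phase
productivity = baseline + 0.5*treatment + poissrnd(2, n, 1); % outcome in the main phase
mdl = fitlm(table(treatment, baseline, productivity), ...
            'productivity ~ treatment + baseline');
disp(mdl.Coefficients)   % the treatment coefficient (and its p-value) tests the group effect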
  • asked a question related to Matrices
Question
3 answers
Dear community,
I'm trying to analyze a document collection based on multiple terms. I have 80 terms and 137 documents. I would like to reduce the number of terms and cluster different words into words that reflect the same concept. For example, the terms reduce cost, reduce expense, cut cost, technology, etc. can be grouped under one concept, which is cost minimization.
I have computed the U, d, V terms of the singular value decomposition (SVD) of the term-document matrix, and I have chosen to reduce the terms to 5 dimensions instead of 80. So these matrices have the dimensions U (80x5), d (5x5), and V^T (5x137).
I would like to know: what is the next step?
Many thanks for your great help
Relevant answer
Answer
Samah Jradi Singular value decomposition (SVD) can be used to reduce the dimensions of the term-document frequency matrix and to increase performance. SVD reduces the dimension of the matrix, making it more compact and informative. Keeping more SVD dimensions preserves more detail of the original data, while keeping fewer gives a more compact, more abstract summary.
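A minimal sketch of the next step (placeholder matrix; kmeans assumes the Statistics and Machine Learning Toolbox): after the truncated SVD X ~ U*d*V', each row of U*d is a 5-dimensional concept embedding of a term, and clustering these rows groups terms that load on similar concepts.
X = rand(80, 137);                       % placeholder; use your actual term-document matrix
k = 5;                                   % number of latent dimensions kept
[U, S, V] = svds(X, k);                  % truncated SVD
termVecs = U * S;                        % 80 x 5 term embeddings
docVecs  = V * S;                        % 137 x 5 document embeddings
nConcepts = 10;                          % assumed number of concepts
idx = kmeans(termVecs, nConcepts, 'Replicates', 20);
Terms sharing the same idx can then be merged into one concept (e.g. "reduce cost", "cut cost" -> "cost minimization"), and documents can be compared in the reduced space via cosine similarity of the rows of docVecs.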
  • asked a question related to Matrices
Question
5 answers
Hi
Some authors use either correlation matrices or VIF to identify collinearity between variables, while others apply both to improve model performance and interpretability. I would therefore be happy to get statistical explanations about using these tools separately or together, and to learn whether other robust mechanisms for checking collinearity exist.
Thank you in advance!
Relevant answer
Answer
Both can be used, but I would suggest isolating the variables through PCA (principal component analysis).
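As a complement, a minimal MATLAB sketch (placeholder data; replace X with your own predictor matrix, one column per variable) showing that the VIF of predictor j, 1/(1 - R_j^2), can be read directly off the diagonal of the inverse correlation matrix of the predictors:
X = randn(100, 4);
X(:,4) = X(:,1) + 0.1*randn(100,1);   % make column 4 nearly collinear with column 1
R   = corrcoef(X);                    % pairwise correlation matrix
vif = diag(inv(R));                   % variance inflation factors
disp([(1:size(X,2))', vif])           % predictor index and its VIF
Common rules of thumb flag VIF > 5 (or > 10) as problematic collinearity, whereas pairwise correlations can miss collinearity that involves more than two variables.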
  • asked a question related to Matrices
Question
4 answers
I am doing sMRI analysis in CAT12 and FreeSurfer. Acquisition of T1-weighted images: echo time/repetition time (TE/TR) = 2.5 ms/1900 ms, field of view = 270 × 270 mm, slice thickness = 1 mm, 176 slices, 256 × 246 voxel matrices. But 2 subjects have a different number of slices (160 slices) from the others. I want to know the reason for this difference, and should I exclude those subjects from my analysis?
Relevant answer
Answer
There may be different reasons for having different numbers of slices for different subjects. Most probably, the scanning personnel aim to cover the whole target (i.e., the brain), and the head size along the slice dimension differs between subjects, so the number of slices differs as well.
Regarding the exclusion of subjects with a different number of slices, the answer is yes and no. Yes, if you see (via visualization) that some part of the brain is missing or that the image will not register well to the reference template (if you are using one). On the other hand, if the brain is completely covered and, after registration/resampling, the number of slices can be matched, then keep those subjects in the data.
  • asked a question related to Matrices
Question
13 answers
A(A^T) = (A^T)A = I (identity matrix)
Then A always has real entries.
Is it true?
Relevant answer
Answer
A(A^T) = (A^T)A: real matrices with this property are called normal.
Normal matrix - Wikipedia
If A(A^T) = (A^T)A = E (the identity matrix), then A is called orthogonal. Orthogonal matrices can also be defined over the field C of complex numbers; hence complex entries are admissible, and the claim is not true in general.
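A minimal counterexample sketch in MATLAB (note that .' is the plain, non-conjugate transpose):
A = [sqrt(2)  1i;
     -1i      sqrt(2)];
disp(A*A.')   % the 2x2 identity (up to rounding)
disp(A.'*A)   % the 2x2 identity (up to rounding)
A is a "complex orthogonal" matrix: it satisfies the defining relation with the plain transpose yet has non-real entries, so the answer to the question is no.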
  • asked a question related to Matrices
Question
2 answers
I’ve linearised about my current state estimate with my equations that relate my measured variables to the states. But I think I must be missing an equation or setting it up wrong because the dimensions of my matrices being concatenated are not consistent
Relevant answer
Answer
To obtain the matrix C by linearization in the discrete-time EKF, you need to evaluate the Jacobian matrix of the measurement equation with respect to the state variables at the current state estimate. A function f(.) from an n-dimensional state to m-dimensional observations has an m x n Jacobian matrix whose (i,j)-th element is the first-order derivative of the i-th observation with respect to the j-th state variable. This gives consistent dimensions for implementing the EKF.
You can look up more about Jacobian matrices.
Hope it helps.
Regards,
Himali
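A minimal sketch with a hypothetical measurement function h and state estimate x_hat: the EKF measurement matrix C is the Jacobian of h evaluated at x_hat, approximated here by central finite differences.
h     = @(x) [sqrt(x(1)^2 + x(2)^2);    % e.g. range ...
              atan2(x(2), x(1))];       % ... and bearing of a 2-D state (assumed example)
x_hat = [3; 4];                         % current state estimate (assumed)
n = numel(x_hat); m = numel(h(x_hat));
C = zeros(m, n); dx = 1e-6;
for j = 1:n
    e = zeros(n, 1); e(j) = dx;
    C(:, j) = (h(x_hat + e) - h(x_hat - e)) / (2*dx);
end
C is then m x n, so products such as C*P*C' + R in the EKF update have consistent dimensions.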
  • asked a question related to Matrices
Question
12 answers
Lower Matrix to represent the approximate final stages of payment and upper representing the early payments
Relevant answer
Answer
LU decomposition is a basic technique in numerical linear algebra. Numerical linear algebra has many applications, since a vast number of numerical algorithms reduce to linear algebra.
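A small illustration in MATLAB: lu computes a permuted factorization P*A = L*U, with L unit lower triangular and U upper triangular.
A = [4 3 2; 6 3 5; 2 5 7];
[L, U, P] = lu(A);
norm(P*A - L*U)    % ~0, confirming the factorization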
  • asked a question related to Matrices
Question
12 answers
I am specifically trying to analyze:
x_dot = a*x*(1-x)*(1-y)
y_dot = y*(1-y)*(c*(1-x)-b)
where {0<= x, y <=1}. Stability of fixed point at (1-b/c, 1)?
Relevant answer
Answer
We cannot determine the stability from linearization at such an equilibrium point: both eigenvalues of the Jacobian there are zero, so the point is non-hyperbolic. There must exist a bifurcation at this equilibrium point; see bifurcation theory for the case when two eigenvalues are zero.
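A minimal check (assumes the Symbolic Math Toolbox): evaluating the Jacobian at the fixed point (1 - b/c, 1) shows why linear stability analysis is inconclusive there.
syms x y a b c real
f = [a*x*(1-x)*(1-y);
     y*(1-y)*(c*(1-x) - b)];
J  = jacobian(f, [x, y]);
J0 = simplify(subs(J, [x, y], [1 - b/c, 1]))
J0 comes out as [0, -a*(1 - b/c)*(b/c); 0, 0]: both eigenvalues are zero, so the fixed point is non-hyperbolic and center-manifold or bifurcation analysis is required.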
  • asked a question related to Matrices
Question
17 answers
I want to compare two theorems and see which one has the larger feasibility domain, like the attached picture.
For example, I have the following matrices: A1=[-0.5 5; a -1]; A2=[-0.5 5; a+b -1]. They are functions of 'a' and 'b'. I want to plot, as a function of 'a' and 'b', the points where an LMI is feasible, for example the following LMI: Ai'*P+P*Ai<0
then I want to plot the points where another LMI is  feasible, for example:
Ai'*Pi+Pi*Ai<0
I have seen similar maps in many articles where the authors demonstrated that one LMI is better than another because it is feasible for more pairs (a, b).
Relevant answer
Answer
Usually, these problems are easily solved by YALMIP toolbox in MATLAB.
Here, I write the following pseudocode for your problem:
It can be solved by defining two 'for-loop' as follows:
for min(a)<a<max(a)
for min(b)<b<max(b)
solve the proposed LMI-based optimization problem
if the LMI problem is feasible
figure(1)
hold on
plot(a,b,'.r')
end
end
end
OR, you can use the following sample (with placeholder matrices; substitute your own A1(a) and A2(a,b)):
for a = 0:1:15
for b = -12:1:0
yalmip('clear')
% Model data (placeholders)
A1 = [a 0.02;0.35463 0.2035];
A2 = [0.7025 0.02;0.2525 0.1025];
P1 = sdpvar(2);   % symmetric 2x2 decision variable
P2 = sdpvar(2);
con1 = P1 >= 1e-6*eye(2);             % P > 0 imposed with a small margin
con2 = P2 >= 1e-6*eye(2);
con3 = A1'*P1+P1*A1 <= -1e-6*eye(2);  % strict LMIs imposed with a small margin
con4 = A2'*P2+P2*A2 <= -1e-6*eye(2);
constraints = [con1, con2, con3, con4];
opt = sdpsettings('solver','mosek','verbose',0);   % or another installed SDP solver
diagnostics = optimize(constraints,[],opt);        % pure feasibility problem
P1v = value(P1);
P2v = value(P2);
if diagnostics.problem == 0 && min(eig(P1v)) > 0 && min(eig(P2v)) > 0
figure(1)
hold on
plot(a,b,'.k','MarkerSize',5)
end
end
end
  • asked a question related to Matrices
Question
5 answers
I tried to use the following commands with ANSYS Workbench 2019.2 to export the mass and stiffness matrices:
!Stiffness
*DMAT,MatKD,D,IMPORT,FULL,file.full,STIFF
*PRINT,MatKD,Kdense.matrix
!Mass
*DMAT,MatMD,D,IMPORT,FULL,file.full,MASS
*PRINT,MatMD,Mdense.matrix
The code shown worked with Workbench R15.
When I try it with Workbench 2019.2, I get the following error:
(*DMA Command : Fails to open the file file.full.)
Some friends told me to use the following code for the sparse matrix:
*SMAT,matk,D,IMPORT,file.full,FULL,STIFF
*PRINT,matk,matk,txt
I received an error:
                       ( *SMA Command : The File Format (FILE.FULL) is not recognized.  This   command is ignored.)
Does anyone face such a problem, and how can I manage this error?
Relevant answer
Answer
Appreciated, I will try it.
  • asked a question related to Matrices
Question
10 answers
A is an n x n matrix whose five eigenvalues are zero and whose other n-5 eigenvalues are non-zero. So is the rank of the matrix n-5?
Relevant answer
Answer
Consider the n x n matrix A with zeros on and below the diagonal and non-zero entries above: all n eigenvalues are zero, but the rank may be anywhere from n-1 down to 1. So it seems the answer is negative; in general, the rank is only bounded below by n-5, with equality when the zero eigenvalue has geometric multiplicity 5 (for instance, when A is diagonalizable).
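A small MATLAB illustration of this point:
A = diag(ones(5,1), 1);   % 6x6 nilpotent matrix: ones on the first superdiagonal
eig(A)                    % all six eigenvalues are zero
rank(A)                   % yet the rank is 5
So for defective matrices the number of zero eigenvalues alone does not determine the rank.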
  • asked a question related to Matrices
Question
10 answers
Hello,
I am trying to export the Mass and Stiffness matrices using Ansys Workbench via the Modal analysis module, with APDL commands inserted. However, there is no .matrix file generated. Could anyone please tell me what I'm doing wrong? The figure illustrating the procedure as well as the commands is attached.
Thank you.
Rui Wang
  • asked a question related to Matrices
Question
3 answers
Hi All,
I have had lots of experience with computing RV and covariance matrices with equity data. I wanted to compute RV for US Treasury bonds and realized covariance/correlation with the S&P500. I downloaded 5 min continuous TY futures from Reuters Datascope. There are lots of missing observations around the rolling of contracts at maturity. This makes the data nearly impossible to use to construct RV.
Any suggestions on intraday data/different series to compute bond RV and covariance with equities?
Thanks
Adam
Relevant answer
Answer
Hello, I would personally recommend the FRED API because of its very useful database and engine for analyzing bond and futures quotes. Our brokerage houses in Poland very often analyze the volatility of financial instruments on the basis of FRED. Best regards
  • asked a question related to Matrices
Question
1 answer
I am working on a community detection problem based on time-series correlation data. The principal literature reference is this:
Random Matrix Theory (RMT) is used to identify non-random components in correlation matrices. The paper states: "A correlation matrix constructed from N completely random time series of duration T has (in the limits N → +∞ and T → +∞ with 1 < T /N < +∞) a very specific distribution of its eigenvalues, known as the Marcenko-Pastur or SenguptaMitra distribution".
Now, in my case I have N >> T, which would violate the 1 < T /N < +∞ condition.
Does anybody know how N >> T affects the Marcenko-Pastur distribution and the validity of RMT in the context of correlation matrices? Would it change anything if I resample the time series to get N < T?
Thanks a lot for the help.
Relevant answer
Answer
The MP law also covers the case T/N < 1; please check any textbook on random matrix theory for details. I was thinking the requirement T/N > 1 is just there to guarantee that the correlation matrix is non-singular. (I am sorry that I do not have time to read the article.)
You can set a really small value for T/N to calculate the MP law. This may help you.
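For illustration, a minimal MATLAB sketch comparing the eigenvalue spectrum of the correlation matrix of N purely random series of length T with the Marcenko-Pastur density (here q = N/T < 1; when N > T the correlation matrix is singular, N - T eigenvalues are exactly zero, and the continuous part of the law is rescaled accordingly):
N = 200; T = 1000; q = N/T;
X = randn(T, N);                          % random series, one per column
C = corrcoef(X);                          % N x N sample correlation matrix
ev = eig(C);
lamMin = (1 - sqrt(q))^2; lamMax = (1 + sqrt(q))^2;
lam = linspace(lamMin, lamMax, 400);
rho = sqrt((lamMax - lam).*(lam - lamMin)) ./ (2*pi*q*lam);   % MP density (unit variance)
histogram(ev, 50, 'Normalization', 'pdf'); hold on
plot(lam, rho, 'r', 'LineWidth', 1.5)
xlabel('\lambda'); ylabel('\rho(\lambda)'); legend('empirical', 'Marcenko-Pastur')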