Science topic

Scaling - Science topic

Explore the latest questions and answers in Scaling, and find Scaling experts.
Questions related to Scaling
  • asked a question related to Scaling
Question
1 answer
Here[0] I described an issue with the scaling factor for Landsat 8 Level 2 products. I've used the scaling factor from here[1] but when I compute the NDVI the result has many pixels out of the validity range.
----------
Relevant answer
Answer
You can use the SREM method for surface reflectance estimation and then can perform NDVI calculation. If you need any help regarding NDVI calculation using SREM, feel free to contact me.
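Independently of SREM, a minimal Python sketch of the usual rescaling step may help: the constants below are the ones documented for Landsat Collection 2 Level-2 surface reflectance (scale 0.0000275, offset -0.2), and the band arrays and fill handling are illustrative only; verify the values against your product's metadata.
import numpy as np
# Hypothetical DN arrays for the red and NIR surface reflectance bands (e.g. SR_B4, SR_B5 on Landsat 8)
red_dn = np.array([[7500, 9200], [0, 12000]], dtype=np.uint16)
nir_dn = np.array([[21000, 24000], [0, 15500]], dtype=np.uint16)
FILL = 0                            # fill value; confirm in your product's metadata
SCALE, OFFSET = 0.0000275, -0.2     # Collection 2 Level-2 reflectance rescaling per USGS documentation
def to_reflectance(dn):
    refl = dn.astype(float) * SCALE + OFFSET
    refl[dn == FILL] = np.nan       # mask fill pixels so they cannot distort the NDVI
    return refl
red, nir = to_reflectance(red_dn), to_reflectance(nir_dn)
ndvi = (nir - red) / (nir + red)
print(np.nanmin(ndvi), np.nanmax(ndvi))   # should stay within [-1, 1] once fill/saturated pixels are masked
Out-of-range NDVI values usually point to unmasked fill or saturated pixels, or to applying the scale without the offset.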
  • asked a question related to Scaling
Question
3 answers
Hey, I often get this question: what is the difference between adoption and scaling? And isn't scaling just large-scale adoption?
What would be your response?
Relevant answer
Answer
Adoption corresponds to the change or modification of individual behavior towards innovations while scaling can have a double meaning. It can mean replication/expansion/transfer of technology/innovations i.e. scaling out. It can also mean institutionalization, transition, transformation of innovations i.e. scaling up.
  • asked a question related to Scaling
Question
3 answers
Hi guys.
I am working on an Abaqus simulation using Dynamic/Explicit solver and user subroutine VUMAT to implement the damage model that I am developing now.
Target time increment is set to be 5E-05 via mass scaling. Yet, at some point during the simulation, the time increment became smaller than 5E-05 (ex. 5.436E-06).
How can this happen?
Can the stable time increment change DURING the simulation?
I wonder whether this is technically possible in Abaqus/Explicit software.
Please share your thoughts and experiences.
Thank you in advance.
Relevant answer
Answer
Dear Lee,
The stable time increment depends on the material's elasticity and density and on the element dimensions. At the beginning of the analysis, the explicit solver automatically calculates a stable time increment for every element in the model and takes the minimum value as the stable time increment. The solver then computes the first increment, and so on. Because the elements deform and each element's stable time increment therefore changes, the explicit solver recalculates the minimum stable time increment at the start of each increment and uses it for that increment.
Therefore, to avoid errors from excessive deformation speed, the minimum stable time increment or a smaller value must be used. The smaller value can be a fraction of the minimum stable time increment or a fixed value that stays below the minimum time increment throughout the solution.
Regarding mass scaling, it can be used for quasi-static problems under certain conditions. Use it carefully.
Regards,
M. Khodaei
  • asked a question related to Scaling
Question
1 answer
Hi, I want to use mass scaling in seismic analysis in Abaqus.
Can we use mass scaling in seismic analysis?
Relevant answer
Answer
Technically, this is not a problem. However, you should understand the meaning of this tool. When you use mass scaling, you artificially increase the mass and thus affect the inertial properties of the structure. If the mass increase is insignificant, the simulation error will also be insignificant.
  • asked a question related to Scaling
Question
5 answers
Hi, I am looking to perform a regression analysis with the CATREG function in SPSS. Some of the guides I have found online use terminology I am unfamiliar with, so I was wondering if anyone had advice on some of the practical steps involved. All help is hugely appreciated!
Task description: Expert and Amateur drummers performed a series of drumming tasks at five different speeds (80, 160, 240, 320, 400 hits per minute). Motion Capture was used to record arm/hand/stick movements, and forearm muscular activity recorded with EMG. A questionnaire was used to collect information about participants’ practice history.
Aim: The aim is to investigate if performance characteristics (e.g., variability in timing) can be predicted by physiological variables (EMG, movement patterns) and practice history variables.
Analysis: I have 8 predictor variables, and 1 outcome. To my knowledge, CATREG has less emphasis on assumptions (normality, etc.) and is a supported alternative to dummy-coding categorical variables. I believe that makes it a viable choice for our aim.
However, I am still uncertain on some points. Would anyone be able to advise on the following? -
1. Is CATREG only appropriate if your outcome/dependent variable is categorical? In my case the DV is continuous.
2. In regards to scaling/discretization, am I correct that numerical predictors do not require specific scaling, only nominal /ordinal ones? If this is correct, are the following SPSS scaling options for each variable ok?
  • For the variable 'expertise' (2 levels): Scaling is Ordinal, Discretization is Grouping Normal 2?
  • For the variable 'Tempo' (5 levels): Scaling is Nominal, Discretization is Grouping Uniform 5?
Thanks!
  • asked a question related to Scaling
Question
1 answer
Any ideas please
Relevant answer
Answer
# specify original range
a <- -10
b <- 10
# Get a vector of 50 numbers from -10 to 10
set.seed(123)
dt <- runif(50, min = a, max = b)
# specify new range
my_min <- 0.2
my_max <- 0.8
# transform to the new range
scaled <- (dt - a) / (b - a) * (my_max - my_min) + my_min
# check that it worked
min(scaled)
max(scaled)
# reverse the transformation (named `reversed` to avoid masking base::rev)
reversed <- (scaled - my_min) / (my_max - my_min) * (b - a) + a
# original minus reversed transform should be close to zero
dt - reversed
Look at the link below for a full explanation of how this works
  • asked a question related to Scaling
Question
1 answer
Is it possible to determine the correction factor (Km) to estimate the AED for lepidopteran species? I haven't found any literature that discusses the MRSD for lepidopteran species. I would very much appreciate it if someone has any insight into this. I need to calculate the drug dose for Bombyx mori.
Relevant answer
Answer
Dear Siam,
I'm afraid I haven't come across allometry for such cases. In general allometry is used to scale between mammalian species (or individual of different sizes for pediatric applications).
The general idea is that flows (e.g. clearance) scale with an exponent of about 0.75, and this also applies to dose (which is expected to be a function of clearance). So it follows a relationship of the form a*BW^0.75.
In absolute terms you could apply the formula with the weight of any species / individual. However, I suspect that the empirical principles mainly established between mammals may not apply to invertebrates.
To note that even between more similar species like mammals allometry does not always work well.
Out of curiosity, how do you apply drugs in insects? Can you actually dose orally, or how is it done?
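As a rough numerical sketch of the a*BW^0.75 idea mentioned above (all values are invented, and, as noted, the mammalian exponent may simply not hold for invertebrates such as Bombyx mori):
def scale_dose(ref_dose_mg, ref_bw_kg, target_bw_kg, exponent=0.75):
    # dose is assumed to scale like clearance, i.e. with body weight to the ~0.75 power
    return ref_dose_mg * (target_bw_kg / ref_bw_kg) ** exponent
# e.g. extrapolating a 1 mg dose established in a 0.02 kg animal to a hypothetical 0.003 kg larva
print(scale_dose(1.0, 0.02, 0.003))   # ~0.24 mg, to be treated with great caution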
  • asked a question related to Scaling
Question
5 answers
A LiDAR las file presents data in a point cloud of x, y, z coordinates and provides offsets and scaling factors, but what are the units?
Relevant answer
Answer
LIDAR (or LADAR) is the determination of range by light or laser: an optical remote sensing technology using pulses of light, usually laser beams, from which distances to, or characteristics of, the observed targets are calculated.
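Regarding the units in the question: in the LAS format the X, Y, Z fields are stored as integers, and real-world coordinates are recovered as raw_value * scale + offset; the resulting units are those of the coordinate reference system declared in the file header (commonly metres, sometimes feet). A minimal sketch with invented header values:
# values below are invented; the real ones come from the LAS header and point records
x_raw = 1234567                        # integer X stored in a point record
x_scale, x_offset = 0.001, 500000.0    # X scale factor and offset from the header
x = x_raw * x_scale + x_offset         # coordinate in the units of the file's CRS
print(x)                               # 501234.567 (metres if the CRS is metric)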
  • asked a question related to Scaling
Question
4 answers
Hi all, I am using Abaqus to simulate cylindrical turning. No matter how I set the mass scaling factor (16, 100, 256, etc.), the final kinetic energy is always greater than 5% of the internal energy. Finally, I cancelled the mass scaling, and the results show that the kinetic energy is still greater than 5% of the internal energy. How do I set up the model to meet the 5% condition? Thanks!
Relevant answer
Answer
Dear Martin Veidt, Thanks for your suggestion. I have found that in some simulations, the KE increases slowly while the IE increases rapidly.
  • asked a question related to Scaling
Question
3 answers
Compiled allometric data might help to detect scaling patterns.
Or similarities in the scaling relationships might suggest connections otherwise too subtle to find.
Does such a list exist?
Does such a list exist for biological phenomena?
If such lists do not exist should they?
Relevant answer
Answer
No such compilation that I know of (only some reviews, for the scaling of a specific trait in a specific group). So regarding your last question "should it exist", I think creating such a list would be an admirable but difficult task - depends on what you're thinking of exactly as "all known instances". Using all the raw data available would basically be a "database of everything", virtually impossible. It would still take a ton of work to even make a list of every allometric equation ever explicitly stated for every trait in every organism, and you'd also need to be able to update it, and to subset it by trait, taxa (down to intraspecific resolution, don't forget), external factors such as region/temperature/season/age/sex, etc.
(btw if somebody does go and make such a database it should definitely be named ALLometric right?)
  • asked a question related to Scaling
Question
4 answers
I have been working on an ASTM standard model in ANSYS Workbench. It has been crashing with an error message that says, "your product license has numerical problem size limits, you have exceeded these problem size limits and the solver cannot proceed". Is there any way, such as scaling or merging nodes, to reduce the number of nodes for further analysis?
Relevant answer
Answer
Use a plane model if possible, exploit symmetry.
  • asked a question related to Scaling
Question
4 answers
E.g. I am interested in a given balance between quality, cost and time. I have 6 features in the data set: 3 features for cost, 2 for time and only 1 for quality.
I would like to:
- quality features * 2
- cost features *2/3
- times *1
When should I do that: before the transformation/scaling? After the transformation but before scaling? Or even after transformation/scaling? And why?
-Data set is very small: <50 data points
-Features: 6
Relevant answer
Answer
Why add weights? In my opinion, if you want to keep some weighting between features, don't scale. If you want the features to be treated equally, then scale them.
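If you do decide to weight, one sketch (assuming scikit-learn is available; the feature grouping and weights follow the question) is to scale first, so every feature has comparable spread, and then multiply by the weights, so that the weights are the only remaining difference in influence:
import numpy as np
from sklearn.preprocessing import StandardScaler
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                    # invented data: 3 cost, 2 time, 1 quality feature
weights = np.array([2/3, 2/3, 2/3, 1, 1, 2])    # cost * 2/3, time * 1, quality * 2
X_scaled = StandardScaler().fit_transform(X)    # put all features on a common scale first
X_weighted = X_scaled * weights                 # then apply the weights, after scaling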
  • asked a question related to Scaling
Question
2 answers
1. Can anyone help me confirm whether there is a generally accepted range for scaling the Kessler Psychological Distress (K6) scores into categories of psychological distress, i.e. 0-4 Low, 5-9 Mild, 9-24 High?
Using SPSS, how can I compute individual respondents' scores into these categories?
Relevant answer
Answer
By using the method of standard (norm-referenced) levels, which is the best way to classify examinees.
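For the recoding part of the question, the same grouping can be sketched in Python/pandas; the cut-points below are simply the ones proposed in the question (note that 5-9 and 9-24 overlap at 9), not an endorsed standard:
import pandas as pd
k6 = pd.Series([2, 7, 15, 4, 22])   # hypothetical K6 total scores
categories = pd.cut(k6, bins=[-1, 4, 9, 24], labels=["Low", "Mild", "High"])
print(categories)                   # 2 and 4 -> Low, 7 -> Mild, 15 and 22 -> High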
  • asked a question related to Scaling
Question
2 answers
BL-21 strain used.
Relevant answer
Answer
Apparently the conditions, specifically the environment around the microorganisms, are not the same. In general, oxygen transfer and mixing become limiting in different ways. In our lecture notes on scale-up/scale-down there is a nice example on the analysis of a shake flask.
  • asked a question related to Scaling
Question
3 answers
How to select a scale if I am unsure how precisely my respondents can differentiate the points present in the scale?
Relevant answer
Answer
It depends upon the quality of the respondents. While developing achievement test questions, Linn (2008) recommends writing direct questions for primary graders because they will not understand statements or hypotheses as items. Similarly, if the respondents are able to think more critically, then use a 5- or 7-point Likert scale to give them the opportunity to respond at their own level. And if your respondents are children or uneducated people, then use a 3-point scale, because you should not put them in confusion.
  • asked a question related to Scaling
Question
3 answers
After reading a lot through research papers and available literature, I am confused about whether we can perform arithmetic mean on a 5-point Likert type scale which is ordinal in nature. There exist research papers that have calculated the mean of the responses collected through the Likert scale. On the other hand, many statements are made that it is inappropriate to measure arithmetic mean for ordinal data.
Can someone please give a solution to this question?
Relevant answer
Answer
Hello Aditya,
Let me summarize the argument for doing so (summing or averaging responses):
Rensis Likert (see link below) proposed that summated scores (arithmetic sums of responses over a set of related items, each with a "Likert-type" response scale) be used as (pseudo) continuous scores to identify levels of belief / perception / opinion / or sentiment.
His argument for this was that the ordinal (1,2,3...) scaling of response options yielded values that correlated in the .90s with what (at that moment in time) represented more rigorously scaled values.
Since that time, many researchers have been content to consider summated (or the linear transformation thereof to an average) scores as sufficiently close to interval, continuous values to treat as such.
Let me summarize the argument against doing so:
Ordinal variables do not have equal intervals, and therefore arithmetic operations such as summation or averaging do not yield meaningful values.
As well, as Professor Booth correctly notes, there are available methods that are more than capable of handling ordinal scores appropriately.
Finally, IRT models, such as the graded response model, offer a way to calibrate interval-level values onto a scale comprised by ordinal-level responses.
Good luck with your work.
Reference:
  • asked a question related to Scaling
Question
9 answers
I am analysing LC-MS data. I have combined all NEG and POS mode peak intensities in a single file, retaining all metabolites with higher peak intensities (appearing in either mode). Now, when I go for analysis in MetaboAnalyst, which ones should I select among:
1. Sample normalization
2. Data transformation
3. Data Scaling
Thank you for guidance!
Relevant answer
Answer
  • asked a question related to Scaling
Question
3 answers
In the understanding of pedagogy issues with mathematical assumptions in simulation, research outcomes of optimising modelling outcomes via scaling techniques could correlate with what cryptographic analysis could help with in remodelling. How could the scaling approach help in such a scenario, considering some of the ethical standards of deploying Artificial Intelligence?
Relevant answer
Answer
Blockchain is getting bigger. The continuously growing number of nodes has resulted in the blockchain scalability problem. Even though blockchain has already been around for more than a decade, the problems with scalability can inhibit the prospects of blockchain adoption.
Kindly check these papers:
  • asked a question related to Scaling
Question
7 answers
(For me: the most.)
Arguments in favor of this proposition include:
In animals, 3/4 scaling applies over many orders of magnitude.
Historically, the applicable mathematical approach depends on dimension: Galileo's 1638 work on weight-bearing animal bones; Sarrus and Rameaux in 1838 on the scaled rate of mammalian breathing; Max Rubner's and Max Kleiber's measurements; and the geometric approach in 1997 by West, Brown and Enquist.
If the underlying physical relationship prevails at all scales (sizes), then it applies to quanta, to metabolism, to cosmology, and to dark energy.
Or not?
Your views?
Relevant answer
Answer
Metabolic scaling is the relationship between organismal metabolic rate and body mass. Understanding the patterns and causes of metabolic scaling provides a powerful foundation for predicting biological processes at the level of individuals, populations, communities, and ecosystems. Kleiber's law, named after Max Kleiber for his biology work in the early 1930s, is the observation that, for the vast majority of animals, metabolic rate scales to the 3/4 power of the animal's mass. Symbolically: if B is the animal's metabolic rate and M the animal's mass, then Kleiber's law states that B ∝ M^(3/4). On the other hand, allometric scaling is one of the tools that drug developers use to predict human PK based upon animal data. Prediction methods, like allometric scaling, provide a "sneak peek" at how a drug might behave in humans before any clinical studies are conducted.
  • asked a question related to Scaling
Question
2 answers
Hi Folks,
I have a concern regarding the application of the correlation coefficient/matrix.
I have a dataset which comprises X variables that are captured at different scales/measures. My Y variable is a total score, and is a summation of the X variables:
Y = X1 + X2 + ... + Xn
Can I run a correlation between Y and X1/X2/.../Xn?
I am slightly confused here, since correlation helps with identifying a linear relationship; however, since the outcome is a summation, will a correlation between Y and X1 make sense? Moreover, I am scaling my data before running the correlation, since the scales are different for the X variables.
Would appreciate a clarity over this.
Thanks,
Relevant answer
Answer
Jai Tuteja, I've just seen your question, so I hope it's not too late to provide a response - and my doing so might help others, anyway.
What you are proposing to do is referred to as an item-total correlation or, if you removed the single item in each case from the total before calculating the correlation, as the corrected item-total correlation.
This kind of correlation is often used when scales are being constructed to identify the extent to which each item is related to all of the (other) items.
For what it's worth, I think these correlations are not particularly useful. Instead, I think it's more useful to create a matrix of correlations between all items and inspect the entries in that matrix carefully to detect which items are most, and least, related to which other items.
That can be the basis for more sophisticated procedures such as exploratory factor analysis.
  • asked a question related to Scaling
Question
8 answers
Hello.
I am simulating friction stir welding with Abaqus Dynamic/Explicit and the CEL technique. I have modelled the tool as a rigid body and used a uniform mesh size of 2 mm. The stable time increment comes out to be 1.7e-7 and the total simulation time is 120 s. I have tried mass scaling, but because the tool is a rigid body it does not apply. So, as you can see, my computation time is quite high. Please guide me through the problem or tell me where I am making a mistake.
Relevant answer
Answer
You should not use mass scaling option while simulating the FSW process because it significantly affects the accuracy of the results.
  • asked a question related to Scaling
Question
3 answers
Do such measurements make sense? Do they exist?
Comparing redshift and luminosity distances, if that is a sensible question, may bear on the 4/3 scaling hypothesis as it relates to dark energy.
Relevant answer
Answer
Cepheid and RR Lyrae variables are well known standard candles, and important tools in the cosmological distance ladder. For example, Cepheid variables, which were discovered by Henrietta Swan Leavitt, have the property that their luminosities can be directly inferred by observing their pulsation period, which then allows one to calculate their luminosity distance, given that the observing instrument (telescope) also measures their flux.
However, although nothing stops you from making redshift measurements of relatively nearby objects, this will induce an error in any cosmological parameters inferred from these measurements (such as the luminosity distance), because the peculiar velocities of these objects would be comparable to their Hubble flow, giving you highly inconsistent results. Luminosity distances calculated by interpreting the measured redshifts as cosmological redshifts, become more reliable at larger distances, where the Hubble flow dominates over the peculiar velocities.
  • asked a question related to Scaling
Question
4 answers
Dear Sir/Madam,
I have done a regression analysis consisting of one dependent and 7 continuous independent variables. I found very small coefficient values (like .00000000000001, etc.) while using the original independent variables. When I divide some of the large-magnitude independent variables by 10000, it gives quite reasonable values. I want to know whether this type of scaling is accepted in regression analysis?
Thanking you.
Relevant answer
Answer
Neelabha Roy The main rationale and justification of regression analysis is to estimate the parameters of a dependency relationship, not of an interdependency relationship. This is not limited to holistically controlled, fixed-variate experiments, but the regression model is limited to good experiments if good results are to be ensured. A sufficiently robust experiment must above all provide as many dimensions of independent variation as possible. You might want to look at the possibility of multicollinearity, which is viewed as an interdependency condition. Hope this helps.
Good luck.
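On the original scaling question: dividing a predictor by 10,000 is a linear transformation, so it simply multiplies the corresponding coefficient by 10,000 and leaves the fit, t-statistics and R² unchanged. A quick check with invented data:
import numpy as np
rng = np.random.default_rng(1)
x = rng.uniform(1e6, 1e7, size=100)           # a predictor with very large values
y = 3e-6 * x + rng.normal(size=100)           # invented response
b_original = np.polyfit(x, y, 1)[0]           # tiny slope on the original scale
b_rescaled = np.polyfit(x / 10000, y, 1)[0]   # slope after dividing x by 10,000
print(b_rescaled / b_original)                # 10000: only the units of the coefficient change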
  • asked a question related to Scaling
Question
2 answers
I am trying to determine the most appropriate way to define a beam source of a given intensity. Normally one defines the beam intensity via an FM card scaling to power, but I am planning to use my flux spectra (obtained with an E card) for depletion analysis in ORIGEN, and I'm not sure whether the FM card will affect the spectral results.
Relevant answer
Answer
You can use the WGT in the SDEF definition as Vadim Talanov said. With that being said, the FM card does multiply the fluxes in every energy bin. If you need to be convinced, you can always try this in a short/test calculation to confirm. I have learned a lot by trial and error.
  • asked a question related to Scaling
Question
6 answers
I have a dataset (Excel file, size 1.4 GB) that has 8 sheets, each with 900,000 rows.
I want to convert strings to numeric. Moreover, I need normalization and scaling too.
Relevant answer
Answer
You could try the pandas package for Python. It is easy to use for transforming your data. You can convert your string columns to categories, and then to numbers.
Here is an example code:
import pandas as pd
df = pd.read_csv("your_excel_file_in_CSV_format.csv")
df.your_string_col_name1 = pd.Categorical(df.your_string_col_name1).codes
df.your_string_col_name2 = pd.Categorical(df.your_string_col_name2).codes
...
Here is a dummy example based on your data:
import pandas as pd
df = pd.DataFrame({"col1": [1, 2, 3, 4, 5], "col2": [22.5, -3.45, 5.12, 48.00, 33.33], "col3": ["http://www.yahoo.com/123", "http://www.yahoo.com/mail", "http://www.yahoo.com/mail/yahoo", "http://www.yahoo.com/123", "http://www.yahoo.com/mail"]})
print(df)
df.col3 = pd.Categorical(df.col3).codes
print(df)
I hope I have been able to help you with this task.
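For the normalization/scaling part of the question, a minimal follow-up sketch with scikit-learn (assuming it is installed; column names are placeholders):
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
df = pd.DataFrame({"col1": [1, 2, 3, 4, 5], "col2": [22.5, -3.45, 5.12, 48.0, 33.33]})
df[["col1", "col2"]] = MinMaxScaler().fit_transform(df[["col1", "col2"]])  # rescale each column to [0, 1]
print(df)
# StandardScaler() can be used instead for zero-mean / unit-variance standardization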
  • asked a question related to Scaling
Question
4 answers
I am simulating a reinforced concrete beam with the CDP model under a four-point bending test. However, I keep encountering a negative internal energy issue and wonder what might be causing this.
I am using a dynamic explicit step with default mass scaling and a time period of 1, with a displacement load of 20 mm and a mesh size of 15 mm.
Thanks for your help in advance.
Relevant answer
Answer
If you get negative energy in an explicit analysis, typically the time steps you use are too large. Try to reduce them and check whether it has an effect. The step width is determined for elastic material in the original configuration. If your stiffness changes locally, it affects the step width and could cause this effect.
  • asked a question related to Scaling
Question
2 answers
I am trying to downscale GCM (CanCM4) data to the Bhima basin catchment (finer scale) for projecting future scenarios. I have used the following variables: ta, ua, zg, va (all at the 925, 850, 700 and 200 pressure levels) plus pr and psl (6 variables in total). I am attaching an image which I got from working on the GCM; considering the midpoints of these GCM grid points, only 2 stations lie on the periphery (+ mark) for downscaling. Can I downscale these GCM points to the 0.5 deg grid points? If yes, how should I consider the weights?
Relevant answer
Answer
This is a good question.
  • asked a question related to Scaling
Question
15 answers
I've been reading forever statements (primarily, but not exclusively) in the biomedical literature like "a 1 year old rhesus monkey is the equivalent of a 4 year old child" but have not seen references for this. I think that this scaling may have to do with brain development, but again, no reference. Does anyone know the source (or *a* source) for statements like this?
Relevant answer
Answer
Yes, it is true, though more as a rough comparison than an exact one; for example, the rhesus monkey is seen as less developed than chimpanzees. I liked your answer.
  • asked a question related to Scaling
Question
13 answers
Possibly: 4/3 scaling is a fundamental universal principle. Nothing underlies it. Why? It accounts for expanding cosmological space. Since 4/3 scaling brings 3 dimensional space, and hence everything else, into existence, it must be fundamental.
Can that be right? What favors and disfavors this notion?
Relevant answer
Answer
The ratio between the whole volume of the universe and the dynamical part of the same volume is about 1 : 0.74... (both quantities are determined by a different irrational number). In quantum field theory it means that the ratio between the volume of the Higgs field and the volume of the electric field in vacuum space is about 0.74 : 0.26 (total = 1.0).
Vector fields like the magnetic field and the field of Newtonian gravitation have no spatial dimension on their own. Einstein’s theory of general relativity describes the dynamical part of the volume of our universe – otherwise space cannot curve – thus the consequence is that the model of spacetime is restricted to 26% of the whole volume of the universe. The consequence is that gravity is an emergent force field (like Eric Verlinde proves for Newtonian gravity).
We may expect that ratios at the lowest scale size of reality that are present everywhere in the universe will “multiply” their ratio at larger scale sizes (like fractals do).
With kind regards, Sydney
  • asked a question related to Scaling
Question
3 answers
Hello. My name is Malte. In my master thesis I study assemblages of meso- and macrofauna associated with arboreal soil in tree cavities, following an island biogeography approach. My study design includes the two predictors "size of tree cavity (amount of arboreal soil)" and "isolation (distance between tree cavity and terrestrial ground)". Sampling one tree cavity in each of twelve trees, two surveys one year apart were conducted, each including the following steps: removal of arboreal soil from the tree cavities, extraction of fauna, sterilization, and restoration of the arboreal soil into the tree cavities. I determined taxonomic groups to order level and counted the individuals. Now that my dataset is complete, the first thing I want to look at is the occurrence probability of the insular groups (those that are bound to the arboreal soil in tree cavities, e.g. Crassiclitellata) as a function of the predictors described above. I want to fit a zero-inflated regression model, as it can consider the two distinct stochastic processes: 1. colonization and 2. abundance > 0 (Kéry and Royle 2015).
Relevant answer
Answer
Hello, I see that the hypothesis you want to test is whether there are differences in the abundance of your samples according to the different types of trees and cavities. If so, you could try Poisson distributions, Sorensen's ordering maps, or multivariate hypothesis tests like PERMANOVA or MANOVA.
  • asked a question related to Scaling
Question
1 answer
I am currently working on designing an EC reactor and scaling it up, for actual industrial use.
After doing research for around one month and testing, I have found the current density required for an actual industrial plant. But the issue arises when selecting a power supply for it. I can calculate the power required for small lab-scale projects, but I couldn't do it for the scaled-up plant.
It would be really appreciated if anyone would like to assist me on this
Relevant answer
Answer
For reactors, the effects of resistance and capacitance are considered negligible compared to the inductive effect, so the needed voltage V could be (I)(Xl), where I, the current, is the product of the current density and the cross-section area, and Xl is the inductive reactance offered. Mathematical modelling of 132 kV and 220 kV system C.T. performance for fault and inrush may help you in the design of the reactor; you could refer to my paper on C.T. performance for inrush….
  • asked a question related to Scaling
Question
7 answers
Dear researchers
What is your opinion about the criterion recommended in seismic codes for determining scaling period, which are used to scale ground motion records?
As you know, the mentioned criterion is the period of the structure's dominant mode, which has the largest modal participating mass ratio (usually the first vibration mode). Hence, the period of the mode with the second-largest modal participating mass ratio is not considered in the scaling process. Consequently, although this criterion usually results in the largest value of the scaling period, it is not a logical one.
This is especially important when Tuned Mass damper (TMD) or Base-Isolation system is utilized, which cause the modal properties of the structures to change.
I used a new criterion based on the weighted mean value of the periods for the structures equipped with TMD.
Have you used any criteria other than the criterion mentioned in the seismic codes?
Relevant answer
Answer
Dear Mikayel Gregor Melkumyan, it would be my pleasure if you could read my latest article.
  • asked a question related to Scaling
Question
2 answers
Please, I need guidelines on the possible ways of scaling hydraulic fracturing field data to be used in modelling software. Your response will be helpful. Thank you.
Relevant answer
Answer
I will go through the two files carefully. After scanning through the "Analysis of HF" file, I figured I can get useful information from the publication. Thank you Shin Murakami
  • asked a question related to Scaling
Question
3 answers
I am training a deep learning model for an image classification task. The VGG-16 model is trained individually on two different training sets with varying degrees of data imbalance. Set-1 had positives equal to only 25% of the negatives (class-1 (positive, label: 1): 100; class-0 (negative, label: 0): 400). Set-2 had positives equal to 80% of the negatives (class-1 (positive, label: 1): 320; class-0 (negative, label: 0): 400). Set-1 is highly imbalanced compared to Set-2. The individually trained models were evaluated on a common test set with equal numbers of positive and negative (n=100 each) samples. Since the dataset is imbalanced in both Set-1 and Set-2, the output probabilities would not be calibrated, so I applied temperature scaling to rescale the probabilities. Figures (a) and (b) show the results before and after calibration for the model trained on Set-1, and (c) and (d) for the model trained on Set-2. I observe that the expected calibration error and maximum calibration error reduce after temperature scaling, and the probabilities are scaled to closely follow the y=x diagonal. However, it is not clear how to interpret the pre-calibration and post-calibration curves in terms of data imbalance. How can the fraction of positives in each confidence bin, pre- and post-calibration, be explained in terms of data imbalance?
Relevant answer
Answer
Refer to the work in [1], p. 6, for an explanation of temperature scaling with respect to the softmax.
Regards
[1]Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017, July). On calibration of modern neural networks. In International Conference on Machine Learning (pp. 1321-1330). PMLR.
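As a concrete illustration of what the temperature in Guo et al. does, here is a minimal sketch for the binary case: a single scalar T divides the logits before the sigmoid/softmax, and T is chosen to minimize the negative log-likelihood on held-out data (the logits and labels below are invented):
import numpy as np
from scipy.optimize import minimize_scalar
def nll(T, logits, labels):
    z = logits / T
    # numerically stable binary cross-entropy of sigmoid(z): softplus(z) - y*z
    return np.mean(np.logaddexp(0.0, z) - labels * z)
logits = np.array([2.1, -0.3, 4.0, -1.5, 0.8, 3.2])   # hypothetical validation logits
labels = np.array([1, 0, 1, 0, 1, 0])                 # hypothetical 0/1 labels
T = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels), method="bounded").x
calibrated = 1.0 / (1.0 + np.exp(-logits / T))        # calibrated probabilities
print(T, calibrated)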
  • asked a question related to Scaling
Question
7 answers
Which would be inverse to the way metabolism scales according to Kleiber's Law, that is, inverse to the way metabolism scales as mass to a 3/4 exponent?
So that the rate of metabolism scaled, times longevity scaling, is invariant?
Has this hypothesis been proposed? Are there articles on the scaling of mammalian longevity?
Relevant answer
Answer
I could not tell about the longevity scale for mammals as a function of mass.
However, there is a known longevity scale for the number of heartbeats per lifetime for animals as a function of weight in kg.
See, for example, Figure 2 in Geoffrey West, Scale: The Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies, Penguin Books; Reprint edition (May 15, 2018), p.3.
  • asked a question related to Scaling
Question
1 answer
Hey,
I recently have a confusion about single cell ATAC-seq integration analysis between samples. I have read many discussions about that issue. So, I summarized them into two solutions as follows:
SOLUTION 1. (data QC ignored here) find the union feature set from the different samples -> generate a count matrix for each sample -> merge them into one large count matrix -> normalization/scaling/cell clustering/cluster annotation……
SOLUTION 2. generate the count matrix for each sample -> normalization/scaling/cell clustering/cluster annotation for each sample -> find common features among all samples -> generate a count matrix against the selected common features for each sample -> merge the data using pipelines, e.g. Signac/Harmony, to perform cell clustering, cluster annotation and the other following analyses (which will usually give a new assay for the common features).
My questions:
Either one selected, I will have cell clusters now. So the next plan for me is retrieving differential features for each cell type/cluster, which will be the key to the further investigation of biological functions.
Q1. I know that a batch effect indeed exists between samples, but for SOLUTION 1, will normalization and scaling of a single large count matrix work for differential enrichment analysis between samples?
Q2. If SOLUTION 1 is not reasonable, SOLUTION 2 will give rise to a new assay only containing the selected common features, based on which the batch effect should be well corrected and the cells might be better clustered. However, how can the differential analysis be performed for non-common features in each cluster? (That is to say, will the batch-effect correction in the newly integrated assay from SOLUTION 2 work for total differential feature detection in the raw assays at the sample level?)
Thanks and best regards!
Relevant answer
Answer
  • asked a question related to Scaling
Question
4 answers
Dear colleagues,
Does anyone know a method for predicting (with more or less uncertainty) the contribution of organisms (community, species, functionnal group...) to a function/process?
I'm not a mathematical modeler and I don't pretend to create something robust. Rather, I am in an exploratory process to enable a better understanding among stakeholders.
For the moment I have found a few publications on plants that can help me (below) but I wonder if there are any others publications, on others organisms, ecosystems?
Best regards,
Kevin Hoeffner
References:
- Garnier, E., Cortez, J., Billès, G., Navas, M. L., Roumet, C., Debussche, M., ... & Toussaint, J. P. (2004). Plant functional markers capture ecosystem properties during secondary succession. Ecology, 85(9), 2630-2637.
- Suding, K. N., Lavorel, S., Chapin III, F. S., Cornelissen, J. H., Díaz, S., Garnier, E., ... & Navas, M. L. (2008). Scaling environmental change through the community-level: A trait-based response-and-effect framework for plants. Global Change Biology, 14(5), 1125-1140.
- Zwart, J. A., Solomon, C. T., & Jones, S. E. (2015). Phytoplankton traits predict ecosystem function in a global set of lakes. Ecology, 96(8), 2257-2264.
Relevant answer
Answer
Hello Kevin,
I think you can use a trait-based approach to understand the contribution of organisms to function.
  • asked a question related to Scaling
Question
2 answers
  • Anyone ever scaled up from Brabender twin screw extruder to a Buhler one?
  • What are the scaling constants to consider?
Relevant answer
Answer
Regarding several parameters that may possibly influence the extrusion of ceramic pastes, you may check the answers to another question at this forum, "What are the most important parameters controlling the extrusion of pastes?": https://www.researchgate.net/post/What_are_the_most_important_parameters_controlling_the_extrusion_of_pastes
  • asked a question related to Scaling
Question
3 answers
I want to perform a nonlinear time history analysis of a steel frame. I am using 3% Rayleigh damping for my model. For the nonlinear time history analysis, I want to scale the time histories. What value of damping should I use to develop the response spectrum for scaling my time histories?
Generally, 5% is used, but as I am using 3% damping in my model, should I prepare a response spectrum for 3% damping and then do the scaling, or should I use the 5%-damped response spectrum?
Please help me with this problem.
Thanks.
Relevant answer
Answer
Thank you very much.
  • asked a question related to Scaling
Question
3 answers
Hi everyone,
I am working on a model with stretching and bending in Abaqus. In the first step I apply stretching to my model and in the second step I apply bending. The magnitude of the kinetic energy is small in the first step, but it increases significantly in the bending step and exceeds the internal energy. Is there any way to reduce the magnitude of the kinetic energy? I should also say that I have not used mass scaling!
Thanks
Relevant answer
Answer
I'm glad I could help.
Good luck
  • asked a question related to Scaling
Question
1 answer
Dear all,
We just published an article about systems change, and this could be of your interest:
Feedback and comments are very welcome! 😊
Relevant answer
Answer
I think the reason is the global development taking place in all fields.
  • asked a question related to Scaling
Question
5 answers
I have all the items on a 5-point Likert scale (1 - strongly disagree to 5 - strongly agree), but one construct has items on a 5-point Likert scale (0 - not applicable to 4 - severe). Can I run all the items together in an EFA? Will the different ranges of the point scaling cause any problem in the factor loadings or the EFA result?
Relevant answer
Answer
Dear Kirti,
While Stuart Cunningham is correct in that the scale itself does not matter from a quantitative viewpoint, it does from a semantic viewpoint. The fact that you are using different descriptors with different meanings (agreement in one and applicability in another) does have implications. First, it is best not to mix the two response types in the survey itself to lessen the chance of confusing the respondents. Second, the two response types should be analyzed separately. I hope this is helpful and good luck with your project.
Jim
  • asked a question related to Scaling
Question
11 answers
To my knowledge: feature scaling is to standardize and normalize data; feature selection is to optimize for the best features.
In Python, feature scaling is enough to get a good accuracy %.
But in MATLAB, the normalization concept in feature scaling is required together with optimization to get a good accuracy %.
Any other extra benefits?
Relevant answer
Feature selection is important in high-dimensional data so that we can reduce the running time of the training algorithm. Feature scaling is important in algorithms such as SVM, logistic regression, etc.
Fortunately, the decision tree is not sensitive to the scale of the data.
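To illustrate the point about scale sensitivity, a small scikit-learn sketch (dataset and models chosen only for illustration): the SVM is fitted inside a pipeline that standardizes the features, while the decision tree is fitted on the raw values.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
X, y = load_breast_cancer(return_X_y=True)
svm_with_scaling = make_pipeline(StandardScaler(), SVC())   # scaling matters for the SVM
tree = DecisionTreeClassifier(random_state=0)               # trees are insensitive to feature scale
print(cross_val_score(svm_with_scaling, X, y, cv=5).mean())
print(cross_val_score(tree, X, y, cv=5).mean())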
  • asked a question related to Scaling
Question
25 answers
Can you give examples?
A comment in a related discussion prompted this question. In the discussion, https://www.researchgate.net/post/Is_it_worthwhile_reading_old_and_really_old_science_articles,
Vadim S. Gorshkov added a reply October 9, 2018: "My personal record is 1872".
Reading this reply of Dr. Gorshkov today, reminded me that I have relied on articles older than 1872 several times.
For example, about Galileo’s Two New Sciences, I wrote, Why scaling and not dimension, Galileo?
A comparatively more `recent’ article by Sarrus and Rameaux contains clues to providing an explanation of Kleiber’s Law. Under the Project Name Metabolic Scaling are several articles that rely on principles based on dimension, as used by Sarrus and Rameaux. They were, I suspect, the first to use dimension in connection with a biological process.
An important article about mean path lengths published in 1860 by Clausius is, I think, connected to dark energy. I have relied on Clausius’s 1860 article several times, and I believe it remains relevant and important for cosmology, as mentioned in for instance: https://www.researchgate.net/publication/320707338_Clausius's_1860_article_on_mean_path_lengths
What are your examples?
Relevant answer
Answer
This is an interesting question. The oldest reference I cited was the epoch-making "Pedologie" book of 1862 by Friedrich Albert Fallou, widely considered the founder of soil science.
  • asked a question related to Scaling
Question
6 answers
If in a multivariate model we have several continuous variables and some categorical ones, we have to change the categoricals to dummy variables containing either 0 or 1.
Now to put all the variables together to calibrate a regression or classification model, we need to scale the variables.
Scaling a continuous variable is a meaningful process. But doing the same with columns containing 0 or 1 does not seem to be ideal. The dummies will not have their "fair share" of influencing the calibrated model.
Is there a solution to this?
Relevant answer
Answer
Monika Mrozek I think that, based on what Johannes Elfner shared, it makes sense NOT to scale the discrete variables.
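One way to implement that in practice is to scale only the continuous columns and pass the 0/1 dummies through untouched; a sketch with scikit-learn's ColumnTransformer (column names and data are invented):
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
df = pd.DataFrame({
    "age": [23, 45, 31, 52],                 # continuous
    "income": [30000, 80000, 52000, 61000],  # continuous
    "is_male": [1, 0, 0, 1],                 # dummy, left unscaled
})
preprocess = ColumnTransformer(
    [("scale", StandardScaler(), ["age", "income"])],
    remainder="passthrough",                 # dummy columns pass through unchanged
)
print(preprocess.fit_transform(df))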
  • asked a question related to Scaling
Question
2 answers
The subframe parameters are mentioned along with scaling factors in the ICD. How to use them while extracting navigation parameters from binary data?
Relevant answer
Answer
Convert the binary number to decimal and multiply by the scaling factor to get the proper value.
If the data is in 2's complement format, then convert that number to decimal and multiply by the scaling factor.
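As a sketch of that procedure (the field width and scale factor below are invented; the real ones come from the ICD):
def twos_complement(raw, bits):
    # interpret an unsigned bit field as a signed two's-complement integer
    return raw - (1 << bits) if raw & (1 << (bits - 1)) else raw
raw_field = 0b1111111111111100     # hypothetical 16-bit field extracted from the subframe
scale = 2 ** -31                   # hypothetical scale factor from the ICD
print(twos_complement(raw_field, 16) * scale)   # -4 * 2**-31, roughly -1.86e-9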
  • asked a question related to Scaling
Question
2 answers
Sir,
I am Hiren Patel, pursuing a PhD in India.
I am very interested in exploring Cloud Auto Scaling. I need your help in deciding which portion is still untouched that I can explore.
Awaiting your reply.
Hiren
Relevant answer
Answer
Dear Hirenkumar Ramanbhai Patel,
Auto scaling is a feature offered by many cloud providers like AWS and Google Cloud Platform which handles the creation and deletion of new servers in your network automatically, allowing you to scale your application to meet varying loads.
This is CPU-usage-related scaling which automatically responds to increases or decreases in load. Auto scaling makes sure instances scale up or down depending on the load. Whenever your network reaches a predetermined amount of load, say 70% CPU usage, auto scaling will fire up a new instance to smooth things out. When it calms down, it'll scale down the number of instances. Auto scaling allows you to scale up to meet any amount of demand; it can also save you money by scaling down when it's not needed. This saves money, because during off-hours, when your application isn't under peak load, you would otherwise be paying more than you need to. Setting up auto scaling in the cloud won't be easy, but GCP has tools to make this simpler, such as being able to use a container as a machine image.
Please refer this article and understand well how they use it:
Hope this information is a helpful for you. Read nicely section: Setting Up a Managed Instance Group
Ashish
  • asked a question related to Scaling
Question
2 answers
Best-worst scaling (object case) is a method to collect data instead of using rating scales. In this method, different choice sets are prepared, each with more than 3 attributes, out of which the most important and least important attributes are to be selected. This is similar to a discrete choice experiment, where data are collected using choice sets, but in this method we select the best choice and also the worst choice; only attributes are used to prepare the choice sets and there are no levels. I am searching for software to prepare these choice sets and found that Sawtooth, Q and JMP might serve the purpose, but I am not sure which is freely available and which would be more suitable. Please help me with your suggestions if anyone knows about this.
Relevant answer
Answer
Thank you so much @David Morse
  • asked a question related to Scaling
Question
1 answer
After downloading the data (JPL/CSR/GFZ) from ftp://podaac-ftp.jpl.nasa.gov/allData/tellus/L3/land_mass/RL05/netcdf/ to calculate TWS (terrestrial water storage), do we need to do any additional post-processing besides multiplying by the scaling factor? Especially before using the TWS to calculate groundwater level or a drought severity index?
Relevant answer
I couldn't access this dataset.
Do you know why?
  • asked a question related to Scaling
Question
4 answers
Hello everyone,
I am doing Dynamic, Explicit simulation in Abaqus. The total time of my simulation is about 50s but it is taking about 2 weeks for the simulation to run. Going through some literature I realized that I can use mass or time scaling to speed up my simulation. But now I am not sure which one to use and what scaling factors to choose.
Thanks for your help in advance.
Relevant answer
Answer
Some tips on mass scaling, density scaling, time scaling etc. in explicit simulation are explained in this video that can reduce simulation run time:
  • asked a question related to Scaling
Question
1 answer
I did PGA scaling and got the results. Now I am interested in doing spectrum scaling for the same records, and I will compare PGA scaling and spectrum scaling for the respective structure.
Relevant answer
Answer
Did you analyze the earthquake data by normalizing it?
  • asked a question related to Scaling
Question
3 answers
Hi! I have gone through some documents about how to apply the mass scaling factor in Abaqus. The "limit" I have found is that the kinetic energy should not exceed 5% of the internal energy.
My case is not that simple: the internal energy increases over time while the kinetic energy stays at a roughly fixed value from start to end. So the ratio of kinetic to internal energy drops over time, and after a certain point it falls below 5%.
In a case like this, how do I know whether the mass scaling factor is good? There will always be a period of time during which the ratio is larger than 5%.
I just want to know if you guys have any suggestions about this.
Thanks for the help!
Relevant answer
Answer
You can use an amplitude of smooth type for your loading, which will reduce the initial kinetic energy.
Also check the stability of your model.
  • asked a question related to Scaling
Question
3 answers
A. Bejan, A. Almerbati and S. Lorente have concluded that `the economies of scale phenomenon is a fundamental feature of all flow (moving) systems, animate, inanimate, and human made’ (https://doi.org/10.1063/1.4974962).
The universe’s space everywhere flows — expands — outwards from its beginning. Economies of scale appear to arise in flowing systems. Is cosmogenesis an economy of scale phenomenon for the entire universe?
Are the physics of cosmogenesis and economies of scale the same?
Relevant answer
Answer
According to piling evidence, the cosmos's driving forces are based on electromagnetic forces besides gravity. I recommend watching videos on the following YouTube channel. Scientifically, the work of the people behind those discoveries is very rigorous.
The task will be to find out what the medium is that facilitates interactions among economic subjects, similarly to electromagnetic forces among stars.
Definitely, cosmological processes affect the economy at many scales. One example would be earthquakes and volcanoes, which are triggered, according to the latest research, by the activity of the sun (it is better to say that they are correlated).
Your idea can bring a lot of interesting results when studied sufficiently in depth. The paper about the correlation of solar activity and volcanic activity is probably shared in the project "Complexity Digests ...". If not, ask me and I will find it for you.
  • asked a question related to Scaling
Question
5 answers
I don’t think that the textbook scaling equations can be used for fair comparisons for recent nodes.
Relevant answer
It is not wise to compare an opinion with a research paper. You do not need to add your "But" sentence!!!
  • asked a question related to Scaling
Question
2 answers
Dear all,
I am currently writing my master thesis, where I am analyzing my data using path analysis and SEM in R (lavaan package). Now that I would like to write up my results, I am struggling with the R output. Since the normality assumption is violated, I decided to use the Yuan-Bentler correction.
Consequently, R is giving me an output containing a column with robust values (which I thought were the corrected ones) as well as extra robust fit indices.
Did I make a mistake? Otherwise, I would appreciate it if you could give me a hint on which values to use (or, better said, what the difference is between them)!
Thanking you in advance and best regards,
Alina
Relevant answer
Answer
Hello A. Berger,
Yes, report the values given under the "Robust" column heading. You'll note that the Y-B "correction" factor, if multiplied by the robust estimate, yields the "standard" estimate (e.g., 1.201 x 11.154 = 13.400).
Good luck with your work.
  • asked a question related to Scaling
Question
1 answer
To work on analysis of enormously large datasets:
How do we design an optimal system to process big data by choosing a scaling approach?
Relevant answer
Answer
This might help to get an idea and an overview of scaling options. However, as has already been answered, it depends on your type of analysis and the type of data (e.g., image, video, etc.):
  • asked a question related to Scaling
Question
1 answer
Dear members,
I have a single-storey reinforced system which is strengthened with a steel plate shear wall. Because the finite elements have very small dimensions (due to the studs), the stable time increment is very small (around 1e-7). Therefore, I need to speed up the analysis and use a mass scaling factor.
The question is, what kind of mass scaling factor should I choose? 10, 100, 1000... or much more? At the same time, I need to keep the kinetic energy as low as possible, as required by the cyclic loading process.
Relevant answer
Answer
There is no straightforward answer, my best advice is to run several simulations with different mass scaling factors/schemes and monitor the kinetic energy output until you reach a good balance between solution time and mass scaling factor.
To monitor your kinetic energy, request the energy output ALLKE and ALLIE and compare them together. ALLIE is the total internal energy while ALLKE is the kinetic energy; KE should be 5-10% of the IE.
Another solution to speed up your simulation time is to make use of loading rate. I suggest you read on mass scaling/loading rates online and from the manual as there are several resources available.
  • asked a question related to Scaling
Question
4 answers
I am working on an experimental SWRO plant with an ultrafiltration module. The chemically enhanced backwash cycle for the UF membrane consists of a biocidal flush (NaOCl) followed by a low-pH HCl (pH 2-3) wash and finally a high-pH NaOH (pH 11-12) wash. The backwash solution is generated by injecting the said chemicals directly into the UF permeate line from the UF permeate storage tank. It is observed that the NaOCl and NaOH dosing ports are getting frequently clogged due to precipitation of calcium carbonate (confirmed by chemical analysis of the precipitated scale).
Any suggestions on how to prevent this without reducing the pH (since the high pH is required for efficient cleaning)?
Can some alternate dosing arrangement be made to prevent this? Or would using RO permeate instead of UF permeate solve this issue? Any other suggestions based on experience are also welcome.
Relevant answer
Answer
Thank you for the suggestions. The problem was solved by changing the order of the chemical backwash (high pH followed by low pH instead of vice versa).
  • asked a question related to Scaling
Question
3 answers
For my project, a set of questionnaires has been developed. Three constructs of the questionnaire use different scaling methods. For variable 1 we are using a dichotomous (true/false) scale, variable 2 is formulated using a 5-point Likert scale ranging from 1 to 5, and variable 3 uses a 5-point Likert scale ranging from 0 to 4. Now a pilot study has been conducted and the sample data are in hand. My question is: is Cronbach's alpha enough to validate all three variables and fulfil the assumptions of the psychometric analysis? If not, what should we do?
Relevant answer
Answer
To establish construct validity, Cronbach's alpha may be used in an exploratory study for a one-to-one comparison, but for multiple variables use factor analysis.
A correlation coefficient is better for your study.
Nowadays SEM (structural equation modeling) is often used or recommended as an alternative to Cronbach's alpha.
  • asked a question related to Scaling
Question
2 answers
I'm just wondering if it is possible to simulate microchannel flow behavior in a larger channel with dimensionless-parameter scaling of the operating conditions.
Relevant answer
Answer
I assume that you are carefully considering the "dynamic similarity" of the two cases (geometrical similarity, equal Re number. In case it is a heat transfer problem, equal Pe number and Pr number in addition to Re). If so, you are fine as long as the governing physics are identical for both cases; microchannel and macrochannel can have different velocity and temperature boundary conditions at the channel walls (when Knudsen number of microchannel is smaller than 0.1, we consider the slip velocity and temperature jump instead of no-slip condition and same solid-fluid temperature). This is the case for rarefied flow in microchannels, so if your working fluid is a liquid, I doubt you face such a condition.
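As a small numerical illustration of the dynamic-similarity argument, keeping the Reynolds number Re = U*D/nu equal when the channel dimension grows means (for the same fluid) dropping the velocity by the same factor; the values below are invented.
def matched_velocity(U_micro, D_micro, D_macro, nu_micro=1.0e-6, nu_macro=1.0e-6):
    # keep Re = U * D / nu identical between the micro and macro channels
    Re = U_micro * D_micro / nu_micro
    return Re * nu_macro / D_macro
print(matched_velocity(U_micro=1.0, D_micro=100e-6, D_macro=10e-3))   # 0.01 m/s in the large channel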
  • asked a question related to Scaling
Question
4 answers
I would like to explore all performance metrics to evaluate an auto-scaling system.
The desired auto-scaling system scales out/in a web application, which is hosted in the cloud environment.
I know some metrics such as (1) Cost, (2) Response time, (3) Delay time, (4) Resource utilization, (5) Total Provisioned VMs, (6) Oscillation mitigation (or scaling overload), (7) Time to Adaptation, (8) VM minute, (9) Contradictory scaling decisions, (10) Contradictory scaling actions, (11)The rate of SLA Violation, (12) Stability, . . .
Any more? If yes, you can be a part of our team.
Relevant answer
Answer
We collected all metrics
in this article:
  • asked a question related to Scaling
Question
4 answers
Huge and vague inquiry from a non-professional active in mental health social work: I have read that there is not much evidence for the value of "I-messages." This leads me to ask about a lot of things I use:
-mindfulness in trauma reactions-
-reflective listening/validation-response
-challenging questions to people [in therapy, although I am not a therapist]
-I-statements/I messages
-Broken-record technique to avoid arguments
-Application of motivational interviewing to misinformation, e.g. "I don't want to have therapy because only crazy people need therapy."
-very simple screening questions
-scaling questions
-miracle questions
Relevant answer
Answer
Sorry: figure of resilience
  • asked a question related to Scaling
Question
1 answer
We have an optically accessible atmospheric combustor. We want to perform the instability analysis of the combustor like flashback, blowout and instabilities occurring near lean blowout. We are looking for some scaling parameters such that whatever we conclude from the atmospheric combustor applies well to an industrial combustor operating at higher pressures.
Relevant answer
Answer
Important variable power value
  • asked a question related to Scaling
Question
9 answers
Instances of the 4th dimension include:
Time in Minkowski’s space-time (Raum und Zeit).
As flow or motion in various 4/3 laws.
But:
In a space-time distance, time squared is preceded by a sign opposite to that of the other lengths squared. Time is different.
Flow, motion and time trace a moving point along a line. The 3 spatial dimensions are static.
In the 4/3 law pertaining to energy the same energy in 4 dimensions has 4/3 as much energy in the corresponding 3 dimensional space. How can energy occupy a 4th dimension that models a moving point? Perhaps the model in the 4/3 law is wrong or incomplete? If it is incomplete, how is it incomplete? Is some aspect of time missing? In this portion of the comment on the question, accounting for the 4th dimensional status of motion affects understanding of the 4/3 laws.
Or is the 4th dimension nothing more than a mathematical construct?
Relevant answer
Answer
The three dimensions that we are familiar with are a human Cartesian geometric construct. Dimensions are a human perspective of reality, not something in nature themselves. From the human perspective of change and motion one can understand the necessity of including time as a dimension.
Take the universe as a whole, for instance. Stop time. That would mean nothing would ever change in it: it would always look exactly the same, with no change or motion within it. The fourth dimension of time would be missing. Start time up again and one would see our changing universe, and then realize that time is the fourth conceptually and mathematically necessary dimension for describing reality.
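For reference, the sign asymmetry the question points to is the one in the Minkowski line element (written here in the common -+++ signature, which is a convention rather than anything claimed in the answer above):

ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2

The three spatial terms enter with one sign and the time term with the opposite sign, which is the formal sense in which time differs from the other three dimensions.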
  • asked a question related to Scaling
Question
5 answers
Can someone tell me about use cases that scale on a combination of multiple custom metrics, or on a combination of custom and system metrics?
When it comes to scaling in Kubernetes, we rely on metrics. By default, we use CPU and memory metrics. For use cases where we need custom metrics (such as 'request count per second'), Kubernetes allows us to plug in custom metrics. For an application, there might be interrelated metrics that help in scaling. For example, combining "CPU" and "memory" metrics, we can use "CPU^2/memory" as the combined metric to scale on. Can someone direct me towards real-world scenarios similar to this, but relating *custom* metrics, where multiple *custom* metrics are correlated in order to perform scaling? Would this be a better approach?
Relevant answer
Answer
I think you are excluding the metrics about CPU and memory consumption while an application is running. I suppose that you refer to other types of metrics that the applications can generate internally. If it is the latter, I can give as an example one project where the web application produces metrics in order to improve the load-balancing mechanism.
Another example is security: log records are used to produce metrics to support security.
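As a hedged illustration (not from the project mentioned above): the Kubernetes HPA scales each configured metric proportionally, desired = ceil(current_replicas * current_metric / target_metric), and when several metrics are configured it takes the largest resulting replica count. The sketch below applies that same proportional rule to the hypothetical combined metric CPU^2/memory from the question; all numbers are placeholders, and such a combined metric would in practice have to be exposed through a custom or external metrics adapter.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Proportional scaling rule used by the Kubernetes HPA:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

def combined_metric(cpu_utilization: float, memory_utilization: float) -> float:
    """Hypothetical combined metric from the question: cpu^2 / memory."""
    return cpu_utilization ** 2 / memory_utilization

current = combined_metric(cpu_utilization=0.80, memory_utilization=0.50)  # = 1.28
target = combined_metric(cpu_utilization=0.60, memory_utilization=0.60)   # = 0.60
print(desired_replicas(current_replicas=5, current_metric=current, target_metric=target))  # 11
```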
  • asked a question related to Scaling
Question
3 answers
Dear fellow researchers and peers,
My name is Patrick, and I am just starting to pursue my PhD in Marketing. My research topic requires me to conduct Best-Worst scaling questionnaires to elicit consumers' ranking of product attributes.
What I'm struggling with is my inability to use the Balanced Incomplete Block Design to create Choice Sets for my study. I have a list of 45 - 50 attributes that I want to test; thus, it would be impossible to follow the normal instruction (each attribute appears at least 3-5 times for each survey). Hope that I could get some help or guidance from this community.
I look forward to hearing from you guys.
Best regards,
Patrick
Relevant answer
Answer
Dear Cordula Hinkes,
Thank you so much for your response. As this is the very first phase of my study, we have come up with a quite comprehensive list of various attributes for fresh pork meat.
For the best-worst scaling, I am not looking at attribute levels yet, since I only want to narrow the list down from 45 - 50 to around 7 - 8 attributes for subsequent Discrete Choice Experiments. This is a reduced list of attributes, for example:
- Additives free
- Organic certified
- Growth-hormone free
- Meat colour
- Marbling
- Price
Hope this clarifies my question a bit more and I look forward to hearing from you soon.
Warm regards,
Patrick
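Not a replacement for a proper Balanced Incomplete Block Design, but as a hedged sketch of one pragmatic fallback: repeatedly shuffle the full attribute list and chunk it into choice sets, so that every attribute appears the same number of times overall while pair balance remains only approximate. The set size, repetition count, and attribute names below are illustrative assumptions.

```python
import random

def bws_choice_sets(attributes, set_size=4, repetitions=3, seed=42):
    """Build Best-Worst choice sets by repeated shuffling and chunking.
    Every attribute appears exactly `repetitions` times overall (when the
    number of attributes is divisible by `set_size`); pair co-occurrence is
    only approximately balanced, unlike a true BIBD."""
    rng = random.Random(seed)
    sets = []
    for _ in range(repetitions):
        shuffled = attributes[:]
        rng.shuffle(shuffled)
        for i in range(0, len(shuffled), set_size):
            sets.append(shuffled[i:i + set_size])
    return sets

attributes = [f"attribute_{i:02d}" for i in range(1, 49)]   # hypothetical list of 48
choice_sets = bws_choice_sets(attributes, set_size=4, repetitions=3)
print(len(choice_sets), choice_sets[0])                     # 36 sets of 4 attributes
```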
  • asked a question related to Scaling
Question
3 answers
Hi Everyone.
I have these dimensionless governing equations that I want to solve in Ansys Fluent. But in Ansys Fluent the geometry is dimensional only. So my question is: do dimensionless equations mean that we can apply them to any geometry, i.e. scaling?
thanks.
Relevant answer
Answer
Yes, you can, but not directly. For example, you need to study some dimensionless flow. This flow is always characterized by dimensionless criteria, such as Mach number or Reynolds number (and other criteria depending on the physical model).
So if you create a simulation (no matter what physical dimensions it has) and keep the same Re and M numbers, it will be similar (dimensionless) according to the Buckingham Pi theorem. To understand more about this question, please read the Aerodynamics textbook by Anderson.
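A small sketch of that bookkeeping, with placeholder fluid properties and dimensions: it computes Re and M for two geometrically similar cases and shows that rescaling the length and velocity can preserve Re while changing M, so matching every relevant dimensionless group may also require changing the fluid conditions.

```python
from dataclasses import dataclass

@dataclass
class FlowCase:
    """Dimensional inputs of one simulation setup (placeholder values)."""
    length: float       # characteristic length [m]
    velocity: float     # free-stream velocity [m/s]
    density: float      # [kg/m^3]
    viscosity: float    # dynamic viscosity [Pa.s]
    sound_speed: float  # [m/s]

    def reynolds(self) -> float:
        return self.density * self.velocity * self.length / self.viscosity

    def mach(self) -> float:
        return self.velocity / self.sound_speed

# Reference case and a half-scale model run at double the velocity
case_a = FlowCase(length=1.0, velocity=50.0, density=1.2, viscosity=1.8e-5, sound_speed=340.0)
case_b = FlowCase(length=0.5, velocity=100.0, density=1.2, viscosity=1.8e-5, sound_speed=340.0)

for name, c in (("A", case_a), ("B", case_b)):
    print(f"case {name}: Re = {c.reynolds():.3e}, M = {c.mach():.3f}")
# Halving the length and doubling the velocity preserves Re but doubles M,
# so matching both numbers may require different fluid properties or pressure.
```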
  • asked a question related to Scaling
Question
1 answer
I have tried to apply multidimensional scaling methods for the construct I developed, but data collection has become very difficult through factor analysis. Please suggest alternatives.
1. What other multidimensional scaling methods can be adopted?
2. Can any additional procedure based on Likert and Thurstone scaling methods be used for multidimensional scale development?
  • asked a question related to Scaling
Question
13 answers
Hello Everyone!!
I hope everyone is doing well.
While I was studying phenomenology and turbulence scaling, the total energy in the system for the largest eddies was approximated as U^2/2, where U is the rotational velocity of the largest eddies.
I didn't find how, or on what basis, the spectral energy was approximated as U^2/2.
Let me know if anybody has an idea about it.
Thanks in advance for your perspective on this.
Thank you
Relevant answer
Answer
You can find details of all the up-to-date articles about turbulence via ScienceDirect here.
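On the original question: the U^2/2 is nothing deeper than the kinetic energy per unit mass of a fluid parcel moving at the characteristic large-eddy velocity U, so the energy held by the energy-containing eddies is estimated as

E ~ U^2 / 2  (per unit mass).

In the standard phenomenological picture this energy is handed down the cascade roughly once per eddy turnover time l/U, which gives the familiar dissipation estimate epsilon ~ U^3 / l, where l is the integral length scale.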
  • asked a question related to Scaling
Question
6 answers
Usually, whilst analyzing scRNA-seq data using Seurat, a standard log-normalize step is performed on the data prior to scaling the mean values of the data. In this step, the normalize method suggests using a scale factor across cells of 10^4. I would have assumed that, without any scale factor, for each transcript value t_ij (where i = gene and j = cell) one simply does log10(t_ij + 1). Now, multiplying t_ij by some scale would not change much, as taking the log will simply add a constant, say 4 for the suggested 10^4. But if it means that the method actually sums the counts for each cell (N_j), then divides by them and multiplies by the scale, i.e. (t_ij/N_j)*10^4, then after this step there should be no difference in the column sums, i.e. across cells the transcript counts should all be 10,000. But when I plot hist(colSums(expressionobject@data)) I find a distribution. So it is not clear to me what the method is doing. Any ideas?
Relevant answer
Answer
Dear Sabyasachi,
to my mind the reason for a scaling factor of 10^4 is quite trivial:
At least when using the 10X Genomics technology, the total RNA count per cell is roughly 1,000 to, let's say, 50,000. If you divide the counts of one gene (usually in the range of 0 to, say, 200; the majority below 10, though) by such a number and then take the log1p, you end up with numbers which are difficult to read on a plot (many zeros after the decimal before it gets interesting). To avoid such ugly numbers, a scaling factor of 10^4 seems appropriate if the RNA-count figures I stated above roughly hold true. With a scaling factor of 10^4 you get log-normalized counts in a range of roughly 0.X to 10, which is easy for us humans to comprehend.
Yours, Chris
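To make the arithmetic concrete, here is a minimal NumPy re-implementation of the normalization as I understand it (divide by the per-cell total, multiply by the scale factor, then take a natural-log log1p, which is what Seurat's LogNormalize does). Note that the per-cell column sums are equal only before the log1p; after the log transform they differ again, which is presumably why the histogram of column sums of the normalized data still shows a distribution. The toy count matrix is made up.

```python
import numpy as np

counts = np.array([                 # genes x cells, toy values
    [10,  0,  3],
    [ 0,  5,  2],
    [90, 45, 15],
], dtype=float)

scale_factor = 1e4
cell_totals = counts.sum(axis=0)                    # N_j: total counts per cell
normalized = counts / cell_totals * scale_factor    # (t_ij / N_j) * 1e4
log_normalized = np.log1p(normalized)               # natural log, as in LogNormalize

print(normalized.sum(axis=0))       # equal for every cell: [10000. 10000. 10000.]
print(log_normalized.sum(axis=0))   # no longer equal -> a distribution across cells
```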
  • asked a question related to Scaling
Question
10 answers
Scaling deposits are common in flowlines subject to changes in pressure or temperature. Scale formation results in reduced diameter or blocked pipes.
This is a very delicate problem for managers, especially in southern Algeria, which uses the waters of the Albian aquifer despite the cooling of the water.
What do you think is the best practical solution for scaling in pipelines?
Relevant answer
Answer
Ya si Rachid, these concerns have been raised for 20 years, but unfortunately policymakers were not listening to academics. This subject has now become classic; today this topic is not a problem for me. There are two solutions: chemical and mechanical. From a chemical point of view, there are liquid solutions that remove scaling. From a mechanical point of view, there are devices that remove scaling; you just need to buy this type of device. I am sending you an article that I published in 2013. A Magister thesis was defended on this subject by Mr Fartas at the University of Ouargla. In my book you will find a chapter on the scaling of the drinking-water network of the city of Touggourt.
  • asked a question related to Scaling