ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.
Browse by research topic to find out what others in your field are discussing.
How to deal with unequal sampling in herbivore transects?
I have data consisting of seasonal transects of herbivore counts, which I wish to analyse using the kilometric abundance index. I want to determine how changes in prey abundance over the study period have affected the demographic rates of various lion prides. I have used the outlines of lion home ranges to divide the transects into transects per pride. However, due to seasonal changes in flooding and road access, some prides have more transects than others within each year. My main questions are:
1) How do I determine whether the differences in distance covered have a significant effect on the outcome of the kilometric abundance? And, if it does have an effect, how do I account for it?
2) I have both lean season and abundant season data for most years, but in two years I have only abundant season data. If I want to use lean season data, how do I deal with the two years where I have only abundant season data?
Try to use population software or SigmaStat; both are good at solving such issues.
How can I bond two alumina ceramics?
What is the best cement, and the temperature required to ensure a good bond? Just normal alumina ceramics. I wish to bond two pieces thoroughly so that the joint would withstand high temperatures, say 800 to 900 deg C.
The article you suggest is not easily available. Could you e-mail me at email@example.com the relevant page at least, if not the whole review? Thanks.
Is there a simple method for auto transfusion for cancerous patient with anemia before surgery?
A 55-year-old man with gastric cancer and Hb = 10 g/dl needs surgery.
Thanks, Dr Jose.
I am having a problem keeping 3T3-L1 cells alive when differentiating them with troglitazone. Has anyone encountered this?
I grow 3T3-L1 cells for my research. In the differentiation process, I use a medium that contains troglitazone for 2 days. This worked very well for about 1 year, but the past couple of times, the cells started dying shortly after differentiation starts. I am not positive it's the troglitazone medium, but that's really the one big difference. Before, I used IBMX.
Does anyone use literature mapping in their literature review?
There are few researchers or students who don't get lost in the literature review stage of their assignment, article, essay, thesis or paper. The literature these days is quite easy to obtain through all the modern search engines, but you can end up with huge amounts of unwanted data. So how do you sort the data, extract the gold from the sand, and then actually write something coherent about the sources of literature and how they fit into your overall narrative?
Franco Moretti's proposal is quite rich (and polemical): Atlas of the European Novel, 1800–1900, and Graphs, Maps, Trees: Abstract Models for a Literary History. Also, I remember Nabokov's lectures on Russian and European literature and his frequent use of maps.
Wall function in CFD model?
I have some questions about wall functions:
1. I am studying wall functions. According to the following link, does using a wall function decrease the number of grid cells in a CFD model?
Or, when a wall function is used, are the viscous sublayer and buffer layer neglected in the computations?
I want to use the k-epsilon model (a high-Re turbulence model).
2. What is a scalable wall function?
Where is this option in Fluent? If someone can, please post useful links about scalable wall functions.
Can anyone recommend exemplar case studies at the Master's, MPhil, and PhD levels in education?
I am looking for exemplary or award-winning case studies to share with my Master's, MPhil, and PhD students, who mostly choose this type of research. Can anyone recommend such studies?
Here are some exemplary PhD case studies of schools in the UK and North America, each of which has been published as a book:
BALL, S. J. (1981). Beachside Comprehensive: a case study of secondary schooling. Cambridge [England], Cambridge University Press.
HARGREAVES, D. H. (1967). Social relations in a secondary school. London, Routledge & Kegan Paul.
LACEY, C. (1970). Hightown Grammar: the school as a social system. Manchester, Manchester U.P.
MCLAREN, P. (1986). Schooling as a ritual performance: towards a political economy of educational symbols and gestures. London, Routledge & Kegan Paul.
WOLCOTT, H. F. (1967). A Kwakiutl village and school. New York, Holt, Rinehart and Winston.
Can anyone help me solve this equation using MATLAB, please?
I have a question, please.
Pf = qfunc(sqrt(1+2*SNR)*qfuncinv(Pd)+SNR*(sqrt(N)))
The approximated values for Pf for different values of N are as follows:
for N=1000 Pf=0.62
for N=2000 Pf=0.53
for N=3000 Pf=0.49
for N=4000 Pf=0.44
for N=5000 Pf=0.4
for N=6000 Pf=0.39
for N=7000 Pf=0.36
for N=8000 Pf=0.35
for N=9000 Pf=0.32
for N=10000 Pf=0.3
I need to know the SNR value which satisfies the equation using MATLAB
This is a quick answer, so I cannot guarantee that it will work:
(1) Create a function (click on NEW SCRIPT, type as follows, and save as an M-file):
function Pf = Functionname(N,SNR)
Pd = 0.9;
Pf = qfunc(sqrt(1+2*SNR)*qfuncinv(Pd)+SNR*(sqrt(N)));
(2) Define the inputs and outputs (fit expects column vectors):
N = (1000:1000:10000)';
Pf = [put the values here]';   % the ten Pf values above, as a column vector
(3) Guess a good approximation for the SNR value (e.g. 10; I have no idea!) and type this at the command line:
MyFitType = fittype('Functionname(N,SNR)','independent','N','coefficients','SNR');
MyFit = fit(N,Pf,MyFitType,'StartPoint',10);
(4) I hope it works for you. Best, Behnam.
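As an alternative to curve fitting, the equation can also be solved directly for SNR at a single N by root finding. The following is a minimal Python/SciPy sketch (not from the original answer), assuming Pd = 0.9 as in the script above; `qfunc`/`qfuncinv` are reimplemented from the Gaussian Q-function since they are MATLAB names:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfc, erfcinv

def qfunc(x):
    # Gaussian tail probability Q(x), the equivalent of MATLAB's qfunc
    return 0.5 * erfc(x / np.sqrt(2))

def qfuncinv(p):
    # inverse Q-function, the equivalent of MATLAB's qfuncinv
    return np.sqrt(2) * erfcinv(2 * p)

Pd = 0.9  # assumed detection probability, as in the answer above

def solve_snr(N, Pf):
    # root of qfunc(sqrt(1+2*SNR)*qfuncinv(Pd) + SNR*sqrt(N)) - Pf = 0
    f = lambda snr: qfunc(np.sqrt(1 + 2 * snr) * qfuncinv(Pd) + snr * np.sqrt(N)) - Pf
    return brentq(f, 1e-9, 10.0)  # bracket chosen so f changes sign

snr = solve_snr(1000, 0.62)  # SNR matching the first tabulated (N, Pf) pair
```

Running `solve_snr` for each of the ten (N, Pf) pairs would show whether a single SNR value is consistent with the whole table, which is what the fit in the answer above estimates in one step.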
Can students be too motivated?
Students behave differently in education or research environments. Some are bold, others are shy. Some are very motivated, others less. Can students be too motivated, e.g. permanently willing to discuss, to interact in an education/research environment? What is your opinion?
Well, Marcel, can you first explain more exactly what you are looking for? And why?
What causes DOMS?
I am interested in research that explains the causes of DOMS. Specifically, is it the muscle damage that causes the pain, or is it the associated inflammation and ROS that cause the pain? Thanks.
I agree with Dieter that lactate is not the major cause of DOMS. Lactate may be associated in part with DOMS, but recent research has focused on extracellular matrix (ECM) de-adhesion (Mackey et al., 2008, 2011; Crameri et al., 2007). This alteration of the ECM could be associated with the activation of neural receptors, leading to the sore sensation.
Hope my comment helps your reflection.
Compound was purified by silica gel (deactivated with 10 wt % water) column chromatography in DCM :Methanol. What does this mean?
Tetrahedron 2006, Vol.62, 8199
It means that the compound was purified by silica gel column chromatography, with elution by a dichloromethane:methanol mixture.
Is it possible to deploy a semiotic analysis to classify and get themes out of a visual text (films)?
I want to use semiotic theory as a theoretical framework for my research, whose purpose is to classify as well as account for the representations of the working-class people in Moroccan cinema.
Your reactions are very much appreciated.
I guess it depends on the granularity of your analysis. Besides everything that has already been listed, I would recommend exploring visual rhetoric analysis. To begin with, I would take a look at the early work of Gui Bonsiepe and Groupe µ for context.
What is a quick and simple way to demineralize 8 M urea before ion exchange chromatography?
I need to perform denaturing purification of my protein of interest via anion exchange chromatography - I understand that it is necessary to demineralize the urea to remove residual ions from self-decomposition of urea.
How will the demineralization process affect the effective concentration of urea?
What other pitfalls should I consider? How long will this urea stay demineralized?
Thanks for any help,
What are good examples of describing the Unit of Analysis in case studies?
Identifying and describing the Unit of Analysis in case studies is a key requirement in such research, but what are good examples of such descriptions?
Why do I have so few events compared to the number of cells I plate?
I plate around 800,000 cells (PBMCs) per well in a 96-well plate, but when I read them on the LSR-II about 4 days later I only get 150,000-250,000 events total, half of which are viable; only 10% have the SSC/FSC of lymphocytes, and only 2,000-5,000 events are CD3+.
I am running a proliferation assay where these cells are stained with CellTrace Violet beforehand, but their viability is good afterwards, and I count them using a Coulter counter.
Has anyone had any experience like this before? Is the counter off, or am I losing cells in culture/during staining? I do intracellular staining with these cells, so they do go through a fix/perm step.
I will try plating at a variety of densities, then. I want to have a non-proliferative control because I am trying to develop an assay to test antigen response, so I am just using ConA right now as a positive control.
What are the relevant dimensions and measurement scales of Total Quality Management (TQM)?
I'm working with my co-author on the impact of TQM on innovation and the role of employee job satisfaction.
We selected the following dimensions of TQM:
- Employee skills
Do you think that those dimensions are relevant to measure TQM?
There are many indicators of TQM in the literature on the subject, but it is not possible to be a benchmark in the field if the level of participation has not exceeded 80% of staff. Another important indicator, if measured well, is the level of overall satisfaction in the internal climate survey. If these indicators are not measured well, everything else becomes superficial.
Can open-book tests/examinations address the problem of cheating? How about allowing students to 'Google' answers?
The embedded post from Faculty Focus points out that students may be tempted to cheat in instances where responses to a question can be easily 'Googled'. It is suggested that open-book tests, including challenging application questions that relate directly to the course material, may help overcome this problem.
Some even believe that students should be allowed to 'Google' information during examinations, for instance, because they have to demonstrate digital literacy (an opinion expressed in the post from The Guardian).
Which of these approaches (if any) are acceptable? What would serve as guidelines for good practice if either of these approaches is incorporated in teaching and learning? Would a particular approach be acceptable in different fields or at various levels of study?
I am fascinated by the value of multiple sources for answering questions. I teach technology courses in an industry which honors people who can interpret problems, define clearly what is known about the problem and solutions that may already exist, and then go beyond the basics to innovate a new or renewed solution.
When I allow open-book exams, that allows an eBook in PDF format to be searchable too while interpreting the exam questions and then retrieving the answers. Even when mastered, the skill of finding the "right answer" is not sufficient to be ready to enter the workforce with skills that matter.
To keep exams a metric for thinking about a topic, I dig into the generic pre-written questions and reword them to encourage original thinking. Afterward, if an automatically scored answer is challenged with a citation, I may change the score to accept the answer and bump up the points earned.
When it comes to academic courses, it is time to redefine what "cheating" is. The good old way of staging exams that measure recall of book-learning is now passé.
I welcome more dialogue about this idea.
Why are capacitor banks for correcting power factor (cos phi) not the proper solution when the power sources are generators?
Can someone explain to me why capacitor banks for correcting power factor (cos phi) are not the proper solution when the power sources are generators? Any papers on this?
The question is why capacitors are not the proper solution for generators.
When the total load on an isolated generator (not a grid) is subject to change over time, a capacitor connected for pf improvement may suddenly cause the total load to become capacitive if a large inductive load is suddenly switched off. A capacitive load causes the alternator terminal voltage to increase, due to armature reaction strengthening the field flux. The increased voltage causes the connected load to draw more power. This causes undesired instability and voltage fluctuations until the excitation system controller recovers.
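To make the overcorrection risk in the answer above concrete, here is a small illustrative Python sketch (the load figures are hypothetical, not from the question): a fixed capacitor bank is sized for the full inductive load, and when most of that load trips, the system is left with net capacitive reactive power on the generator:

```python
import math

def capacitor_kvar(p_kw, pf_initial, pf_target):
    """kvar of a fixed bank sized to correct pf_initial to pf_target at load p_kw:
    Qc = P * (tan(phi1) - tan(phi2))."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Bank sized for a hypothetical 500 kW load at 0.7 lagging, corrected to 0.95
qc = capacitor_kvar(500, 0.7, 0.95)

# Suppose 400 kW of that inductive load trips; 100 kW at 0.7 pf remains,
# but the full bank is still connected.
q_load_remaining = 100 * math.tan(math.acos(0.7))  # inductive kvar still drawn
net_kvar = q_load_remaining - qc  # negative => net capacitive
```

A negative `net_kvar` means the generator now sees a capacitive load, which (as described above) raises the terminal voltage through armature reaction until the excitation controller compensates; on a stiff grid the same bank would be harmless.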
Can you please suggest a couple of reliable companies/CROs from India/China for making chemical analogues of 2 plant alkaloids? The timeline is ~2-3 months.
We have 2 anti-cancer plant alkaloids and will need their chemical analogues (50-70 each) to be made for completing cell-based assays using cell lines. Does anyone know reliable companies in India/China who would like to work with us? Alternatively, any academic lab from North America/EU/Australia running a fee-for-service model will also be OK. Provide us with an approximate cost per molecule and we will share the chemical structures of these 2 alkaloids with them. A CDA will be required.
Could dilution of naturally occurring radioactive materials (NORM) be considered one solution to the problems of NORM?
In some IAEA and EU documents it is considered that an authority can, in specific situations, authorize the mixing of radioactive residues containing naturally occurring radioactive material (NORM) with other materials, to promote recycling of these materials and to reduce exposure for the public.
So, could dilution of NORM be considered one solution to the problems of NORM?
Any suggestions on this topic? Known studies or practical applications?
I believe land application of NORM as a disposal method is misguided. In my regulatory career, I dealt with a company that was licensed by an Ag agency to apply pH-control waste water (a precipitated sludge) to farm land as a soil amendment. The sludge was 90% calcium sulfate, but was generated from a metallurgical process with a feedstock containing 300 ppm uranium.
As a result, the land is now restricted from being used for anything but farming, even though it now is within an urban growth boundary. Residential development would create a Rn-222 problem in homes.
Large-volume, low-activity NORM waste is appropriate for disposal in an industrial waste landfill as long as the waste form is not leachable.
How does bright light stimulation influence attention?
Could you tell me what I should expect after one hour of bright light stimulation
(> 500 lux or < 500 lux) via goggles, before performing a task, for example the d2 test of attention? I would like to know something about the impact on the alpha and delta bands.
Additionally, is it possible that bright light stimulation will have an influence on the P300 response?
Is there any possibility of using a home energy management system without wireless communication?
Thanks in advance for your replies.
The wireless channel is for communication of the following:
- status of energy-consuming devices at home
- control commands for the devices
- sensor readings (e.g. room temperature / humidity / light)
In most cases we can use wired communication such as Ethernet or
power line communication (X10). But if you need to distribute sensors beyond the reach of wires, wireless is advised.
The basic idea is to collect all this information through a gateway and make it available through the internet, for easy access from mobile phones or desktop browsers.
How do I check the efficiency of a pair of qPCR primers?
I am new to the qPCR field, and I would like to know how I can check the efficiency of a pair of primers. I read that I have to make serial dilutions of a DNA (with known concentration), but I don't know whether I have to use the normal qPCR program or a special one, or how I have to process the qPCR data.
I tried once with a normal PCR program and serial dilutions of a cDNA sample. Afterwards I processed the results by plotting the Ct (y-axis) against the initial cDNA concentration (x-axis) in a semi-log plot (with the x-axis logarithmic). Then I took the slope of the regression line of the plot, and it was -8.8, far from -3.3 (the value that is considered best).
I am really lost.
Hello Anna. I am not at work at the moment, but I have performed many qPCR reactions and will send you some exact protocols from my qPCR drop box tomorrow, plus links to useful sites like the above.

In essence, yes, you do need to perform a standard curve by titrating your cDNA against the primers. You then need to create a standard curve using your real-time platform's software; there will be clear instructions on how to do this. One thing you always need to do is perform this reaction in triplicate and specify within your experimental design exactly how you diluted your cDNA, i.e. 10-fold dilutions or 1:2 serial dilutions, in triplicate.

If you have a lot of RNA and can make many cDNA copies by reverse transcribing from 1 ug of RNA, then 10-fold dilutions from 1:1 to 1:10,000 will generally give you a nice curve with a good fit, i.e. R2 > 0.95. Alternatively, if your material is more limited and you are obliged to transcribe from much less RNA - say 50 to 100 ng - then in my experience a standard curve based on a serial dilution from 1:1 to 1:64 is much better in terms of curve quality: concordant triplicates which fit the regression line, and Ct values between 15 and 35. Generally, if your copy numbers are so low that your Ct values exceed 40, the triplicates tend to diverge and the standard curve is of poor quality. This can happen with low-copy-number genes from lots of RNA (~1 ug) or higher-copy-number genes from limited amounts of RNA (~50-100 ng).

Having generated your curve, you need to demonstrate that your amplification efficiency is over 80%. That way you can trust actual gene expression data from your cDNA at the optimal dilution, which you also select from your standard curve. More to follow, including proper SOPs, tomorrow when I am back at work!
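The slope and efficiency calculation behind the standard curve can be sketched in a few lines of Python. The Ct values below are hypothetical (an ideal doubling per cycle, not data from the question); the point is that a slope near -3.32 on Ct vs log10(template) corresponds to ~100% efficiency via E = 10^(-1/slope) - 1:

```python
import numpy as np

def qpcr_efficiency(dilutions, ct_values):
    """Fit Ct vs log10(relative template amount); return (slope, efficiency).
    Efficiency E = 10**(-1/slope) - 1, so slope = -3.32 gives E = 1.0 (100%)."""
    slope, intercept = np.polyfit(np.log10(dilutions), ct_values, 1)
    return slope, 10 ** (-1 / slope) - 1

# Hypothetical 10-fold dilution series with perfect doubling each cycle:
dilutions = np.array([1, 0.1, 0.01, 0.001, 0.0001])
cts = np.array([18.0, 21.32, 24.64, 27.96, 31.28])  # +3.32 Ct per 10-fold dilution

slope, eff = qpcr_efficiency(dilutions, cts)
```

Against this yardstick, the slope of -8.8 mentioned in the question would imply an efficiency of only ~30%, which is why the run needs troubleshooting (primer design, template quality, or plotting the x-axis on the wrong scale).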
Are there any labs or institutes that can assist me in the characterization of total bacteria in heavy-oil-contaminated soil?
Thanks so much, Prof. Shahaby, for the detailed information.
Can anyone help to diagnose this colon tumor?
A 46-year-old male has had this tumor for 6 months. O/E the lumen of the lower GIT is free; no infiltration is noted. The anal orifice and anal sphincters are free from any growth. The patient does not give any previous history of treatment anywhere, and he did not come back for further examination, HPE, etc. This is purely from an academic point of view; please discuss, as this is important to me.
Please also inform us about the physical exam, especially the inguinal nodes.
Likert, Language, Linguistics, and Loss: How do we justify the use of Likert-Type Response Data?
I’m working on a paper on Likert-type scales, as well as a statistical measure/test that sort of emerged by accident whilst working on the paper. However, I was hoping for some preliminary feedback (and what better place for such information than RG?). Specifically, the following are basically universally accepted by specialists whose fields are closely related to the matter:
1) There exists no 1-to-1 mapping between a word in a source language and a word in a target language. More simply, translation always involves information loss.
2) Even if we throw out the substantial evidence that lexemes aren’t the basic unit of language (and choose not to adopt a construction grammar), we are still left with polysemy.
Linguists on opposite sides of the fence, such as Jackendoff and Langacker, still agree that “words” are encyclopedic: there isn’t any mapping from a word to some “unit” of knowledge, information, brain activity, etc. That’s why, if one looks in a dictionary, one finds words defined by other words.
3) Even if we accept the modern version of grandmother neurons ("concept cells" that have been found to respond selectively to, e.g., specific people, in ways that have led some researchers to claim that there exists a 1-to-1 mapping between such cells and concepts), nobody believes (and it is an empirically validated falsehood) that there exists any mapping between the conceptual representation via neural activity in one brain and that in another.
4) Finally, language is intricately involved in shaping thought and knowledge, in particular through the relationship of constructions (or lexemes, phrasal nouns, collocations, etc.) and concepts. However, there isn’t any 1-to-1 mapping between a particular instantiation of lexemes in a particular construction that is sufficiently stable such that an individual can separate the scale (which is usually completely conceptual, although sometimes also empirical as in e.g., frequency) from the individual responses (whether only the endpoints are labeled or all possible responses are) and keep these conceptual domains distinct. In other words, the items necessarily force the respondent’s novel conceptualization, making it impossible to treat a single participant’s response to a single item as somehow remotely precise.
What, then, is the justification for treating all participant responses as infinitely precise and corresponding to the exact same values as if all participants were observations of a single value?
Thanks for any and all input!
The problem is that it isn't really just an issue between source and target language. It is quite literally a problem of fundamental error/information loss for every item for every participant, because there is no way of approximating the quantification of measurement error when one forces a conceptual response to be utterly devoid of semantic content. I've quite literally scrolled through the scans from an fMRI study in progress (before the fancy statistical image processing that is supposed to indicate significant activation is added in) and seen participants' neural responses differ for the same question within half-hour and hour blocks.
So, again, what is the theoretical justification for pretending that a theory of measurement (which we have largely abandoned in terms of its original justification; hence "Likert-type" and the general lack of research justifying Likert over Thurstone, let alone any general theoretical basis for treating conceptual responses as purely formal) actually measures what it is claimed to? All measurements are imperfect. That's a given. What's important is the capacity to quantify error. We don't have the capacity to approach the nature of "error" in this case, so instead we simply pretend our "measurement" is infinitely precise, and all statistical analyses performed are essentially summations of unknown quantities of error for every item from every participant. Unless, of course, we have some theoretical justification. Hence my question. But referring to the potential to vaguely map source constructions to target constructions in translation isn't, so far as I can tell, anything remotely resembling theoretical justification.
Is there any free software that will allow me to model chemical reactions?
I'm currently studying a chain of reactions, but I wonder if there is software that will allow me to control the reaction conditions and see whether there is any special interaction between the products.
In silico enzyme digestion of a eukaryotic genome?
Hi, I'm trying to simulate MspI-digested fragments from a eukaryotic genome, but all the tools I've found seem to be designed for much, much smaller sequences. Can anybody recommend a program or method?
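If no dedicated tool scales well enough, the digest itself is simple to script. A minimal Python sketch (an illustration of the idea, not a full pipeline): MspI recognizes CCGG and cuts C^CGG, so fragment boundaries fall one base into each site:

```python
import re

def mspi_digest(seq):
    """Return the fragments produced by a complete MspI digest (cut site C^CGG).

    A lookahead regex finds every CCGG occurrence (including overlapping ones);
    the cut falls between the first C and the CGG, i.e. at start + 1.
    """
    seq = seq.upper()
    cut_positions = [m.start() + 1 for m in re.finditer(r"(?=CCGG)", seq)]
    bounds = [0] + cut_positions + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

# Toy example: CCGG sites at indices 4 and 12, so cuts after positions 5 and 13
frags = mspi_digest("AATTCCGGTTTTCCGGAA")
```

For a whole eukaryotic genome, you would apply this per chromosome while streaming a FASTA file (e.g. via Biopython's `SeqIO`), and then filter the fragments by the size window relevant to your protocol, rather than holding every fragment in memory at once.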
What is the reason for the shifting of XRD peaks?
Can anyone explain the reason for the shifting of the XRD peaks of Nd-doped yttria nanopowders with increasing annealing temperature? What happens when the synthesis or annealing temperature increases, i.e. two theta shifts to higher values with increasing treatment temperature? The dopant concentration is constant; the only thing that changes is the synthesis and/or annealing temperature.
Reasons for the shifting of the same XRD peaks to higher two-theta positions with increased annealing temperature in the nano regime are many, including cation redistribution, defect reduction, enhanced crystallization, etc., but the net result for the lattice constant is a definite decrease.
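The link between a higher two-theta and a smaller lattice constant follows directly from Bragg's law, n*lambda = 2*d*sin(theta). A minimal Python sketch (Cu K-alpha wavelength and the two peak positions are assumed for illustration, not taken from the question):

```python
import math

CU_KALPHA = 1.5406  # assumed Cu K-alpha wavelength in angstroms

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

# Hypothetical reflection before and after annealing: shifted to higher 2-theta
d1 = d_spacing(29.0)
d2 = d_spacing(29.2)
# d2 < d1: the peak shift corresponds to a contracted d-spacing,
# and hence a smaller lattice constant, as stated in the answer above.
```

The same calculation run on the measured peak positions for each annealing temperature would quantify how much the lattice contracts.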