Q&A

ResearchGate Q&A lets scientists and researchers exchange questions and answers relating to their research expertise, including areas such as techniques and methodologies.

Browse by research topic to find out what others in your field are discussing.

  • Roman Kozak added an answer in Knee Joint:
    Are there applications for corticosteroid injections in joints in case of cartilage degeneration?

    I'm looking for several different studies and their results, to see whether there is any improvement from this therapy. I want to know more about its application in the knee joint.


    Thank you!

    Roman Kozak · National Academy of Medical Sciences of Ukraine

    Intra-articular corticosteroid injections are used only rarely. They may be used in chronic synovitis or rheumatoid arthritis. Frequent intra-articular use of corticosteroids leads to degeneration of the cartilage.

  • Regarding this abstract, is there a system with failure interaction such that the failure of one component impacts the failure of the other component?

    Repairable systems are commonly used in different industries and have become more and more complex. In a complex system, in addition to random component failures, the interaction between two components is also random. The impact of a failure is uncertain and can be treated as fuzzy; fuzzy set theory has been the most important approach used to deal with such uncertainty. Failures of a complex system can be divided into soft and hard failures: a hard failure stops the system, while a soft failure does not, but it increases the system's operating costs. In this paper it is assumed that when the first component fails, it remains in a failed state until the next inspection time. Therefore, if the first component fails within an inspection interval, a downtime penalty cost is incurred, proportional to the time elapsed from the failure to its detection at the next inspection. A short inspection interval increases the number of inspections and causes extra cost for the system, while a long inspection interval causes greater cost due to the long delay between the actual occurrence of the failure and its detection (the penalty cost). On a finite time horizon, the objective is to find the optimal inspection interval for the soft-failure component so that the expected total cost is minimized.
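
    A minimal numerical sketch of the cost trade-off described in this abstract, purely for illustration: the soft-failure times are assumed to be exponential, and the horizon, inspection cost and penalty rate are invented values rather than figures from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    HORIZON = 1000.0        # finite planning horizon (time units) -- illustrative
    FAILURE_RATE = 0.01     # assumed rate of the soft-failure process (exponential times)
    INSPECTION_COST = 5.0   # assumed cost per inspection
    PENALTY_RATE = 1.0      # assumed downtime penalty per unit of undetected failure time
    N_RUNS = 2000           # Monte Carlo replications per candidate interval

    def expected_total_cost(tau):
        """Estimate the expected total cost over the horizon for inspection interval tau."""
        inspections = np.arange(tau, HORIZON + tau, tau)   # inspection epochs
        costs = np.empty(N_RUNS)
        for i in range(N_RUNS):
            cost = INSPECTION_COST * len(inspections)
            t = 0.0
            while True:
                t += rng.exponential(1.0 / FAILURE_RATE)   # next soft failure
                if t >= HORIZON:
                    break
                # the failure stays undetected until the next inspection after t
                detect = min(inspections[np.searchsorted(inspections, t)], HORIZON)
                cost += PENALTY_RATE * (detect - t)        # downtime penalty
                t = detect                                 # component restored at inspection
            costs[i] = cost
        return costs.mean()

    for tau in (10, 20, 40, 60, 80, 120, 160, 200):
        print(f"tau = {tau:>3}: expected total cost ~ {expected_total_cost(tau):.1f}")

    With these made-up numbers the estimated cost is U-shaped in the inspection interval: very short intervals are dominated by inspection cost, very long ones by the penalty for undetected soft failures, with the minimum somewhere in between.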

  • Can we talk about collaborative learning?

    Is learning an individual experience, or can we speak of collective learning, of collaborative learning?

  • Alana Alexander added an answer in PacBio:
    Can anyone advise me on genome sequencing methods for Anolis distichus?

    Hi folks,

    We are aiming to kick off a genome sequencing project on an anole (lizard) species, Anolis distichus (genome size somewhere between 1.8 and 2.5 Gbp). We've been researching "best practices", and the general consensus I am coming across is that a combined coverage (of short and long inserts) of about 100X, long-insert mate-pairs (~10-20 kb+), and SOAP or ALLPATHS as an assembler tends to lead to a decent assembly (a quick back-of-envelope coverage calculation is sketched at the end of this thread). However, recently, we've come across DISCOVAR de novo and Platanus as potential assembler options as well.

    Obviously, choice of assembler is going to affect our library prep, so I wanted to canvass the community to see if anyone had any updated thoughts on current best genome sequencing practices (most of the posts/genomes I found were initiated in 2014 or before).

    Resources:
    -- A 'relatively' inbred individual for sequencing, and high quality DNA extracted from it
    -- One of its congeners, A. carolinensis, has been sequenced
    -- We'll have a transcriptome for the individual we sequence
    -- $5-7k for whole genome sequencing costs

    Our current feeling is that aiming for 2*250 bp reads with ~450 bp insert sizes (Illumina) will allow us to use DISCOVAR de novo. If, along with the transcriptome, that doesn't give us a "pretty enough" assembly (we are looking for a very high-quality draft, as we are interested in specific chromosomal regions involved in divergence across the genus), we could then add in mate-pair (and potentially PacBio?) data if we needed to, and try ALLPATHS.

    Do people have strong thoughts on how they would attack the project with the same resources? We would love to hear from you if so. Full disclosure: also asking this question on seqanswers, so if I get any answers there that I think people here would like to hear I'll make sure to share.

    Cheers!

    Alana Alexander · University of Kansas

    Thanks Diego!
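
    As a rough cross-check of the coverage figures discussed in the question above, here is a back-of-envelope calculation; every number is an assumption (the genome size is taken as ~2 Gbp, and the per-lane yield in particular varies a lot by platform and run mode):

    # Back-of-envelope sequencing budget check -- every figure here is an assumption.
    genome_size_bp = 2.0e9     # A. distichus genome taken as ~2 Gbp (question says 1.8-2.5 Gbp)
    target_coverage = 100      # combined short- plus long-insert coverage aimed for
    read_length_bp = 250       # 2*250 bp paired-end reads
    bases_per_pair = 2 * read_length_bp

    bases_needed = genome_size_bp * target_coverage
    read_pairs_needed = bases_needed / bases_per_pair
    print(f"bases needed:      {bases_needed:.2e}")        # ~2e11
    print(f"read pairs needed: {read_pairs_needed:.2e}")   # ~4e8

    # Rough lane count, assuming ~4e8 read pairs per lane (a ballpark, not a platform spec).
    pairs_per_lane = 4.0e8
    print(f"approx. lanes:     {read_pairs_needed / pairs_per_lane:.1f}")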

  • How do I validate vulnerability maps?

    I prepared physical and social vulnerability maps of Seoul and Busan megacities. But I could not find the methods to validate those maps.

    Carlos Arturo Aguirre-Salado · Universidad Autónoma de San Luis Potosí

    Dear Anantha, 

    It depends on the modeling approach being used. For example, if multicriteria decision methods are applied, you can use sensitivity analysis to spatially explore how sensitive the model is; it is a way of understanding the behaviour of the model's response. However, if you are using another approach with dependent and independent variables (e.g. logistic or multinomial regression, soft computing, etc.)... you can try k-fold cross-validation (a minimal sketch follows below).

    Cheers!...
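
    A minimal sketch of the k-fold idea mentioned above, assuming the vulnerability model can be cast as a supervised problem with predictor layers X and an observed outcome y (e.g. recorded damage per map unit); the synthetic data, the logistic model and the AUC score are illustrative stand-ins, not taken from the question:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Illustrative stand-in data: rows = map units (e.g. grid cells), columns = indicators.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 4))
    y = ((X @ np.array([1.0, 0.5, -0.8, 0.3])
          + rng.normal(scale=0.5, size=500)) > 0).astype(int)

    model = LogisticRegression(max_iter=1000)

    # 5-fold cross-validation: fit on 4/5 of the units, score on the held-out 1/5, rotate.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("per-fold AUC:", np.round(scores, 3))
    print("mean AUC:    ", round(scores.mean(), 3))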

  • Erik Bernitt asked a question in Phalloidine:
    What method of actin labelling for TEM imaging gives you the smallest headache in practice?

    There are various methods to label actin for TEM, among them conjugates that allow one, e.g., to label actin for fluorescence via Alexa dyes and for TEM via gold particles simultaneously. The conjugate is bound to actin via phalloidin and a biotin/streptavidin linker.

    My question is: are such methods feasible in practice? Any pitfalls the literature does not mention?

    I am looking for a robust method that works without too much effort, as the TEM imaging is only a side project within a larger project. I do not have any experience with TEM so far. I like the idea of being able to image semi-thin sections in fluorescence and thin sections in TEM simultaneously; this is, however, not mandatory. Only the TEM imaging is.

    Thank you for your help!

  • How should historical buildings be reused?

    The methodology for reusing historical buildings depends on:

    1- The site surrounding the building

    2- The value of the building, i.e. whether it is considered a monument or a unique building

    3- The new function (used as a public or private building, e.g. under a BOT system)

    4- The building's condition (in need of restoration, or in good condition)

    5- The economic budget (renewal and restoration need a large amount of money, and the return comes in the form of community service)

    Hossam Hassan Elborombaly · Effat University, Jeddah / Ain Shams University, Cairo

    Hello Michael, really you and your office are lucky to stay in a historical building and to feel great in this environment. Of course, living in a historical building and reusing it for another function is difficult, but it works. It needs many steps to adapt the building to the new function.

  • Ilka Noss added an answer in Glucans:
    Is there any difference between beta glucan from yeast and beta glucan from oats?

    In terms of their function or structure or any other properties?

  • Ariel Linden added an answer in Maximum Likelihood:
    How to inflate a small sample through time series data?

    Hi everyone!

    In a few words, my question is: can I increase the number of cases of a small sample by considering these cases at several points in time?

    In more detail: I have a sample of 26 cases (which is actually the whole universe) affected by a given policy. This policy covers a period of 7 years. Is my sample big enough if I take those 26 cases at seven different points in time?

    Recently I read an article holding that small samples may not be too big an issue if one uses maximum likelihood regression instead of least squares. More specifically, the author posits that Type I errors are not a problem, while Type II errors may be more problematic. If I combine the maximum likelihood method with cross-sectional data in a time series, would my 26 cases still be a problem?

    Thank you so much for your attention,

    Ariel Linden · University of Michigan

    I am not sure that this approach makes sense. If I understand you correctly, you have 26 cases that were not measured at 7 time points but you'd like to make the assumption that they were? How would you possibly know their trajectories over time without actually having seen some example of what those trajectories are?

    If you have some cases in which you have the full 7 observations, then you could generate simulated data to mimic that distribution (plus some random error component). However, this will not be a substitute for analyzing the actual data, and you could very well end up with results that are far from reality.
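
    A toy illustration of the kind of simulation described above, and of its limitation: the "true" trajectories here are invented, which is exactly why a simulated panel cannot substitute for real repeated measurements. All numbers are made up.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cases, n_years = 26, 7

    # Pretend we knew each case's true linear trend -- in reality we do not,
    # which is the whole problem pointed out above.
    true_intercepts = rng.normal(10.0, 2.0, size=n_cases)
    true_slopes = rng.normal(0.5, 0.2, size=n_cases)

    years = np.arange(n_years)
    # Simulated panel: trend plus random error, giving 26 x 7 = 182 pseudo-observations.
    panel = (true_intercepts[:, None]
             + true_slopes[:, None] * years[None, :]
             + rng.normal(scale=1.0, size=(n_cases, n_years)))

    print(panel.shape)          # (26, 7)
    print(panel[:3].round(2))   # first three simulated trajectories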

  • Ramesh K added an answer in Qualnet:
    I just wonder what simulation tool my colleagues use for power consumption calculations of nodes in Wireless Sensor Networks?
    • ns2?
    • ns3?
    • MATLAB?
    • OMNET?
    • QUALNET?
    • or something else?

    And which one do you think is more accurate?

    Ramesh K · Nandha Engineering College

    ns-2 is the best choice for WSN.

  • Ishag Adam added an answer in Body Mass Index:
    What statistical methods would you propose to evaluate BMI cut off points for obesity?

    What statistical methods would you propose to evaluate BMI cut-off points for obesity?

    Ishag Adam · University of Khartoum

    Thank you very much, Marcia.

  • Does a gravitational wave change its propagation direction when passing near a massive gravitational center (star)?

    As we know, a light wave passing near a gravitational center (star) changes its propagation direction. Is this also true for a gravitational wave?

    George E. Van Hoesen · Global Green Building LLC

    Thierry,

    You do not have to agree. I also think that you are accepting the idea that light and gravity are the same thing?

    I, however, think that light is a particle with wave-like properties, and therefore if it leaves its source at a given velocity c, then, unless something gets in the way or deflects its path, by Newton's laws it will reach us some day. This has nothing to do with gravity, other than that gravity can alter its path if it passes too close to a mass.

    Also, as a scientist, are you telling me that a force that can be felt at an infinite distance, in an infinite number of places, all at the same time, such as gravity, is not an unrealistic force? If this is not the case, then how does gravity decide where to exert its force? I think that if we really looked at what we are saying about gravity, we would decide that we could not possibly be correct. The mass, no matter how big or small, would not have the energy needed to make that happen.

    Yes, it would be an infinite force, as it is supposed to go in all directions at the same time and always be there, even where it has already been, meaning that the force is the sum of all the forces in an infinite number of directions over an infinite distance. Even if it is very small, which it is, this would add up to a ridiculous amount of energy.

    This is also a violation of the conservation of energy, as the amount of energy that would have to be lost to this process would mean that the mass of most of the galaxies we see today would be eaten up by the loss of energy to this force of gravity. It should mean that all the galaxies we see are not there any more. It should also mean that our galaxy is losing mass at a very large rate, and if this were the case we should be able to watch it evaporate.

    But none of this is happening, which implies our thoughts on gravity are not correct. If we are not violating the conservation of energy, then where is the energy coming from? I do not want to argue, but there are way too many problems with the theories to think that we are even close.

    Gravity, like all other forces, has to have a limit.

  • Hector Nuñez added an answer in Detoxication:
    What is the scientific evidence for Detox diets?

    Now that detox diets have gained tremendous attention, people use them as a miracle pill. It does seem to be more an empirical practice than an evidence-based nutritional intervention for weight loss. In the majority of published studies I found, the researchers administered a toxic substance, rather than a detoxification diet, to healthy subjects. In almost all cases of detox diets, the subject is not poisoned or intoxicated before following such a diet.
    What do the scientific studies show? Is there a scientific basis for this type of diet?

    Hope to hear your comments.
    Giuseppe PS

    Hector Nuñez · NutriOhm Consultores Ltda.

    In principle it is necessary to define what is considered a toxin in a healthy individual. A good nutritional plan led by a nutritionist can help lower what you consider toxins. I agree that it is a marketing strategy not based on evidence.

  • Mostafa Eidiani added an answer in Nanomagnetism:
    Looking for small, spherical TMR elements - anyone willing to collaborate?

    We have designed (and built) an experimental setup in which we intend to use small, high-quality spherical TMR elements as sensors for magnetic stray fields. The sensors available to us in our first tests had diameters of approximately 250 nm and 110 nm, respectively. For our purpose, smaller is normally better.

    These sensors did, however, suffer from too strong ellipticity: the ratio of longer to shorter axis was about 1.1. We estimate that this ratio should be below 1.02 for the sensors to make sense in our experiment. In this size range, achieving such small ellipticity probably represents quite a challenge. (I am not an expert in lithography!)

    The aim is to have TMR sensors with so little magnetic (shape) anisotropy that they strongly fluctuate at room temperature or can be driven to do so by just a very small magnetic field. (What sounds like an unstable device actually makes sense in our context.)

    If you can produce such items and are willing to collaborate on a project in nanomagnetism I invite you to contact me to exchange further details about our aims and goals. Likewise, if you know someone who is capable and might be interested, I'd be grateful for a hint towards such persons/groups.

    Mostafa Eidiani · Khorasan Institute of Higher Education

    Hi Dear Kai

    Please see the attached file

  • Are the countries where chikungunya or dengue is occurring carrying out specific surveillance for DEN/CHIK coinfection?

    I feel that in some countries in the region, surveillance protocols have been oriented towards diagnosing DEN and, if that is ruled out, diagnosing CHIK, but what about DEN/CHIK? Are you aware of which countries, particularly in the Americas, are doing specific surveillance for both? From related studies, what is the proportion of patients with DEN who have CHIK coinfection, and of patients with CHIK who have DEN coinfection?

  • Alan Jarvis added an answer in Total Porosity:
    How can I use Minkowski functionals to help me evaluate porosity?

    I have a ceramic coating and want to characterize its porosity. It has oriented porosity, as well as some cracks.

    I have SEM photographs, but I have some difficulties in processing the images. Part of the problem is that it is difficult to prepare the samples: I get some edge rounding. The other is that I have limited access to the SEM, and cannot take a lot of micrographs. Plus when I do, it is hard to take really consistent micrographs.

    I'd like to be able to determine things like: total porosity percent, aspect ratios of pores/cracks, and pore size distribution.

    I've used ImageJ, but it is hard to get consistent results. I was hoping to use Minkowski functionals to address some of these issues.

    I have been trying to use some MATLAB routines, including Professor Legland's excellent collection. But I haven't figured out how to use them to solve my problem: perhaps the problem is I just don't know how to process the results.

    If there's a good paper or textbook that I should read that would be great too.

    Thanks for any help.

    Alan Jarvis · The University of Sheffield

    Sergei,

    Thanks very much: I've looked at Gwyddion and I see that it does compute three Minkowski functionals. I've also tried using some MATLAB routines from a paper by Toccafondi:

    Toccafondi C, Stępniowski WJ, Leoncini M, Salerno M. Advanced morphological analysis of patterns of thin anodic porous alumina. Materials Characterization. 2014 Aug;94:26–36.

    They provide some MATLAB routines that calculate:

    • coverage
    • boundary length
    • Euler characteristic

    My problem is that even with really good/consistent images I wasn't really sure how to use the Minkowski functionals.

    I think that the median of coverage gives me an "average" grey level that I could then use to select a processed image, and then process with something like ImageJ (make into a binary then determine black and white statistics).

    I suspect, though, that I'm not getting the most that I can from the Minkowski functionals, and I don't see any good papers or textbooks that explain how to use them for the non-specialist in image processing.
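
    For what it's worth, here is a minimal sketch of one way to read the three functionals as a function of the grey-level threshold using scikit-image; the built-in test image and the threshold grid are placeholders for your SEM micrographs:

    import numpy as np
    from skimage import data, measure

    # Placeholder image; replace with a grey-level SEM micrograph (e.g. via skimage.io.imread).
    image = data.coins().astype(float)
    image /= image.max()

    for t in np.linspace(0.05, 0.95, 19):
        binary = image > t                                    # "pores" = pixels above threshold
        coverage = binary.mean()                              # area fraction (functional 0)
        boundary = measure.perimeter(binary)                  # boundary length (functional 1)
        euler = measure.euler_number(binary, connectivity=2)  # Euler characteristic (functional 2)
        print(f"t={t:.2f}  coverage={coverage:.3f}  boundary={boundary:.0f}  euler={euler}")

    The coverage-versus-threshold curve is then a natural place to pick a binarisation level (for example around its median, as suggested above) before going back to ImageJ for the pore statistics.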

  • Maohuan Lin asked a question in Angiotensin II:
    Is angiotensin II the inducer of vascular smooth muscle cell phenotypic modulation?

    If not, what is the most accepted inducer? PDGF-BB?

  • Kenneth M Towe added an answer in Climate Change:
    Is it time we shift emphasis from technological solutions to climate change & focus on the 'Human Dimension'?

    Is it not obvious that nature can heal itself, if only left alone, and that it is humans who need regulation? Many natural park managers do just that; they seal off the area from human interference to let nature heal and recover. It is classified as a 'Strict Nature Reserve' by the IUCN. Complacency is not advocated here, as many have misunderstood, but the shifting of focus from technology to the human being. As technology is no match for human greed, isn't introspection & restraining ourselves more relevant than developing more technology, which caused the mess in the first place by making it easy for a few to consume more? Since technology is only a short-term quick fix which fails after a short time, isn't the real problem our addiction to material consumption & our lack of understanding about human nature? Isn't developing more technology sustaining the addiction instead of correcting it, leading to more complex problems later on, needing more complex technological quick fixes like higher drug dosages, more ground troops & equipment (along with their debilitating side effects) in the future? Isn't this the vicious addiction circle we are trapped in? As researchers, do we merely buy more time with technology OR go to the very root of the problem, the human being?

    A lot of hue and cry is made about climate change and the environment in general. Public and private money is poured into research to study its effects on the environment, sustainability etc. Should we study nature or ourselves?

    " Our studies must begin with our selves and not with the heavens. "-Ouspensky

    Human activities have been found to have a direct correlation with climate change and its impact on the environment (I = P x A x T, the Ehrlich and Holdren equation), in spite of what some complacent sections say to protect their own self-interests.

    We hardly know about human nature. We can scarcely predict human behavior. We need to find out why we think the way we do, why we do what we do, and why, in spite of all knowledge and wisdom, we consume more than we need, in the form of addictions to consumption, and imbalance not only ourselves but also the family, society and environment around us.
    Humanity is directly responsible for all the unnatural imbalances occurring on the planet. Yet we refuse to take responsibility and instead focus on climate change, or fool the public exchequer with a 'breakthrough in renewable energy just around the corner'. We scarcely know what drives human beings. If we had known, all the imbalances around us would have had solutions by now, given the amount of money plowed into finding such solutions. Are we blindly groping in the dark of climate change because we don't know the answers to our own nature?
    Is it not high time we focus on what makes us human, correct our consumptive behavior and leave nature to take care of climate change? Why focus effort on 'externals' when the problem is 'internal'- 'me'?
    Aren't we addicts denying our addiction and blaming everything else but ourselves?

    " We are what we Think.

    All that we are arises with our thoughts.

    With our thoughts, we make the world." - Buddha 

    IMHO, We don't need to save the World. It is enough if we save ourselves from ourselves. The need of the hour is not vain glorious interventions, but self-restraint and self-correction!

    The Mind is the Final frontier.

  • Can thermal stress be used to reform the disulphide bonds that have been broken apart by alkaline solutions in hair fibre?

    hair fibre, disulphide bonds, polypeptide bonds, alkaline solutions, thermal treatment

    Kurt D. Berndt · Karolinska Institutet

    Hi Jimmy

    Formation of disulfides from thiols (at high pH, thiolates) will require the participation of a suitable redox partner. Molecular oxygen (from air) will do. The reaction is fairly rapid and will be catalyzed by divalent metal ions. I am not sure what you mean by thermal stress. Increasing the temperature should increase the reaction rate, as described by the Arrhenius equation.
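
    For reference, the standard Arrhenius form (a textbook relation, not specific to disulfide chemistry) is

    k = A \exp\left(-\frac{E_a}{RT}\right)

    so raising the temperature T increases the rate constant k, while the activation energy E_a is a property of the reaction itself and does not change.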

  • C.J. Chung added an answer in Patterning:
    Does patterning through E-beam lithography depend on type of Si substrate?

    I have optimized beam current, dose, and voltage for intrinsic Si and obtained the pattern. But when I use the same parameters for p-type Si, I can't get a good pattern. I see some layer (maybe PMMA) on top of the pattern, and in some places I don't see the pattern at all.

    C.J. Chung · National Cheng Kung University

    So, does the picture show the RIE results?

    After EBL, did you check the pattern?

  • F. Leyvraz added an answer in Fundamental Physics:
    Are you interested in measuring the speed of the electromagnetic force?

    The Smirnov-Rueda team claimed to have measured that the speed of the bound electromagnetic field is finite but larger than the speed of light [1-3]. However, their result needs to be tested further.

    A direct way to measure the speed of the electromagnetic force was presented in [4]. In this scheme, three stationary charged balls or magnets (M1, M2 and M3) interact with each other; when M3 is set in motion, M1 and M2 will be moved by the motion of M3. If the distances between M1 and M3 and between M2 and M3 are L1 and L2 respectively, then by observing the times at which M1 and M2 start to move, the speed of the electromagnetic force can be calculated as v = (L1 - L2)/(t1 - t2), where t1 and t2 are the times at which M1 and M2 start to move, respectively.

    Thus, the speed of the electromagnetic force can be directly observed.

    M3 can be a transformer. When its current is switched off, its magnetic field will disappear, and M1 and M2 will then be moved by gravity, provided they are placed so that gravity can move them as soon as the magnetic force disappears. In this case, the speed of propagation of the magnetic field is measured.

    This is a simple experiment: only three magnets (or charged balls) are needed. But to observe the times at which the magnets start to move, a high-speed camera is needed. As ΔL = L1 - L2 is on the order of 30 cm, the timing precision needs to be better than 10^-11 seconds. However, a typical high-speed camera can observe times with a precision of about 10^-12 seconds.

    This experiment is fundamental for physics. Besides the Smirnov-Rueda team's work, there is no experimental, generally accepted conclusion about the speed of the electromagnetic force. Clearly, if there were such a conclusion, the Smirnov-Rueda team's work could not have been published.

    References

    [1] Kholmetskii A. L. et al., 2007, Experimental test on the applicability of the standard retardation condition to bound magnetic fields, J. Appl. Phys. 101, 023532

    [2] Kholmetskii A. L., Missevitch O. V. and Smirnov-Rueda R., 2007, Measurement of propagation velocity of bound electromagnetic fields in near zone, J. Appl. Phys. 102, 013529

    [3] Missevitch O. V., Kholmetskii A. L. and Smirnov-Rueda R., 2011, Anomalously small retardation of bound (force) electromagnetic fields in antenna near zone, Europhys. Lett. 93, 64004

    [4] Zhu Y., 2011, Measurement of the speed of gravity, arXiv:1108.3761v8

    F. Leyvraz · Universidad Nacional Autónoma de México

    I still think you should work out the details of your design, in an approximate way. The following questions are important: if you want to cut off the current in a magnet, for example, what is the characteristic time for the current to decay? It may well be milliseconds, and this is a serious problem, since the propagation times you aim to measure, being, I assume, over some meters at the speed of light, would be tens of nanoseconds. The other problem is the motion induced in the magnets. You say gravity will make the magnets move "very quickly". But you have to consider your timescales: you want to see whether something has moved over a time of tens of nanoseconds. Over such short times, the position changes caused by an acceleration like gravity are very small and presumably undetectable. Do try to work numbers, however approximate (a quick numerical example follows below).

    Best wishes,

    Francois
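
    A quick numerical version of the estimate suggested above, with illustrative values only (the 30 cm path difference is taken from the question; the rest are standard constants):

    # Back-of-envelope numbers for the proposed experiment (illustrative values only).
    c = 3.0e8        # m/s, speed of light
    g = 9.81         # m/s^2, gravitational acceleration
    delta_L = 0.30   # m, path-length difference L1 - L2 from the question

    delta_t = delta_L / c
    print(f"timing difference expected at speed c: {delta_t:.1e} s")   # ~1e-9 s

    # How far does a magnet fall from rest in that time?
    drop = 0.5 * g * delta_t**2
    print(f"free-fall displacement in that time:   {drop:.1e} m")      # ~5e-18 m, far too small to detect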

  • Which are the best transparent conductive polymers?

    The best transparent conductive polymers that I know of are PEDOT, PEDOT:PSS and poly(4,4-dioctylcyclopentadithiophene). In fact I am searching for a water- or organic-soluble polymer with good stability, transparency and conductivity. Are there any other polymers in this area?

    Sundeep Kumar Dhawan · National Physical Laboratory - India

    Polyisothianaphthene is a transparent conducting polymer. It is the best CP: it is transparent in the oxidized state and has a low bandgap.

  • Lawrence Kirkendall added an answer in Kenya:
    Does anyone have any information on blackberry genetic diversity/characterization (unambiguous)?

    I am characterizing blackberry germplasm in Kenya (genetic and morphological) and would like to know about the genetic diversity of blackberry around the world. Any information would be helpful for my research. Regards

    Lawrence Kirkendall · University of Bergen

    Not sure if this is relevant, but it might be interesting at least: a paper on founder effect and genetic diversity in Rubus alceifolius (Asia vs Réunion):

    Amsellem L et al. 2000. Mol. Ecol. 9: 443-455.

  • Could anybody suggest a suitable programme for proximate analyses of activated carbons by using TGA?

    I need to determine moisture, volatiles and fixed carbon for activated carbons using a thermogravimetric analyzer (TGA). I have found different programmes in the literature which differ slightly from one another. I'm just wondering which is the most suitable one to follow.

    I report the type of programme I was planning to use below:

    25-105 degC, 25 degC/min, N2
    105 degC, 10 min, N2
    105-900 degC, 25 degC/min, N2
    900 degC, (10 min), N2

    900 degC, (15 min), air

    Are there any steps I could omit, or should add or change?

    Any feedback is more than welcome!

    Thanks,

    Antonio

    Jerry Hughes Martin · United States Department of Agriculture

    Moisture, volatiles, and ash performed simultaneously is fast; however, the data is of low repeatability.  The repeatability is hampered because if the room humidity changes between the time the first sample is started and the last sample is finished then this change affects every following measurement.  This measurement drift is apparent in a climate with humidity variation like we have here in the Southeastern US.  Because of our climate we cannot reproduce any simultaneous TGA proximate analysis. 

    Separate moisture, volatiles, and ash tests have served us well.  Moisture is tested in an oven so all samples can be tested simultaneously to avoid the humidity change issue.  In addition to avoiding the change in humidity issue, drying outside the TGA puts a larger amount of dry material in the crucible thus improving the accuracy of the ash test for material with low ash contents.

    Before any TGA tests the crucibles with the samples are dried at 110°C for 15 minutes and allowed to cool.  The sample robot removes the pan and places it right back before doing the experiment.  This action shifts the sample and starts the weighing over so as to avoid many issues concerning loss of accuracy. 

    The volatiles and ash tests use the temperature profiles of ASTM D-3174 for ash and ASTM D-3175 for volatiles. Each test is run on a separate dry sample. An implicit part of the temperature profile which is not written in the methods is the cooling curve. The cooling curve must be included in a TGA proximate method for proper results, as this duplicates the action of allowing an oven to cool before withdrawing a sample for weighing.
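
    For completeness, here is a small sketch of the bookkeeping once the mass-loss steps have been recorded; the sample masses are invented, everything is reported on an as-received basis, and fixed carbon is taken by difference:

    # Proximate analysis by difference from TGA mass readings (illustrative numbers).
    m_initial = 10.00   # mg, as-received sample
    m_dry = 9.50        # mg, after the 105 degC hold in N2
    m_devol = 8.20      # mg, after the 900 degC hold in N2
    m_ash = 0.60        # mg, residue after switching to air at 900 degC

    moisture = 100 * (m_initial - m_dry) / m_initial
    volatiles = 100 * (m_dry - m_devol) / m_initial
    ash = 100 * m_ash / m_initial
    fixed_carbon = 100 - moisture - volatiles - ash

    print(f"moisture:     {moisture:.1f} %")
    print(f"volatiles:    {volatiles:.1f} %")
    print(f"ash:          {ash:.1f} %")
    print(f"fixed carbon: {fixed_carbon:.1f} %")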

  • Santhosh Gatreddi asked a question in Solubility:
    How do I dissolve ZnCl2 in water?

    I am trying to prepare a 0.1 M ZnCl2 solution, but it is forming a white precipitate.

    What are the factors that influence its solubility in water, or what might be a suitable solvent to dissolve it?

  • Can anybody tell me how to get a confocal image of biofilm by a pathogenic bacteria?

    I am working on biofilms of Pseudomonas aeruginosa, which is a known human pathogen. Every place I have searched tells me that they will not allow me to take confocal images of pathogenic bacteria. I want to know of a lab or a method for taking confocal images of Pseudomonas aeruginosa biofilms. Kindly suggest a place or a method; it may be an international lab as well.

    Suparna Dutta Sinha · Jadavpur University

    Hi Maciej, thank you for your reply.

    But the problem is that standard labs having CLSM do not allow us to take images, for fear of the biofilms spreading.

    Can you suggest a suitable lab which will allow me to...

  • Mary C R Wilson added an answer in Nurses:
    Are there publications highlighting bridges between health care assistants and registered nurses in the UK?

    Dear all,

    I am working on an article that gives some evidence of the role of qualified nurses' pay in assistant nurse (also called health care assistant) vacancies.

    As motivation, we note that in the UK secondments exist for these less qualified staff (assistant nurses) to become qualified nurses (called registered nurses in the US context).

    We would also need some figures showing how many ANs are taking up secondments. Alternatively, studies that highlight the motivations and professional lives of assistant nurses and provide figures on ANs being motivated to become RNs in the future.

    I hope it is more or less clear what I am looking for.

    Thank you,

    Hello again, Jean-Baptiste 

    These documents were found in a general search:

    https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/236212/Cavendish_Review.pdf

    http://www.nursinginpractice.com/article/student-nurses-must-work-hcas-hunt-says

    The articles below were from the RCN database, so I was not able to include links:

    Waldie, J. (2010). Healthcare assistant role development: a literature review. Journal of Advanced Perioperative Care, 4(2) 61-72.

    The following two are from the same issue of Nursing Standard:

    No author identified (2010). ‘Grave concern’ over plans to replace nurses with HCAs. Nursing Standard (Royal College of Nursing (Great Britain): 1987), 24(18), 9.

    Dean, E. (2010). Training is a 'lottery' for HCAs. Nursing Standard (Royal College of Nursing (Great Britain): 1987), 24(18), 12.

    Atwal, A., Tattersall, K., Caldwell, K., & Craik, C. (2006). Multidisciplinary perceptions of the role of nurses and healthcare assistants in rehabilitation of older adults in acute health care. Journal of Clinical Nursing, 15(11), 1418-1425.

    This article is about a course for HCAs that could lead on to higher training:

    Joy, P., & Wade, S. (2003). Opening the door to healthcare assistants and support workers: Penny Joy and Sîan Wade explain the background to and development of a course designed to enhance the careers of non-registered staff thereby improving the quality of care given to older people. Nursing Older People, 15(6), 18-20.

    Best wishes

    Mary