Question
Asked 6 November 2021

How do I know if crystalline nanocellulose is formed?

I am preparing crystalline nanocellulose in my laboratory, but I have a problem: how do I know whether crystalline nanocellulose has formed or not? I have a product in my hand, but I want to know whether the product is nanocrystalline cellulose or not. Thank you for your help.

All Answers (5)

Well, you need to do an XRD analysis. Using XRD you can measure the crystallinity of the cellulose and estimate its purity by Rietveld refinement.
To characterize the cellulose nanofibrils I recommend SEM or AFM analyses.
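As a concrete illustration of the XRD route, here is a minimal Python sketch of the widely used Segal peak-height method for the crystallinity index. The 2θ windows assume cellulose I measured with Cu Kα radiation, and the function name and window limits are illustrative assumptions, not taken from the answers in this thread; Rietveld refinement, mentioned above, is the more rigorous alternative.

```python
import numpy as np

def segal_crystallinity_index(two_theta, intensity):
    """Segal peak-height crystallinity index (CrI) for cellulose I:

        CrI (%) = (I200 - Iam) / I200 * 100

    I200 = maximum intensity of the (200) reflection (~22.5 deg 2-theta
    for cellulose I with Cu K-alpha radiation); Iam = minimum intensity
    of the amorphous halo in the valley near 18 deg 2-theta.
    """
    two_theta = np.asarray(two_theta)
    intensity = np.asarray(intensity)

    # Crystalline (200) peak height, searched in a window around 22.5 deg
    i200 = intensity[(two_theta >= 21.5) & (two_theta <= 23.5)].max()
    # Amorphous contribution: the valley between the (110) and (200) peaks
    iam = intensity[(two_theta >= 17.5) & (two_theta <= 19.0)].min()

    return (i200 - iam) / i200 * 100.0

# Usage on a measured diffractogram (arrays of angles and counts):
# cri = segal_crystallinity_index(angles_deg, counts)
```

Note that the Segal method is known to overestimate crystallinity, so treat the result as a relative measure for comparing samples rather than an absolute value.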
Limenew Abate
Addis Ababa Science and Technology University
Thank you very much
Hi,
You may determine the crystallinity of the cellulose via XRD and NMR techniques.
Please refer to the literature below for your kind reference.
Thanks
8 Recommendations
Kurt Haunreiter
MEGI Engineering
XRD will give you the relative amount of crystalline versus amorphous cellulose present, but it won't necessarily confirm that the product is CNC. Unless you have an amorphous standard specific to your biomass, it will provide relative values valid for your laboratory only. Crystalline nanocellulose and nanofibrillated cellulose have overlapping diameter specifications in the literature. Atomic force microscopy (AFM) would provide the diameter and length information that you need: a length of 0.05-0.5 μm would indicate crystalline nanocellulose, while > 1 μm would indicate nanofibrillated cellulose (see the short sketch after the references). Here are some papers that discuss aspect ratio and characterization:
1. Moon, R. J., Schueneman, G. T., & Simonsen, J. (2016). Overview of Cellulose Nanomaterials, Their Capabilities and Applications. JOM, 68(9), 2383–2394. doi: 10.1007/s11837-016-2018-7
2. TAPPI WI 3021 (2013). Standard Terms and Their Definition for Cellulose Nanomaterial. Atlanta, GA.
3. Börjesson, M., & Westman, G. (2015). Crystalline Nanocellulose — Preparation, Modification, and Properties. In Cellulose — Fundamental Aspects and Current Trends. doi: 10.5772/61899
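To make the length criterion above concrete, here is a minimal Python sketch that classifies AFM-measured particle lengths using the thresholds quoted in this answer; the function name and the sample measurements are hypothetical, added purely for illustration.

```python
def classify_by_length(length_um):
    """Rough classification of a cellulose nanoparticle from its
    AFM-measured length, using the thresholds quoted above:
    0.05-0.5 um suggests CNC, > 1 um suggests nanofibrillated
    cellulose (CNF). Lengths in between fall in an overlap region
    and remain ambiguous."""
    if 0.05 <= length_um <= 0.5:
        return "CNC (crystalline nanocellulose)"
    if length_um > 1.0:
        return "CNF (nanofibrillated cellulose)"
    return "ambiguous (overlap region)"

# Hypothetical AFM length measurements in micrometres
for length in [0.15, 0.42, 0.75, 2.3]:
    print(f"{length:5.2f} um -> {classify_by_length(length)}")
```

A batch of such measurements, together with the diameters AFM also gives you, is what lets you report the aspect-ratio distribution the cited papers discuss.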
1 Recommendation
A.V. Plastinin
Northern (Arctic) Federal University
Sorry, it is not my question.

Similar questions and discussions

Why so many published sensitivity analyses are false: a systematic review of sensitivity analysis practices
Discussion
2 replies
  • Nolberto Munier
Dear Andrea Saltelli, Ksenia Aleksankina, William Becker, Pamela Fennell, Federico Ferretti, Niels Holst, Sushan Li, Qiongli Wu
I read your paper:
Why so many published sensitivity analyses are false: a systematic review of sensitivity analysis practices
My comments:
1 – In the abstract you say “A proper uncertainty analysis of the output of a mathematical model needs to map what the model does when selected input assumptions and parameters are left free to vary over their range of existence”
Excellent! It is the first time that I have seen the quoted sentence, yet it is the core of SA, since a decision-maker (DM) cannot increase or decrease values as much as he wants, something that is ignored by those who claim to have performed an SA.
Another fundamental sentence is “but is distinct from uncertainty analysis, which instead addresses the question ‘How uncertain is the prediction?’”
This is true: uncertainty analysis is the key to performing a proper SA.
“The results, while discipline-dependent, point to a worrying lack of standards and recognized good practice”
Absolutely correct! I would say that most papers that perform SA ignore the basic elements of this procedure from the very beginning, by following the OAT (one-at-a-time) method, which goes against not only Systems Theory but also common sense, by choosing only one criterion, the one with the highest weight. That is intuitive, yes, but false, as can easily be demonstrated; it is the 'ceteris paribus' economic concept, rejected by most economists.
SA is a difficult issue, more complex than subjectively increasing the weight of a criterion, a procedure that is completely irrelevant and not accepted in SA mathematics, and it appears that many people do not know or understand this. The reason is very simple: weights are only a metric to arbitrarily quantify the relative importance of criteria, and they do not have the capacity to evaluate alternatives, as Shannon's theorem demonstrated in the 1950s.
2- On page 3 “it is crucial to keep in mind that the uncertainty in the assumptions that are outside the set of input factors has not been explored”
Another hit! The underlined sentence is also true, for we are considering only internal variables and do not consider external or exogenous ones, which have the potential to correct and even reverse the first decision made.
The authors mention something very important that is normally overlooked: the propagation of uncertainty, a fascinating concept indeed. Those who advocate using subjective criteria weights that modify the initial data do not realize that initial perturbations travel along the process up to the end, as in the top-down approach followed by perhaps 99% of methods. Even common sense indicates the fallacy of this procedure, when real objective and subjective data (from reliable sources) are modified by multiplying them by values that exist only in the mind of the DM.
From my point of view, I encourage using the bottom-up approach, starting from a solid base built on inputted data, as the authors show in Fig. 1.
However, I differ with the authors regarding what UA does. In my opinion, UA analyzes the external or exogenous factors that can affect the input and that have not been included in the decision matrix, due to their variability over time.
For instance, if an inputted value such as demand is one of the criteria that determines the best solution (a basic criterion), UA looks at how much it can vary without affecting the position of the selected alternative in the ranking. If the variation interval of this criterion is small, there is a risk for that alternative. The DM must then research the factors that directly affect demand but were not inputted in the decision matrix, for instance competition, quality, product life, etc., and find, by statistics, the probability of each of them affecting the input. That is, SA gives information to UA, which analyzes it and feeds the conclusion back to SA.
3- Page 4. “Once this is done, the next step could be to use sensitivity analysis to assign this uncertainty to the input factors. Sensitivity analysis allows us to infer that, for example, “this factor alone is responsible for 70% of the uncertainty in the output”.
I don't think I agree with SA giving that information, since there can be various criteria that determine the best alternative, and it is the combination of all of these criteria that defines the total impact. Remember that some criteria may call for maximization, others for minimization, and even equalization. Say, for instance, that criterion C1 'demand' (max) and criterion C2 'production cost' (min) are the two basic variables that affect a selected alternative or option A.
The MCDM method used must determine the range of variability of C1 and C2. Say that C1 has a large variation while C2 has a very small one. This means that these two criteria can be increased and decreased within different intervals without any change in the position of A; A is quite stable or strong with respect to the variation of C1, but could be risky with respect to C2, and if C2 has zero variation, then A is subject to a very high risk, no matter how safe it is under the variation of C1. Most probably, in this case, the DM will decide to ignore A and look for another alternative B, especially if the past performance and trend of cost (C2) show highs and lows that nobody can control. Obviously, C2 is more significant than C1.
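To make this stability idea concrete, here is a minimal weighted-sum sketch in Python; the decision matrix, the weights, and the normalization scheme are all hypothetical assumptions for illustration, not taken from the paper or from any particular MCDM method.

```python
import numpy as np

# Hypothetical two-alternative, two-criterion problem:
# C1 = demand (to maximise), C2 = production cost (to minimise).
X = np.array([[100.0, 18.0],   # alternative A
              [ 85.0, 17.0]])  # alternative B
w = np.array([0.5, 0.5])       # illustrative equal weights

def a_ranks_first(matrix):
    """True if alternative A (row 0) has the best weighted-sum score."""
    norm = matrix / matrix.max(axis=0)              # benefit criteria -> [0, 1]
    norm[:, 1] = matrix[:, 1].min() / matrix[:, 1]  # invert the cost criterion
    return (norm @ w).argmax() == 0

# Scan A's demand downwards to find the interval over which A keeps
# first place: the narrower this interval, the riskier A is.
for c1 in np.arange(100.0, 60.0, -1.0):
    trial = X.copy()
    trial[0, 0] = c1
    if not a_ranks_first(trial):
        print(f"A loses first place once its demand drops below about {c1:.0f}")
        break
```

The same scan applied to C2 would reveal the asymmetry described above: a wide allowable interval for C1 but a narrow one for C2 means A's ranking is hostage to the cost criterion.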
4- Page 4 “Sensitivity analysis is used for many purposes. Primarily it is used as a tool to quantify the contributions of model inputs”
Sorry, it appears that we have different points of view on the purpose of SA, for I have always known a different definition. I always understood that the main purpose of SA is to find out how strong or stable the solution found is. The contributions of the model inputs should already be in the initial decision matrix, as the performance values.
Now, contributions to what?
If you refer to the influence of criteria on the results, that corresponds to the subset of criteria that participate in selecting the best alternative, called 'basic criteria'; if you refer to their relative importance, that contribution exists under the name of 'marginal values' of the basic criteria.
5- Page 10. “The reasoning here was that the most highly cited articles should represent, on average, “commonest practice”
In this I agree with you, but common practice does not necessarily mean correct practice, and there are many examples in MCDM, for instance using pairwise comparisons and quantifying a preference.
6- Page 10 “This ensures that the paper has a significant focus on sensitivity analysis, that it is related to mathematical models, and concerns uncertainty (as opposed to e.g. design sensitivity analysis and optimisation, which is a separate topic)”:
It is hard to understand the difference. For me there is only one SA, and designing it means preparing the data for it by incorporating UA findings on external values.
7- On page 12 you mention a set of criteria adopted. I am in complete agreement with 1 and 2, but I do not understand the third. Why do you talk of a method of SA? I know only one, and it uses AAT instead of OAT; the latter has been rejected by many scholars and, in addition, goes against Systems Theory, since an initial matrix is a system. I do not understand either why you refer to 'models'. Do you refer to linear and non-linear ones? If so, why? Since in MCDM we normally work with linearity, although some methods introduce some non-linear concepts, I believe that you refer only to linear systems. Or am I mistaken?
8 – Page 12 “The identification of OAT and global sensitivity analyses is one of the focal points of this study.”
It appears that you are analyzing the UA scenario with only one selected criterion.
What happens if your problem, as is common, has three or more basic criteria, like demand, cost, and workers, and all three register large variations? Since they must operate at the same time, you have to consider them AAT (All At a Time), that is, simultaneously. So why spend time on OAT?
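A tiny numerical illustration of this point, with a toy model and ranges invented purely for this comment: because OAT holds all other factors at their nominal values, it can never excite an interaction term, while AAT (global) sampling does.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy model with an interaction term 5*x1*x2 that OAT cannot see,
    because OAT never moves x1 and x2 away from nominal at the same time."""
    x1, x2, x3 = x
    return x1 + x2 + 5.0 * x1 * x2 + 0.1 * x3

nominal = np.zeros(3)
lo, hi = -1.0, 1.0

# OAT: vary one factor at a time, others fixed at nominal (here 0),
# so the interaction term is always zero and its effect is missed.
oat_outputs = []
for i in range(3):
    for v in np.linspace(lo, hi, 50):
        x = nominal.copy()
        x[i] = v
        oat_outputs.append(model(x))

# AAT / global: vary all factors simultaneously over their full ranges.
aat_outputs = [model(rng.uniform(lo, hi, 3)) for _ in range(1000)]

print(f"OAT output range: [{min(oat_outputs):.2f}, {max(oat_outputs):.2f}]")
print(f"AAT output range: [{min(aat_outputs):.2f}, {max(aat_outputs):.2f}]")
# The AAT range is far wider: the x1*x2 interaction only shows up
# when both factors move at once, exactly the point made above.
```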
9- Page 13 “Clearly, this ignores the additional uncertainty in when more than one factor at a time is set to its maximum or minimum values”
This is reality, and I gave an example at the beginning of these comments.
10 – Figure 7 on page 19 very clearly illustrates and supports the title of your article, because it shows that about 3500 articles use OAT, which is considered a wrong procedure, versus about 400 that use global SA (which I call AAT). A very good finding indeed!
11- Page 19 “Table 1 shows that most papers are unsurprisingly focused on the application, i.e. on the model at hand, and not on the methods.”
This is a good surprise, at least for me, denoting that people think in terms of the problem, not of a method.
And, as you say, it is indeed encouraging.
12 - Page 20 “As discussed, if a model is linear, an OAT or derivative based approach is adequate”
Not in my opinion. Linear or not, it is a wrong procedure, because it reflects neither reality nor common sense.
13- Page 20 “This fragmentation hinders development of the subject and spreading of good practice, while simultaneously allowing malpractice to survive relatively unchallenged”
Completely agree!
14- Page 20 “More generally, researchers may not even be aware that global sensitivity analysis techniques exist” “Under these circumstances, it seems that researchers often revert to the more intuitive OAT approach”
Completely agree, on both the first and the second! But this happens in MCDM with other aspects too, like using a method that is not adequate for the problem, simply because the practitioner knows it and no other.
15 – Page 21 “…although mature global sensitivity analysis methods have been around for more than 25 years, this still may not be enough time for established good practice to filter down into the many research fields in which modelling is used”
For more than a decade there has been a model that takes care of this; it can work with any number of basic criteria simultaneously, and it is transparent.
16 – “Who or what scientific forum can then decide if a method is a good or a bad practice?”
For years I have been claiming this, and I have even written to MCDM organizations asking for quality control of methods. Answers? None.
17 – “With some exceptions, it is advisable to perform both uncertainty and sensitivity analysis. Once an analyst has performed an uncertainty analysis and is informed of the robustness of the inference, it would appear natural to ascertain where volatility/uncertainty is coming from”
This is exactly what I was suggesting in these comments, and I described it as feedback.
As a bottom line, I want to congratulate you guys for this paper, because what you say is true.
We also have to demolish old myths, such as using subjective weights, pairwise comparisons, or the absurd idea that what is in the mind of the DM can be transferred to the real world, or assuming that it is transitive, or that rank reversal (RR) is an unexplained phenomenon, when it is not a phenomenon but a consequence of changes in spatial dimensions, or using fuzzy methods fed with invented values, etc.
Thank you for reading these extensive comments.
Nolberto Munier

Related Publications

Chapter
In this chapter, methods of preparation, structure, properties, and applications of various kinds of nanocellulose are described and discussed. Currently five kinds of nanocellulose are known: crystalline nanoparticles, amorphous nanoparticles, nanofibrillated cellulose, bacterial nanocellulose, and cellulose nanoyarn that can be applicable in vari...
Article
A method of demonstrating cellulose utilization in biological systems by measurement of the turbidity of a cellulose suspension in a photo-electric absorptiometer is described. The crystalline styles of Ostrea edulis and Mytilus edulis appear to contain a cellulolytic factor as yet uncharacterized.