
Vectorization - Science topic

Explore the latest questions and answers in Vectorization, and find Vectorization experts.
Questions related to Vectorization
  • asked a question related to Vectorization
Question
1 answer
Hello everyone! I will summarize my situation: I am performing a PTEN knockout (KO) using CRISPR/Cas9 in prostate cancer cells (DU145 and 22Rv1). I am using a commercial vector that includes the guides, Cas9, puromycin resistance and GFP. I infected the cells with the virions carrying the guides, performed the selection with puromycin, and have even done a first western blot for PTEN. In this western blot of the cell pool I saw a large decrease of PTEN with respect to the wild-type cells, which means it has worked to some extent, and now I must isolate clones to obtain a complete KO. However, when I looked at GFP expression under the fluorescence microscope, I did not see any green cells (I also used GFP-expressing cells as a positive control to make sure there was no problem with the microscope). I don't quite understand why I see a decrease of PTEN in the western blot but cannot see GFP if, in theory, they are in the same vector (the vector is Merck's LVL01). Does anyone know what is going on? Is there an explanation? Should I start again from the beginning? Thank you very much in advance.
Relevant answer
Answer
I'm not familiar with your exact system, but it might be that the GFP is just your positive control for the transfection/transduction. Later, when the vector is diluted out by proliferation and selection, the GFP signal is gone, since it is only expressed while the vector DNA is still present in the cells.
I would assume that if PTEN is gone, the KO was successful.
You might want to try to amplify the site of your frame shift via PCR from genomic DNA, clone it and sequence some of the clones.
Best wishes
Soenke
  • asked a question related to Vectorization
Question
2 answers
I'm still an amateur and don't know where researchers usually get their materials for biological research. I would like to ask where to obtain a lentiviral vector carrying the OSKM factors for reprogramming.
Relevant answer
Answer
Addgene is always a good place to start.
  • asked a question related to Vectorization
Question
1 answer
I am trying to figure out how to make more TOPO vector. I'm using the pCR4Blunt-TOPO vector, which is already linearized. The kit contains only 25 µL of 10 ng/µL TOPO vector, and I have to do several transformations using this vector. I was therefore wondering whether it is possible to make more TOPO vector by doing a transformation without adding any insert; basically, I want the TOPO vector to self-ligate. The vector has blunt ends and, as far as I understand, phosphates are missing from the ends of the backbone, making it harder to self-circularize. Any suggestions on how this could be done?
Relevant answer
Answer
Dear Selma,
This answer is three years late, but maybe it will still be of use to you or others.
I suspect what you want to do is make more of the TOPO plasmid so that you can perform lots of TOPO cloning reactions without needing to buy lots of the kit.
In short, you won't be able to do that without quite a bit of effort.
The long explanation is that the pCR4-Blunt TOPO vector contains the coding sequence for the ccdB toxin in frame with a lacZα coding sequence under a bacterial expression promoter. Linearisation of the vector breaks this lacZα-ccdB coding sequence. If the vector self-ligates, any bacteria taking up self-ligated vector will express the ccdB toxin and be killed. This is designed to provide selection for colonies that have taken up plasmid with an insert.
You can get around this to obtain a circular pCR4 plasmid that you can amplify in E. coli by introducing essentially any PCR product you want, but that will still not allow you to do TOPO cloning.
To do TOPO cloning, you would need to clone some restriction sites into the plasmid, and also clone, express and purify Vaccinia virus Topoisomerase 1B (you can get the coding sequence in a plasmid from Addgene), then treat your modified circular pCR4-Blunt vector with said restriction enzymes and Topoisomerase 1B. This paper describes the steps: https://pubmed.ncbi.nlm.nih.gov/26422141/
To make the TOPO cloning kit go further, I have had success simply halving the amount of each reagent I put into the reaction. TOPO cloning reactions are so efficient that I never had a problem doing so.
Best wishes,
Chris
  • asked a question related to Vectorization
Question
2 answers
Hello All,
I'm trying to clone a 1.5 kb fungal gene into the pJET cloning vector (Thermo Scientific), which has 98% selection capability, and I'm verifying the clone by PCR, but every time after sequencing I get a vector sequence rather than the specific fragment I'm targeting. Can anyone who is aware of this problem offer suggestions?
Relevant answer
Answer
Hello Sumit, more information would help us troubleshoot. Can you kindly attach your protocol? Alternatively, you can try RecA-independent recombination (RAIR) cloning; see the publication below. You just need four primers (two to amplify the vector and two to amplify the insert) where the 5' and 3' ends of the resulting PCR amplicons are homologous to one another. Following PCR, you just add 1-2 microliters of each reaction to heat-shock-competent cells, perform the heat-shock protocol, plate the cells, and perform colony PCR the next day to select the positive clones. Most common laboratory strains of E. coli are capable of RAIR. Good luck!
  • asked a question related to Vectorization
Question
4 answers
I want to clone my 60 bp insert into my approximately 7000 bp vector. I did the ligation with a vector to insert ratio of 1:10. Either very few or no colonies grew. Also my colony PCR results were negative. How should I change the vector insert ratio?
Relevant answer
Answer
Hello Elif, I agree with Eddy: we need more information to provide useful feedback. It sounds like your competent cells or transformation protocol may be part of the issue. Have others in your lab had issues with those cells? Could you attach the protocol you used? Honestly, your insert seems small enough that you may be able to order just two primers to do your cloning. For example, each primer could be 45 nucleotides long (just making a number up), where 15 anneal to the vector and the other 30 are part of your insert. Following PCR, just do a DpnI digestion, PCR cleanup, and ligation, and then transform your cells. You would need to account for melting temperatures, secondary structures, etc., given the length of the primers, but it is just an idea. You could also try RecA-independent recombination (RAIR) cloning; see the publication below. Most common laboratory strains of E. coli are capable of RAIR. All you would need to do is buy four primers, two to amplify the vector and two to amplify the insert, with the 5' and 3' ends being homologous to one another. After PCR, add 1-2 microliters of each reaction to heat-shock-competent cells, carry out the heat-shock protocol, plate the cells, and do colony PCR the next day to screen for successful clones. Good luck!
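As a side note on the ratio arithmetic in the question: because moles of dsDNA scale with fragment length, a molar excess of a 60 bp insert over a ~7000 bp vector corresponds to a very small mass of insert, which is easy to mis-pipette. A minimal sketch of the calculation (the numbers are illustrative, not from any protocol):

```python
def insert_mass_ng(vector_ng, vector_bp, insert_bp, molar_ratio):
    """Mass of insert (ng) needed for a given insert:vector molar ratio.

    Moles of dsDNA scale with length, so:
    insert_ng = vector_ng * (insert_bp / vector_bp) * molar_ratio
    """
    return vector_ng * (insert_bp / vector_bp) * molar_ratio

# Example: 50 ng of a 7000 bp vector with a 60 bp insert.
for ratio in (3, 10, 50):
    ng = insert_mass_ng(50, 7000, 60, ratio)
    print(f"{ratio:>2}:1 insert:vector ratio -> {ng:.2f} ng insert")
```

Even at 10:1 this is only a few nanograms of insert, so an intermediate dilution of the insert is usually needed to pipette it accurately.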
  • asked a question related to Vectorization
Question
2 answers
Hello everyone,
I designed primers for Gateway cloning, but I'm unsure about the sequences I need to add. I plan to transfer my genes, along with their native promoters, into a donor vector. I added GGGG plus the attB1 and attB2 sequences to the forward and reverse primers, respectively, followed by the gene-specific sequence. According to the information I gathered, I need to make changes in the stop-codon region depending on the destination vector I will use for the LR reaction. My destination vector contains a GFP tag. In this situation, should I design my reverse primer starting from the stop-codon-containing region and eliminate the stop codon? If you have any additional insights that might be important for this experiment, could you please share them?
Thank you...
Relevant answer
Answer
Hello Busra, I do not fully understand your cloning setup, but you can explore using RecA-independent recombination or RAIR cloning as an alternative. RAIR cloning is pretty straightforward, but becomes more difficult to achieve depending on the number of inserts one is trying to clone into the vector. See the publication below. Good luck!
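On the stop-codon point in the question above: for a C-terminal tag (such as GFP in the destination vector), the reverse attB2 primer is normally designed without the stop codon so that translation reads through into the tag. A small sketch of that primer construction is below; the adapter sequences are the ones commonly published in the Gateway manual, but verify them against your kit's documentation before ordering, and the ORF and helper functions are hypothetical illustrations, not part of any kit protocol.

```python
# attB adapter sequences as commonly published for Gateway cloning
# (an assumption here -- verify against your kit's manual):
ATTB1 = "GGGGACAAGTTTGTACAAAAAAGCAGGCT"   # prepended to the forward primer
ATTB2 = "GGGGACCACTTTGTACAAGAAAGCTGGGT"   # prepended to the reverse primer

def revcomp(seq):
    """Reverse complement of an uppercase DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def attb_primers(orf, anneal_len=21, cterm_fusion=True):
    """Build attB-tailed primers for a hypothetical ORF (5'->3',
    starting at ATG and ending in a stop codon). For a C-terminal
    fusion the stop codon is dropped so translation continues into
    the tag; always check the reading frame against the destination
    vector map."""
    body = orf[:-3] if cterm_fusion else orf   # drop the stop codon
    fwd = ATTB1 + orf[:anneal_len]
    rev = ATTB2 + revcomp(body[-anneal_len:])
    return fwd, rev

# Toy 24-nt ORF (hypothetical sequence, for illustration only):
fwd, rev = attb_primers("ATGGCTTCTAAAGGTGAAGAATAA", anneal_len=12)
print(fwd)
print(rev)
```

Primer melting temperature and secondary structure of the gene-specific portion still need to be checked by the usual tools; the sketch only handles the adapter/stop-codon bookkeeping.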
  • asked a question related to Vectorization
Question
3 answers
I have been struggling to get clones with the LR Plus reaction when cloning three fragments into a destination vector. I use 5 fmol of each entry clone and 10 fmol of the destination vector. I first calculate fmol/µL as (concentration in ng/µL × 10^6) / (size in bp × 660), then calculate the dilution and set up the reaction. For example, if an entry clone is at 29.9 fmol/µL, to bring it to 5 fmol/µL I take 1.67 µL of the plasmid, dilute it in 8.33 µL of water, and take 1 µL from that for the reaction. So I am adding 1 µL of each of the three entry clones (5 fmol each), 1 µL of destination vector (10 fmol), and 1 µL of LR Plus Clonase. Can anyone suggest what I am doing wrong?
Relevant answer
Answer
Gayathri, your protocol is consistent with the Invitrogen protocol, so I am unsure whether concentration is the issue. A few questions: How exactly are you quantifying the vectors? Are you transforming into commercial or home-made heat-shock cells, and are they the same One Shot cells? The Invitrogen manual says, “We recommend using plasmid DNA purified with the PureLink™ HiPure Plasmid Midiprep Kit. Mini-prep (alkaline lysis) DNA preparations are not recommended for MultiSite Gateway™ Pro cloning reactions. DNA cannot be quantitated by UV absorbance because of contaminating RNA and nucleotides.” How are your plasmids being isolated? You can linearize your vectors using either PCR or restriction enzyme digestion. The former may be more desirable because you can then DpnI-digest the original supercoiled vectors, which would otherwise be preferentially taken up by cells. The smaller original vector will always be preferentially taken up by the cells compared to a larger vector (generated by LR Clonase). I also assume that the LR Clonase reaction is not complete and that some of the original vectors have not undergone recombination. Have others in your lab tried LR cloning, and have they had these issues? Thanks!
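For what it is worth, the fmol arithmetic in the question above checks out. A minimal sketch of the same calculation, using the example numbers from the question:

```python
def fmol_per_ul(conc_ng_per_ul, size_bp):
    """Convert a dsDNA concentration (ng/uL) to fmol/uL.

    One bp of dsDNA averages ~660 g/mol, so:
    fmol/uL = (ng/uL * 1e6) / (bp * 660)
    """
    return conc_ng_per_ul * 1e6 / (size_bp * 660)

def dilution(stock_fmol_per_ul, target_fmol_per_ul, final_ul):
    """Volumes of stock and water for a simple C1*V1 = C2*V2 dilution."""
    v_stock = target_fmol_per_ul * final_ul / stock_fmol_per_ul
    return v_stock, final_ul - v_stock

# Example matching the question: a stock at 29.9 fmol/uL diluted so
# that 1 uL delivers 5 fmol (i.e. 5 fmol/uL), in a 10 uL dilution.
v, w = dilution(29.9, 5.0, 10.0)
print(f"mix {v:.2f} uL stock with {w:.2f} uL water")
```

This reproduces the 1.67 µL + 8.33 µL split described in the question, so the problem is more likely in DNA quality or the transformation than in the molar bookkeeping.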
  • asked a question related to Vectorization
Question
3 answers
Hi there!
I am trying to clone with the LR Clonase enzyme (LR Clonase II, Thermo Fisher). I have an entry vector for each gene of interest (genes approximately 120 and 490 bp in length), which I am trying to recombine with the destination vector (Gateway pK2GW7). I performed the recombination as indicated in the protocol and then transformed E. coli DH5α, but I am unable to obtain colonies with the constructs. The enzyme works.
Things I tried:
- Check the sequences of the entry vectors: they are correct and checked by sequencing.
- Linearize the entry vectors
- Use 150 ng of destination vector and 10 ng of entry vector for the recombination.
- Leave the recombination overnight.
- Cut a fragment of the entry vector to linearize it, because it recircularizes and has the same antibiotic resistance as the destination vector. The few colonies I got after transforming carried the entry vector.
I don't know what else to try to get the clones. What do you recommend? Do you think the size of my genes affects the recombination in any way?
Thanks to all!
Relevant answer
Answer
Lourdes, I am recommending that you PCR amplify the vector AND inserts to linearize, DpnI digest to get rid of the original templates (by virtue of them being methylated), PCR cleanup, and then proceed with the LR clonase reaction. Of course check each PCR product before and after PCR cleanup. Good luck!
  • asked a question related to Vectorization
Question
4 answers
I'm trying to perform a ligation, cleaning up the insert and vector prior to ligating them. The concentration comes down very low after the cleanup, which is not giving good results. Please help.
Relevant answer
Answer
If you start with more molecules, then you end up with more molecules. Try a larger volume & concentration of the vector & insert.
If possible, don't do a cleanup step at all. If your enzyme can be heat-inactivated, you can skip the cleanup. If the insert is a PCR product and it's a single, bright band on the gel, then just use the PCR product directly in the ligation reaction and skip the cleanup.
Are you phosphatase treating the vector? It will make a huge difference in reducing the self-ligated vectors.
  • asked a question related to Vectorization
Question
1 answer
Hello everyone,
I am working on a project focused on digitizing and vectorizing archival maps and forest inventory sheets. The goal is to integrate these documents into a spatial database to analyze forest dynamics influenced by fires, logging, and climate change.
I am looking for ways to automate the digitization and vectorization processes, particularly using shape recognition, image segmentation, and machine learning tools to convert map features into vector polygons.
If you have recommendations for effective tools or methodologies for these types of documents, I would greatly appreciate your insights. I will also be sharing some visual examples to illustrate these maps and sheets.
Thank you in advance for your suggestions!
Relevant answer
Answer
Automated vectorization of legacy maps has been a 'Holy Grail' since the origins of GIS in the 1960s ( https://www.cia.gov/stories/story/the-mapmakers-craft-a-history-of-cartography-at-cia/#the-1960s ) - and sixty years later it is still a persistent problem. And it still has the same parameters: the ROI of automation vs. manual work, i.e. are you doing 10 or 10,000? What is the quality of the scanning? How dense and consistent is the symbology? How similar are the documents to one another? For instance, a utility company's archive produced by the same drafting department under standard rules might be very tractable for automation, while 10 medieval manuscripts would be a career by themselves.
There are existing open-source and commercial products already, so why not test those first to see whether they meet your quality threshold - for instance, just gathering approximate areas versus producing a high-quality cartographic product. There are tools outside of GIS as well; I've used Adobe Photoshop to very good effect because it has built-in preview tools for tuning transformations, despeckling, etc. - and dealing with half-tones and hachures is somewhat possible ( https://en.wikipedia.org/wiki/Hachure_map ).
Automation introduces a large, often unrecognized and unpredictable QA task plus tuning iterations, while manual digitizing can be estimated linearly by doing a small sample and extrapolating. So don't forget https://www.mturk.com/
  • asked a question related to Vectorization
Question
3 answers
Hello, I performed an in vitro transcription (IVT) of a PCR product cloned into the PCR 4.1 vector, using T7 RNA polymerase. I need this transcript as a control for quantitative RT-PCR. How can I ensure that T7 transcribes only the cloned insert and not beyond, given that the vector does not have a specific termination site for the enzyme?
Relevant answer
Answer
You should digest the vector with a restriction enzyme that cuts immediately downstream of your insert and use the linearized vector as your IVT template.
  • asked a question related to Vectorization
Question
4 answers
I have been doing Gibson assembly for months: amplifying the vector and inserts separately, then running the assembly reaction at 50 °C for an hour, following all the instructions, with 100 ng of vector DNA and an excess of insert (I plan to insert one gene into the vector). I screen the transformed colonies by plasmid purification and get some apparently positive results based on the amplicon length, and likewise for the gene length, but when I send them for sequencing, all come back negative, containing only empty vector. I tried rechecking the result by single RE digestion, but the result is odd (the linear DNA does not align with its expected length against the NEB marker). What should I be checking? Any suggestions? Where does the problem lie?
#cloning #Gibsonassembly
Relevant answer
Answer
Sounds like a problem with linearized vector not the assembly itself. What enzymes do you use for the digest? How do you purify the linearized vector before the assembly?
  • asked a question related to Vectorization
Question
2 answers
In the first figure (empty vector), only a short linker is in front of EGFP. EGFP is followed by IRES2 and mCherry. Cells transfected with this empty vector express only mCherry; I cannot detect any EGFP-positive cells by flow cytometry, which means EGFP is not expressed. Theoretically, the EGFP:mCherry ratio should be equal to 1.
However, in the second figure, I inserted the sequence of POI in front of the short linker and EGFP. This time, EGFP can be expressed.
So I am wondering if that is something wrong with my empty vector that leads to EGFP not being expressed.
Relevant answer
Answer
Hi Yannick, the Kozak sequence already includes the ATG start codon (upper figure), so I just let the EGFP sequence start from Val (aa 2). I have another empty vector_2 (bottom figure), which has a sequence identical to the one I posted except for the linker sequence in front of EGFP. Empty vector_2 expresses EGFP normally in cells.
  • asked a question related to Vectorization
Question
31 answers
This discussion concerns the positivist versus the realist interpretation of quantum non-locality in the framework of the EPRB experiment. It is about the possibility of turning this question of interpretation into a falsifiable proposal: the conservation (or not) of 2-time correlations on Bob's side as long as only Alice performs polarization measurements.
More precisely, the article "Each moment of time a new universe" (https://arxiv.org/abs/1305.1615) by Aharonov, Popescu and Tollaksen, presents:
  • a T-symmetric formulation of the temporal “evolution” of a quantum system which does not evolve (H=0)
  • a very important consequence predicted thanks to this formulation concerning the interpretation of the EPRB experiment.
Cf. this very interesting 8-page article (https://arxiv.org/pdf/1305.1615) and a video presented by Popescu (https://www.youtube.com/watch?v=V3pnZAacLwg).
Thanks to their 2-state vector T-symmetric formalism (https://arxiv.org/abs/quant-ph/0105101), Aharonov, Popescu and Tollaksen notably highlight the following facts:
  • as long as no quantum measurement is carried out on a given quantum system (undergoing a H=0 Hamiltonian evolution) the 2-time measurement O(t2) - O(t1) between instants t1 and t2 vanishes whatever the observable O. This proves the existence of a time correlation between successive states of a quantum system as long as it doesn't undergo any quantum measurement.
  • On the contrary, the correlation O(t2)-O(t1) = 0 is broken between instants t1 and t2 respectively preceding and following a quantum measurement (except in the specific cases when the measurement result is an eigenstate of O).
Concerning the EPRB-type experiment, this document states (§ Measurements on EPR state):
  • The break, on Alice's side, of the 2-time correlations between instants t1 and t2 preceding and following a quantum measurement by Alice. Indeed, except in a particular case when the measurement result is an eigenvalue of O, the 2-time correlation O(t2) - O(t1) = 0 is lost.
  • The conservation, on Bob's side, of the 2-time correlations O(t2) - O(t1) = 0 as long as Bob doesn't make any measurements on his side.
Thus, the 2-state vector time-symmetric formalism shows the asymmetry of the quantum state obtained, during an EPRB experiment, after a measurement carried out on one side only. That asymmetry doesn't show up in the standard formulation. Consequently, the standard one-state vector time-asymmetric quantum formalism suggests a (hidden) relativistic causality violation. On the contrary, the conservation of the 2-time correlation in the 2-state vector formalism provides, in my view, a proof that, on Bob's side, nothing happens as long as only Alice carries out quantum measurements on her side.
This seems to provide a testable prediction, allowing us to decide between:
  • a realist interpretation of the EPRB experiment where the quantum state is interpreted as the model of an objective physical state (cf. On the reality of the quantum state, https://arxiv.org/abs/1111.3328) and the reduction of the wave packet as instantaneous, non-local AND objective, cf.:
- Special Relativity and possible Lorentz violations consistently coexist in Aristotle space-time https://arxiv.org/abs/0805.2417 ...
  • on the contrary, a positivist interpretation of the EPRB experiment, where the instantaneous and "non-local reduction of the wave packet" is interpreted as an irreversible and local record of information, hence available to be read by an observer carrying out the measurement, without any objective change of the state of Bob's photons when only Alice performs polarization measurements on her photons. cf.:
When only Alice carries out measurements on her side, the prediction of the conservation of the 2-time correlation on Bob's side, resulting from the 2-state vector time-symmetric formalism, decides, in my view, in favor of the positivist interpretation of the EPR non-locality. In my view, the positivist interpretation becomes a falsifiable physical postulate instead of a pure philosophical question.
Such an experimental verification seems to settle a 40-year debate between positivist and realist interpretations of Bell's inequality violations. Hence, this experimental validation seems worth carrying out (but I don't know whether it has already been achieved).
Would you agree with this view?
(1) Note, however, that E.T. Jaynes supports a realist interpretation of physics and its role despite, paradoxically, his insistence on the importance of Bayesian inference and the broad development he gave to this approach (cf. Maxent https://bayes.wustl.edu/etj/articles/rational.pdf)
Relevant answer
Answer
PRESIDENT JIMMY CARTER APPARENT INITIATION OF A SCIENTIFIC RENAISSANCE
Good morning Bernard Chaverondier . Thank you for the courtesy of your detailed and documented report that I did read with attention and found it correct under the assumed axioms. Perhaps, it may be of interest to indicate the connection between your studies and those I initiated
in the late 1970s at Harvard University with DOE support under the Presidency of Jimmy Carter, and continued thereafter.
Being a nuclear physicist, Jimmy Carter was fully aware of: the limitations of quantum mechanics in nuclear physics [1] (that is, in his words under strong interactions); the "EPR Argument" referred to Einstein's view that "quantum mechanics is not a complete theory"; and all that. Hence, to my knowledge President Carter was seriously committed to launch a scientific renaissance reminiscent of the renaissance initiated by Lorenzo dei Medici in Italy half a millennium earlier. A main point is that Carter's scientific renaissance was based on the "EPR completion" of quantum mechanics for STRONG NUCLEAR INTERACTIONS because the accurate representation of said interactions is expected to require a generalization of the totality of pre-existing mathematics and physics and, therefore, of chemistry and biology.
According to President Carter, the locality of quantum axioms (read: the point-like approximation of particles and wave packets) is fully acceptable for atomic structures, due to the large relative distances involved, while, by contrast, the representation of nuclear structures requires representing the actual extended size of protons and neutrons, since they are in conditions of mutual penetration within a nuclear structure (President Carter knew the experimental evidence that nuclear volumes are generally "smaller" than the sum of the volumes of the constituent protons and neutrons). The deep mutual overlapping of hyperdense charge distributions then implied that strong interactions are non-local in the sense of occurring in volumes not reducible to points (as pioneered by the Duke de Broglie and David Bohm), with consequent non-linearity as a dependence on powers of the wave functions (pioneered by W. Heisenberg), as well as a contact, thus zero-range, non-potential character (as pioneered by R. M. Santilli in 1978 under DOE contract via the conditions of variational self-adjointness [2]).
To understand Carter's intended scientific renaissance, one should note that in the late 1970s the mathematics, let alone the physics, capable of representing non-local, non-linear and non-Hamiltonian strong interactions did not exist and therefore had to be built.
In the late 1970s it was known that I had dedicated my Ph. D. studies in the 1960s to the EPR completion of quantum mechanics into an irreversible form, today known as hadronic mechanics [3], to represent nuclear fusions, combustion and living organisms
via the Lie-admissible generalization of Heisenberg's equation. On the day of my arrival at Harvard on September 8, 1977, following a four-year stay at MIT, Howard Georgi (my supervisor at Harvard) received a
phone call in my presence from the Australian David Peaslee of the DOE (then ERDA) inviting Harvard to apply for a research grant primarily intended for the EPR completion of quantum mechanics for strong nuclear interactions between extended, thus overlapping, nucleons. Since David Peaslee did not have the authority for the indicated invitation, following the clarification that the invitation originated from high levels (later confirmed by Peaslee as originating from President Carter himself), Harvard University did apply for a grant, resulting in the DOE grant numbers ER-78-S-02-47420.A000, AS02-78ER04742, DE-ACO2-80ER10651, DE-ACO2-80ER-10651.A001 and DE-ACO2-90ER10651.A002, which produced a monumental amount of publications by scholars the world over, including the proceedings of several workshops and formal conferences [4].
To understand the connection between our studies of the "EPR argument" and yours of the "EPR effect," I have to go back to the late 1970s and clarify that President Carter was interested in the "EPR completion" (rather than a "generalization") of quantum mechanics for basic advances in strong interactions, for which task I was transferred from the Lyman Laboratory of Physics to Harvard's Department of Mathematics, where I proposed the completion of quantum mechanics into the Lie-isotopic branch of hadronic mechanics for closed-isolated systems with Hamiltonian and non-Hamiltonian internal forces (such as stable nuclei) via the "axiom-preserving" (rather than "generalized") isoproduct [6]
A*B = ASB, S > 0, of the millenary-old quantum associative product AB. The quantity S, nowadays (half a century later...) called the Santillian [6], represents the actual dimensions of nucleons and their non-Hamiltonian interactions, with related isounit \hat 1 = 1/S and a compatible generalization of 20th-century applied mathematics and physics [3].
The above basic assumption allowed a new conception of entanglement [7], an explicit and concrete realization of Bohm's "hidden variables" λ = S as being hidden in the axiom of associativity [8], the inapplicability of Bell's inequalities [9], and the progressive recovery of Einstein's determinism under strong interactions [10] (see reviews [5] and [11]), with advances simply impossible under the Copenhagen realization of quantum axioms, such as Refs. [12]-[15].
In summary, it appears to me that our respective lines of study are mutually supportive. In fact, you can interpret each and every product AB in your studies as being isotopic, A*B = ASB, thereby admitting a deep non-Hamiltonian entanglement between wave packets [7], under which your studies on measurement seem to be reinforced.
The best way would be to develop the still-unexplored "hadronic measurement theory," which we can discuss in case you are interested.
Needless to say, I remain at your disposal for whatever I can do.
In closing, I wonder whether some colleague at ResearchGate can bring "Carter's initiation of a scientific renaissance" to the attention of the administration of President-elect Donald Trump, in the hope that he can follow the call of Lorenzo dei Medici and Jimmy Carter and, in view of the advances permitted by a scientific renaissance in all fields, be remembered in history for millennia to come.
Sincerely
Ruggero Maria Santilli
REFERENCES
[1] R. M. Santilli, "Lie-isotopic representation of stable nuclei I: Apparent insufficiencies of quantum mechanics in nuclear physics," Ratio Mathematica 52, 43-63 (2024).
[2] R. M. Santilli, Foundations of Theoretical Mechanics, Vol. I: The Inverse Problem in Newtonian Mechanics, Springer-Verlag, Heidelberg, Germany (1978).
[3] R. M. Santilli, Elements of Hadronic Mechanics, Ukraine Academy of Sciences, Kiev, Vol. I (1995), Vol. II (1994).
[4] R. Anderson, Publications under DOE grant numbers ER-78-S-02-47420.A000, AS02-78ER04742, DE-ACO2-80ER10651, DE-ACO2-80ER-10651.A001 and DE-ACO2-90ER10651.A002.
[5] R. M. Santilli, "Need of subjecting to an experimental verification the validity within a hadron of Einstein special relativity and Pauli exclusion principle," Hadronic Journal 1, 574-901 (1978).
[6] Editorial Board, "Progressive Recovering of Einstein's Determinism under Strong Interactions," Scientia, November 2024.
[7] R. M. Santilli, "A quantitative representation of particle entanglements via Bohm's hidden variables according to hadronic mechanics," Progress in Physics 18, 131-137 (2022).
[8] R. M. Santilli and G. Sobczyk, "Representation of nuclear magnetic moments via a Clifford algebra formulation of Bohm's hidden variables," Scientific Reports 12, 1-10 (2022).
[9] R. M. Santilli, "Isorepresentation of the Lie-isotopic SU(2) Algebra with Application to Nuclear Physics and Local Realism," Acta Applicandae Mathematicae 50, 177-190 (1998).
[10] R. M. Santilli, "Studies on A. Einstein, B. Podolsky and N. Rosen prediction that quantum mechanics is not a complete theory, I: Basic methods," Ratio Mathematica 38, 5-69 (2020).
[11] A. Muktibodh, "Santilli's recovering of Einstein's determinism," Progress in Physics 20, 26-34 (2024).
[12] R. M. Santilli, "Lie-isotopic representation of stable nuclei II: Exact and time invariant representation of the Deuteron data," Ratio Mathematica 52, 64-130 (2024).
[13] R. M. Santilli, "Lie-isotopic representation of stable nuclei III: Exact and time invariant representation of nuclear stability," Ratio Mathematica 52, 131-175 (2024).
[14] R. M. Santilli, "Reduction of Matter in the Universe to Protons and Electrons via the Lie-isotopic Branch of Hadronic Mechanics," Progress in Physics 19, 73-99 (2023).
[15] R. M. Santilli, "Apparent Resolution of the Coulomb Barrier for Nuclear Fusions Via the Irreversible Lie-admissible Branch of Hadronic Mechanics," Progress in Physics 18, 138-163 (2022).
  • asked a question related to Vectorization
Question
3 answers
I have been trying to clone a genomic region (Promoter + InEx) totalling close to 10 kb. So far it has been unsuccessful with restriction-based cloning, and I have opted for a Gateway strategy, which has now also become a headache. I am amplifying my fragment with attB1 and attB2 sites added to my primers and then purifying it from the gel. I have also done the calculations to keep the reaction equimolar with the pDONR201 plasmid, and I leave the reaction incubating overnight (up to 18 hours). Afterwards I transform DH10B E. coli and end up with 1 or 2 colonies growing on kanamycin plates, if any at all, and only after a long incubation. I have also tried a carbenicillin-resistant version of pDONR221 and have only gotten empty vectors, which in principle should not happen given the presence of ccdB in the vector. I have also tested different BP Clonase II batches and have ruled out the enzyme mix.
If anyone has any experience in cloning fragments this size or has had to deal with any of these issues before I would appreciate any input!
Relevant answer
Answer
Hello, Carolina. I hope you are well.
Were you able to get the construct successfully? I am encountering a similar problem setting up an LR reaction for a 12 kb fragment. I got colonies from the first reaction I set up, but a gel run plus RE digestion showed that the LR was unsuccessful. I am confused by this result, as an unsuccessful reaction should have yielded no colonies due to the presence of the ccdB gene in the plasmid. Any help or advice would be appreciated. Thank you!
  • asked a question related to Vectorization
Question
3 answers
I'm experiencing persistent issues with cloning three genes (1.1 kb, 1.2 kb, and 2.1 kb) into a 5.4 kb vector. Despite multiple attempts using the same conditions, I consistently fail to get colonies on the transformation plates. My positive control (undigested plasmid) works fine, and the negative control (restricted plasmid) shows no colonies, suggesting that my transformation and restriction digestion are working correctly.
However, I’ve been struggling with ligation, even though in one of my attempts last week I managed to get a single colony, which was confirmed positive. Although the insert band (1.2 kb) from purified plasmid after restriction digestion was faint, the PCR amplification from the colony gave the expected size, leading me to assume the colony was positive. Since then, using the same controls and protocol, I've tried cloning the other two genes but have not obtained any colonies. Here is the detail of the steps I am following along with the controls.
Step-by-Step Procedure
1- PCR Amplification
The PCR amplification is working well, with clear bands of the expected sizes.
2- Gel Extraction
I'm using the GeneJET Gel Extraction Kit to purify the PCR products after electrophoresis. The elution buffer is warmed, and I elute the gel-purified product in 20 μL of buffer.
3- Restriction Digestion
I'm using Thermo Fisher Fast Digest NdeI and BamHI enzymes for the restriction digestion. As a control I am doing restriction digestion of a recombinant plasmid using the same enzymes and getting two bands for vector and insert, confirming the digestion is successful.
4- Purification and gel analysis
After restriction digestion, I purify the digested genes and vector from gel using the same GeneJET Gel Extraction Kit with warm elution buffer, eluting the 50 μL restriction reaction in about 20 μL of elution buffer. After cleanup, I run about 4-5 μL of the cleaned products on a gel to confirm that I have the genes and vector to ligate.
5- Ligation
For ligation, I’m using a 1:1 or 1:3 vector-to-insert ratio with the following reaction setup:
Water: 10 μL
Vector (50 ng): 3 μL
Insert (40 ng): 4 μL
Ligase buffer: 2 μL
T4 DNA ligase (1 μL)
The ligation reaction is carried out at 16 °C for 3 hours, followed by 22 °C for 30 minutes and a 10-minute inactivation at 80 °C. I have also tried ligation at 16 °C for 30 minutes, 1 hour, 2 hours, and overnight. The recommended ligation time with my enzyme is 15 to 30 minutes.
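As a sanity check on the amounts above, the vector:insert molar ratio can be worked out from the fragment sizes. The helper below is a hypothetical illustration (not part of any kit protocol):

```python
# Hypothetical helper: insert mass (ng) needed for a given vector:insert
# molar ratio, from fragment lengths and the vector mass used.

def insert_mass_ng(vector_ng, vector_bp, insert_bp, molar_ratio):
    """Mass of insert giving `molar_ratio` insert molecules per vector molecule."""
    return vector_ng * (insert_bp / vector_bp) * molar_ratio

# 50 ng of the 5.4 kb vector with the 1.2 kb insert at a 1:3 vector:insert ratio:
ng = insert_mass_ng(50, 5400, 1200, 3)
print(round(ng, 1))  # 33.3 ng
```

So with 50 ng of the 5.4 kb vector, a 1:3 molar ratio with the 1.2 kb insert corresponds to roughly 33 ng of insert; the 40 ng used above is close to a 1:3.6 ratio, which is reasonable.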
6- Transformation
In my transformation trials, I’ve used 2 μL, 2.5 μL, 4 μL, and 5 μL of the ligation reaction but haven’t obtained any colonies, except for one faint band from the single colony I previously isolated. For control, I am taking 1 μL of the plasmid as positive control and the same amount of the restricted plasmid as negative control as the ligation mixture. I always get colonies on the positive plate which means the transformation procedure is fine.
7- Gel Analysis of Ligation
The ligation reaction often shows multiple higher-molecular-weight bands on the gel, with vector and insert bands that are sometimes too faint; sometimes only the vector band is visible along with the high-molecular-weight bands.
Relevant answer
Answer
Do your insert primers have a 5' spacer sequence before the restriction site? Most enzymes need at least a few nucleotides on either side of their recognition site to cut efficiently.
According to this NEB chart both BamHI and NdeI need at least 3 base pairs to cleave efficiently: https://www.neb.com/en-us/tools-and-resources/usage-guidelines/cleavage-close-to-the-end-of-dna-fragments
I typically use 5 just to be safe. Usually I design primers with restriction sites like:
5' AAAAA[Restriction Site][Specific end of primer] 3'
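The primer layout described above can be sketched as follows; the spacer length, site table, and gene-specific end here are illustrative assumptions, not sequences from the question:

```python
# Sketch of the primer layout: 5' spacer + restriction site + gene-specific end.
# SPACER length and the gene-specific sequence are arbitrary examples.

SPACER = "AAAAA"                           # 5 nt of padding so the enzyme can cut
SITES = {"NdeI": "CATATG", "BamHI": "GGATCC"}

def cloning_primer(enzyme, gene_specific_end):
    return SPACER + SITES[enzyme] + gene_specific_end

fwd = cloning_primer("NdeI", "ATGAAAGCACTG")   # hypothetical gene start
print(fwd)  # AAAAACATATGATGAAAGCACTG
```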
  • asked a question related to Vectorization
Question
5 answers
Dear all,
Can anyone tell me how to fit nonlinear mixed models by the first-order (FO) and first-order conditional expectation (FOCE) methods? I only know that the nlme package can be used to fit nonlinear mixed models via nlme(model, data, fixed, random, groups, start, correlation, weights, subset, method, na.action, naPattern, control, verbose) in R.
But I don't know exactly which method (FO or FOCE) nlme uses. Both FO and FOCE linearize a nonlinear mixed model through a first-order Taylor series expansion; they differ in how the random parameter vector bi is predicted and how subsequent SS predictions are generated. I know that nonlinear mixed models can be fitted with the SAS macro NLINMIX by incorporating random parameters into these two models, but how do I do it in R? I have uploaded an article about the question. Can you help me? Thank you very much!
HoPui
Relevant answer
Answer
Dear all,
I hope this message finds you well.
I noticed that my publication titled "Evaluation of taper measurement schemes for modeling stem profiles: A case study of two conifer species" (https://doi.org/10.1139/cjfr-2024-0090) has been cited in my article "Analyzing regression models and multi-layer artificial neural network models for estimating taper and tree volume in Crimean pine forests (2024)".
I would like to kindly request a copy of the referenced article for further review.
I truly appreciate the valuable reference and would be grateful for any additional insights the article may provide.
I cannot download the full text of the relevant article because my university does not have a paid membership.
If it is possible, could you kindly share a copy of the article with me? I would greatly appreciate your assistance.
Thank you in advance for your time and consideration.
Best regards, Abdurrahman ŞAHİN, Artvin Coruh University (Türkiye)
  • asked a question related to Vectorization
Question
1 answer
I am puzzled: HIV mainly infects immune cells because it needs CD4 as a receptor and CCR5/CXCR4 as co-receptors for entry, but when some toxic genes are deleted and the genome is split apart for vector use, it works efficiently in a wide range of cell types (for example, MCF-7 and HEK293FT).
Relevant answer
Answer
For HIV-derived (lentiviral) vectors, only the HIV "backbone" (plus the gag, pol and tat proteins) is used, because HIV can infect quiescent cells. The HIV envelope gene is replaced by the VSV envelope protein gene; as VSV uses a ubiquitous receptor, the chimeric particles can infect many different cell types.
  • asked a question related to Vectorization
Question
1 answer
I'm using solar load model in FLUENT, where the sun vector direction is set to (0, 0, 1) (sun rays are incident vertically down from the top wall), but I don't know why the sun rays are always tilted (instead of vertically down) inside my model.
Relevant answer
Answer
The tilting of sunlight rays in your FLUENT solar load model, despite specifying the sun vector direction as (0, 0, 1), could be due to one or more of the following reasons:
1. Coordinate System Misalignment
  • Global vs Local Coordinate System: Verify whether the sun vector direction is defined relative to the global coordinate system or a local coordinate system. In FLUENT, solar vectors are typically specified in the global coordinate system. Ensure that your geometry is oriented correctly within this global system.
  • Orientation of Geometry: If your geometry (or domain) has been rotated or imported in a different orientation, the direction (0, 0, 1) may not be aligned with the actual vertical axis in your model. Fix: Check the orientation of your geometry in the pre-processor (e.g., ANSYS DesignModeler or SpaceClaim) and confirm that the Z-axis points vertically upward.
2. Solar Tracking Settings
  • Solar Position Settings: If you are using the solar calculator or solar tracking options, FLUENT may be automatically adjusting the incident angle based on geographical location, date, and time. Fix: Disable solar tracking if it's not needed. Set the Solar Load Model → Directional Specification Method to Manual Input instead of Solar Calculator.
3. Mesh Transformations
  • Mesh Rotation or Scaling: Check if any mesh transformation (rotation or scaling) was applied after importing the geometry. Such transformations may alter the orientation of your domain relative to the global coordinates. Fix: Reorient the mesh to align with the global Z-axis properly.
4. Solar Beam Radiation Settings
  • Beam Radiation Model: The beam radiation direction may appear tilted if scattering effects or reflections from walls dominate the visualization. Fix: Examine whether the tilting effect is due to reflections rather than the direct beam direction. Disable reflections temporarily and focus on direct beam visualization to isolate the issue.
5. Visualization Artifacts
  • Post-Processing Settings: Sometimes the display of rays in post-processing visualization tools can be misleading due to interpolation or scaling effects. Fix: Verify the ray trajectories by plotting solar flux contours or incident radiation intensity to confirm the true ray direction instead of relying solely on graphical visualization.
Steps to Debug
  1. Check the orientation of the geometry relative to the global coordinate system.
  2. Verify the solar vector direction in Solar Load Model → Inputs.
  3. Disable automatic solar tracking and use Manual Input.
  4. Re-check mesh orientation and transformations.
  5. Plot solar flux distribution for confirmation.
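As a quick sanity check for step 1, you can compute, outside FLUENT, the angle between the sun vector you specified and whatever axis you believe is vertical in your geometry; a non-zero angle means the tilt is a coordinate-system issue rather than a solver artifact. A minimal sketch (the `model_up` value is a placeholder for your geometry's actual vertical axis):

```python
# Angle between the specified sun vector and the model's assumed "up" axis.
import math

def angle_deg(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

sun = (0.0, 0.0, 1.0)       # direction given to the solar load model
model_up = (0.0, 0.0, 1.0)  # replace with your geometry's actual vertical axis

print(angle_deg(sun, model_up))  # 0.0 if aligned; any other value means tilt
```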
  • asked a question related to Vectorization
Question
2 answers
Hi, I am making a 3rd-generation LV expressing my gene of interest. I know that putting a polyA signal within the LV genome will lower the vector titer, but what happens if I put it in the opposite orientation? Has anyone tried this?
Relevant answer
Answer
It depends on how long your coding sequence is. If you put the cassette in the reverse orientation, the antisense mRNA will bind to the lentiviral RNA (forward orientation), and this will activate RNAi pathways leading to degradation. The smaller your reverse-orientation coding sequence, the less potent this effect will be.
  • asked a question related to Vectorization
Question
1 answer
Thank you for reading my question.
Are newer RFPs like mRuby3 or mScarlet better complements for eGFP? Or should I stick to the eGFP/mCherry pair?
I am new to FRET and I am planning a FRET experiment to study interactions between a mutated soluble mammalian protein (which forms aggregation) and a wild type protein in cytosol.
My cells are HEK293T transfected with the two plasmids. Confocal laser scanning microscopy will be applied for short live-cell imaging. The microscope is a Zeiss LSM880.
Available excitation laser lines are: 405 nm, 488 nm, 543 nm, 594 nm, and 633 nm.
Both vectors carrying a fusion protein of object-eGFP have been already constructed and validated in previous experiments.
After conducting some literature review, it seems that CFP/YFP pairs and GFP/RFP pairs meet my needs. Considering that I already have object-eGFP vectors and the blue emitter (488 nm) is not working, I prefer a GFP/RFP pair.
Effective pairs like mNeonGreen/mScarlet-I and mClover/mRuby3 have been validated.
To the best of my knowledge, the eGFP/mCherry pair is well established and reliable. However, the relatively low quantum yield and extinction coefficient of mCherry still raise my concern.
Thank you in advance for your reply!
Relevant answer
Answer
The mTurquoise2/YFP pair would be better.
  • asked a question related to Vectorization
Question
2 answers
I am using the Gibson HiFi Assembly Kit to assemble a DNA fragment. Both the DNA fragment and the vector are approximately 1000 bp in length. Note- I am not performing a PCR step. After completing the Gibson assembly reaction, I can observe the desired band corresponding to the insert. However, after performing E. coli transformation, selecting specific colonies, extracting plasmids from them, and running them on a gel, I observe bands that are not of the desired length.
What steps should I take to ensure I obtain the correct bands or vector size of the desired length?
Relevant answer
Answer
What do you mean by "not desired length"?
  • asked a question related to Vectorization
Question
4 answers
I was able to transform bacteria successfully with small inserts (~500 bp and 1500 bp) using the In-Fusion technique. However, with larger inserts (5500 bp and 6000 bp) it does not work. We have already followed the troubleshooting guide described in the protocol and tried different approaches (concentration, proportion, longer incubation).
Our primers were designed following Takara's instructions with 15 bp of homology and have already been checked.
Our linearized plasmid was digested with XhoI and SalI and is 5004 bp long, at a final concentration of 195 ng/µL. Our insert is 5542 bp (larger than the vector) and its final concentration after purification is 27 ng/µL. I am using competent E. coli Stbl3. We use concentrations of around 50 ng/µL up to 150 ng/µL in the In-Fusion reaction.
We tried transforming bacteria with different vector-to-insert proportions (1:1, 1:2 and 1:3). We also incubated the In-Fusion reaction for 1 hour at 50 °C (even knowing that the protocol says longer is not better). I have already checked the reagents using the positive control.
We use the heat-shock protocol: thawing the bacteria for 30 minutes on ice; adding 3 µL of the In-Fusion reaction to the bacteria and incubating for 30 minutes on ice; heat-shocking for 45 s at 42 °C and quickly returning them to ice. Finally, we plate on LB agar with streptomycin and incubate for 16-20 h.
The problem is that we get no colonies, and when any do appear, they do not carry our insert of interest. I don't know what else I can do.
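For what it's worth, the molar arithmetic for the fragment sizes and stock concentrations quoted in this question can be checked directly. The 1:2 vector:insert ratio below is only an example; the ratio to use is the protocol's call:

```python
# Plain mass-to-moles conversion for dsDNA (average ~650 g/mol per bp).

def pmol(ng, bp):
    """Approximate picomoles in `ng` of a dsDNA fragment of length `bp`."""
    return ng * 1000.0 / (bp * 650.0)

vector_pmol_per_ul = pmol(195, 5004)  # 195 ng/µL stock, 5004 bp vector
insert_pmol_per_ul = pmol(27, 5542)   # 27 ng/µL stock, 5542 bp insert

# µL of insert stock per µL of vector stock for a 1:2 vector:insert molar ratio:
ul_insert = 2 * vector_pmol_per_ul / insert_pmol_per_ul
print(round(ul_insert, 1))  # ~16 µL of insert per µL of vector
```

At these stocks, roughly 16 µL of insert is needed per µL of vector, so the dilute insert (27 ng/µL) may be forcing either too little insert or too large a reaction volume; concentrating the insert could help.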
Relevant answer
Answer
Hi Hugo Figueira de Paula Pinto . I am dealing with a similar problem. I saw this post and wondered whether you were able in the end to resolve this?
I would appreciate any advice
  • asked a question related to Vectorization
Question
3 answers
Dear all, according to the equation, we can obtain the inverse relaxation time for the ionized impurity scattering mechanism. But I don't know how to obtain the permittivity epsilon and the electron wave vector. Looking forward to your answers.
Relevant answer
Answer
You ask two things. First, you want to know how to do the integration. This is part of standard impurity scattering theory, known as Brooks-Herring theory. Key is the assumption of isotropic scattering, so integration over the angles yields 4π and the integration over the modulus k becomes k² dk. Doing the whole derivation here would be too much.
Secondly, you talk about degeneracy. This question is ambiguous. Each state in a band-structure calculation is defined by a wave vector k and an energy E(k). Let us call this a state. This state itself can be degenerate for reasons of symmetry; in practice the degeneracy can be 1 (general), 2 or 3. Correspondingly there can be 2, 4 or 6 charge carriers, electrons or holes, in the state. This is the occupancy. Whether these states will really be occupied is determined by the statistics, i.e. the influence of temperature. For semiconductors this is described by Fermi-Dirac statistics. If the difference between Fermi-Dirac and Boltzmann statistics is not negligible, the statistics is again called degenerate. However, please study the literature; to explain this in detail would take pages.
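For reference, a commonly quoted form of the Brooks-Herring inverse relaxation time is sketched below; prefactors and unit conventions vary between sources, so treat this as a sketch rather than a definitive formula. It also shows where the two quantities asked about enter: $\varepsilon$ is the material's static permittivity, and the wave vector $k$ follows from the parabolic dispersion.

\[
\frac{1}{\tau(E)} = \frac{N_I Z^2 e^4}{16\pi\sqrt{2m^*}\,\varepsilon^2\,E^{3/2}}
\left[\ln(1+b)-\frac{b}{1+b}\right],
\qquad b=\frac{8m^*E}{\hbar^2\beta^2},
\qquad E=\frac{\hbar^2 k^2}{2m^*},
\]

where $N_I$ is the ionized-impurity concentration, $Z$ the impurity charge number, and $\beta$ the inverse screening length.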
  • asked a question related to Vectorization
Question
4 answers
Hi everyone,
I want to clone a 96 bp insert fragment into a vector. I simulated it in SnapGene, but the 5' end of my reverse oligonucleotide is reversed (it is TTAA instead of AATT).
Is it possible to change 4 nucleotides by PCR instead of synthesizing the whole oligonucleotide?
Relevant answer
Answer
Thanks a lot Dr. Lee.
  • asked a question related to Vectorization
Question
3 answers
Share your thoughts and procedure of waste disposal.
Relevant answer
Answer
My advice: follow the procedure for the safety rules at your university. It does not matter what other labs or other universities do, your EH&S folks set the expectations for your worksite. If the expectations feel unsafe or you do not feel like you have enough training to follow them, then it is reasonable for you to insist on the SOPs and documented trainings.
  • asked a question related to Vectorization
Question
3 answers
Since the siRNA is not loaded onto a vector and is supposed to be transfected alone using Lipofectamine 3000, the process is more difficult and requires more troubleshooting. I would greatly appreciate it if anyone with experience could provide tips or strategies to improve the transfection efficiency.
Thank you in advance!
Relevant answer
Answer
First, I highly recommend using Lipofectamine RNAiMAX. Then lower the media volume during transfection (400 µL/well for a 24-well plate) and keep the cells in antibiotic-free media for 24 hours before and after transfection.
  • asked a question related to Vectorization
Question
1 answer
CO2 Sequestration
1. To what extent would the concept of 'impelling force' introduced by Hubbert be able to provide a useful means of visualizing the net forces acting on CO2?
2. If the impelling force represents the negative of the gradient in CO2/brine potential, will it still remain a vector quantity that precisely defines the direction in which CO2 would tend to migrate, considering capillary effects?
Suresh Kumar Govindarajan
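For context, Hubbert's impelling force is usually written (notation varies between sources) as the negative gradient of a scalar fluid potential, which makes it a vector field by construction:

\[
\mathbf{E} = -\nabla\Phi, \qquad \Phi = g\,z + \int_{p_0}^{p}\frac{dp}{\rho},
\qquad\text{so}\qquad
\mathbf{E} = -g\,\hat{\mathbf{z}} - \frac{1}{\rho}\nabla p .
\]

Including capillary effects amounts to evaluating $\Phi$ with the CO2-phase pressure $p_{\mathrm{CO_2}} = p_{\mathrm{brine}} + p_c$; this shifts the potential but leaves $\mathbf{E}$ a well-defined vector wherever $p_c$ varies smoothly.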
Relevant answer
Answer
The concept of "driving forces" introduced by Hubbert could play an important role in understanding and modeling the net forces acting on CO2, especially in the context of atmospheric and climate sciences. This concept is used to analyze and describe the dynamics of gases and other components in the atmosphere, as well as their impact on climate change.
In this context, driving forces can be considered as factors influencing the concentration of CO2 and its dynamics in the atmosphere, such as:
  1. Human activities: The burning of fossil fuels, industrial production, deforestation, and agriculture are major sources of CO2, creating a positive driving force for increased CO2 concentration in the atmosphere. These activities can be viewed as external driving forces that increase CO2 emissions.
  2. Natural forces: Natural processes, such as volcanic eruptions, soil degradation, or emissions from oceans, also contribute to CO2 concentration, but on different time and spatial scales. These factors represent "driving forces" that can either increase or decrease CO2 concentration, depending on the nature of the process.
  3. Natural absorption processes: On the other hand, there are natural forces that act in the opposite direction. These "driving forces" include the absorption of CO2 by oceans and plants (photosynthesis, solubility in water), which act as negative forces that reduce CO2 concentration in the atmosphere.
  4. Technological and political responses: Policies related to reducing CO2 emissions, such as renewable energy sources, decarbonization, and technological innovations like carbon capture and storage (CCS), represent additional driving forces aimed at reducing CO2 levels.
Net forces on CO2 could be described as the difference between the driving forces that increase CO2 in the atmosphere and those that remove it. For example, if human activities and natural sources of CO2 exceed absorption by the oceans and plants, the net force will be positive, meaning an increase in CO2 concentration in the atmosphere. Conversely, if natural absorption processes become stronger, the net force will be negative, potentially leading to a reduction in CO2 in the atmosphere.
  • asked a question related to Vectorization
Question
2 answers
Good morning everyone
I have created a Pseudomonas oryzihabitans mutant via Gibson assembly (homologous recombination) with the suicide vector pK18mobsacB. Now I am trying to complement the gene with the same vector, i.e. pK18mobsacB, but I am not able to transform the plasmid into the Pseudomonas mutant. Please suggest possible reasons for the transformation failure, or whether there is a problem with using the same vector for both deletion and complementation. I have searched many papers, but none have done gene deletion and complementation with the same vector.
Relevant answer
Answer
Thank you for the gentle response. Yes, I am very sure about the competent cells and also about the plasmid (conc. 400 ng/µL; 260/280 ratio of 1.8), which I have cross-checked through double digestion, where I got the two specific bands for the vector (pK18mobsacB) and the insert (targeted gene) for complementation.
  • asked a question related to Vectorization
Question
2 answers
Hi,
I am having trouble cloning a Cas9 gene (around 4300 bp) into a vector, replacing a LacZ cassette. (I use the uloop system.) I use SapI as the restriction enzyme and T4 DNA ligase. I have cloned several other constructs into the same kind of vector with high efficiency (mainly white colonies, and almost all carry the correct insert when screened).
When I try to insert Cas9 (or dCas9) I get the same number of colonies, again almost all white. However, when I screen the colonies via colony PCR or by restriction digest of the isolated plasmids, no vector carries the insert. I did not sequence these plasmids, but the restriction digest suggests that the plasmid only lost its LacZ cassette and was closed again without any insert (even though the overhangs are not compatible).
I tried two settings for GGC:
Setting 1: (37 °C 5 min → 16 °C 5 min) × 25 cycles, then 65 °C 20 min, 85 °C 10 min, hold at 4 °C.
Setting 2: (37 °C 5 min → 16 °C 5 min) × 40 cycles, then 65 °C 20 min, 85 °C 10 min, hold at 4 °C.
I tried two different buffer conditions:
-T4 buffer only
-Or 50% T4 buffer, 50% Cutsmart buffer
I started with fresh Cas9 (and dCas9) PCR templates and with several fresh receiver plasmids. Also I used different competent cell batches.
I double checked the overhangs of receiver plasmids and the overhangs created on the Cas9 insert.
I am running out of ideas..
Does anyone know if a single large insert (4000 bp+) affects GGC efficiency?
Does anyone have a suggestion how I could improve GGC efficiency?
Further troubleshooting suggestions?
I really would appreciate your ideas!
Thanks,
Florian
Relevant answer
Answer
Dear Florian ,I was wondering if you could manage the problem because I have the same problem with Golden gate cloning for a 5kb insert.
  • asked a question related to Vectorization
Question
2 answers
I am looking for the exact copy number of the pET28a plasmid. A citation would be great. Literature searches only show that it is a low-copy vector, but I haven't found any papers that mention the exact copy number or even an estimate.
Relevant answer
Answer
I have also been confused for some time about this question. As far as I can work out, the ColE1 origin of replication is classified as "high-copy-number", but pET28 also expresses the Rop protein, which keeps the copy number low.
The wikipedia article on Rop is helpful: https://en.wikipedia.org/wiki/Rop_protein
... and the associated citation has more details: Molecular Microbiology. 37 (3): 492–500. doi:10.1046/j.1365-2958.2000.02005.x
  • asked a question related to Vectorization
Question
5 answers
I was trying to clone an RCA product (digested with a single restriction endonuclease) into the pUC19 vector, but I failed. After cloning, I have to proceed to sequencing. Can I clone it into a TA vector?
Relevant answer
Answer
Alexandra Johnson yes, it should be a 2.7 kb begomovirus genome, which I have to confirm by sequencing. The sequence is not known, so I can't amplify it with PCR.
  • asked a question related to Vectorization
Question
1 answer
Vectors which can accept PCR products of up to 700-800 bp
Relevant answer
Answer
700-800 bp is not that big. So pretty much any vector you would want to use should be able to handle that size PCR product without any difficulty.
  • asked a question related to Vectorization
Question
3 answers
We utilize the pET22b vector for cloning and aim to incorporate the pelB signal sequence for periplasmic expression. For this purpose, we are using the restriction enzymes NcoI and XhoI. However, we have encountered a frameshift during translation due to NcoI. Does anyone have suggestions on how to resolve this issue?
Relevant answer
Answer
As Katie A S Burnette suggests, you probably need to redesign your primers.
NcoI, due to the presence of an ATG codon in its recognition site,
CCATGG
is known to induce a frameshift when you use it at the N-terminus.
To avoid this problem you need to add 2 extra bases before your GOI sequence, which will result in the addition of an extra amino acid at the N-terminus of your protein construct, just after the methionine.
You can see an example of it at minute 3:00 of the following video.
However, as a general rule, I suggest you leave standard cloning based on restriction enzymes and ligases and learn PIPE cloning, a powerful enzyme-free cloning method that makes you independent of the restriction sites present in your vector.
You can find more information about it in the following links.
Good luck,
Manuele
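To make the frame arithmetic in this answer concrete, here is a minimal sketch with entirely hypothetical sequences: NcoI's CCATGG supplies the ATG plus one trailing base, so two spacer bases are needed before the gene for its codons to stay in frame.

```python
# Read codons starting from the first ATG in a sequence.
def codons_from_atg(seq):
    i = seq.index("ATG")
    return [seq[j:j + 3] for j in range(i, len(seq) - 2, 3)]

GOI = "GCTTCAAAA"  # hypothetical gene body, codons GCT TCA AAA

out_of_frame = "CCATGG" + GOI        # no spacer: the trailing G shifts the frame
in_frame = "CCATGG" + "CC" + GOI     # 2 spacer bases ("CC" is arbitrary)

print(codons_from_atg(out_of_frame))  # ['ATG', 'GGC', 'TTC', 'AAA'] - GOI codons broken
print(codons_from_atg(in_frame))      # ['ATG', 'GCC', 'GCT', 'TCA', 'AAA'] - GOI intact
```

Note the extra codon (GCC here) between the start codon and the gene: that is the one additional amino acid mentioned above.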
  • asked a question related to Vectorization
Question
1 answer
Hello, I am using two expression vectors (pCAGGS and pcDNA) to clone a 2 kbp gene fragment. For transformation, I am using NEB 10-beta competent E. coli cells. The issue: although my gene of interest is inserted into the expression vector, a portion of the expression vector's nucleotide sequence is being removed or deleted.
Could anyone share their experience with this and how you overcame the problem?
Thanks in advance!
Relevant answer
Answer
What is the sequence that's being deleted? Also, you might want to verify that sequence is actually present in the original vector. A lot of vector sequences are based on what they hypothetically should be but they aren't completely sequence verified and often contain errors.
  • asked a question related to Vectorization
Question
2 answers
I want to demonstrate the vector transmission-line equations (2.16) and (2.17) from Sergei Tretyakov's book "Analytical modeling in applied electromagnetics" pages 18-19. Any help please.
Relevant answer
Answer
Dear Smirty,
Thank you for your help.
  • asked a question related to Vectorization
Question
6 answers
I want to insert a gene in a vector using restriction cloning, but the enzyme that I have to use has three restriction sites in the vector. It is imperative that I use only this enzyme and no other, so I can't use a different restriction site or other enzyme. Can someone help with this issue?
I have tried partial restriction digestion with different amounts of enzyme as well as different incubation times, but I haven't obtained a single linearized band. The enzyme cuts at all three sites under every condition I have tried for the partial digestion.
Relevant answer
Answer
Can you start with a different vector? Use TOPO-TA cloning? Use primers that add in a restriction site for a different enzyme for the insert? Site-directed mutagenesis to remove the restriction site in the vector? I do not understand why you must use this exact vector + this exact enzyme.
Something is going to have to be different since your current strategy simply will not work.
Talk with your supervisor, it's a waste of your time to insist you use a protocol that you know will fail.
Enzymes are cheap (mostly), your time is valuable. I'm sure you can come up with a reasonable solution.
  • asked a question related to Vectorization
Question
2 answers
I am cloning a gene (903bp) into miniTurbo_NLS (6338 bp).
  • I PCR amplified the insert from it's backbone to introduce EcoR1 and Xho1 sites. It was run on gel and eluted. I then double digested it for 5 hrs at 37oC and was eluted from gel.
  • I double digested the vector for 5 hrs at 37oC and did alkaline phosphatase treatment and inactivated. I then ran it on gel giving a band of 6248 bp and eluted. Note- I didn't confirm the 90 bp fallout.
  • The vector and insert was ligated in 1:3 ratio but no colonies grew after transformation into TG1 chemically comp cells.
  • Ligation was attempted again with 1:7 ratio and only 1 colony grew.
  • The control plate with vector only + ligase had 3 colonies.
  • Another control plate with vector only - ligase had no growth.
  • The single colony that grew was insert specific colony pcr negative.
  • Even after isolating the plasmid it was negative.
How to make this cloning work ?
Relevant answer
Answer
I think these are the most likely issues.
1. The most common point of failure I see for cloning gel extracted products is the gel purification itself.
If the DNA gel is illuminated with UV, it must be done at low power, longer wavelength, and the band cutting should be done very quickly. Some UV transilluminators have two wavelength settings, one shorter and one longer. Some have a lower intensity setting for gel extraction.
It's possible for the plasmid to accumulate so much UV damage while cutting the bands that it cannot replicate in E. coli, and you will get very few or no colonies. If you have access to a dye that can be illuminated with blue light instead of UV, like SYBR Safe, and have access to a blue light transilluminator, this is a better option for gel extraction. The blue light does not damage the DNA to any notable extent and you can take your time working on cutting out bands.
If you are using a column kit to extract DNA from the gel, add 2-3 extra washes of the de-salting buffer (the last wash before elution) and allow the de-salting buffer washes to sit on the column for 5+ minutes each time. In my experience, most kits under report the actual amount de-salting it takes for high quality DNA, probably to seem like their protocol is quicker than competitors. These kits use high concentrations of chaotropic salts to dissolve the agarose, and it's easy for those salts to remain stuck to the silica membrane and carry over into the elution. This can result in very low 260/230 ratios by UV spectrophotometry (ie. nanodrop) which results in inaccurate DNA quantification after elution AND the chaotropic salts will inhibit ligation reactions. This can also be mitigated to a certain extent by cutting out smaller gel chunks and removing extraneous agarose, thus lowering the amount of dissolving buffer that needs to be added to dissolve the gel. This is helped by using blue light as discussed above, because you can really take your time with it.
2. The competent cells may have poor transformation efficiency.
After a failed transformation, it is a good idea to transform an aliquot of competent cells with good quality purified plasmid DNA that has not been treated by restriction enzymes. In this case use your undigested miniTurbo_NLS plasmid. Use a specific amount of plasmid DNA for transformation reaction, usually amounts of 10 ng, 100 ng, or 1 µg of plasmid are used for ease of calculation later on. Choose an amount that will leave you plenty of left over plasmid DNA to work with, though. Do serial dilutions from your recovery (typically no dilution, 1:10, 1:100 and 1:1000 dilutions are enough). When you plate, put the same amount of each dilution onto every plate (100 µL typically). The next day, count the amount of transformed colonies on a dilution plate where individual colonies are easy to count, but you generally do not want to count off a plate that has <20 colonies. Calculate the number of colonies/µg of DNA by multiplying the number of colonies by the dilution factor, and how much you put on the plate vs. the total volume of recovered cells.
Example calculation: Let's say you transformed your TG1 cells with 100 ng of plasmid DNA, recovered them in 2 mL of LB broth, plated 100 µL of the recovery onto a plate after doing multiple serial dilutions. If you got 25 colonies at the 1:100 dilution it would be:
25 colonies x 100 (serial dilution factor) x 20 (number of aliquots that can be drawn from your broth culture; in this case twenty 100 µL aliquots can be taken from 2 mL of broth) x 10 (because we want the amount per microgram and used 100 ng; if we used 1 µg this would just be 1) = 500,000 colony forming units/µg of DNA, or 5 x 10^5 CFU/µg of DNA. Agilent says that their TG1 electrocompetent cells are 1 x 10^10 CFU/µg of pUC18 and Zymo Research says that their chemically competent TG1 cells are 1 x 10^8 CFU/µg of pUC19 DNA. We would not expect a 6 kb plasmid to transform quite as well as a tiny plasmid like pUC19, but the efficiency should not be 3-5 logs lower than pUC19 for a highly competent strain like TG1. In this case it's probably a good idea to make fresh competent cells, change the method of making them competent, or purchase fresh cells. Chemically competent cells usually have a shorter shelf life than electrocompetent cells.
I know that's a lot of effort to calculate the efficiency but I've seen a lot of people on here make the mistake of just transforming 100 ng of pUC19, getting some uncounted number of colonies on an undiluted recovery plate and going "see? the competent cells are fine!" rather than actually calculating and realizing that their cells have dropped many logs in efficiency since their purchase and might not be suitable for tough cloning jobs any more.
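If it helps, the arithmetic above can be wrapped in a short Python sketch (the function and variable names are mine, not from any kit or vendor protocol):

```python
def cfu_per_ug(colonies, dilution_factor, plated_ul, recovery_ul, dna_ng):
    """Transformation efficiency in colony forming units per µg of DNA.

    colonies: colonies counted on the chosen dilution plate
    dilution_factor: e.g. 100 for the 1:100 plate
    plated_ul: volume plated per plate (µL)
    recovery_ul: total recovery volume (µL)
    dna_ng: plasmid DNA used in the transformation (ng)
    """
    aliquots = recovery_ul / plated_ul          # how many plates the recovery could fill
    total_colonies = colonies * dilution_factor * aliquots
    return total_colonies * (1000.0 / dna_ng)   # scale ng -> µg

# The worked example from the text: 25 colonies on the 1:100 plate,
# 100 µL plated from a 2 mL recovery, 100 ng of DNA transformed.
print(cfu_per_ug(25, 100, 100, 2000, 100))  # 500000.0, i.e. 5 x 10^5 CFU/µg
```

The same function then lets you compare directly against a vendor's quoted efficiency before deciding whether to remake your cells.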
  • asked a question related to Vectorization
Question
3 answers
Hello everyone,
I am currently running a luciferase assay using the Dual-Glo® Luciferase Assay System (Promega) to assess miRNA-MRE interactions. I cloned a combination of three MREs downstream of the Renilla translational stop codon in the psicheck2 vector. For the experiment, I am co-transfecting psicheck2_MRE with pcDNA3_mCherry plasmids containing a miRNA specific for the MRE. My experimental conditions are:
  1. psicheck2_MRE + pcDNA3_mCherry_miR1_2
  2. psicheck2_empty + pcDNA3_mCherry_miR1_2
  3. psicheck2_MRE + pcDNA3_mCherry_empty
  4. psicheck2_empty + pcDNA3_mCherry_empty
  5. Non-transfected cells
My expectation is that the miRNA will bind to the MRE and reduce Renilla luciferase activity in the psicheck2_MRE + miRNA condition. While I am observing this reduction as expected, I am also seeing a decrease in luminescence in psicheck2_empty in the presence of the miRNA, which is unexpected.
Additionally, the Renilla levels of the empty vector are considerably lower than the vector containing the MRE. I normalized to the empty vector and then to the no miRNA control, but I’m finding it difficult to trust the results due to the unexpected impact on the empty vector.
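For what it's worth, here is a minimal Python sketch of one common way to do the two-step normalization described above (Renilla over the firefly transfection control, then miRNA vs. no-miRNA, then MRE vs. empty); the raw luminescence numbers are hypothetical, purely for illustration:

```python
def relative_renilla(renilla, firefly):
    """Renilla normalized to the firefly internal control (psiCHECK-2 style)."""
    return renilla / firefly

def fold_change(ratio_plus_mir, ratio_minus_mir):
    """Reporter activity with miRNA relative to the no-miRNA control."""
    return ratio_plus_mir / ratio_minus_mir

# Hypothetical raw luminescence values, not real data:
mre_plus  = relative_renilla(2.0e5, 1.0e6)   # psicheck2_MRE + miR
mre_minus = relative_renilla(8.0e5, 1.0e6)   # psicheck2_MRE + empty pcDNA3
emp_plus  = relative_renilla(3.5e5, 1.0e6)   # psicheck2_empty + miR
emp_minus = relative_renilla(4.0e5, 1.0e6)   # psicheck2_empty + empty pcDNA3

# MRE-specific repression, corrected for whatever the miRNA does to the empty vector:
specific = fold_change(mre_plus, mre_minus) / fold_change(emp_plus, emp_minus)
print(round(specific, 3))  # 0.286 with these made-up numbers
```

The point of the last line is that any nonspecific effect of the miRNA on the empty reporter divides out, which is exactly why the unexpected repression of the empty vector makes the corrected number hard to trust.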
Has anyone experienced similar issues? How would you suggest troubleshooting or interpreting the effect of miRNA on the empty vector?
Any insights or advice would be greatly appreciated!
Relevant answer
Answer
I don't think this is an issue since you see the expected result after normalizations.
Lower expression of the psicheck2_empty vector compared to the MRE vector could be due to differences in the 3'UTR; are the two vectors identical except for mutations in the miRNA target site in the psicheck2_empty vector, or are there substantial differences in the 3'UTR sequences between the two vectors? For my controls I use "non-targeting 3'UTRs" rather than empty-vector controls, where the 3'UTR is exactly identical to my MRE vector except for mutations in the miRNA target site.
The reason you see repression of psicheck2_empty when comparing +miR to -miR could be due to weak targeting by the miRNA. Are there any potential 6mer or even 5mer target sites that could be bound by your miRNA? These could be in the 3'UTR of the psicheck2_empty vector, but they could also be in the 5'UTR or coding sequence of the luciferase gene.
  • asked a question related to Vectorization
Question
1 answer
Hi everyone,
I'm interested in analyzing exchange rate behavior by testing the Dornbusch model of overshooting exchange rates; however, I still struggle with finding the correct methodology. How should I proceed? I think this would be possible by applying time series analysis components such as a vector moving average or similar instruments - what do you think?
Kind regards
Zan Blagojevic
Relevant answer
Answer
I know vector moving averages but not the Dornbusch model of overshooting exchange rates. Can you provide references, please, and explain your problem a little more?
  • asked a question related to Vectorization
Question
3 answers
I am conducting a transformation using a 300 bp insert and an empty vector of approximately 300 bp. After successful transformation and colony growth, I performed colony PCR. However, the gel consistently shows a 300 bp band, which suggests that the colonies contain the empty vector without the insert. Given that I expect a 600 bp product (300 bp insert + 300 bp vector), what might be causing this issue? (Cloning technique: LIC).
Relevant answer
Answer
Results like these are why I do not recommend using colony PCR to screen colonies. The original selection plate will have lots of non-ligated insert and lots of the plasmid + insert in the liquid that you spread on the plates. False positives are really, really common.
Also, your plasmid vector cannot be just 300 basepairs in size. Do you mean that you expect a PCR product that contains your 300 bp insert + 300 bp from the vector?
Double-check the expected size of your PCR product; you might be seeing amplification of just the insert (most folks use primers that "flank" the multi-cloning site).
What do you see if you set up PCR using the empty vector?
What do you see if you run out some of the plasmid DNA on an agarose gel? If the insert is at least 10% the size of the plasmid, you will be able to see that size difference on a gel.
Good luck!
  • asked a question related to Vectorization
Question
5 answers
This is a simple proof the guitar is Hamiltonian. Then by deconstruction so is string vibration because the string is the smallest open set on guitar.
The time-independent Hamiltonian has the form H(p, q) = c and dH/dt = 0.
All I need is to define p and q.
So p will be the center of harmonic motion, and q will be a potential energy gradient that reads off the differential between any two points.
Consider the set of notes for the guitar tuning known as standard: E A D G B E.
The tuning naturally separates into two vectors in this way: Indexing the tuning notes by counting up from the low E the pitch values are equivalent to p: 0 5 10 15 19 24.
Now taking the intervals between consecutive notes we have a second vector q: 0 5 5 5 4 5.
It is important to notice that tuning vectors p and q are equal, opposite, and inverse, which is expected since the orbit and center have this relation in the Hamiltonian.
For instance, p is the summation of q and q is the differential of p. The points in p and the intervals in q together make a unit interval in R.
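The summation/difference relation claimed here can at least be checked numerically; a plain-Python sketch (note the G-to-B step is 4 semitones, the others 5):

```python
# Pitch offsets of standard tuning E A D G B E, in semitones above low E:
p = [0, 5, 10, 15, 19, 24]

# Intervals between consecutive strings (leading 0 so len(q) == len(p)):
q = [0] + [b - a for a, b in zip(p, p[1:])]
print(q)  # [0, 5, 5, 5, 4, 5]

# The running sum of q recovers p:
running, total = [], 0
for step in q:
    total += step
    running.append(total)
print(running == p)  # True
```

So p really is the summation of q and q the differencing of p, in the ordinary discrete-calculus sense.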
Most important, p = 1/q means the tuning is the identity of the guitar. If you know the tuning, you know everything (all movement). You can learn guitar without learning anything but the tuning.
The proof the vectors are Hamiltonian is this, p is the center of motion in R6, and q is the gradient of the potential field surface in R5 where every vibrational state is presented by a single point.
The coordinates of notes on guitar chord charts given by the gradient function
form a union as a smooth atlas.
Therefore, it must be true the guitar is Hamiltonian. How else could the symplectic manifold be smooth?
Physicists and mathematicians have no choice but to accept that one degree of freedom is better than two. The fact that they cannot see it implies an illness of the public mind that cannot think straight about classical mechanics.
Relevant answer
Answer
Also, this is a normed metric space because of the octave.
  • asked a question related to Vectorization
Question
1 answer
I have to clone a gene in Pseudomonas, but we don't have a suitable vector. Does anyone know how I can design pUCP18 from the pUC18 vector? If I just add a Pseudomonas origin of replication, can I clone genes? And how can I do that?
Relevant answer
Answer
I am not an expert in this field, but I am very interested and have researched to find an answer. I received some assistance from tlooto.com for this response. Could you please review the response below to see if it is correct?
To design a pUCP18 vector from the pUC18 vector for cloning in Pseudomonas, incorporate a broad-host-range origin of replication such as oriV from plasmid RK2, ensuring compatibility with Pseudomonas species [2]. Use PCR to amplify the oriV region, then insert it into the pUC18 backbone via restriction enzyme digestion and ligation. Include antibiotic resistance markers suitable for selection in Pseudomonas [1]. Verify the construct through sequencing and test its functionality via transformation into Pseudomonas cells. This approach allows for stable replication and maintenance of the vector in Pseudomonas, facilitating successful gene cloning [3][4].
Reference
[1]
Matsusaki, H., Manji, S., Taguchi, K., Kato, M., Fukui, T., & Doi, Y. (1998). Cloning and Molecular Analysis of the Poly(3-hydroxybutyrate) and Poly(3-hydroxybutyrate-co-3-hydroxyalkanoate) Biosynthesis Genes in Pseudomonas sp. Strain 61-3. Journal of Bacteriology, 180, 6459 - 6467.
[2]
Aakvik, T., Degnes, K., Dahlsrud, R., Schmidt, F., Dam, R., Yu, L., Völker, U., Ellingsen, T., & Valla, S. (2009). A plasmid RK2-based broad-host-range cloning vector useful for transfer of metagenomic libraries to a variety of bacterial species.. FEMS microbiology letters, 296 2, 149-58 .
[3]
Yen, K., Karl, M., Blatt, L., Simon, M., Winter, R. B., Fausset, P., HSIENGS., L., Harcourt, A., Chen, K. K., & Amgen (1991). Cloning and characterization of a Pseudomonas mendocina KR1 gene cluster encoding toluene-4-monooxygenase. Journal of Bacteriology, 173, 5315 - 5327.
[4]
Kimbara, K., Hashimoto, T., Fukuda, M., Koana, T., Takagi, M., Oishi, M., & Yano, K. (1989). Cloning and sequencing of two tandem genes involved in degradation of 2,3-dihydroxybiphenyl to benzoic acid in the polychlorinated biphenyl-degrading soil bacterium Pseudomonas sp. strain KKS102. Journal of Bacteriology, 171, 2740 - 2747.
  • asked a question related to Vectorization
Question
2 answers
I designed what I thought would be a straightforward piggyBac (https://en.vectorbuilder.com/resources/vector-system/pPB_Exp.html) vector with the IL2-6xHis ORF downstream of an EF1a promoter. The vector was transfected into HEK293 cells. The cells underwent selection for several generations and exhibited nearly 100% blue fluorescence (my marker contained BFP). I felt very confident that there would be strong expression from the vector, but was disappointed to find no notable protein band corresponding to the size of IL2, both via Coomassie and silver stain. I also used a His-tag detection strip, but there was no hint of expressed IL2 protein... Any thoughts on what could have gone wrong? Is anyone else trying to express cytokines using HEK293 cells?
Relevant answer
Answer
One small but critical piece of information is missing in your post: did you use a construct for expression of secreted IL2, with a signal sequence? IL2 should be secreted and found in the supernatant of the cells, not inside the cell, shouldn't it?
  • asked a question related to Vectorization
Question
4 answers
Why am I seeing growth of yeast colonies after performing dilution spotting on the respective dropout media when only the empty prey and bait vectors were co-transformed? The yeast strain used was AH109 and the vectors used were pGADC1 and pGBDUC1.
Relevant answer
Answer
I did a dilution series only from 10^0 to 10^-5, and the OD of the cells was 0.2.
  • asked a question related to Vectorization
Question
16 answers
I've formulated a new foundation for physics, based on the discovery of the quantum circulation constant k, with a value equal to c*c but a unit of measurement in [m^2/s], with which we can define the time derivative of any given vector field [F] in physical three dimensional space time as follows:
d[F]/dt = -k Delta [F],
with Delta the vector Laplacian, THE second spatial derivative in three dimensions, which would be d^2/dx^2 in 1D.
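As a purely numerical aside, the 1D case d^2/dx^2 mentioned here has a standard three-point finite-difference form; a small Python sketch (the step size h is an arbitrary choice):

```python
import math

def second_derivative(f, x, h=1e-3):
    """Three-point central difference for d^2 f / dx^2, accurate to O(h^2)."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h)

# Sanity check: d^2/dx^2 sin(x) = -sin(x), evaluated at x = 1:
approx = second_derivative(math.sin, 1.0)
print(round(approx, 6))  # -0.841471, i.e. ≈ -sin(1)
```

In higher dimensions the vector Laplacian applied component-wise generalizes this same stencil to each spatial axis.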
Quite frankly, this is the equation that will one day be recognized as one of the biggest scientific breakthroughs of the 21st century, because there is simply no argument to be made against such a simple and straightforward application of the vector Laplacian.
And this equation allows us to define higher order Laplace and Poisson equations, like for the velocity field [v]:
[a] = d[v]/dt = -k Delta [v],
[j] = d[a]/dt = -k Delta [a].
This in contrast to what has been done heretofore, namely using the grad, div and curl operators to define fields (Maxwell, Navier-Stokes); no one managed to work directly with the vector Laplacian Delta to tie all things together. And whereas both Maxwell and Navier-Stokes are incomplete first order models, we can now formulate a second order model using higher order math.
One of the results of that is that the rather complex wave equation,
( Delta - 1/c^2 d^2/dt^2 ) [v](r,t) = 0
can be simplified to:
[j]/c^2 + [a]/k = 0,
illustrating the expressive power of this math and showing that we do need a second order model in order to describe space-time dynamics properly and completely.
Read all about it in my (very preliminary) notebook:
ChatGPT:
"In summary, while Maxwell’s equations provide a mathematically valid formulation, the new model offers a more physically consistent framework by rigorously separating linear and angular components, avoiding the blending of different types of behavior and ensuring adherence to fundamental principles of vector calculus."
Relevant answer
Answer
AMO: Good move to bring out the main subject of discussions --- "practical measurements or technical ..." --- since the other two on Einstein's SR-theory seem to be more about personal relations, where many participants descend to a lower level and start throwing rotten apples at each other. The way the envious part of mankind has done with Einstein since 1905, thus personifying themselves as Frankenstein's monster.
  • asked a question related to Vectorization
Question
2 answers
My question concerns the Qiagen™ Spin Miniprep kit and the purification of the low-copy pACYC vector from the E. coli DH10b genetic background.
When trying to purify pACYC vectors from the E. coli DH10b genetic background, I have repeatedly noticed that my cell pellet isn't lysed but redissolved after the addition of buffers P2 and P3 to buffer P1. The expected floating cloud of cell debris isn't generated, and the LyseBlue reagent produces a strawberry-yoghurt-like color instead of the expected indigo blue. Surprisingly, this problem only occurs with the DH10b background.
Has anyone ever encountered this problem?
Thanks in advance!
Relevant answer
Answer
Thanks for the suggestion!
  • asked a question related to Vectorization
Question
6 answers
Based on general relativity, gravity causes curvature in space and time. In comparison to the structure of an electric field, what would be the field vector shape and structure of the gravity field? How does it affect far-distant objects?
Relevant answer
Answer
nevertheless, the structure of the gravitational field itself actually IS of interest!
Imagine a 3D-grid of scalar voltage values. Between neighbors a simple balancing mechanism works per time-step: take-over the mean value of all 6 neighbors' values.
This is a non-linear, dispersive medium, allowing voltage waves to propagate.
If you simulate this grid, e.g. with a DIRAC pulse in the center, you get a sphere wave propagation.
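A minimal NumPy sketch of the per-time-step balancing rule described above (grid size and step count are arbitrary choices for illustration, not from the original text):

```python
import numpy as np

def balance_step(v):
    """One time-step: every interior cell takes the mean of its 6 face neighbors."""
    out = v.copy()  # boundary cells are left unchanged
    out[1:-1, 1:-1, 1:-1] = (
        v[:-2, 1:-1, 1:-1] + v[2:, 1:-1, 1:-1] +
        v[1:-1, :-2, 1:-1] + v[1:-1, 2:, 1:-1] +
        v[1:-1, 1:-1, :-2] + v[1:-1, 1:-1, 2:]
    ) / 6.0
    return out

# A Dirac-like pulse in the center of a small grid:
grid = np.zeros((21, 21, 21))
grid[10, 10, 10] = 1.0
for _ in range(5):
    grid = balance_step(grid)
# The pulse spreads outward from the center; the total stays ~1
# while the disturbance is still far from the boundary.
print(grid.sum())
```

This only illustrates the update rule itself; the physical interpretation (dispersion, Planck time, etc.) is the author's.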
One obvious consequence is a smallest wave period time, which establishes. No wave can have a smaller wave period time than this. This is mapped to our PLANCK time tₚₗ.
The second consequence is the constant propagation speed of the voltage wave periods within the medium. Certainly, this is mapped to our c.
The third consequence is that superposing voltage waves can interact, they can transport momentum mutually, because the medium is non-linear, dispersive.
The fourth consequence is a very tiny redshift of forwarding wave periods. There is a factor K = 2Gmᵥₑᵥ²/(ħc) = 8.13434(18)∙10⁻³⁴ by which each wave period gets successively longer and longer.
Yet, in the equation for the constant K, one can already see the dependency on gravitation: G.
I.e., since K is a structural constant, VEV is directly dependent on G.
VEV is the basis of all particles, also, determines the base of their sizes.
VEV is the Vacuum Expectation Value, sometimes also called ZPE, the Higgs-field's energy, which is the base of all particles' masses, thus, of their sizes -> See more below, related to gravitational mechanism...
K is a structural constant: the redshifted voltage waves, focused at a central PLANCK-oscillator (with the smallest wave period, as explained above), at a certain number of periods, build a spherical shell due to resonance: the energy of the shell's standing wave (opposite points of sphere) is exactly the "beat"-frequency-related energy of the redshifted arriving wave's energy, arriving from the opposite point, and the local PLANCK-energy, thus, resonancing.
But, how does gravitation act, based on waves and momentum?
There is a fifth consequence: an energy-backflow, exactly backwards, due to energy conservation: forward energy decreases per wave period by K∙E, thus, K∙E is flowing backward per wave period. E is the current wave period's energy.
Assuming a PLANCK-oscillator, all its radiated voltage waves send back K∙E per wave period. The complete energy-backflow will focus at the PLANCK-oscillator again, but delayed.
To accelerate such oscillator, the focus has to be displaced!
Since the waves of the energy-backflow of one oscillator interact with other waves of other energy-backflows of other oscillators, their backward paths can be slightly different; they can be "bent" by the other energy-backflow momenta. The consequence is a slightly shifted focus of the returned energy: the oscillator accelerates, delayed.
Thus, we "feel" inertia when accelerating masses...
Hence, each of the VEV-oscillators, of the "vacuum", the Higgs-field, are a resonance of the PLANCK-oscillator, QM-fluctuations at PLANCK-level.
Particles are resonances of the VEV-oscillator.
Gravitation is the result of the energy-backflow of each particle.
Now, the structure of the gravitational field around each particle at rest is the energy-backstream towards the particle's focus.
If the particle is under acceleration, its energy-backstream focuses slightly off-centered (or displaced) from the previous focus.
  • asked a question related to Vectorization
Question
2 answers
In the computation of the surface gravity for the BTZ black hole, I cannot reproduce the result in the literature, e.g. Eq. (1.15) in gr-qc/9506079; I cannot even get the null-surface result, i.e. $\xi_a\xi^a=0$ for the horizon Killing vector $\xi_a$ (see Eq. (1.13) in gr-qc/9506079). Could the results in gr-qc/9506079 be incorrect?
Relevant answer
Answer
Stam Nicolis Thanks. I have found out what was wrong: in Mathematica, I used $\xi_a$ to represent the Killing vector instead of $\xi^a$, which was wrong.
  • asked a question related to Vectorization
Question
4 answers
I am working with the QCADesigner tool. As soon as I click on "simulation type setup" the tool closes abruptly, so I am unable to set up a vector table for simulation. Please help me with this.
Relevant answer
Answer
Install the GTK environment in the same path.
  • asked a question related to Vectorization
Question
1 answer
Simplifying the dataset is a crucial step in managing large datasets effectively in QSWAT and QGIS. Here’s a detailed guide on how to simplify your dataset:
1. Reduce DEM Resolution
Why: A high-resolution DEM (Digital Elevation Model) provides detailed topography but can be resource-intensive. Reducing the resolution can help balance detail with performance.
How:
  • Resample the DEM: Open QGIS and load your DEM. Go to the menu "Raster > Conversion > Translate (Convert Format)". In the dialogue box, click on the "..." button next to "Additional command-line arguments" and enter the following argument to resample the DEM: -tr X Y, replacing X and Y with the desired pixel size (e.g., 100 100 for a 100-meter resolution). Save the output file and click "Run".
Effect: This reduces the DEM's resolution, decreasing the number of cells, which speeds up processing without significant loss of overall watershed characteristics.
2. Clip the Dataset
Why: If your study area is a small portion of a large dataset, processing the entire dataset is unnecessary. Clipping reduces the area to only what's required.
How:
  • Clip the DEM: In QGIS, load the DEM and the boundary shapefile of your study area. Go to "Raster > Extraction > Clip Raster by Mask Layer". Select your DEM as the input layer. Choose your boundary shapefile as the "Mask Layer". Check the option "Match the extent of the clipped raster to the mask layer". Save the clipped raster and click "Run".
  • Clip Vector Layers (e.g., Land Use, Soil Data): Load your vector layers (e.g., land use, soil data) and the boundary shapefile. Go to "Vector > Geoprocessing Tools > Clip". Choose your vector layer as the "Input Layer" and your boundary as the "Overlay Layer". Save the output file and click "Run".
Effect: Clipping the dataset reduces its size, making it easier and faster to process.
3. Simplify Vector Layers
Why: Vector layers with high vertex density (e.g., detailed polygons) can slow down processing. Simplifying the geometry reduces the number of vertices.
How:
  • Simplify Polygons: Go to "Vector > Geometry Tools > Simplify Geometries". Select the vector layer you want to simplify. Adjust the tolerance level (higher values remove more vertices). Save the output and click "Run".
Effect: Simplifying reduces the complexity of vector layers, improving processing speed without a significant loss of accuracy.
4. Use Raster Compression
Why: Large raster files (like DEMs or satellite images) can be compressed to reduce file size without losing much detail.
How:
  • Compress Rasters: Go to "Raster > Conversion > Translate (Convert Format)". Under "Advanced Parameters", add: -co COMPRESS=LZW. Save the output file with the new compression settings.
Effect: Compression reduces file size, making it faster to load and process, especially for large areas.
5. Aggregate Raster Data
Why: If detailed raster data is not necessary, aggregating it to a lower resolution can reduce processing time.
How:
  • Aggregate Raster: Go to "Raster > Analysis > Aggregate". Choose the input layer (e.g., land use raster). Set the aggregation factor (e.g., 2x2 cells to one cell). Save the output and click "Run".
Effect: Aggregating reduces the number of raster cells, simplifying the dataset.
6. Remove Unnecessary Layers
Why: Having too many layers loaded in QGIS can slow down the software, especially when handling large datasets.
How:
  • Unload Unused Layers: Review all layers currently loaded in QGIS and remove any that aren't necessary for your current task.
Effect: This reduces the memory load and improves QGIS’s performance.
7. Simplify Using External Tools
Why: Some tools outside of QGIS (like GDAL or Python scripts) might handle large datasets more efficiently for specific preprocessing tasks.
How:
  • Use GDAL Commands: GDAL offers command-line tools to simplify, clip, and reproject large datasets efficiently. These can be run directly from the command line or integrated into a script for batch processing.
Effect: GDAL can handle large datasets more efficiently and can be used for preprocessing before loading the data into QGIS.
Summary
By simplifying your dataset, you reduce the computational load on QGIS and QSWAT, leading to fewer errors and smoother processing. Start by reducing the DEM resolution, clipping to your area of interest, and simplifying vector geometries. These steps will help make the dataset more manageable without compromising the accuracy needed for your analysis.
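The raw command-line arguments quoted in steps 1 and 4 can also be run through GDAL directly, as step 7 suggests. A Python sketch that only assembles the command lines (the file names are placeholders; the flags are standard gdal_translate/gdalwarp options):

```python
def resample_cmd(src, dst, x_res, y_res):
    """gdal_translate call that resamples a raster to a coarser pixel size (step 1)."""
    return ["gdal_translate", "-tr", str(x_res), str(y_res), src, dst]

def clip_cmd(src, dst, mask_shp):
    """gdalwarp call that clips a raster to a boundary shapefile (step 2)."""
    return ["gdalwarp", "-cutline", mask_shp, "-crop_to_cutline", src, dst]

def compress_cmd(src, dst):
    """gdal_translate call that rewrites a raster with LZW compression (step 4)."""
    return ["gdal_translate", "-co", "COMPRESS=LZW", src, dst]

# Placeholder file names; pass any of these lists to subprocess.run() to execute.
print(" ".join(resample_cmd("dem.tif", "dem_100m.tif", 100, 100)))
```

Building the argument lists in Python like this makes it easy to batch the same preprocessing over many tiles before loading anything into QGIS.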
Relevant answer
Answer
INRE:"... without significant loss of overall watershed characteristics"
Actually, depending on the characteristics of terrain relief in a particular area of interest, resampling will not only lose data, it can introduce artifacts that totally corrupt the hydrodynamic utility of the DEM, especially one that has already been post-processed for input to a hydro model. That's why NASA released versions of the old SRTM DEM, each to preserve various characteristics ( height, slope, curvature, etc ) depending on use ( https://lpdaac.usgs.gov/product_search/?collections=MEaSUREs+NASADEM&status=Operational&view=list )
  • asked a question related to Vectorization
Question
1 answer
I am trying to add 3 fragments to a vector. All 3 fragments are different sizes (125 bp, 4578 bp, 1445 bp), but I can't obtain the correctly assembled plasmid. I use 1 µL of each fragment and backbone, and I add the total (4 µL) to the Gibson reaction (2x concentration). I tried chemical transformation and electroporation to get my Gibson reaction into the cells, and I don't observe any colonies. What am I doing wrong, and what should I do?
Relevant answer
Answer
Try sequencing the fragments before assembly to verify their integrity.
Consider performing a cleanup or buffer exchange to mitigate toxicity if you are transforming bacteria after the assembly.
For optimal annealing, ensure the DNA fragments have correct overlapping ends and that the reaction temperature is accurately maintained.
  • asked a question related to Vectorization
Question
3 answers
Let me share a quote from my own essay:
"Dynamic flows on a seven-dimensional sphere that do not coincide with the globally minimal vector field, but remain locally minimal vector fields of matter velocities, we interpret as physical fields and particles. At the same time, if in the space of the evolving 3-sphere $\mathbb{R}^{4}$ the vector field forms singularities (compact inertial manifolds in which flows are closed), then they are associated with fermions, and if flows are closed only in the dual space $\mathbb{R}^{4}$ with an inverse metric, then the singularities are associated with bosons. For example, a photon is a limit cycle (circle) of a dual space, which in Minkowski space translationally moves along an isotropic straight line lying in an arbitrary plane $(z,t)$, and rotates in the planes $(x,t)$, $(y,t)$." (p. 12 MATHEMATICAL NOTES ON THE NATURE OF THINGS)
Relevant answer
Answer
Unlike the Kaluza theory, where the fifth dimension serves as a source of electromagnetic vector potential, we have a ring (circle) in additional dimensions, the movement of which in Minkowski space is equivalent to the movement of a polarized photon. However, this does not mean that our dynamical system is unable to cope, like Kaluza's theory, with the assignment of a vector potential.
  • asked a question related to Vectorization
Question
3 answers
Near linear algebra textbooks.
Relevant answer
Answer
I am using " Linear Algebra & Differential Equations by Stephen Goode (Author), Scott Annin (Author)"
  • asked a question related to Vectorization
Question
6 answers
I used PCR to add a few nucleotides to the 3' end of a gene on a pc3.1 plasmid and produced a linear vector with 18 bp overlaps at the 5' and 3' ends. Then I used DpnI to digest the template strand, transformed the linearized vector into BL21, plated it, picked a single clone and continued to culture it in LB medium with kanamycin, and then handed it over to a sequencing company. The sequencing company said that my bacteria did not grow when further expanded and could not be sequenced. So I extracted the plasmid and sent it for sequencing again, and it still showed no signal. I then asked the sequencing company to sequence just the Kan resistance gene, and it still showed no signal. If the antibiotic had degraded, my negative control (no DNA transformed) would have grown, but it produced no colonies, which shows that the antibiotic worked. I am very confused: if the plasmid was not successfully transformed, why can the clone grow on the selective plate?
Relevant answer
Answer
Sounds like you'll need to start over from the PCR step.
One colony on a plate can be contamination with another species of bacteria or another strain of your bacteria (anyone else using kan resistance in your lab?). Also, mutations that allow antibiotic resistance happen at a low, but not 0 level.
Remember, your goal is to make the plasmid as part of a bigger project, not to chase down exactly what went wrong.
Good luck!
  • asked a question related to Vectorization
Question
3 answers
Vector systems are essential tools in genetic engineering used to introduce foreign genes into host cells for purposes like gene cloning, gene expression, or therapeutic applications. The choice of vector system significantly influences both the efficiency and safety of these processes.
Relevant answer
Answer
Vector systems in genetic engineering, such as plasmids and viral vectors, play crucial roles in determining the efficiency and safety of gene cloning. Here’s how they impact both aspects:
Efficiency
  1. Gene Delivery:Plasmids: Plasmids are circular DNA molecules that can replicate independently within a host cell. They are easy to manipulate and can carry relatively large DNA inserts. Their ability to autonomously replicate in host cells makes them efficient tools for gene cloning, especially in bacteria. Viral Vectors: Viral vectors are derived from viruses and have evolved mechanisms to efficiently enter host cells and deliver their genetic material. They are particularly effective in transducing a wide range of cell types, including those that are difficult to transfect with plasmids. Some viral vectors can integrate the cloned gene into the host genome, ensuring long-term expression.
  2. Expression Levels:Plasmids: The expression level of the cloned gene can be controlled by choosing appropriate promoters and copy number of the plasmid. High-copy-number plasmids can lead to higher gene expression but may also impose a metabolic burden on the host. Viral Vectors: Viral vectors, especially those derived from lentiviruses or adenoviruses, can achieve high levels of gene expression. They are often used when stable or high expression of the gene of interest is required.
  3. Scalability:Plasmids: Plasmids are easily propagated in bacterial cultures, making them suitable for large-scale gene cloning projects. Viral Vectors: Viral vectors require more complex production processes, often involving cell cultures, which can be more challenging to scale up.
Safety
  1. Insertional Mutagenesis: Plasmids: Generally, plasmids do not integrate into the host genome, which reduces the risk of insertional mutagenesis (disruption of host genes due to integration). However, this also means that gene expression may be transient unless the plasmid is maintained. Viral Vectors: Some viral vectors, such as retroviruses and lentiviruses, integrate into the host genome, which can lead to insertional mutagenesis. This poses a risk of disrupting important genes, potentially leading to oncogenesis (cancer formation).
  2. Immune Response: Plasmids: Plasmids typically provoke a minimal immune response, making them safer for certain applications, particularly in gene therapy where repeated administration may be required. Viral Vectors: Viral vectors can trigger immune responses, especially adenoviruses, which are known for their strong immunogenicity. This can limit their use in clinical applications and may necessitate immune suppression strategies.
  3. Toxicity: Plasmids: Plasmids are generally non-toxic and well tolerated by host cells. However, high-copy-number plasmids can impose a metabolic burden on the host, affecting cell growth and viability. Viral Vectors: Some viral vectors can be toxic to host cells, particularly at high doses. The toxicity depends on the type of virus, the method of vector preparation, and the target cell type.
Conclusion
The choice between plasmid and viral vectors depends on the specific requirements of the gene cloning project. Plasmids are often preferred for simple, non-integrating, and non-toxic applications, particularly in bacterial systems. Viral vectors, on the other hand, are favored for applications requiring efficient delivery and high expression in a wide range of cell types, especially in mammalian systems, albeit with considerations for safety and potential immune responses.
  • asked a question related to Vectorization
Question
1 answer
In quantum fluids the phase of a wavefunction is smooth and can be represented by a topological manifold of genus 0, the velocity creates a vector field over this manifold. Then can the hairy ball theorem be directly applied to state that there must exist a point where the vector field creates a vortex, showing a purely mathematical reason for the formation of vortices?
Relevant answer
Answer
No. The hairy ball theorem simply leads to a condition on the phase of the wavefunction, so that it remains single-valued. Furthermore, the wavefunction refers to the state of the system in phase space, not in spacetime.
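For context, the single-valuedness condition on the phase mentioned above is the standard circulation-quantization statement for a superfluid with wavefunction $\psi = |\psi|\,e^{i\varphi}$ (a textbook result, not specific to this thread):

```latex
% Single-valuedness of \psi around any closed loop C forces
\oint_C \nabla\varphi \cdot d\boldsymbol{\ell} = 2\pi n, \qquad n \in \mathbb{Z},
% and, since the superfluid velocity is v = (\hbar/m)\nabla\varphi,
% the circulation is quantized:
\oint_C \boldsymbol{v} \cdot d\boldsymbol{\ell} = \frac{n h}{m}.
```

Vortices correspond to loops with $n \neq 0$; whether any such loop exists is a dynamical question, not something forced by the topology of the sphere alone.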
  • asked a question related to Vectorization
Question
2 answers
Hello, I am unable to find any plasmid map for the pHAL vector. I have searched Addgene and other vector databases but still couldn't find anything. Can anyone help me with this? Thanks.
Relevant answer
Answer
Hello Alexandra Johnson, I am specifically looking for the pHAL-14 and pHAL-30 vectors.
  • asked a question related to Vectorization
Question
6 answers
Hi all,
Does anyone have experience with solving nonlinear eigenvalue problems? I have run into a special case.
The problem is to solve the eigenvalue equation
H(x) x = a x
where a is a real number, x is an eigenvector, and H(x) is a matrix that depends on x. The aim is to find a vector x and a matrix H(x) satisfying the equation such that the eigenvalue a is minimal. In most cases, H is far from Hermitian. The problem also differs from the conventional Kohn-Sham equation, where most of the eigenvectors contribute to the construction of H; here, H(x) may have many eigenvectors, but it depends on only one of them (x).
The method I have tried is the self-consistent field (SCF) approach: determine a "temporary" H from a random or guessed x, solve this temporary H to obtain a new x, determine a new H from the new x, and so on. Currently, the major problem is that the matrix is not Hermitian, so no real eigenvalue and corresponding eigenvector can be selected to define the new H. Since the target is a real number a and a vector with real elements, I hope that any temporary H can be composed of real numbers as well.
Does anyone have an idea about how to solve such a problem? Any suggestions will be appreciated.
Thanks!
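Since the question describes the iteration in words, here is a minimal Python sketch of that self-consistent loop on a hypothetical toy H(x). The matrix in `h_of_x`, the mixing factor, and the choice of tracking the eigenpair with the smallest real part are all assumptions for illustration, not the asker's actual problem:

```python
import numpy as np

def h_of_x(x):
    # Hypothetical toy dependence of H on x: a fixed non-symmetric
    # real matrix plus a small rank-one term built from x.
    h0 = np.array([[2.0, 0.3, 0.0],
                   [0.1, 3.0, 0.2],
                   [0.0, 0.4, 4.0]])
    return h0 + 0.1 * np.outer(x, x)

def scf_min_eig(h_fun, n, tol=1e-10, max_iter=500, mix=0.5):
    """Self-consistent iteration for H(x) x = a x: at each step solve
    the temporary H, keep the eigenpair with the smallest real part,
    retain only its real component, and damp the update with mixing."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    a = None
    for _ in range(max_iter):
        w, v = np.linalg.eig(h_fun(x))
        k = int(np.argmin(w.real))          # lowest eigenvalue
        a_new = float(w[k].real)
        x_new = np.real(v[:, k])            # keep the real part
        x_new = x_new / np.linalg.norm(x_new)
        if x_new @ x < 0:                   # fix the sign ambiguity
            x_new = -x_new
        x_new = mix * x_new + (1.0 - mix) * x   # damping / mixing
        x_new /= np.linalg.norm(x_new)
        if a is not None and abs(a_new - a) < tol \
                and np.linalg.norm(x_new - x) < tol:
            a, x = a_new, x_new
            break
        a, x = a_new, x_new
    return a, x

a, x = scf_min_eig(h_of_x, 3)
residual = np.linalg.norm(h_of_x(x) @ x - a * x)
```

If plain mixing stalls or oscillates, Anderson acceleration or a DIIS-style extrapolation on x is a common next step for SCF loops like this.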
  • asked a question related to Vectorization
Question
2 answers
The question "What are the most common challenges in selecting an appropriate vector for cloning a large gene?" addresses the complexities involved in choosing the right vector for successfully cloning and expressing large genes in various host organisms.
Relevant answer
Answer
Selection of an appropriate vector:
Check the GC content.
Look for restriction cleavage sites.
Check the open reading frame, and avoid frameshift mutations when inserting the gene into the vector.
  • asked a question related to Vectorization
Question
2 answers
Error 1: Invalid setting for output port dimensions of 'fccu/Mux6'. The dimensions are being set to 1. This is not valid because the total number of input and output elements are not the same.
Error 2: Error in port widths or dimensions. 'Input Port 1' of 'fccu/Zero-Order Hold' is a one dimensional vector with 1 elements.
Relevant answer
Answer
Exact error
These are the exact error messages I am getting:
Error 1: Invalid setting for output port dimensions of 'fccu/Mux6'. The dimensions are being set to 1. This is not valid because the total number of input and output elements are not the same.
Error 2: Error in port widths or dimensions. 'Input Port 1' of 'fccu/Zero-Order Hold' is a one dimensional vector with 1 elements.
The Simulink file is uploaded above. I am also going to upload the MATLAB file.
Expected behavior
What I am trying to do is offset-free nonlinear MPC, comparing different formulations, so I have a MATLAB folder that contains the controller, the continuous state equation, the discrete equation, and the NMPC controller.
Actual behavior
I expect the code to track the setpoint and remove the offset, but the errors above prevent the simulation from running, and they only occur in MATLAB R2024a.
  • asked a question related to Vectorization
Question
5 answers
Hello, everyone
Do you know of any techniques to improve reprogramming efficiency if I use a kit (CytoTune-iPS 2.0 Sendai vector) that expired in July 2021?
Thank you for your answers!
Relevant answer
Answer
Hi, would you like to try our Sendai iPSC reprogramming kit? It is cheaper than Thermo Fisher's, and we have just developed a new version with higher induction efficiency. www.meltonbiomedtech.com
  • asked a question related to Vectorization
Question
2 answers
Dear all,
I have tried to use a mammalian expression vector system to overexpress a protein of interest in the AGS cell line. Cells were selected based on puromycin resistance conferred by successful vector integration. However, despite successful antibiotic selection, I haven't detected overexpression of the desired protein in this cell line. This has now happened for two different vectors encoding different target proteins.
Does anyone have any tips to reinforce the expression of the desired proteins or a possible explanation for this?
Thank you in advance,
Andreia Peixoto
Relevant answer
Answer
Could be many reasons. Are you sure the coding region is intact, that the promoter is active, or that your antibody is good enough and specific enough? What kind of vector is this, plasmid or lentivirus? Have you tried transient expression just to see if you get anything at all? How do you prepare protein lysates? Is your protein insoluble or chromatin associated? So many reasons....