Cache - Science topic
Explore the latest questions and answers in Cache, and find Cache experts.
Questions related to Cache
I am using a Core™ i9-13900HX processor (36 MB cache, 24 cores, 32 threads, up to 5.40 GHz) with an NVIDIA® GeForce RTX™ 4070 (8 GB GDDR6) and 32 GB of DDR5 RAM.
How can I boost my calculation to use the maximum number of cores and the GPU?
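How this is done depends entirely on the software in use. For a Python-style workload, one common first step is to spread independent work units over all CPU cores with the standard library; using the GPU generally requires a CUDA-aware library (e.g., CuPy or PyTorch) or an application built with GPU support. A minimal sketch, assuming an embarrassingly parallel calculation; simulate() is a hypothetical stand-in for one unit of the real work:

import multiprocessing as mp

def simulate(x):
    # Placeholder for one independent unit of the real calculation.
    return x * x

if __name__ == "__main__":
    # cpu_count() reports all 32 logical threads on the 13900HX.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(simulate, range(10000))
    print(len(results))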
Dear ResearchGate Support Team,
I am writing to report a problem I am encountering while attempting to upload my research paper titled "Physicochemical analysis of trigona honey produced by Tetragonula biroi in Soppeng Regency, Indonesia" with DOI: 10.26656/fr.2017.8(5).281.
Despite being the sole author of this paper, I am consistently receiving an error message stating, "You can only add research to your profile when you're the author." I have double-checked all the information I have entered, including my name, affiliation, and the paper's title and DOI, to ensure they are accurate.
I have already tried the following troubleshooting steps:
Checked my internet connection: My internet connection is stable.
Verified file format: I am using a PDF file format.
Checked the file size: The file size is within the allowed limits.
Cleared my browser cache and cookies.
Unfortunately, these steps have not resolved the issue.
I would be grateful if you could investigate this matter further and provide me with guidance on how to successfully upload my research paper. I have attached a copy of my paper for your reference.
Thank you for your prompt attention to this matter.
Sincerely,
Andi Sitti Rahma
I keep receiving a "verifying you're human" check when I want to search, or after that an error about Firefox resending data. Why? I have also logged in and tried clearing the cache and my password and entering it again, but it is the same.
Hello, can anybody assist me in increasing the number of cores used by the Gaussian 09 program? I have a multiprocessor system with an Intel® Core™ i5-1135G7 (up to 4.2 GHz with Intel® Turbo Boost Technology, 8 MB L3 cache, 4 cores) and 8 GB RAM. I am using Avogadro and Gaussian, so kindly advise me on where to put the commands.
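If this is Gaussian 09, core count and memory are normally requested with Link 0 commands at the very top of the input (.gjf/.com) file rather than in Avogadro itself; in Avogadro they can be typed into the input-generator preview before saving. A sketch, assuming a 4-core job and leaving headroom for the OS on an 8 GB machine (route line and molecule are placeholders):

%NProcShared=4
%Mem=4GB
# B3LYP/6-31G(d) Opt

water optimization

0 1
O   0.000   0.000   0.000
H   0.757   0.586   0.000
H  -0.757   0.586   0.000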
Hi Everyone,
GROMACS version: 2022.1 (single precision)
GROMACS modification: Yes/No
Today I am running my second protein-ligand simulation (read: I am new to GROMACS).
I am using a Linux server cluster with the following configuration:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 1200.042
BogoMIPS: 4594.33
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
I am running a 500 ns protein-ligand simulation of a system with the following composition:
Compound    #atoms
Protein     140 residues
Ligand      71
SOL         317409 H2O molecules
SOD         9
I used an md.mdp file as follows:
title = Protein-ligand complex MD simulation
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 250000000 ; 250000000 * 2 fs = 500000 ps (500 ns)
dt = 0.002 ; 2 fs
; Output control
nstenergy = 5000 ; save energies every 10.0 ps
nstlog = 5000 ; update log file every 10.0 ps
nstxout-compressed = 5000 ; save coordinates every 10.0 ps
; Bond parameters
continuation = yes ; continuing from NPT
constraint_algorithm = lincs ; holonomic constraints
constraints = h-bonds ; bonds to H are constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighbor searching and vdW
cutoff-scheme = Verlet
ns_type = grid ; search neighboring grid cells
nstlist = 20 ; largely irrelevant with Verlet
rlist = 1.2
vdwtype = cutoff
vdw-modifier = force-switch
rvdw-switch = 1.0
rvdw = 1.2 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
rcoulomb = 1.2
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
; Temperature coupling
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein_ligan SOL_SOD ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
; Pressure coupling
pcoupl = Parrinello-Rahman ; pressure coupling is on for NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction is not used for proteins with the C36 additive FF
DispCorr = no
; Velocity generation
gen_vel = no ; continuing from NPT equilibration
I used the command nohup mpiexec -np 24 gmx_mpi mdrun -deffnm md -v, and it shows that it will not finish until Thu Jun 13 04:47:54 2024, which is unbearably long. Please suggest anything to speed up the process.
I would be grateful for any suggestions/help.
Thanks in advance
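With a system this large (~317,000 water molecules), a large share of the runtime typically goes to PME electrostatics, so on 2 × 12 cores the usual first experiments are to dedicate some ranks to PME and to enable pinning and dynamic load balancing. A sketch, assuming the same 24-rank MPI setup; the -npme value is only a starting guess and can be tuned with gmx tune_pme:

nohup mpiexec -np 24 gmx_mpi mdrun -deffnm md -npme 4 -dlb yes -pin on -v &

The performance table at the end of md.log shows where the time is going and whether the PP/PME split is balanced.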
For a protein of around 1000 amino acids, I am not able to get beyond the energy minimization step because it shows "segmentation fault (core dumped)". I have tried the "sudo get update" and "sudo clean all" commands to clear the cache memory, but nothing works.
Could someone please help me solve this via any other actions?
Hello,
I need some statistics on the number of works on edge caching over the past 10 years. Would you please help me with how I can obtain them?
Both SRAM and flip-flops are volatile memory elements. Are there any applications where both are used?
ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace.
I need a 5G dataset with KPIs helpful for optimal node prediction to enhance the user experience.
DNSSEC is the security extension of DNS, and it is recommended to enable DNSSEC in all zones to mitigate DNS cache-poisoning attacks. The KSK (key-signing key) and ZSK (zone-signing key) are used to generate the RRSIGs of the zone records, and the algorithm used to generate the KSK/ZSK is very important for generating strong RRSIGs. Some zones have used SHA-1 as the security algorithm for the KSK and ZSK. As SHA-1 is an outdated algorithm, the key algorithm in those zones needs to be changed. If anyone has experience with the DNSSEC key algorithm rollover process, please let me know.
Thanks
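For what it's worth, with BIND 9 tooling an algorithm rollover is typically done by generating a new KSK/ZSK pair with a modern algorithm (e.g., algorithm 13, ECDSAP256SHA256), double-signing the zone with both old and new keys until the new DS record has propagated at the parent, and only then retiring the SHA-1 keys; RFC 6781 covers the timing. A sketch with example.com as a placeholder zone:

dnssec-keygen -a ECDSAP256SHA256 -f KSK example.com
dnssec-keygen -a ECDSAP256SHA256 example.com

The old keys must stay published and signing until every cached RRSIG and the old DS have expired, otherwise validating resolvers will see the zone as bogus.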
Information-centric networking brings novel benefits to managing networks by addressing content by name and exploiting in-network caching. Although it brings the benefit of efficient content management, I am interested in knowing the challenges it may cause in managing traditional networks.
It is not feasible to store transactional data in ontologies. As the data grows, the number of triples in the ontology will be in the millions or trillions. Due to the unstructured file format of ontologies, it is not possible to traverse ontologies with trillions of triples even with the help of the latest, fastest processor and unlimited memory (primary, cache, registers, etc.).
What do you say? I am waiting for your valuable comments.
Hi everyone,
I am installing GROMACS 2019 on my desktop. I could install GROMACS successfully, but when building the MPI version it shows the following error. I have tried all the possible fixes, searched online, and updated CMake, OpenMPI, and everything else, but with no good results. Can anyone please share his/her experience?
tests-2019.2/
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Error at cmake/gmxManageMPI.cmake:169 (message):
MPI support requested, but no MPI compiler found. Either set the
C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc), or
set the variables reported missing for MPI_C above.
Call Stack (most recent call first):
CMakeLists.txt:460 (include)
-- Configuring incomplete, errors occurred!
See also "/home/khan/gromacs-2019.2/build/CMakeFiles/CMakeOutput.log".
See also "/home/khan/gromacs-2019.2/build/CMakeFiles/CMakeError.log".
You have changed variables that require your cache to be deleted.
Configure will be re-run and you may have to reset some variables.
The following variables have changed:
CMAKE_C_COMPILER= gcc
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
REGRESSIONTEST_PATH
-- Build files have been written to: /home/khan/gromacs-2019.2/build
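This error usually just means CMake cannot find an MPI compiler wrapper such as mpicc. A sketch of one common fix, assuming Ubuntu with OpenMPI and the build directory shown in the log:

sudo apt-get install openmpi-bin libopenmpi-dev
cd /home/khan/gromacs-2019.2/build
rm -rf CMakeCache.txt CMakeFiles/    # the log says the stale CMake cache must be cleared
cmake .. -DGMX_MPI=on -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx

Pointing CMAKE_C_COMPILER/CMAKE_CXX_COMPILER at the MPI wrappers is exactly what the error message asks for.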
In NDN, a node wants to broadcast an Interest, and that Interest contains information about another node. The neighboring nodes should receive the Interest and store it in their cache without sending back any Data packet. In such a scenario, which of the following strategies is best?
- UDP multicast
- Publish-Subscribe
I want to implement a 5G network in which I will implement caching at the edge. What are the possible options? What simulators are currently available, and which one has the best learning curve?
I am trying to design a caching strategy for ICN-based IoT that will use popularity to decide which content to cache. Could someone kindly tell me a good way to measure popularity? A link to a research article or any reference to a mathematical method would be highly appreciated.
thanks in advance
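Request popularity in ICN studies is commonly modeled as Zipf-distributed, and a simple online estimator is an exponentially weighted moving average (EWMA) over per-interval request counts. A minimal sketch; class and parameter names are illustrative, not from any particular paper:

class PopularityEWMA:
    """EWMA of per-slot request counts as a content-popularity score."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight given to the most recent time slot
        self.window = {}     # request counts in the current slot
        self.score = {}      # smoothed popularity per content name

    def on_request(self, name):
        self.window[name] = self.window.get(name, 0) + 1

    def end_of_slot(self):
        # score = alpha * recent_count + (1 - alpha) * old_score
        for name in set(self.window) | set(self.score):
            c = self.window.get(name, 0)
            s = self.score.get(name, 0.0)
            self.score[name] = self.alpha * c + (1 - self.alpha) * s
        self.window.clear()

A caching node could then, for instance, cache a content item only when its score exceeds a threshold, trading responsiveness against stability via alpha.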
sudo scons FULL_SYSTEM=1 build/ALPHA/gem5.opt RUBY=true PROTOCOL=MOESI_hammer
./build/ALPHA/gem5.opt -d m5out/blackscholes --debug-flags=RubyCache --debug-file=trace.out.gz configs/example/fs.py --ruby --num-cpu=16 --l1i_size=32kB --l1d_size=32kB --l2_size=8MB --cpu-type=timing --restore-with-cpu=timing --script=run_scripts/blackscholes_16c_simsmall.rcS --checkpoint-at-end --kernel=/home/xx/gem5/full_system_images_ALPHA/binaries/vmlinux_2.6.27-gcc_4.3.4 --disk-image=/home/xxx/gem5/full_system_images_ALPHA/disks/linux-parsec-2-1-m5-with-test-inputs.img --max-checkpoints=5
I used --debug-flags=RubyCache.
Actually, I need trace data such as the cache memory address, CPU number, hit/miss, and read/write. Is --debug-flags=RubyCache correct, or is there another flag?
For example, in the case of a web application, an architectural description includes building the system out of databases, web servers, application servers, e-mail, and cache systems.
Current parallel BFS algorithms are known to have reduced time complexity. However, such analyses do not take into account synchronization costs, which grow steeply with the core count. These synchronization costs stem from communication due to data movement between cores, and from coherence traffic when using a cache-coherent multicore. What is the best parallel BFS algorithm available in this case?
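For context, most parallel BFS variants (including direction-optimizing BFS) parallelize the level-synchronous formulation, and the per-level barrier is exactly where the synchronization cost described above appears. A sequential sketch with the barrier point marked (illustrative Python, not a parallel implementation):

def bfs_levels(adj, source):
    """adj: dict mapping each vertex to a list of neighbors."""
    visited = {source}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:            # this loop is what gets split across cores
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    next_frontier.append(v)
        # A parallel version needs a barrier here before swapping frontiers,
        # plus a merge of per-core next_frontier lists and atomic updates to
        # visited -- the sources of the communication and coherence traffic.
        frontier = next_frontier
    return visited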
I'm currently conducting a simulation using CloudSim.
The objective is to compare performance between two architectures.
One is a Cloudlet system using computation caching based only on the popularity of task results among cloud users.
The other is a Cloudlet system using computation caching based jointly on the response time and computation time of a task as well as its popularity.
I finished the design of the two architectures, and I'm now trying to simulate them, but I need a task dataset for this.
Can I create such a task dataset myself? Or where can I get one?
Moreover, how can I extract information about the computation time and response time of tasks in the cloud?
Please help me.
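If no real trace is available, a synthetic task dataset is common in caching studies: draw each task's type from a Zipf-like popularity distribution so that popular task results recur and can be cached. A sketch; all field names and parameter values are illustrative, with lengths in the MI units CloudSim cloudlets use:

import csv
import random

def generate_tasks(n_tasks=1000, n_types=50, zipf_s=0.8, seed=42):
    rng = random.Random(seed)
    # Popularity of task type k is proportional to 1 / k^s (Zipf-like).
    weights = [1.0 / (k ** zipf_s) for k in range(1, n_types + 1)]
    rows = []
    for i in range(n_tasks):
        t = rng.choices(range(n_types), weights=weights)[0]
        rows.append({"task_id": i,
                     "type": t,                              # popular types repeat
                     "length_mi": rng.randint(1000, 20000),  # cloudlet length (MI)
                     "input_kb": rng.randint(100, 5000)})
    return rows

with open("tasks.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["task_id", "type", "length_mi", "input_kb"])
    writer.writeheader()
    writer.writerows(generate_tasks())

In CloudSim, computation time then follows from the cloudlet length divided by the VM's MIPS rating, and response time can be measured as a cloudlet's finish time minus its submission time.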
Content is cached at the edge of the network. What if an edge node fails? How do we handle this situation? Kindly answer with a proper citation.
I'm currently working on adding a caching mechanism to the AODV protocol. Here I'm going to cache the RREQ packet and reduce the route discovery process. I'm stuck at this point. I need to know a method to extract the information from RREQ packets, and guidance on doing this task.
I'm implementing a route caching mechanism for the AODV protocol. To do this, I want to add a separate structure for each node, extract information from RREQ packets, and add it to the structure created. How can this be done using NS2?
I want to add caching and compression features to mobile-based web services.
I need to evaluate the performance of a peer-to-peer file-sharing system when different cache algorithms are implemented in the network. I am stuck at the point where I do not know which P2P simulation software allows changing the cache algorithm in the peer nodes to test the performance of the network.
I have proposed a caching algorithm for web content, and I want to simulate it using NS2. How can I start?
Hello, all. Is there any mechanism in the Xen scheduler to check the required cache of an individual VM on a dynamic basis? When a virtual machine is created, how does the Xen scheduler detect how much cache this particular VM requires, and how does it re-partition the cache according to each VM's requirement? Which function in the Xen source code do I need to check? I would appreciate your help.
Does the iSCSI protocol itself, or do open-source iSCSI implementations like open-iscsi, have a cache scheme?
Since an iSCSI initiator may access the same data many times, it should not have to fetch the data from the remote target over the network every time. So I am wondering whether iSCSI provides a scheme by which clients can use a local disk to cache hot data, so that when the accessed data is stored on the local cache disk, the initiator can get it directly from the local disk, just as a web browser cache does. Does anyone know about this? Thank you.
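As far as I know, the iSCSI protocol itself defines no client-side disk cache; on a Linux initiator the usual approach is to layer a block-level cache such as bcache (or dm-cache) between the iSCSI LUN and the filesystem. A sketch assuming bcache-tools, with a local SSD /dev/sdb as the cache and the iSCSI LUN appearing as /dev/sdc (device names are placeholders):

make-bcache -C /dev/sdb -B /dev/sdc   # -C: cache device, -B: backing device
mkfs.ext4 /dev/bcache0                # the combined cached block device

Hot blocks read from the LUN are then served from the local SSD on repeat access, which is essentially the browser-cache behavior described above.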
How can I convert NetCDF-4 files to NetCDF-3 files with nccopy?
My system is Ubuntu 14.04, and netcdf-4.3.3.1 has been installed.
_____________________terminal message_________________
root@xx-desktop:~/Desktop/cc# nccopy
nccopy: nccopy [-k kind] [-[3|4|6|7]] [-d n] [-s] [-c chunkspec] [-u] [-w] [-[v|V] varlist] [-[g|G] grplist] [-m n] [-h n] [-e n] [-r] infile outfile
[-k kind] specify kind of netCDF format for output file, default same as input
kind strings: 'classic', '64-bit offset',
'netCDF-4', 'netCDF-4 classic model'
[-3] netCDF classic output (same as -k 'classic')
[-6] 64-bit-offset output (same as -k '64-bit offset')
[-4] netCDF-4 output (same as -k 'netCDF-4')
[-7] netCDF-4-classic output (same as -k 'netCDF-4 classic model')
[-d n] set output deflation compression level, default same as input (0=none 9=max)
[-s] add shuffle option to deflation compression
[-c chunkspec] specify chunking for dimensions, e.g. "dim1/N1,dim2/N2,..."
[-u] convert unlimited dimensions to fixed-size dimensions in output copy
[-w] write whole output file from diskless netCDF on close
[-v var1,...] include data for only listed variables, but definitions for all variables
[-V var1,...] include definitions and data for only listed variables
[-g grp1,...] include data for only variables in listed groups, but all definitions
[-G grp1,...] include definitions and data only for variables in listed groups
[-m n] set size in bytes of copy buffer, default is 5000000 bytes
[-h n] set size in bytes of chunk_cache for chunked variables
[-e n] set number of elements that chunk_cache can hold
[-r] read whole input file into diskless file on open (classic or 64-bit offset format only)
infile name of netCDF input file
outfile name for netCDF output file
netCDF library version 4.3.3.1 of Nov 6 2015 20:09:00 $
NetCDF: Unknown file format
Location: file nccopy.c; line 1354
root@xx-desktop:~/Desktop/cc#
________________________________________________________
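Per the usage text above, the conversion itself is just the -3 (or -6) flag; the filenames here are placeholders:

nccopy -3 input.nc4 output.nc    # classic netCDF-3 output
nccopy -6 input.nc4 output.nc    # 64-bit-offset variant, for outputs over 2 GB

Two caveats: -3 fails if the netCDF-4 file uses features the classic model cannot represent (groups, compound types, etc.), and the "NetCDF: Unknown file format" error above usually means the input file is not a netCDF file at all, or the installed library was built without HDF5 support and so cannot open netCDF-4 files.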


Should caches be based on positioning?
Should recency and frequency of content manipulation be the major variables for caching in ICN?
What do you suggest, according to your experience?
Thank you.
I am planning to investigate problems related to the hardware implementation of cache memory replacement policies. I have read about various advanced replacement policies in publications. So far, most research focuses on improving on the Least Recently Used (LRU) replacement policy in terms of miss ratio, but without hardware implementation details.
So, I am hoping someone can recommend recent publications/reviews related to the hardware implementation of cache memory replacement policies.
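The standard hardware-friendly answer here is tree pseudo-LRU (PLRU), which replaces the full recency ordering of true LRU with one bit per internal node of a binary tree over the ways (n - 1 bits per set for n ways). A behavioral sketch for a 4-way set; this is illustrative Python, not RTL:

class TreePLRU4:
    """Tree-PLRU for a 4-way set: 3 direction bits instead of full LRU state.
    Each bit points toward the pseudo-LRU subtree (0 = left, 1 = right)."""
    def __init__(self):
        self.b0 = 0  # root: ways {0,1} vs ways {2,3}
        self.b1 = 0  # chooses between way 0 and way 1
        self.b2 = 0  # chooses between way 2 and way 3

    def touch(self, way):
        # On a hit or fill, point the bits on the path away from `way`.
        if way < 2:
            self.b0, self.b1 = 1, 1 - way
        else:
            self.b0, self.b2 = 0, 3 - way

    def victim(self):
        # Follow the direction bits to the pseudo-least-recently-used way.
        return self.b1 if self.b0 == 0 else 2 + self.b2

In hardware this is just three flip-flops per set and a few gates, which is why PLRU variants, rather than true LRU, are what most real L1 caches implement.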
As part of a study, I use a website (.NET) to administer an experiment via a tablet and 4G WiFi modem using Google Chrome. The experiment involves downloading a total of 70MB of images from the website as it goes from page to page, with each page downloading around 5MB of images each.
I have written an application cache manifest file "dvams.appcache" in which I list all of the image files and other resources which I wish to have cached locally on my tablet. In the aspx page using the image files, I have included the manifest attribute (manifest="dvams.appcache") in the HTML element, as directed.
However, when I run my experiments out in the field, Chrome (version 43) does not seem to take any notice of the manifest file, and the files do not persist locally on my tablet for more than a short period of time, after which Chrome behaves as if they have been removed from the cache.
In every experimental run I do, the browser continues to retrieve the files it already downloaded from the website. The speed of my 4G connection seems to vary markedly, and downloading these files time and time again can be excruciatingly slow, with pages taking as long as 2 or 3 minutes to download the required image files.
My theory is that the file cache size needs to be increased to accommodate the 120 MB of files I wish to have cached, or that an "offline mode" setting that used to exist in earlier versions of Chrome (which did appear to work) needs to be set somewhere. I have seen how a cache size parameter "--disk-cache-size=104857600" can be added as a parameter to the Chrome executable when called from Windows, but I cannot figure out how to accomplish this on Android 5.1.
Why are these cached files being deleted? How can I force my tablet to ALWAYS go to the cache for images it has already downloaded, and make these offline files persist permanently or until I manually tell Chrome to delete them?
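For reference, the manifest itself must begin with the literal line CACHE MANIFEST and be served with the text/cache-manifest MIME type, or Chrome silently ignores it; a sketch with placeholder paths:

CACHE MANIFEST
# v3 2015-07-01 -- change this comment to force clients to re-fetch

CACHE:
images/page1-photo1.jpg
images/page1-photo2.jpg

NETWORK:
*

Even with a valid manifest, Chrome can evict AppCache data under storage pressure rather than keeping it permanently, so a 70-120 MB image set may simply exceed what the browser is willing to persist on the tablet; that would also explain the silent re-downloads.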
From which file/log file can I get page modification information (dirty page information)?
I would like to add some optimization functions to the AOMDV routing protocol. I want to add the concept of cooperative packet caching, but to do that, I need the code for the implementation of AOMDV.
I'm trying to work out the typical pipeline stages of an L1 cache. The attached file describes a 3-cycle one, like those found in Silvermont, Jaguar, and Cortex-A9. The notation conventions are:
- blue for the address computation;
- yellow for the address translation;
- orange for the data access.
However, high-end CPUs such as Haswell, Bulldozer, and Cortex-A15 have a 4-cycle L1 cache access latency. Where does the fourth cycle come from? Could someone explain in detail what the four stages do?

I know the L1 is accessed first, before the L2 and so forth, but why? Is there anybody with a theoretical and practical reason?
The mapping between semantic data (e.g., file systems and namespaces, databases, object stores) and the devices that store the associated bytes and blocks that compose them has been one-way by design - from semantic data through pointers (e.g., inodes) to blocks of bytes - and the reverse association is rarely used, except for debugging. In semantic storage, the storage controllers (and in some cases the devices themselves) would know that a block belongs to a specific object, file, or table/record/field, and this reverse association could be used by new applications for performance optimization, security monitoring, and data protection, for example. The idea is not to revolutionize or change storage, but rather to allow for reverse mapping so that lower-level features can be semantically aware and new applications like intrusion detection systems can know that a block access is suspect, sub-optimal, or requires cache updates.
I am doing some power-aware work that can hugely benefit from it.
We are trying to evaluate the interconnect power of on-chip memory in an NoC.
Can I evaluate power for an NoC by measuring the interconnect length (for accessing data from the core to the different cache levels)?
I need help to resolve two issues.
1. Can I use the term "length" for accessing data from the core to the different cache levels?
2. Should I use the term "evaluate power", "power testing", or another term you suggest?
Can anyone suggest a few research articles that evaluate power in NoC interconnects, i.e., the power of the links between the router and the caches (L1, L2, L3) in an NoC?
Is there any easy-to-use simulator with object-oriented code for analyzing pipeline efficiency? Any suggestions of good papers regarding the above matter?
This is a hardware pipeline. I follow a mechanism in which the pipeline is kept full of instructions; there is no branch prediction, and a 100% cache hit ratio is assumed. Our mechanism is different from the branch delay slot and dynamic branch prediction.