Cache - Science topic

Explore the latest questions and answers in Cache, and find Cache experts.
Questions related to Cache
  • asked a question related to Cache
Question
2 answers
I am using a Core™ i9-13900HX processor (36 MB cache, 24 cores, 32 threads, 5.40 GHz) with an NVIDIA® GeForce RTX™ 4070 (8 GB GDDR6) and 32 GB of DDR5 RAM.
How can I speed up my calculations using the maximum number of cores and the GPU?
Relevant answer
Answer
Hi Robert,
I use a GPU on a supercomputer system with an RTX 3090.
Is it compatible with Abaqus 2023?
It seems that the GPU is not under load from Abaqus; how can I use all of my GPU's power?
  • asked a question related to Cache
Question
1 answer
Dear ResearchGate Support Team,
I am writing to report a problem I am encountering while attempting to upload my research paper titled "Physicochemical analysis of trigona honey produced by Tetragonula biroi in Soppeng Regency, Indonesia" with DOI: 10.26656/fr.2017.8(5).281.
Despite being the sole author of this paper, I am consistently receiving an error message stating, "You can only add research to your profile when you're the author." I have double-checked all the information I have entered, including my name, affiliation, and the paper's title and DOI, to ensure they are accurate.
I have already tried the following troubleshooting steps:
Checked my internet connection: My internet connection is stable.
Verified file format: I am using a PDF file format.
Ensured sufficient file size: The file size is within the allowed limits.
Cleared my browser cache and cookies.
Unfortunately, these steps have not resolved the issue.
I would be grateful if you could investigate this matter further and provide me with guidance on how to successfully upload my research paper. I have attached a copy of my paper for your reference.
Thank you for your prompt attention to this matter.
Sincerely,
Andi Sitti Rahma
Relevant answer
Answer
When uploading publications, you don't only have to add your own name; you also need to select your profile in the process of adding your name, to specify that the name you are adding is associated with your account rather than with someone who has the exact same name as you.
  • asked a question related to Cache
Question
1 answer
I keep receiving a "verifying you're human" prompt when I want to search, or after that, an error about Firefox resending data. Why? I have also logged in, tried clearing my cache and passwords, and entered them again, but the problem is the same.
Relevant answer
Answer
If you are referring to ResearchGate, I have noticed that when I am trying to search, it stops me halfway through entering keywords and asks me to verify that I am not a robot. But after I verify, I can use the search function.
You can try a different browser.
  • asked a question related to Cache
Question
6 answers
Hello, can anybody assist me in increasing the number of cores used by the Gaussian 9.0 program? I have a multiprocessor system with an Intel® Core™ i5-1135G7 (up to 4.2 GHz with Intel® Turbo Boost Technology, 8 MB L3 cache, 4 cores) and 8 GB of RAM. I am using Avogadro and Gaussian, so kindly advise me on where to put the commands.
Relevant answer
Answer
For those still interested, a new version of GaussMem is available for calculating the amount of memory required by Gaussian calculations as a function of the type of calculation and the number of processors. It can be freely downloaded at the program page:
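Regarding the original question about the number of cores: in Gaussian, the processor count and memory limit are normally requested with Link 0 commands at the top of the input file (Avogadro exposes these in its input generator). A minimal sketch follows; the route section and values are only illustrative and should be adapted to the actual calculation:

```
%NProcShared=4
%Mem=6GB
#p B3LYP/6-31G(d) Opt

Title card

0 1
...molecule specification...
```

With a 4-core CPU and 8 GB of RAM, requesting all 4 cores and roughly 6 GB of memory leaves some headroom for the operating system.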
  • asked a question related to Cache
Question
3 answers
Hi Everyone,
GROMACS version: 2022.1 (single precision). GROMACS modification: Yes/No.
Today I am running my second protein-ligand simulation (read: I am new to GROMACS). I am using a Linux server cluster with the following configuration ($ lscpu):
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 1200.042
BogoMIPS: 4594.33
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23
I am running a 500 ns protein-ligand simulation with the following composition:
Compound / #atoms
Protein / 140 residues
Ligand / 71
SOL / 317409 H2O molecules
SOD / 9
I used the following md.mdp file:
title = Protein-ligand complex MD simulation
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 250000000 ; 250000000 * 2 fs = 500000 ps (500 ns)
dt = 0.002 ; 2 fs
; Output control
nstenergy = 5000 ; save energies every 10.0 ps
nstlog = 5000 ; update log file every 10.0 ps
nstxout-compressed = 5000 ; save coordinates every 10.0 ps
; Bond parameters
continuation = yes ; continuing from NPT
constraint_algorithm = lincs ; holonomic constraints
constraints = h-bonds ; bonds to H are constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighbor searching and vdW
cutoff-scheme = Verlet
ns_type = grid ; search neighboring grid cells
nstlist = 20 ; largely irrelevant with Verlet
rlist = 1.2
vdwtype = cutoff
vdw-modifier = force-switch
rvdw-switch = 1.0
rvdw = 1.2 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
rcoulomb = 1.2
pme_order = 4 ; cubic interpolation
fourierspacing = 0.16 ; grid spacing for FFT
; Temperature coupling
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein_ligan SOL_SOD ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
; Pressure coupling
pcoupl = Parrinello-Rahman ; pressure coupling is on for NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction is not used for proteins with the C36 additive FF
DispCorr = no
; Velocity generation
gen_vel = no ; continuing from NPT equilibration
I used the command nohup mpiexec -np 24 gmx_mpi mdrun -deffnm md -v, and, unbearably, it shows that it will finish on Thu Jun 13 04:47:54 2024. Please suggest anything to speed up the process. I would be grateful for any suggestions/help. Thanks in advance.
Relevant answer
Answer
It is related to the HPC settings. I will DM you.
  • asked a question related to Cache
Question
6 answers
For a protein of around 1000 amino acids, I am not able to get past the energy minimization step because it shows "Segmentation fault (core dumped)". I have tried the sudo apt-get update and sudo apt-get clean commands to clear the cache, but nothing works.
Could someone please help me out with solving it via any other actions?
Relevant answer
Answer
@Aqib, can I do the same at the NVT step? I too am facing the same issue when running NVT: at step 0, "Segmentation fault (core dumped)". It is tiring, you know. Please help.
  • asked a question related to Cache
Question
2 answers
Hello,
I need some statistical information about the number of published works on edge caching over the last 10 years. Would you please help me with how I can obtain it?
Relevant answer
Answer
Thanks for your attention and response.
Best regards
  • asked a question related to Cache
Question
1 answer
Relevant answer
Answer
Dear doctor
I chose the following quotation hoping to illustrate the question:
"A distributed cache is a system that pools together the random-access memory (RAM) of multiple networked computers into a single in-memory data store used as a data cache to provide fast access to data. While most caches are traditionally in one physical server or hardware component, a distributed cache can grow beyond the memory limits of a single computer by linking together multiple computers–referred to as a distributed architecture or a distributed cluster–for larger capacity and increased processing power.
Distributed caches are especially useful in environments with high data volume and load. The distributed architecture allows incremental expansion/scaling by adding more computers to the cluster, allowing the cache to grow in step with the data growth.
A distributed cache pools the RAM of multiple computers into a single in-memory data store used as a data cache to provide fast access to data.
With a distributed cache, you can have a large number of concurrent web sessions that can be accessed by any of the web application servers that are running the system. This lets you load balance web traffic over several application servers and not lose session data should any application server fail."
Sincere regards
Dr.Sundus Fadhil Hantoosh
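To illustrate the "single in-memory data store over many machines" idea from the quotation, here is a minimal Python sketch of hash-based key partitioning, the simplest way a distributed-cache client can decide which node's RAM holds a given key. The node names are hypothetical, and real systems typically use consistent hashing so that adding nodes moves fewer keys:

```python
import hashlib

NODES = ["cache-node-0", "cache-node-1", "cache-node-2"]  # hypothetical cluster

def node_for(key: str) -> str:
    """Deterministically map a key to one node, so every client in the
    cluster reads and writes that key on the same machine's RAM."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# All clients agree on the mapping without any coordination,
# and total cache capacity grows as nodes are added to NODES.
owner = node_for("session:42")
```

Because the mapping is pure and shared, no central directory is needed; every application server can locate any cached session independently.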
  • asked a question related to Cache
Question
1 answer
Relevant answer
Answer
Hi Wisam,
MapReduce is an algorithm for processing big data and is implemented on several platforms.
In Hadoop, this algorithm processes data stored in HDFS, which uses hard disks. Thus, read/write operations are slow compared to other solutions like Apache Spark that operate in RAM.
Using a cache, these performance problems are mitigated.
Best regards,
RHP
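The gain from keeping intermediate results in RAM can be illustrated with a small Python sketch (a stand-in only, not Hadoop or Spark code): the first call pays the full cost of the "slow" stage, and every repeated call is served from an in-memory cache.

```python
import functools

calls = 0  # counts how often the slow stage actually runs

@functools.lru_cache(maxsize=None)
def expensive_stage(n: int) -> int:
    """Stand-in for a disk-bound processing stage; the decorator keeps
    each result in RAM so repeated reads skip the recomputation."""
    global calls
    calls += 1
    return sum(range(n))  # pretend this scans files on HDFS

results = [expensive_stage(1000) for _ in range(5)]  # computed once, then cached
```

Spark's RDD caching applies the same principle at cluster scale: a dataset is materialized in RAM once and reused across subsequent operations.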
  • asked a question related to Cache
Question
4 answers
Both SRAM cells and flip-flops are volatile memory elements. Are there any applications where both are used?
Relevant answer
Answer
Flip-flops are the bricks from which more complex functional units can be built. These can be, for instance, registers, counters, frequency dividers, state machines, or the SRAM modules that you mentioned. Complex state machines are CPUs; integrated into microcontrollers, they contain plenty of flip-flops. Some of them are used in the CPU to process instructions or store data. Others build the CPU's RAM, I/O registers, counters, etc.
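The "register = row of flip-flops" point can be sketched behaviourally in Python (a software model for intuition only, not synthesizable hardware):

```python
class DFlipFlop:
    """Stores one bit; its output changes only on a clock tick."""
    def __init__(self):
        self.q = 0
    def tick(self, d: int) -> int:
        self.q = d & 1  # capture the input bit on the clock edge
        return self.q

class Register:
    """A register is simply a row of flip-flops clocked together."""
    def __init__(self, width: int):
        self.bits = [DFlipFlop() for _ in range(width)]
    def tick(self, value: int) -> None:
        for i, ff in enumerate(self.bits):
            ff.tick((value >> i) & 1)  # bit i goes to flip-flop i
    def read(self) -> int:
        return sum(ff.q << i for i, ff in enumerate(self.bits))

reg = Register(8)      # an 8-bit register built from 8 flip-flops
reg.tick(0b10100001)   # latch a value on one clock edge
```

Counters, state machines, and SRAM-like arrays follow the same pattern: flip-flops plus combinational logic around them.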
  • asked a question related to Cache
Question
2 answers
What is the function of cache memory in a computer?
Relevant answer
Answer
Caching is a mechanism that hides slower data storage behind a smaller amount of faster data storage. Cache is the generic term for that faster layer of storage; it can usually be found between the CPU and memory, and also between memory and I/O devices. Think of a cache as a place to keep the "favourites" close to hand while the rest of the information takes longer to access.
The bestsellers shelf at the front of a bookshop could be considered to be a real life cache.
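The "favourites close to hand" idea maps directly onto the least-recently-used (LRU) policy that most caches approximate. A minimal Python sketch (the class name and capacity are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """A tiny cache: keeps the most recently used items in fast storage."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> value, ordered by recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: caller must fetch from slow storage
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" becomes the most recently used
cache.put("c", 3)  # evicts "b", the least recently used
```

Like the bestsellers shelf, the cache keeps only what was used recently; everything else falls back to the slower store.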
  • asked a question related to Cache
Question
3 answers
ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace.
Relevant answer
Answer
It would be more accurate to refer to Microsoft's technical documentation or to ask Microsoft about this rather than asking me. I don't know this area. I am sorry.
  • asked a question related to Cache
Question
6 answers
I need a 5G dataset with KPIs that would help with optimal node prediction to enhance the user experience.
Relevant answer
Answer
Thanks
  • asked a question related to Cache
Question
3 answers
DNSSEC is the security extension of DNS, and it is recommended to enable DNSSEC in all zones to mitigate DNS cache poisoning attacks. The KSK (key signing key) and ZSK (zone signing key) are used to generate the RRSIGs of the zone records, and the algorithm used to generate the KSK/ZSK is very important for generating strong RRSIGs. Some zones have used SHA-1 as the security algorithm for the KSK and ZSK. As SHA-1 is an outdated algorithm, the key algorithm in those zones needs to be changed. If anyone has experience with the DNSSEC key algorithm rollover process, please let me know.
Thanks
Relevant answer
Answer
Dr. Zeashan H. Khan,
Thanks for your answer. As I have noticed, this RFC discusses key rollover but not algorithm rollover; it has a small note about algorithm rollover but not enough detail. Thanks, and I will look for more detailed resources.
  • asked a question related to Cache
Question
3 answers
Information-centric networking brings novel benefits of managing traditional networks by addressing content by names and exploiting in-network caching. Although it brings benefits of efficient content management, I am interested in knowing the challenges it may cause in managing traditional networks.
Relevant answer
Answer
When moving from traditional IP networks to ICN, new caching, routing, and security strategies have to be conceived. Indeed, several research works have focused on defining new data caching strategies for ICN networks. Nevertheless, ICN has several advantages and is a very interesting paradigm for IoT networks.
  • asked a question related to Cache
Question
7 answers
It is not feasible to store transactional data in ontologies. When the size of the data grows, the number of triples in the ontology will be in the millions or trillions. Due to the unstructured file structure of ontologies, it is not possible to traverse ontologies with trillions of triples, even with the help of the latest, fastest processor and unlimited memory (primary, cache, registers, etc.).
What do you say? I am waiting for your valuable comments.
Relevant answer
Answer
Thank you, Laurent Berry. I will explore TerminusDB and TerminusHub.
  • asked a question related to Cache
Question
6 answers
Hi everyone,
I am installing GROMACS (2019) on my desktop. I could install GROMACS successfully, but when building the MPI version it shows the following error. I have tried all the possible fixes, searched online, and updated CMake, OpenMPI, and everything else, but with no good results. Can anyone please share his/her experience?
tests-2019.2/
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Error at cmake/gmxManageMPI.cmake:169 (message):
MPI support requested, but no MPI compiler found. Either set the
C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc), or
set the variables reported missing for MPI_C above.
Call Stack (most recent call first):
CMakeLists.txt:460 (include)
-- Configuring incomplete, errors occurred!
See also "/home/khan/gromacs-2019.2/build/CMakeFiles/CMakeOutput.log".
See also "/home/khan/gromacs-2019.2/build/CMakeFiles/CMakeError.log".
You have changed variables that require your cache to be deleted.
Configure will be re-run and you may have to reset some variables.
The following variables have changed:
CMAKE_C_COMPILER= gcc
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
REGRESSIONTEST_PATH
-- Build files have been written to: /home/khan/gromacs-2019.2/build
Relevant answer
Answer
Try, sudo apt --fix-broken install
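For reference, the CMake error above usually means the MPI compiler wrappers are missing from the system. On a Debian/Ubuntu machine, a typical fix (package names assume the OpenMPI toolchain) is to install them and then point CMake at mpicc, exactly as the error message suggests; the stale CMake cache mentioned in the log should be cleared first:

```
sudo apt-get install libopenmpi-dev openmpi-bin
cd ~/gromacs-2019.2/build
rm -rf CMakeCache.txt CMakeFiles
cmake .. -DGMX_MPI=ON -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
```

This is a sketch of the usual workflow, not a guaranteed fix; the exact package names depend on the distribution and the MPI implementation in use.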
  • asked a question related to Cache
Question
3 answers
In NDN, a node wants to broadcast an interest, and that interest contains information about another node. The neighboring nodes should receive the interest and store it in their caches without sending back any data packet. In such a scenario, which of the following strategies is best?
  1. UDP multicast
  2. Publish-subscribe
Relevant answer
Answer
Put simply, it is a network-coding approach in which the cooperative data is broadcast only once, so the receiver has the option to use the data or discard it.
  • asked a question related to Cache
Question
5 answers
I want to implement a 5G network in which caching is deployed at the edge. What are the possible options? What simulators are currently available, and which one has the best learning curve?
Relevant answer
Answer
NetSim
  • asked a question related to Cache
Question
5 answers
I am trying to design a caching strategy for ICN-based IoT that uses content popularity to decide what to cache. Could someone kindly tell me a good way to measure popularity? A link to a research article or any reference to a mathematical method would be highly appreciated.
Thanks in advance.
Relevant answer
Answer
Right now I don't have any, but you will find a few; please search on Google Scholar.
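One simple, commonly used way to quantify content popularity is a per-content request counter with exponential decay, so that recent requests weigh more than old ones. A Python sketch (the class and the decay factor 0.8 are illustrative, not taken from any specific paper):

```python
class PopularityTracker:
    """Per-content popularity as an exponentially decayed request count:
    recent requests weigh more than old ones."""
    def __init__(self, alpha: float = 0.8):
        self.alpha = alpha   # decay factor per time slot (illustrative value)
        self.score = {}      # content name -> decayed request count

    def record_request(self, name: str) -> None:
        self.score[name] = self.score.get(name, 0.0) + 1.0

    def decay(self) -> None:
        # Call once per time slot so stale content loses popularity.
        for name in self.score:
            self.score[name] *= self.alpha

    def most_popular(self, k: int):
        return sorted(self.score, key=self.score.get, reverse=True)[:k]

tracker = PopularityTracker()
for name in ["videoA", "videoA", "videoB"]:
    tracker.record_request(name)
tracker.decay()                   # one time slot passes
tracker.record_request("videoB")  # a fresh request outweighs the older ones
```

A caching node would then cache the top-k names returned by most_popular; the decay keeps the ranking responsive to shifting request patterns.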
  • asked a question related to Cache
Question
4 answers
sudo scons FULL_SYSTEM=1 build/ALPHA/gem5.opt RUBY=true PROTOCOL=MOESI_hammer
./build/ALPHA/gem5.opt -d m5out/blackscholes --debug-flags=RubyCache --debug-file=trace.out.gz configs/example/fs.py --ruby --num-cpu=16 --l1i_size=32kB --l1d_size=32kB --l2_size=8MB --cpu-type=timing --restore-with-cpu=timing --script=run_scripts/blackscholes_16c_simsmall.rcS --checkpoint-at-end --kernel=/home/xx/gem5/full_system_images_ALPHA/binaries/vmlinux_2.6.27-gcc_4.3.4 --disk-image=/home/xxx/gem5/full_system_images_ALPHA/disks/linux-parsec-2-1-m5-with-test-inputs.img --max-checkpoints=5
I used --debug-flags=RubyCache.
Actually, I need to trace data such as the cache memory address, CPU number, hit/miss, and write/read. Is --debug-flags=RubyCache correct, or is there another flag?
Relevant answer
Answer
Hi there.
I suppose 500 GB is fine! I mean, recall that there are a lot of instructions (including LW and SW ones) being processed in full-system simulation; thus, large trace files are expected.
Did you think about simulating small synthetic workloads using syscall emulation mode? Or perhaps the TrafficGen from gem5 (I have never used it)?
What about restoring the simulation from some checkpoint, just to capture a region of interest (ROI)?
Try those options. It would also help if you wrote what you want to do with that experiment.
Sincerely
Matheus
  • asked a question related to Cache
Question
3 answers
For example, in the case of a web application, an architectural description includes the system's databases, web servers, application servers, e-mail, and cache systems.
Relevant answer
Answer
I agree with C K Gomathy
  • asked a question related to Cache
Question
3 answers
Current parallel BFS algorithms are known to have reduced time complexity. However, such analyses do not take into account synchronization costs, which increase exponentially with the core count. These synchronization costs stem from communication costs due to data movement between cores, and from coherence traffic when using a cache-coherent multicore. What is the best parallel BFS algorithm available in this case?
Relevant answer
Answer
Level-Synchronous Parallel Breadth-First Search Algorithms
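For context, the level-synchronous approach mentioned above processes one BFS frontier at a time, with a barrier between levels; that barrier is exactly where the synchronization cost from the question arises. A sequential Python sketch with the parallelizable parts marked in comments (illustrative only):

```python
def level_synchronous_bfs(adj, source):
    """Level-synchronous BFS: finish the whole frontier of one level
    before starting the next."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # in parallel BFS, split this loop across cores
            for v in adj.get(u, []):
                if v not in level:      # parallel versions need an atomic test-and-set here
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier        # implicit barrier: all cores synchronize per level
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
levels = level_synchronous_bfs(adj, 0)  # one barrier per BFS level
```

The number of barriers equals the graph diameter, which is why high-diameter graphs make the synchronization overhead dominate.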
  • asked a question related to Cache
Question
1 answer
I'm currently conducting a simulation using CloudSim.
The objective is to compare performance between two architectures.
One is the Cloudlet system using computing caching based only on popularity of task result among cloud users.
The other is the Cloudlet system using computing caching based jointly on response time, computation time of task as well as popularity of task.
I finished the design of two architectures, and I'm now trying to simulate these two, but I need task dataset for these.
Can I make these task dataset? Or where can I get these?
Moreover, how can I extract information about computation time and response time of task data in cloud?
Please help me.
Relevant answer
Answer
Dear,
Real data-center traces are given in the workload directory; you can use these traces as tasks in CloudSim. If you need further help, ping me.
  • asked a question related to Cache
Question
3 answers
Content is cached at the edge of the network. What if an edge node fails? How do we handle this situation? Kindly answer with a proper citation.
Relevant answer
Answer
Dear Hamid Asmat,
That was just an idea. Following the concept of a "virtual router", a "virtual proxy" can be set up, namely: router = proxy of caches. Several cache systems can be connected to such a virtual proxy.
  • asked a question related to Cache
Question
3 answers
I'm currently working on adding a caching mechanism to the AODV protocol. Here I'm going to cache the RREQ packet and reduce the route discovery overhead. I'm stuck at this point. I need to know a method to extract the information from RREQ packets, and guidance on how to do this task.
Relevant answer
Answer
Please see https://en.wikipedia.org/wiki/Ns_(simulator); NS-2 development stopped in 2010, so I don't think you can get any support for it.
  • asked a question related to Cache
Question
3 answers
I'm implementing a route caching mechanism for the AODV protocol. In order to do this, I want to add a separate structure to each node, extract information from RREQ packets, and add it to the structure created. How can this be done using NS2?
Relevant answer
Answer
I wish you good luck.
  • asked a question related to Cache
Question
3 answers
I want to add caching and compression features to mobile based web services
Relevant answer
Answer
Thank you for the reply. But my question is: can we modify the cloudlet of CloudSim (a simulator tool) to add caching and compression for mobile-based web service requests?
  • asked a question related to Cache
Question
1 answer
I need to evaluate the performance of a peer-to-peer file-sharing system when different cache algorithms are implemented in the network. I am stuck at the point where I do not know which P2P simulation software allows changing the cache algorithm in the peer nodes to test the performance of the network.
  • asked a question related to Cache
Question
3 answers
I have proposed a caching algorithm for web content and I want to simulate it using NS2. How can I start?
  • asked a question related to Cache
Question
1 answer
Hello, all. Is there any mechanism in the Xen scheduler to check the required cache of an individual VM on a dynamic basis? When a virtual machine is created, how will the Xen scheduler detect how much cache this particular VM requires and re-partition the cache according to each VM's requirements? Which function in the Xen source code do I need to check? I would appreciate your help.
Relevant answer
Answer
Hi Zakira,
I believe this could answer your question:
Cache Allocation Technology (CAT) in:
BR,
Fábio
  • asked a question related to Cache
Question
14 answers
Does the iSCSI protocol itself, or do open-source iSCSI implementations like open-iscsi, have a cache scheme?
Since an iSCSI initiator may access the same data many times, it should not have to fetch the data from the remote target via the network every time. So I am wondering if iSCSI provides a scheme by which clients can use a local disk to cache hot data, so that when the accessed data is already stored in the local cache disk, the initiator can get it directly from the local disk, just like a web browser cache does. Does anyone know about this? Thank you.
Relevant answer
Answer
The Linux (or any other operating system) buffer cache will work just as well above iSCSI as above any other block device. 
See Radkov et al., "A Performance Comparison of NFS and iSCSI for IP-Networked Storage" in FAST '04, https://www.usenix.org/legacy/events/fast04/tech/radkov.html
  • asked a question related to Cache
Question
7 answers
How can I convert NetCDF4 files to NetCDF3 files with nccopy?
My system is Ubuntu 14.04, and netcdf-4.3.3.1 has been installed.
_____________________terminal message_________________
root@xx-desktop:~/Desktop/cc# nccopy
nccopy: nccopy [-k kind] [-[3|4|6|7]] [-d n] [-s] [-c chunkspec] [-u] [-w] [-[v|V] varlist] [-[g|G] grplist] [-m n] [-h n] [-e n] [-r] infile outfile
[-k kind] specify kind of netCDF format for output file, default same as input
kind strings: 'classic', '64-bit offset',
'netCDF-4', 'netCDF-4 classic model'
[-3] netCDF classic output (same as -k 'classic')
[-6] 64-bit-offset output (same as -k '64-bit offset')
[-4] netCDF-4 output (same as -k 'netCDF-4')
[-7] netCDF-4-classic output (same as -k 'netCDF-4 classic model')
[-d n] set output deflation compression level, default same as input (0=none 9=max)
[-s] add shuffle option to deflation compression
[-c chunkspec] specify chunking for dimensions, e.g. "dim1/N1,dim2/N2,..."
[-u] convert unlimited dimensions to fixed-size dimensions in output copy
[-w] write whole output file from diskless netCDF on close
[-v var1,...] include data for only listed variables, but definitions for all variables
[-V var1,...] include definitions and data for only listed variables
[-g grp1,...] include data for only variables in listed groups, but all definitions
[-G grp1,...] include definitions and data only for variables in listed groups
[-m n] set size in bytes of copy buffer, default is 5000000 bytes
[-h n] set size in bytes of chunk_cache for chunked variables
[-e n] set number of elements that chunk_cache can hold
[-r] read whole input file into diskless file on open (classic or 64-bit offset format only)
infile name of netCDF input file
outfile name for netCDF output file
netCDF library version 4.3.3.1 of Nov 6 2015 20:09:00 $
root@xx-desktop:~/Desktop/cc# nccopy -k classic pres.nc pres3.nc
NetCDF: Unknown file format
Location: file nccopy.c; line 1354
root@xx-desktop:~/Desktop/cc#
________________________________________________________
Relevant answer
Answer
Please try the latest version of NCO. NCO 4.5.4
git clone https://github.com/nco/nco.git;cd nco;git checkout 4.5.4
  • asked a question related to Cache
Question
5 answers
Should caches be based on positioning?
Should recency and frequency of content manipulation be the major variables for caching in ICN?
  • asked a question related to Cache
Question
5 answers
What do you suggest, according to your experience?
Thank You.
Relevant answer
Answer
First, it depends on the mobility trace (movement pattern) of the nodes in question. Some mobility models more realistically represent some mobility patterns than others. So choosing a realistic mobility model for your scenario is equally important. Second, understanding the difference between on demand and table driven protocols will make a huge difference. So my suggestion is this - study the difference between the two classes of protocols and setup your simulation (preferably with ns-2 or 3) with different scenarios using different protocols and comparing your results.
That is, run the same simulation several times with different protocols (and even with different mobility models if you like) and compare the results. Two of my papers looked into similar areas.
  • asked a question related to Cache
Question
5 answers
I am planning to investigate the problems related to the hardware implementation of cache memory replacement policies. I have read about various advanced replacement policies in the literature. So far, most research focuses on improving on the Least Recently Used (LRU) replacement policy in terms of miss ratio, but without detailed hardware implementations.
So, I am hoping someone can recommend recent publications/reviews on the hardware implementation of cache memory replacement policies.
Relevant answer
Answer
Hi,
Here is another very important piece of research on cache replacement policies, though it is not that new:
High Performance Cache Replacement Using Re-Reference Interval Prediction (RRIP)
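A behavioural sketch of the SRRIP variant from that paper, for one cache set with 2-bit re-reference prediction values (RRPVs); this is a simplified Python model for intuition, not the hardware implementation:

```python
class SRRIPCache:
    """Behavioural model of SRRIP replacement for one cache set.

    Each line keeps a 2-bit re-reference prediction value (RRPV):
    0 means "re-referenced soon"; the maximum value marks the line
    predicted to be re-referenced in the distant future, i.e. the
    eviction candidate.
    """
    MAX_RRPV = 3  # 2-bit counters

    def __init__(self, num_ways: int):
        self.num_ways = num_ways
        self.lines = {}  # tag -> RRPV

    def access(self, tag):
        if tag in self.lines:
            self.lines[tag] = 0  # hit: predict near-immediate re-reference
            return "hit"
        if len(self.lines) >= self.num_ways:
            # Age all lines until one reaches MAX_RRPV, then evict it.
            while not any(v == self.MAX_RRPV for v in self.lines.values()):
                for t in self.lines:
                    self.lines[t] += 1
            victim = next(t for t, v in self.lines.items() if v == self.MAX_RRPV)
            del self.lines[victim]
        self.lines[tag] = self.MAX_RRPV - 1  # insert with a "long" prediction
        return "miss"

cache = SRRIPCache(2)                                    # a 2-way set
hits = [cache.access(t) for t in ["A", "B", "A", "C"]]   # "C" evicts the stale "B"
```

In hardware, the per-line RRPV counters and the aging step are what make RRIP cheap to implement compared with true LRU, which needs full recency ordering per set.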
  • asked a question related to Cache
Question
5 answers
As part of a study, I use a website (.NET) to administer an experiment via a tablet and 4G WiFi modem using Google Chrome. The experiment involves downloading a total of 70MB of images from the website as it goes from page to page, with each page downloading around 5MB of images each.
I have written an application cache manifest file "dvams.appcache" in which I list all of the image files and other resources which I wish to have cached locally on my tablet. In the aspx page using the image files, I have included the manifest attribute (manifest="dvams.appcache") in the HTML element, as directed.
However, when I run my experiments out in the field, Chrome (version 43) does not seem to take any notice of the manifest file, and the files do not persist locally on my tablet for more than a short period of time, after which Chrome behaves as if they have been removed from the cache.
In every experimental run I do, the browser continues to retrieve the files it already downloaded from the website. The speed of my 4G connection seems to vary markedly, and downloading these files time and time again can be excruciatingly slow, with pages taking as long as 2 or 3 minutes to download the required image files.
My theory is that the file cache size needs to be increased to accommodate the 120MB of files I wish to be cached, or that an "offline mode" setting that used to exist in earlier versions of Chrome (which did appear to work) needs to be set somewhere. I have seen how a cache size parameter "--disk-cache-size=104857600" can be added as a parameter to the Chrome executable when called from Windows, but cannot figure out how to accomplish this on Android 5.1.
Why are these cached files being deleted? How can I force my tablet to ALWAYS go to the cache for images it has already downloaded, and make these offline files persist permanently or until I manually tell Chrome to delete them?
Relevant answer
Answer
Thanks, Sanjay.
Some clarification about the task (which is conducted from my project portal website at xvams.com): 
1) 7 pages of scales are presented random order, for which about 5MB of image files (101 per page) are required. The pages needing images are mixed in with pages that use simple, text based sliders which require no images.
2) After these have been submitted, there is a multiple choice questionnaire of 14 questions, each on a separate page.
3) There is then a REPEAT of 1), with the same pages and the same images being presented again (for the purposes of examining test-retest reliability).
In part 3, since the images were already downloaded in part 1, I would expect Chrome to go to the cache to re-use the images it already downloaded, but I see little or no evidence that it is even caching the files at all. Today I ran the experiment, and by the time my participant had reached part 3, my router connection had slowed almost to a stop, and Chrome was struggling to reload the same images it had already loaded just a few minutes earlier. It is getting to the point where a 10-15 minute task is taking up to 50 minutes because of these unnecessary reloads. My participants are stroke survivors with varying impairments, and are easily fatigued by the long periods of tedious waiting involved.
I have ALL of these files on the tablet (Nexus 10 running Android 5.1) and there is absolutely no need for Chrome to download ANY of them more than once. All the resources are specified in the appcache file, and the Chrome console (Resources - Application Cache) confirms that it is downloading them as directed. Yet Chrome seems to totally ignore the cache and just goes back to a server half way across the world.
I have searched high and low on the internet for weeks now, and all I see are the same instructions for specifying the appcache file and how to include resources to use offline, but nothing seems to work at all, and even basic caching of my images in a single section seems totally absent (as shown by Chrome downloading the same images again in part 3). In total, there are 7 x 5MB = 35MB of images for each of 2 separate 'actors' modelling the images - a total of 70MB that I want to just persist INDEFINITELY on my tablet, with it NEVER retrieving them again from online.
I would be really grateful if you or anybody else could help me figure this out, as it is killing my study. I just want caching to work the way it is supposed to, and for Chrome to use offline resources without constantly downloading the same bandwidth-hogging images.
  • asked a question related to Cache
Question
5 answers
From which file/log file can I get page modification information (dirty page information)?
Relevant answer
Answer
Stelios Sir,
Thanks for your quick reply. I have seen the paper and code, but at this stage it would be difficult for me to extract the information. May I request: i) log files of such output, because I don't know whether the NUMA architecture would work on my platform (any Linux) and/or on the x86 architecture; ii) any standard log details of process migration that are available, so that I am able to extract page info from those logs.
Also, what are the parameters to migrate a process, and what is the role of the kernel at the same time? Can we generate the tiny OS program given in Figure 4 (https://www.academia.edu/760613/Survey_of_Virtual_Machine_Migration_Techniques)?
  • asked a question related to Cache
Question
8 answers
I would like to add some optimization functions to the AOMDV routing protocol. I want to add the concept of cooperative packet caching, but to do that I need the code for the implementation of AOMDV.
Relevant answer
Answer
You may need to check the INETMANET source code to find something similar.
You can check the following link as well 
  • asked a question related to Cache
Question
15 answers
I'm trying to work out what the typical pipeline stages of an L1 cache are. The attached file describes a 3-cycle one, like those found in Silvermont, Jaguar, and Cortex-A9. The notation conventions are:
  • blue for the address computation;
  • yellow for the address translation;
  • orange for the data access.
However, high-end CPUs such as Haswell, Bulldozer, and Cortex-A15 have a 4-cycle L1 cache access latency. Where does the fourth cycle come from? Could someone explain in detail what the four stages do?
Relevant answer
Answer
I'm looking for someone who knows for sure the answer.
  • asked a question related to Cache
Question
10 answers
I know the L1 is called first, before L2 and so forth, but why? Is there a theoretical or practical reason for this?
Relevant answer
Answer
There is no theoretical reason. The most compelling practical reason for this is cost. A minor practical reason is that it is hard to implement a fast lookup for a larger cache.
The thing is that fast memory is very expensive. Otherwise we would use computers with registers holding several gigabytes of data. Because the processor logic for billions of registers would be quite complex (and thus expensive), we use an L1 cache instead. This cache is still quite fast, but a larger amount of this kind of fast memory would again cost a lot, so the trade-off is again a small size. In the early days of computing it only took about 2 CPU cycles to access RAM (which by today's standards was also really small). With every increase in CPU speed, another level of cache is needed to compensate for the speed difference between CPU registers and RAM. Large, slow memory costs about as much as small, fast memory; it is always a trade-off between size, speed, and money, and you cannot optimize all three at the same time. The main reason to have a cache is to hide the latency of the RAM (or of the L3 cache, or of the L2 cache). Of course, this only works for programs that exhibit memory locality; otherwise you would have roughly 200 CPU cycles without any work, just waiting for data from RAM.
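To make the locality point concrete, here is a minimal sketch of hit/miss accounting in a cache (the 32-byte line size, 1024-line capacity, and direct-mapped organization are illustrative assumptions, not a model of any real CPU). Sequential 4-byte accesses turn into one miss followed by seven hits per line:

```python
# Toy direct-mapped cache: only hit/miss accounting, no data storage.
LINE_SIZE = 32    # assumed bytes per cache line
NUM_LINES = 1024  # assumed number of cache slots

class ToyCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES  # stored tag per slot, None = empty
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        line_no = addr // LINE_SIZE   # which memory line the address falls in
        index = line_no % NUM_LINES   # which cache slot that line maps to
        tag = line_no // NUM_LINES    # identifies the line occupying the slot
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1          # would trigger a full line fill from DRAM
            self.tags[index] = tag

cache = ToyCache()
for addr in range(0, 8192, 4):        # sequential 4-byte accesses
    cache.access(addr)

print(cache.misses, cache.hits)       # one miss + seven hits per 32-byte line
```

With 2048 accesses covering 256 distinct lines, the counters come out to 256 misses and 1792 hits, exactly one miss per eight accesses.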
  • asked a question related to Cache
Question
2 answers
The mapping between semantic data (e.g. file systems and namespaces, databases, object stores) and the devices that store the associated bytes and blocks that compose them has been one-way by design - from semantic data through pointers (e.g. inodes) to blocks of bytes, but the reverse association is rarely used, except for debug. In semantic storage, the storage controllers (and in some cases devices themselves) would know that a block belongs to a specific object, file, table/record/field and this reverse association could be used by new applications for performance optimization, security monitors, and data protection, for example. The idea is not to revolutionize or change storage, but rather allow for reverse mapping so lower-level features can be semantically aware and so new applications like intrusion detection systems can know that block access is suspect, sub-optimal, or requires cache updates.
Relevant answer
Answer
Marcos - thanks - can you suggest a specific SRM I should look at?  A paper or open source?
  • asked a question related to Cache
Question
3 answers
.
Relevant answer
Answer
Do you want to develop a web application that can obtain details about the web browser? If you are using the Django web framework for Python, you can use the Django User Agents package from https://pypi.python.org/pypi/django-user_agents
The browser sends some information within an HTTP request, and you can write Javascript to obtain the other information and pass it back to the web server.
If you want to develop a desktop application, you can obtain Internet Explorer settings from the Windows registry, for which we have the winreg Python module. Firefox settings are stored in a prefs.js file. You can similarly find the location of the settings for other web browsers to read their settings.
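If all you need is a rough browser family on the server side, a simplistic stdlib-only sketch could look like the following (the token ordering matters because Edge UA strings also contain "Chrome", and Chrome UA strings contain "Safari"; a real application should prefer a maintained parser such as the django-user_agents package mentioned above):

```python
# Naive User-Agent classifier. Test the most specific token first:
# Edge UAs contain "Chrome", and Chrome UAs contain "Safari".
def browser_family(ua: str) -> str:
    for token, name in (("Edg", "Edge"),
                        ("Firefox", "Firefox"),
                        ("Chrome", "Chrome"),
                        ("Safari", "Safari")):
        if token in ua:
            return name
    return "Unknown"

print(browser_family(
    "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"))  # Chrome
```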
  • asked a question related to Cache
Question
6 answers
I am doing some power-aware work that can hugely benefit from it. 
Relevant answer
Answer
There are processors (PPC and ARM, for example) where you can disable an entire cache. "Entire" can mean only the data cache or only the instruction cache. Disabling part of a cache would be trickier, because you would need logic that allows altered addressing, which would add delay to the critical path in L1; this is less of an issue for lower-level caches.
  • asked a question related to Cache
Question
1 answer
We are trying to evaluate the interconnect power of on-chip memory in a NoC.
Can I evaluate power / test power for a NoC by measuring the interconnect length (accessing data from a core to the different cache levels)?
I need help resolving two issues:
1. Can I use the word "length" for accessing data from a core to a different cache level?
2. Should I use the term "evaluate power" / "power testing", or is there another term you would suggest?
Relevant answer
Answer
If by "length" you refer to the topological distance from the core to the level where the data is, or the base latency to reach that cache level, this would be a simplistic metric that assumes there is no network contention. Besides, you should also consider the network interface cost as part of the access time.
This metric may be OK for doing some initial mapping, i.e. deciding at which node to locate data to increase locality of reference, but it is not good for evaluating the network design.
A common term for discussing power constraints is power efficiency; you can calculate the energy-delay product.
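As a sketch of the last point, the energy-delay product is just the energy of an operation multiplied by its delay (the power and delay values below are made-up placeholders, not measurements of any real link):

```python
def energy_delay_product(power_w: float, delay_s: float) -> float:
    """EDP = energy * delay = (power * delay) * delay."""
    energy_j = power_w * delay_s   # energy consumed during the operation
    return energy_j * delay_s

# Hypothetical example: a link burning 0.5 W during a 2 ns traversal.
edp = energy_delay_product(0.5, 2e-9)
print(edp)  # 2e-18 J*s
```

A lower EDP indicates a design that is better on energy and delay jointly, which avoids optimizing one at the expense of the other.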
  • asked a question related to Cache
Question
3 answers
Can anyone suggest a few research articles that evaluate power in NoC interconnects, i.e. evaluate the power of the links between a router and the caches (L1, L2, L3) in a NoC?
Relevant answer
  • asked a question related to Cache
Question
2 answers
L1 Cache, L2 Cache
Relevant answer
Answer
Cache memory and flash memory are quite different:
  • cache memory is more expensive per MB than DRAM, volatile, and low latency;
  • flash memory is less expensive per MB than DRAM, non-volatile, and high latency.
Neither really has much to do with loading programs into memory, unless you are loading the program from a thumb drive, for example.
Cache memory is high-speed memory used to try to minimize the latency of accessing high-latency, large-capacity DRAM. It exploits the fact that many programs access memory sequentially, so transferring an entire cache line at a time from DRAM (say 32 bytes) into low-latency but expensive cache memory can mean that instead of 8 successive high-latency 4-byte memory accesses, there is one high-latency 4-byte access (which triggers the full cache line transfer) followed by 7 low-latency cache accesses. In this way, DRAM can appear to have nearly 1/8 the latency it really has.
Flash memory is what is used in SSDs and USB sticks; it is faster than rotating storage, but much slower than DRAM.
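The "nearly 1/8" figure follows directly from averaging one slow access with seven fast ones; with made-up cycle counts (say 200 cycles for DRAM and 4 for a cache hit, both hypothetical) the arithmetic looks like:

```python
T_DRAM = 200        # assumed DRAM access latency, in CPU cycles
T_CACHE = 4         # assumed cache hit latency, in CPU cycles
WORDS_PER_LINE = 8  # 32-byte line / 4-byte words

# One miss (causing the full line fill) followed by seven hits:
effective = (T_DRAM + (WORDS_PER_LINE - 1) * T_CACHE) / WORDS_PER_LINE
print(effective)    # 28.5 cycles on average, versus 200 without a cache
```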
  • asked a question related to Cache
Question
2 answers
.
Relevant answer
Answer
The original question was about replacement algorithms, not coherence.
I did a quick search for "LIRS implementation" and something on Google code showed up as the first result. I would suggest some additional searching.
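For orientation, the classic baseline that LIRS improves on is plain LRU, which takes only a few lines with an ordered map (this is a generic textbook sketch, not the LIRS algorithm itself):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least-recently-used key once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # drop the LRU entry

c = LRUCache(2)
c.put("a", 1); c.put("b", 2)
c.get("a")                     # touch "a", so "b" becomes the LRU entry
c.put("c", 3)                  # evicts "b"
print(c.get("b"), c.get("a"))  # None 1
```

LIRS keeps extra recency metadata (inter-reference recency) on top of a structure like this to avoid LRU's weakness with looping or scanning access patterns.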
  • asked a question related to Cache
Question
5 answers
Is there any simulator, easy to use, with object-oriented code for analyzing pipeline efficiency? Any suggestions for good papers on the above matter?
This is a hardware pipeline. I follow a mechanism in which the pipeline stays full with instructions; there is no branch prediction, and a 100% cache hit ratio is assumed. Our mechanism is different from the branch delay slot and dynamic branch prediction.
Relevant answer
Answer
Shahnawaz,
I am sorry for saying the Mac OS kernel is written in C; of course it's Objective-C.
And I was rather talking about multithreaded processes communicating through pipes than about some "multithreaded pipes"; that was just a spontaneous shortening. It doesn't mean that a single-threaded process and a multithreaded process communicate differently through pipes.