Operating Systems - Science topic
Explore the latest questions and answers in Operating Systems, and find Operating Systems experts.
Questions related to Operating Systems
NIST's new cybersecurity profile is designed to help mitigate risks to systems that use positioning, navigation and timing (PNT) data, including systems that underpin modern finance, transportation, energy and other critical infrastructure. While its scope does not include ground- or space-based PNT source signal generators and providers (such as satellites), the profile still covers a wide swath of technologies.
Source:
Safeguarding Critical Infrastructure: NIST Releases Draft Cybersecurity Guidance, Develops GPS-Free Backup for Timing Systems | NIST
Assessing Cyber Threats To Canadian Infrastructure - Canada.ca
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put into formalising all areas of mathematics within programming-language implementations (for instance Coq and Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. If new CPU architectures appear in the future, Coq and Agda will have to be recompiled, and so will OCaml and Haskell.
Both software and operating systems change rapidly, and always have. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now?
Ten years from now, will Agda be backwards compatible enough to still recognise current Agda files?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq? Or of OCaml and Haskell?
Perhaps the formal mathematics project should be carried out within a meta-programming language, a simpler and more abstract framework (with a uniform syntax) comprehensible at once to logicians, mathematicians and programmers, and which could be converted automatically into the latest version of Agda or Coq?
macOS vs Linux vs Windows?
I personally use macOS, but I would like to know what other people use for their research work, preferably researchers involved in computational work. If possible, let me know the reason. This is just a survey.
Are there any new innovations other than First-Come, First-Served (FCFS), Shortest-Job-First (SJF) and Round Robin (RR), or hybrid developments of those?
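For concreteness, here is a minimal sketch comparing average turnaround time under the three classic policies; all burst times and the quantum are made-up illustrative values:

def fcfs(bursts):
    # All jobs arrive at t=0, so turnaround time equals completion time.
    t, total = 0, 0
    for b in bursts:
        t += b
        total += t
    return total / len(bursts)

def sjf(bursts):
    # Non-preemptive SJF is FCFS applied to the bursts in ascending order.
    return fcfs(sorted(bursts))

def rr(bursts, quantum):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while any(remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                step = min(quantum, r)
                t += step
                remaining[i] -= step
                if remaining[i] == 0:
                    finish[i] = t
    return sum(finish) / len(bursts)

bursts = [24, 3, 3]              # classic textbook workload
print(fcfs(bursts))              # 27.0
print(sjf(bursts))               # 13.0
print(rr(bursts, quantum=4))     # ~15.7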
I have taught a subject called 'Operating Systems' to first-semester students in two batches (in 2019 and 2021). In 2019 the course was physical, and in 2021 it was entirely online. I used some new approaches to make the online class interesting. I have the final grades of the students in both batches. Can I write a paper comparing the results of these two batches? I don't have a clear idea of how to compare them. I feel I can compare the results based on the physical setting versus the online setting. Since the batches are different, can I make a comparison like that? I would like to hear from you.
What are the user goals and system goals of a specific operating system for a small garden plant-maintenance computer system which consists of automated devices, IoT devices, Wi-Fi and a cloud database?
Disclaimer
The discussion targets the perspectives of two types of participants.
- Software Engineering student: please analyse the situation with your knowledge and answer.
- Software Engineering professional/scientist: please use your experience and offer answers as advice.
The ideas of both groups, as well as those of interested readers, are warmly welcome.
User mode and kernel mode are the two processing modes of an operating system. Please suggest a very simple example with which the differences, and related functionality such as system calls and interrupts, can be explained to a novice learner.
Further, please indicate how to map the example to the subject; a runnable sketch follows the disclaimer below.
Disclaimer
The discussion targets the perspectives of two types of participants.
- Operating Systems student: please analyse the functionalities and construct an answer.
- Computing professional/teacher: please use your experience and offer answers as advice.
The ideas of both groups, as well as those of interested readers, are warmly welcome.
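As one such example, a minimal sketch (Linux or macOS assumed) that a novice can run and trace:

import os

# User mode: pure computation; the CPU never leaves user mode here.
total = sum(i * i for i in range(1000))

# Kernel mode: each os.* call below wraps a system call. The CPU traps into
# the kernel, which performs the privileged I/O, then returns to user mode.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # open() syscall
os.write(fd, b"written via a system call\n")               # write() syscall
os.close(fd)                                               # close() syscall

Running the script under strace (Linux) or dtruss (macOS) makes the boundary visible: only the open/write/close lines appear as kernel entries. An interrupt is the hardware-initiated counterpart: a timer interrupt can suspend the loop above midway so the kernel's scheduler can run another process.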
I've been doing transwell experiments recently; however, when I count cells with ImageJ it always goes wrong, and I can't find the reason.
1. Image > Type > 8-bit
2. Image > Adjust > Threshold
3. Process > Binary > Watershed
4. Analyze > Analyze Particles
The final result of the cell count is 2787, which is clearly wrong; the number I counted by eye is 330.
Operating system: macOS; ImageJ is the latest version.
Please share if anyone has encountered the same problem or has a solution to it.
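Not an ImageJ fix as such, but the same threshold -> watershed -> count pipeline can be sanity-checked in Python with scikit-image; the file name and min_distance below are assumptions to tune. An inflated count such as 2787 vs. 330 often comes from watershed over-segmentation or from counting noise specks, which raising min_distance (or setting a size filter in Analyze Particles) suppresses:

import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, feature, segmentation

img = io.imread("transwell.tif", as_gray=True)
binary = img > filters.threshold_otsu(img)   # use < if cells are dark on light

# Seed one marker per cell: min_distance is the closest allowed spacing
# between two cell centres; too small a value splits one cell into many.
distance = ndi.distance_transform_edt(binary)
coords = feature.peak_local_max(distance, min_distance=15, labels=binary)
mask = np.zeros(distance.shape, dtype=bool)
mask[tuple(coords.T)] = True
markers, _ = ndi.label(mask)

labels = segmentation.watershed(-distance, markers, mask=binary)
print("cell count:", labels.max())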
I am developing a tool: a GUI (using pyqtgraph) that continuously reads high-baud-rate data from a serial port and needs to process it and plot it in the GUI in real time.
The GUI runs on the main thread, so currently I am doing the data collection and preprocessing in a thread. But here is the problem: Python threads do not offer priority control, and most of the time the data collection and preprocessing get too little CPU.
Should I go for multiprocessing, or is there another idea to improve the current strategy?
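A minimal sketch of the multiprocessing route, assuming pyserial and placeholder port settings: the reader process owns the serial port and does the preprocessing on its own core, so the GIL and the GUI thread cannot starve it; the GUI simply drains a queue on a timer.

import multiprocessing as mp
import serial  # pyserial

def preprocess(raw):
    return raw                          # placeholder for the real decoding

def reader(queue):
    port = serial.Serial("/dev/ttyUSB0", baudrate=921600, timeout=1)
    while True:
        raw = port.read(4096)
        if raw:
            queue.put(preprocess(raw))  # heavy work stays in this process

if __name__ == "__main__":
    q = mp.Queue(maxsize=100)
    mp.Process(target=reader, args=(q,), daemon=True).start()
    # In the GUI process, poll q from a QTimer (e.g. every 20 ms) and push
    # the decoded samples into the pyqtgraph curve with setData().

A bounded queue also gives natural back-pressure: if plotting falls behind, put() blocks the reader instead of exhausting memory.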
Conceptually, I see parallels between the functions of the human body and the processes of a computer operating system.
Do you feel that the processes of both happen in the same way?
If yes what are the similarities ?
If not what are the differences?
I'm aiming to deploy a mesh network and manually configure MANET routing protocols. I'm preparing the scenarios, architectures and hardware devices needed to do that. Are there any step-by-step guidelines?
How different is the software development for IoT when compared to traditional software development?
I understand that there are a couple of challenges in this domain, like:
1. Computing power of the devices.
2. Security issues.
3. Operating system based issues.
And many more.
Are there any other factors too that set this field apart from the conventional software development field?
Do traditional software design principles offer any help?
Are there any good resources to get a better understanding of this topic, to know the challenges in this field and also to know the recent advances?
I want to create a neural network that can learn a person's OS usage from their use of applications, using data such as each application's background time, usage frequency, usage time (elapsed time) and foreground time, in order to predict process scheduling and memory requirements.
Such a neural network could help predict user behaviour and pre-schedule its work and system instructions rather than operating in real time, giving faster system response.
Please refer: for a more detailed explanation of the system discussed above and the research done on it
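As a purely hypothetical sketch of the feature/target layout (scikit-learn's MLPRegressor standing in for the neural network; every number below is an invented placeholder):

import numpy as np
from sklearn.neural_network import MLPRegressor

# One row per (app, session):
# [background_s, launches_per_day, elapsed_s, foreground_s]
X = np.array([
    [120,  8, 300, 180],
    [ 10, 30,  60,  50],
    [600,  2, 900, 300],
])
y = np.array([150.0, 80.0, 420.0])   # observed memory footprint in MB

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[90, 10, 240, 160]]))   # predicted MB for a new session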
The hard drive crashed on an old Windows 2000 machine that was used to control a TA Instruments 2910 DSC. I installed a new hard drive and am trying to figure out where I can find the software/operating system for the TA 2910 DSC. The instructions say it is located in the DSC module. I also need the Universal Analysis software. Any help would be greatly appreciated.
The answer is just whether it is considered one or not; if it is, could you give the level/type of cloud service for this (e.g. SaaS, PaaS, ...)?
I want to log user activity in a way that shows how users access their files on a daily basis. For example, a student has a book in PDF form and reads this file and other files during the learning process. The log data will be used to build models of the users. Any ideas or suggestions are welcome; it would be best if there were some built-in feature or tool inside the operating system, so that users remain unaware that their behaviour is being tracked.
There are many works on cloud task scheduling that mainly address run-time-efficient scheduling of tasks submitted to a cloud. The problem is very similar to the generic problem of scheduling processes in a generic operating system. Average turnaround time is a system-oriented metric that is optimised by process scheduling algorithms such as shortest-job-first or highest-response-ratio-next.
A point that confuses me is why scheduling in cloud service centres is treated so differently from scheduling in generic OSs. What are the cloud-specific challenges that lead to evolutionary solutions instead of the deterministic algorithms used in generic OSs?
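For reference, the HRRN rule mentioned above is easy to state in code: the response ratio is (waiting + service) / service, and the scheduler picks the maximum.

def hrrn_pick(now, jobs):
    # jobs: list of (arrival_time, service_time); returns index to run next.
    def ratio(arrival, service):
        waiting = now - arrival
        return (waiting + service) / service
    return max(range(len(jobs)), key=lambda i: ratio(*jobs[i]))

# At t=10 a short job that has waited beats a long job that just arrived:
print(hrrn_pick(10, [(0, 8), (2, 3), (9, 5)]))   # -> 1

One cloud-specific twist is that arrival rates, task sizes and machine availability are neither fully known nor stationary, which is part of why heuristic and evolutionary schedulers are studied there.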
Hi everyone,
I hope you are doing well. Is there anyone working with ROS (Robot Operating System)? I need one favour: converting .bag files into .csv. I would appreciate your kind response.
Regards
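For ROS1 bags, a minimal sketch using the rosbag Python API (bag path and topic are placeholders; the one-liner alternative is "rostopic echo -b input.bag -p /your_topic > out.csv"):

import csv
import rosbag  # available in a sourced ROS1 environment

with rosbag.Bag("input.bag") as bag, open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "data"])
    for topic, msg, t in bag.read_messages(topics=["/your_topic"]):
        writer.writerow([t.to_sec(), msg.data])  # adapt to your message fields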
How do I configure OpenStack on a system? Which OS is best suited for a cloud configuration?
I am currently doing my thesis on the topic "Anomaly Detection System for SCADA".
My work is focused on finding malicious data in Modbus/TCP packets that can cause a failure in a Power Plant (SCADA system).
From my understanding, master and slave devices in SCADA systems communicate through the Modbus/TCP protocol,
e.g. Master sends to Slave: Query -> Write single Coil(5)
Slave to Master: Response -> Write single Coil(5)
Since I would like to use this dataset to train machine-learning algorithms, I need a large amount of data composed of benign and malicious Modbus/TCP packets (pcap files, for example).
Does somebody know where I could get these testbed pcap files?
Hi,
I am looking for journals/articles to review in the following areas.
1) Computer Applications, Embedded Systems, Software Engineering, Communication Protocols, Operating Systems.
2) Security/AAA(Authentication, Authorization and Accounting)/Policy/Identity Networking.
3) Data Structures, Algorithms and Programming.
4) Wireless Networking.
5) Software Architecture.
Please point me to any international journals accepting new reviewers.
Thanks,
Subash
Hi,
I work in a lab where we are trying to extend the capacity of the HoloLens to process point-cloud data. One way would be to accelerate some algorithms by running them on the GPU through OpenCL.
The HoloLens CPU is an Atom processor with Intel® HD Graphics that is supposed to support OpenCL:
The problem is that the OpenCL drivers do not seem to be installed on the device, and I cannot find out how to install them.
Has anybody succeeded in running OpenCL code on the HoloLens? If so, how did you do it?
Regards,
Bruno
P.S.: Has anybody succeeded in opening a console to run command-line programs or scripts on the HoloLens? That might be part of the solution.
P.S.: Please refrain from vague suggestions like "try through the device portal".
Hi
Please, how can we make a new operating system?
Is the method to bring in some developers with a strong background in theoretical operating-system design, have them learn from the source code of other operating systems (such as the various versions of Linux), then write your operating system and test it with ethical hackers to fix the security issues, then deploy it to some users as a test version, and only then validate it for use?
Please, any concrete help for developers who want to do this, rather than a lot of talk about useless things.
Thanks
Osman
I am trying to make changes to the AODV protocol, for which I need to rebuild NS2. Rebuilding NS2 (Network Simulator) on Ubuntu never ends; every time I have to terminate the terminal. I waited for approximately 20 minutes. Please suggest something.
Now that we have support for fault recovery in VirtuosoNext, we have been wondering how extensive the coverage could be in real-life systems. The issue is that data on failure root causes is either considered confidential or focuses narrowly on specific elements (e.g. hardware reliability). We cannot really find statistical data on these system-level failures. Do you know of any such data?
In a real system, we have layers and we have some assumptions. Firstly, today's hardware can be considered highly reliable; of course, this assumes that design rules were followed. If hardware fails, it will most often be because faults are introduced from the outside (bit flips, power supply spikes, I/O issues, etc.). Secondly, software can be correct (e.g. when formally developed and proved), but will likely still contain residual errors. These can be due to incomplete specifications, numerical instability, compiler errors, memory access violations, and so on. To simplify things, we also have to assume that the hardware provides some support for detecting such faults: memory management circuits can detect memory access violations, illegal instructions and data errors can generate an exception interrupt, and, at a coarser-grained level, time-outs can signal that a complete unit is no longer responding. Everything else might require redundancy in the architecture.
The RTOS kernel of VirtuosoNext handles faults detected by the CPU as exceptions:
Memory access violations (triggered by bit flips, but most likely software errors or security breaches).
Numerical exceptions: can be triggered by I/O not being clamped, but also by software errors and algorithmic instability.
Illegal instructions: pointer errors, bit flips, security breaches, …
The above support aims to provide continuity of real-time embedded applications even when faults like those above occur. The development environment assists in fine-grained space and time partitioning, but also allows one to define automatic "clean-up and recovery" actions. The code generators can be extended to generate temporal and spatial redundancy automatically (because VirtuosoNext is MP-transparent).
Much of this fault-recovery support can of course be programmed manually, but ideally it is automated, based on a trade-off analysis. Note that today's practice is often coarse-grained: if a fault occurs, the whole application or even the complete processing system is rebooted. Even if such an event has a low probability, in many cases it can be catastrophic. Boot times may be relatively short for small programs (the code must be read from e.g. flash and the system re-initialised), but if the time constraints are too tight (read: micro- or milliseconds) and the code is relatively large, this is not a real option. Hence, the system should avoid reaching the point where a reboot is the only option left.
In order to provide such support in a meaningful (and economical) way, we need to know more about the residual probabilities of failures and errors in a real (embedded) system. We cannot really find statistical data on these system-level failures. Do you know of any such data? We are aware that this might not be trivial, but your help will be greatly appreciated. Contact me at eric.verhulst (at) altreonic.com
Wearable devices typically need a compact operating system to fit into low-power microcontrollers.
What happens to the R0 register value after executing DEC R0 on an 8051 (before execution, R0 = 0x00)?
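A quick check of the 8-bit wraparound that DEC performs (on the 8051, DEC does not touch the carry flag):

r0 = 0x00
r0 = (r0 - 1) & 0xFF   # 8-bit two's-complement wraparound
print(hex(r0))         # 0xff

So after DEC R0 the register holds 0xFF.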
I want to find the total memory access time for a matrix program. The formula is
MAT = HitRate × CacheAccessTime + MissRate × RAMAccessTime
I have calculated the cache miss and hit rates with cachegrind, but I don't know how to find the cache access time and the RAM access time. I need your help: is there a tool, like perf or cachegrind, that finds the cache and RAM access times on Ubuntu?
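A worked instance of the formula: the hit/miss rates come from cachegrind, while the latencies below are illustrative assumptions; they are normally taken from the CPU's documentation or measured with a pointer-chasing benchmark such as lmbench's lat_mem_rd, since cachegrind itself does not report access times.

hit_rate = 0.95
miss_rate = 1.0 - hit_rate
cache_access_ns = 2.0    # assumed cache latency
ram_access_ns = 100.0    # assumed DRAM latency

mat = hit_rate * cache_access_ns + miss_rate * ram_access_ns
print(f"MAT = {mat:.2f} ns")   # 0.95*2 + 0.05*100 = 6.90 ns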
I am working on energy optimisation using the DVFS technique. For this purpose I have to supply a game-trace workload (i.e., a game workload) instead of the built-in PlanetLab workload. I am currently using CloudSim; can anyone suggest a way to add an external workload to the CloudSim PlanetLab power DVFS example?
I am using perf to check cache performance. I am using the following command in Ubuntu, but it does not give the required output. I want to check cache-references and cache-misses. The command works on a Core i5, but it does not work in a virtual machine.
$ perf stat -r 5 -B -e cache-references,cache-misses ./cmiss
the output of the above command on virtual machine is:
Performance counter stats for './cmiss' (5 runs):
<not supported> cache-references
<not supported> cache-misses
66.994228 cpu-clock ( +- 0.14% )
0.067178350 seconds time elapsed ( +- 0.13% )
The TORA Tcl script does not execute without cloning AODV into TORA. Is there any other way to execute TORA in NS-2.35?
I would like to see the effects of inlining C/C++ functions at the source code level.
After some searching, it looks like IDEs such as Eclipse can do that for Java code, but not for C/C++. Am I right?
I would also like to control when to inline or not inline a function at the calling site. I understand that Intel ICC's "#pragma forceinline" addresses exactly this, in contrast to GCC's "__attribute__((always_inline))", which affects all invocations of the function.
I understand that it would be possible to implement such a translation using Clang. Do you foresee any major roadblocks?
I am also wondering if anyone has done anything similar, or whether there exist other alternatives I haven't thought of...
I want to use this model for a domain of 150*150 km for forecasting weather parameters.
Please suggest these:
1) Operating System
2) Compiler
3) RAM and ROM
4) Any related material for this purpose.
Hello, all. Is there any mechanism in the Xen scheduler to check the cache required by an individual VM on a dynamic basis? When a virtual machine is created, how does the Xen scheduler detect how much cache that particular VM requires and re-partition the cache according to each VM's requirement? Which function in the Xen source code should I look at? I need your help.
I'm doing my undergraduate thesis on "power-aware disk scheduling algorithms". I'm presently working on simulating and testing a new algorithm. Can you tell me where I can get traces, especially from data centers consisting of several disks?
Also, how can I know how much time and energy a disk request will take to complete?
What is the number of instructions (how big is the instruction set) in a modern general-purpose CPU, for example an Intel Core i7?
It has already been proved that the performance of operable systems can be improved by redundancy techniques (cold standby and parallel) and by using proper repair facilities under normal weather conditions.
Is there some place/repository where I can find implementations of the most common algorithms for VANETs using OMNeT++, VEINS and SUMO?
It can be any one, such as DSR, AODV, PBR or GPSR. I created an algorithm and I want to test it against one or more other algorithms/protocols.
I tried VANETProject (https://github.com/chaotictoejam/VANETProject), but it didn't compile correctly.
I'd appreciate any help
I'm looking for an analytical model to capture the effect of the memory system on the performance of multi-core processors, for example the number of levels in the memory hierarchy and the size of each level, taking the application type into account.
For example, task periods and computation times can be generated using Stafford's Randomfixedsum algorithm especially for tasks that have implicit deadlines. Can the same algorithm be used to generate arbitrary deadlines? Or, are there other accepted methods of doing so?
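Not Stafford's Randomfixedsum itself, but a sketch of the commonly used UUniFast alternative for drawing utilisations, combined with one conventional recipe for arbitrary deadlines: draw D_i uniformly from [C_i, k*T_i] with k > 1 (the value of k is an assumption), so deadlines may fall below or beyond the period.

import random

def uunifast(n, total_util):
    # Unbiased utilisation split across n tasks summing to total_util.
    utils, remaining = [], total_util
    for i in range(1, n):
        nxt = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def make_taskset(n, total_util, period_range=(10, 1000), k=1.5):
    tasks = []
    for u in uunifast(n, total_util):
        T = random.uniform(*period_range)
        C = u * T
        D = random.uniform(C, k * T)   # arbitrary deadline
        tasks.append((C, T, D))
    return tasks

print(make_taskset(5, 0.8))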
Thanks for your reply.
In ULTs, scheduling is performed at user level (without kernel involvement). But if one user-level thread leaves the CPU and another is scheduled, we have to change the values of the program counter, the stack pointer and the CPU registers. Without kernel involvement, how can we access these (hardware) registers?
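The short answer is that the PC, SP and general-purpose registers are not privileged: user code may read and write them directly (via a small assembly routine, setjmp/longjmp, or swapcontext()), so no kernel entry is needed. A minimal sketch of the idea in pure Python, with generators playing the role of user-level threads (the interpreter saves and restores each thread's position and stack in user space at every yield):

from collections import deque

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                    # voluntary context switch, all in user space

ready = deque([worker("A", 2), worker("B", 3)])
while ready:                     # round-robin user-level scheduler
    thread = ready.popleft()
    try:
        next(thread)             # resume the thread where it left off
        ready.append(thread)
    except StopIteration:
        pass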
- I am simulating RPL using Cooja, and I want to calculate the remaining energy of each node after the simulation finishes.
- In fact, I want to calculate the lifetime of the nodes, and I found that the remaining energy reflects this.
- Is there any other solution?
My understanding is that it depends upon the software. If the software handles the faulty prefetching misses to recover from the error and does the error-related work, it is faulty. If the software handler is implemented in a way that ignores the prefetching misses and replaces them with NOPs, then is it non-faulty?
Any ideas and related sources will be useful
The Robot Operating System, as defined on Wikipedia, is a set of software frameworks for robots. I want to know the practical uses of this operating system. Are there any examples of its use out there, or is it just for research purposes?
What kind of help can I get from ROS in my robotics research?
There are four types of MAC frame: data frame, acknowledgement frame, beacon frame and command frame.
Device <- PAN coordinator, by beacon frame
Device <- PAN coordinator, by acknowledgement frame
Device <- PAN coordinator, by data frame
If a device connects to the PAN coordinator for the first time, the beacon frame should be used. But when is the command frame used?
Is the command frame transmitted only once for the whole duration?
Some papers use this frame for emergency systems. So is the command frame transmitted in every superframe order?
The Windows 2000 Active Directory data store, the actual database file, is %SystemRoot%\ntds\NTDS.DIT. The ntds.dit file is the heart of Active Directory, including user accounts. Active Directory's database engine is the Extensible Storage Engine (ESE), which is based on the Jet database used by Exchange 5.5 and WINS. The ESE can grow to 16 terabytes, which would be large enough for 10 million objects. Back in the real world, only the Jet database can manipulate information within the AD data store.
I am building a framework in .NET C# with which we can compile a CUDA program on a local machine and execute it on a remote machine where a capable GPU exists. Is there a specific way to achieve this? I found that psexec can be used to execute commands on a remote machine. Considering the fact that CUDA devices have a lot of limitations when accessing a remote machine, are there better alternatives for this purpose? (I do not need existing solutions like rCUDA, etc.)
Hi All.
How can we implement the INTERACT (INT) feature-selection process in WEKA? I think SymmetricalUncertainityAttributeEvaluation (SU) is not enough, right?
It also says that after SU one should compute the "c-contribution", but I couldn't find it.
When I apply SymmetricalUncertainityAttributeEvaluation, it keeps all the features (by default). I have to do one more thing, but I couldn't figure out what.
I am trying to install PLINK/SEQ on Linux (elementary OS Freya). Does anyone know if there are any prerequisite packages to be installed before PLINK/SEQ?
I ask because it is just not working, whatever I do. Thanks.
Hi,
I just want to broadcast a message to a number of nodes using a Contiki socket.
I am looking for good research on the UX and UI of modern operating systems: how the OS has been evolving in the UX department and how it is going to evolve in the future.
As far as I know, it is highly unrecommended to set a process priority to Real-Time on MS Windows; however, in which cases would it be recommended?
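One hedged example of how this is done in practice (Windows, psutil assumed installed): HIGH_PRIORITY_CLASS is usually the sane ceiling, while REALTIME_PRIORITY_CLASS can starve the kernel's own worker threads and is generally reserved for very short, hardware-facing bursts such as low-latency audio or instrument control.

import psutil

p = psutil.Process()                       # current process
p.nice(psutil.HIGH_PRIORITY_CLASS)         # usually sufficient
# p.nice(psutil.REALTIME_PRIORITY_CLASS)   # needs admin rights; use sparingly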
The research may be concerning maritime ports, container terminals, general cargo terminals or yacht marinas.
I am trying to find some performance parameters of middleware, so I want to calculate the overhead the middleware adds on top of my original data size.
E.g., with a data size of 8 bytes sent 1000 times, the payload is 8000 bytes; what will the overhead added by the middleware be?
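A worked instance of the measurement (the on-wire total is an assumed value; in practice it would come from a packet capture with tcpdump or Wireshark):

payload = 8 * 1000        # 8-byte message sent 1000 times = 8000 bytes
on_wire = 66_000          # assumed capture total, headers and framing included
overhead = on_wire - payload
print(f"overhead: {overhead} B ({overhead / payload:.0%} of payload)")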
I refer to the receiver where conversations are heard on a smartphone
Hello,
What do you consider the diversification operator in GAs? Is it the crossover or the mutation operator? I have always thought it is the crossover, because it provides large jumps in the solution space by exchanging material between the parents, while mutation is the intensification operator because it allows only a small change around the solution. However, it seems that many researchers consider mutation the diversification strategy, since it introduces variation into the solution to avoid getting stuck in a local optimum. What do you think?
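To make the contrast concrete, a minimal sketch of both operators on bit-string genomes (rates and lengths are illustrative):

import random

def crossover(a, b):
    # One-point crossover: a large jump recombining whole blocks of genes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(genome, rate=0.01):
    # Bit-flip mutation: small local perturbations around a solution.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

a, b = [0] * 8, [1] * 8
print(crossover(a, b))   # offspring mix large chunks of both parents
print(mutate(a, 0.2))    # a few bits flipped near the original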
I have some tasks, which are further divided into runnables. Runnables execute as task instances. Runnables have dependencies within their own task and also with other tasks' runnables. I have the deadlines and periods of the tasks and the execution order of tasks and runnables, i.e., I can extract the data flow. The only point where I am stuck is how to determine whether the task instances execute within the period, i.e., obey their deadlines, and, if an instance does not finish within its deadline, whether it will execute in the next cycle (period).
Any ideas or suggestions?
P.S. I don't have timing information for the execution of the runnables.
My current understanding is that the OS steps in only for page faults, so this should be difficult.
I need to redesign the hardware of all devices, such as mobile phones and laptops, to consist of configurable blocks, and then build an OS based on an HDL.
Is this possible or not, and why?
Agilent N3520M DSPedia self-paced training DVD DSP for Communications, Training Curriculum and SystemVue Resource DVD (5-user license)
How can I create a virtual environment with a low clock speed to study application performance?
(I.e., I wish to create an environment with a clock speed of a few MHz within my multi-core GHz processor, if possible controlled from my Windows 8.1 operating system.)
Please share the names of any tools you have come across.
Thank you.
It is the original one, but many of its applications do not work, for example video and pictures; is that because of an updating problem or something else? What should I do? Thanks in advance.
Hi there,
I'm working on WSNs and use OMNeT++ for simulation. Do some operating systems support WSN protocols better than others?
Which operating system supports the most WSN protocols?
I am working on VANETs and using NS-2.34. I am unable to perform beaconing. Do I have to patch the WAVE/DSRC protocols into NS-2? If yes, where can I find the patch and example Tcl files?
Hi,
I want to hide the startup and boot messages of Debian and show a splash screen (logo) instead of the messages.
I did some work using Plymouth, but it didn't work very well. I tried the kernel boot-up logo, but that didn't work on my system either.
Recently I used FBI and wrote a script in /etc/init.d/, but it didn't work.
Lmbench is a suite of micro-benchmarks intended to measure basic operating-system and hardware metrics.
The program executed successfully with GCC 4.3, but if I use GCC 4.6, will the execution speed vary?
ARM processors are used in handheld devices and therefore require energy-efficient operation. Intel, on the other hand, is dominant in the server market for high-performance applications and is not as energy efficient. A 1 GHz ARM core and a 1 GHz Intel core have quite different power ratings. What is the main difference in the ARM architecture that makes it energy efficient with respect to Intel?
Some operating systems can run other operating systems as guests, or as "virtual machines". Examples include Solaris Zones and IBM mainframe LPARs on large systems, and VMware, WINE and Lindows on smaller systems. How is this accomplished? How does the virtual environment map memory, interrupts, processes, etc., to those of the host OS?
I want to use ns-3 for research on nano sensor networks. I don't know which OS and which version of ns-3 would be useful for that.
I know there may be different applications for robots, and different software and operating systems are optimised for each application, but I wonder whether there is something very popular that is used by the majority.
I would be grateful if you would mention the applications of each piece of software or operating system you name.
A context switch happens when a process's CPU time slice finishes or an interrupt occurs.
How can we design a file system for storing data? Would it be fine to write a dummy program in C/C++ with a predefined window size that accepts a file of 5 KB as input and stores it, maintaining an address table for each data block as well as the hierarchy? I know that a real file system is far more complex than this dummy program; a sketch of the idea follows below.
If there is any good material or simulation tool available for this, please let me know.
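A minimal sketch of that dummy program (in Python for brevity; block and container sizes are assumptions): one container file split into fixed-size blocks, with an in-memory address table mapping each stored name to its block numbers, much like a crude FAT.

BLOCK = 512
table = {}                            # file name -> list of block indices
free_list = list(range(64))           # 64 * 512 B = 32 KB container

with open("container.img", "wb") as f:
    f.truncate(64 * BLOCK)            # pre-allocate the zero-filled container

def store(name, data):
    blocks = []
    with open("container.img", "r+b") as f:
        for off in range(0, len(data), BLOCK):
            idx = free_list.pop(0)    # allocate the next free block
            f.seek(idx * BLOCK)
            f.write(data[off:off + BLOCK])
            blocks.append(idx)
    table[name] = blocks

def load(name):
    chunks = []
    with open("container.img", "rb") as f:
        for idx in table[name]:
            f.seek(idx * BLOCK)
            chunks.append(f.read(BLOCK))
    # Real file systems store the exact length; stripping padding is a shortcut.
    return b"".join(chunks).rstrip(b"\x00")

store("notes.txt", b"hello file system " * 20)
print(table["notes.txt"], load("notes.txt")[:18])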
Threads can be implemented in user space or in kernel space.