Science topic
Hardware - Science topic
Explore the latest questions and answers in Hardware, and find Hardware experts.
Questions related to Hardware
**Personal Background**: I earned a PhD in Electrical Engineering from Peking University, am currently a postdoc in the U.S., and am preparing to pursue a faculty position. I have published 4 first-author conference/journal papers (IF > 8, CCF-A), with 2 under review and 2 in the implementation stage. I have led multiple successful tape-outs in 28nm technology, with designs involving millions of transistors. I have modest expectations – I'm not a big shot, just looking to make progress.
**Target Collaborators**: Peers, undergraduates, master’s students, or PhD students. Undergraduates are particularly welcome.
**Research Situation**: I have many mature ideas and solutions that lack the manpower for implementation. These are not just purely algorithmic, far-fetched ideas but are combined with hardware and can demonstrate real results, with a high probability of success. Specifically, the work involves optimizing existing implementations, requiring either training network algorithms or writing hardware modules, with a high likelihood of resulting in strong publications. Alternatively, I already have hardware and need to explore its application scenarios.
**What I Offer**: Co-authorship (with order based on contribution), along with soft support such as research guidance, AI algorithms, hardware implementation, chip tape-out mentoring, recommendations for further education, etc.
**What You Should Offer**: Most importantly, sufficient time to carry out the work (estimated at three months, with an average of 3 hours per day). Secondly, you should have a basic programming foundation or Verilog hardware background, or enjoy conducting research. Ideally, you should not be affiliated with a specific advisor.
Serious inquiries only, thanks.
If convenient, please send your resume for review. Personal information can be omitted.
What are the best practices for optimizing the performance of hardware components?
Are the most powerful hardware options for deep learning always the best choice?
What is the long-term effect of today's hardware architecture choices (von Neumann, or the parallel architectures of GPUs, etc.) on the trajectory of AI system development? What do Marr's three levels say about that? How does NVIDIA's parallel hardware impact the way we perceive intelligence?
Let us consider the possibility that the behavioural similarity of different intelligences could be coupled with their endogenous structural similarities. While it is plausible for other forms of intelligence to exhibit some similarity to human traits, the likelihood of such emergence might be comparatively lower once they are designed independently of the human brain. In other words, it seems reasonable to hypothesize that similar structural properties might lead to similar functional and operational characteristics under certain known conditions.
I would suppose groundbreaking, genuine research work might be needed to precisely define such conditions and demonstrate a potentially inherent coupling between the hardware (the architecture) and the software (the algorithm and/or learnable applications) in any intelligent system.
What about the effects of the hardware on which intelligence is implemented on the development of our brain tissue, neural connectivity, and the biochemistry of information transfer? All such considerations must be given priority in the design of next-generation AI if we really want to maintain a healthy mental state at the societal level and uphold human values against corruption.
I am currently studying Electrical & Electronics Engineering. For my individual research, my institution has asked for software- or simulation-based research only. Please suggest an IoT or electronics topic that does not require hardware implementation.
Hi everyone, my computer doesn't recognize the NanoDrop and shows an error (Code 10). Could this be a driver error or hardware damage? Thanks for your help! :)
Does anyone recommend an inexpensive and reliable software for animal behavior video recording?
I'm looking for digital video recording software that can receive an external hardware signal, like a 5 V TTL pulse, to trigger START/STOP of the camera, so that the video can be synchronized with other data acquisition software. Thank you!
I could not find a proper model to replicate or to make changes in.
What hardware, software skills, and knowledge are required to build a computational platform for drug discovery from scratch?
Hello Everyone,
We have the LTE Toolbox option in MATLAB and can generate an LTE signal for visualization. Is it possible to obtain the LTE signal in C or HDL form? What are the options for getting the LTE signal in these forms? We want to develop hardware, and if the LTE signal is used as a reference signal, what are the solutions?
Thanks in advance, and I welcome all researchers, faculty, and industrialists to participate.
Best Wishes,
Dr. Akhilesh Verma
Most of the time when I try to fine-tune a big model like GPT-Neo or GPT-J, I face RAM, GPU, or version issues after downloading it. If I need more resources, what would be the best PC configuration? I am going to build a new PC for this purpose. At present I have 16 GB RAM, a 4 GB graphics card, and a Core i5 HQ-series processor.
Thanks in advance.
We have an AKTA Purifier in the lab. It was working fine until last month when, without warning, the communication between the software and the AKTA stopped working properly.
I can send orders from the software and the AKTA responds, but the software doesn't register anything in the logbook. It also does not record the UV or conductivity data, and the Run, Pause, and End buttons are disabled.
The software does not report any error at all, but if I disconnect the AKTA, it warns that it has lost the connection.
So, to summarize:
the software sends orders and the equipment reacts properly;
the software does not register anything (no logbook, no errors, no UV, no conductivity);
the software receives signals from the AKTA, since it notices when it is disconnected.
Can anyone give me an idea of what the problem could be?
I recently reactivated our AKTA Pure (after we moved it to a new bench), but the AKTA is suddenly not recognizing the F9-C fraction collector. (It was working well before.) I can hear the fraction collector arm trying to move, but it does not. The AKTA shows the error message "(Error) Hardware Manager: Fraction Collector Arm (F9-A) : (30) The fractionation arm failed to find the home position."
I checked the tubing from the fraction collector to the AKTA and the cable from the fraction collector to the AKTA, and they are connected. I checked the system properties, and the enabled components of the system look correct. I manually purged all the valves with ethanol. I restarted the AKTA and the computer several times, but I get the same message. Has anyone experienced this, and how did you fix it? Thanks.
This is mainly for result validation using MATLAB-based simulation.
Domain-specific hardware accelerators have recently become more popular. What should the design approach be? Could anyone guide me here? I am a bit confused about the implementation part.
What is the difference between a conventional rectifier and an ultrafast rectifier?
What is the hardware difference ?
What different control technique is implemented to make the rectifier ultrafast?
Simulating AM (additive manufacturing) processes is time-consuming compared with simulating a casting process, since AM fabricates CAD models layer by layer. Consider the case where the model dimensions are fairly large: the computation will then take a long time. Apart from the time issue, this also puts a heavy load on the system's hardware!
Are there any practical alternatives to solve this problem? In my opinion, the solution may include changing material properties such as density and conductivity in a way that differs from those selected for computing the casting process, or simply simplifying the model and reducing the domain size!
Any comments will be appreciated.
I am doing research in IoT/WSN and developing my own protocols in NetSim. For testing with real hardware, I want to connect a Raspberry Pi to NetSim. Would you know how this can be done?
Hi all the ResearchGate community!
I am going to take some spectrophotometric measurements, for which I will use a spectrophotometer controlled by software installed on a computer (to which the spectrophotometer is connected).
Even though the spectrophotometer is properly connected to the computer via the corresponding cable, I am not able to control the spectrophotometer from the computer.
I have made sure to install the program correctly, and I have also installed the drivers which allow the software to control the spectrophotometer.
The spectrophotometer cannot be used without the software. Could someone shed some light on this?
Thanks in advance!
Pablo
We want to develop a 5G research lab; please suggest suitable software and hardware.
Can we view the relationship between brain and consciousness as similar to the relationship between hardware and software?
In that analogy, consciousness sounds more like the GUI part of computer software?!
I am a CSE student in the last year of my undergraduate program, and I need a specific thesis proposal.
My interest is in AI, and I am really struggling to find a real-life problem. I have 6 months to complete my graduation. I would like to avoid hardware implementation. Please suggest a specific topic or idea that might help me start my thesis early.
I'm struggling with understanding and troubleshooting qPCR issues with cycles 1-3.
The rate of change for the cycles 1-3 is so high, it is tricking the software into thinking that is where the fastest rate of change is and throwing an error. See attached pictures of the PCR curve (Top) and rate of change graph (bottom).
My questions are as follows:
What are the factors and phenomena that control PCR fluorescence in ~cycles 1-3? How do I eliminate the sudden increase at the start? Is there a hardware problem?
Note: multiplex qPCR reaction using TaqMan probes; an established and FDA-approved assay.
Dear respected colleagues,
I would like to implement automatic switching between two PV panels depending on the output voltage (specifically, depending on the shading) to change the connection from series to parallel or from parallel to series. I implemented it in hardware using two relays, but I cannot do the same in Simulink.
So how can I implement the circuit in the image below in MATLAB Simulink?
Could anyone kindly help?
Why are Vedic-mathematics-based algorithms for division, multiplication, etc. not seen in reputed journals on algorithms and hardware architectures?
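For context, the core of the Vedic Urdhva Tiryagbhyam ("vertically and crosswise") multiplication sutra is easy to sketch in software: each output column is an independent sum of digit products, which is one reason it is often proposed for parallel hardware multipliers. This is a minimal illustrative sketch of my own, not taken from any particular paper:

```python
def urdhva_multiply(x, y):
    """Urdhva Tiryagbhyam multiplication: form all column sums of
    digit products ("vertically and crosswise"), then propagate carries.
    Column sums are mutually independent, hence hardware-friendly."""
    a = [int(d) for d in str(x)[::-1]]  # digits, least-significant first
    b = [int(d) for d in str(y)[::-1]]
    cols = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            cols[i + j] += da * db      # partial product into column i+j
    carry, digits = 0, []
    for c in cols:                      # single carry-propagation pass
        carry, digit = divmod(c + carry, 10)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        digits.append(digit)
    return int("".join(map(str, digits[::-1])))
```

Mathematically this is the same digit convolution as schoolbook multiplication; the hardware interest lies in evaluating the columns concurrently.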
Dear colleagues,
from a practical point of view, how do I accurately measure sediment pH of more or less compacted, anoxic lake sediments in the field? Imagine there is a freshly recovered and opened core in front of you.
The problems that I imagine I have to deal with are:
1. that I may only have a very small amount of material available, say maximum 2 g or ml, maybe even less
2. that water content of the sediment will vary (maybe this is important when preparing a suspension?)
3. that handling steps should be minimized because the anoxic sediment becomes oxidised quickly
4. that due to the small amount of sediment available, the amount of suspension is also limited, so I might have to use 2 ml Eppendorf tubes (if I am lucky I can use 15 ml Falcons) to obtain enough suspension for the pH meter to be submerged in for measurement. Standard lab pH meters only barely fit 15 ml Falcon tubes, but not 2 ml Eppendorf tubes. Any advice about the hardware?
Using needle-like microsensors to measure directly in the sediment core is unfortunately not feasible.
Thank you for your comments!
gmx mdrun -v -deffnm em
Back Off! I just backed up em.log to ./em.log.
Running on 1 node with total 8 cores, 16 logical cores
Hardware detected:
CPU info:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
SIMD instructions most likely to fit this hardware: SSE4.1
SIMD instructions selected at GROMACS compile time: SSE4.1
Reading file em.tpr, VERSION 5.1.5 (single precision)
Using 16 MPI threads
Using 1 OpenMP thread per tMPI thread
Back Off! I just backed up em.trr to ./em.trr.
Back Off! I just backed up em.edr to ./em.edr.
Steepest Descents:
Tolerance (Fmax) = 1.00000e+03
Number of steps = 100000
Step= 0, Dmax= 1.0e-02 nm, Epot= 9.29257e+19 Fmax= inf, atom= 8167
Step= 14, Dmax= 1.2e-06 nm, Epot= 9.29257e+19 Fmax= inf, atom= 8167
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 1000 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.
Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)
writing lowest energy coordinates.
Back Off! I just backed up em.gro to ./#em.gro.3#
Steepest Descents converged to machine precision in 15 steps,
but did not reach the requested Fmax < 1000.
Potential Energy = 9.2925744e+19
Maximum force = inf on atom 8167
Norm of force = inf
NOTE: 8 % of the run time was spent in domain decomposition,
13 % of the run time was spent in pair search,
you might want to increase nstlist (this has no effect on accuracy)
I am looking for a PhD-level project related to optimizing blockchain using hardware acceleration. For example, implementing PoW in hardware may reduce resource consumption in IoT applications; however, as PoS is becoming popular, PoW may not be feasible in the future. Are there any other areas such as this in blockchain that can benefit from hardware acceleration?
Thank you for your time.
We are using an Agilent 7700X ICP-MS to test iodine in milk products. The blank reading is usually 4008-8000 cps, but recent readings are at about 20000 cps. Standards and samples have shown an increase in cps readings as well. We have replaced the sample and standard tubing and cleaned the torch and cones. We have also purged the sample introduction system with argon for around 8 hrs at 0.2 L/min. No other deviations were observed during hardware settings optimization and tuning, but the problem still persists.
HI!
I was wondering whether it is possible, and to a certain extent viable, to program a normal webcam to be used as an eye tracker. This could be a great tool for those who may not have the funds to procure professional eye-tracking hardware.
Also, is there any open-source software available that can do the above-mentioned task?
Regards
Thanks
How can I build a low-cost implementation of a QPSK/BPSK transceiver, with one transmitting and one receiving antenna? For reconfigurability, I need to implement it using SDR and a USRP.
Which hardware devices or boards would be required? Any suggestions?
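As background to the question, the baseband side of QPSK is tiny; in an SDR/USRP setup the cost sits almost entirely in the RF front end. Here is a minimal sketch of a Gray-coded QPSK symbol mapper; the function name, bit-pair mapping, and normalization are my own illustrative choices, not taken from any specific SDR framework:

```python
import math

def qpsk_modulate(bits):
    """Map bit pairs to unit-energy Gray-coded QPSK symbols.
    Illustrative sketch only; real SDR frameworks ship their own
    constellation objects and pulse shaping."""
    assert len(bits) % 2 == 0, "QPSK consumes bits two at a time"
    mapping = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
               (1, 1): -1 - 1j, (1, 0): 1 - 1j}  # Gray coding: adjacent symbols differ in one bit
    scale = 1 / math.sqrt(2)                     # normalize symbol energy to 1
    return [mapping[(bits[i], bits[i + 1])] * scale
            for i in range(0, len(bits), 2)]
```

In a real chain this would be followed by pulse shaping (e.g. root-raised cosine) before the samples are streamed to the USRP.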
I am interested in buying hardware to validate my simulation results. Which of the following two do you recommend, and why?
Typhoon HIL 402 or dSPACE MicroLabBox.
Alternatively, I need some help with the simulation of the device in MATLAB Simulink.
Thank you
I have been researching the topic of image encryption and implementing it in software, using Wolfram Mathematica and MathWorks MATLAB. So far, this has been successful and I have been able to publish my results, for example:
How would you suggest I go down the path of learning about and implementing the same ideas using hardware?
What are good options to start with? A Raspberry Pi, Arduino, more advanced ICs?
Would you suggest any specific textbooks to go through in this learning journey?
I have to send signals to a Honeywell Notifier fire alarm system, using a potential-free relay, from a Python program running on a server machine. What hardware (potential-free relay board) and libraries do I need to implement this?
Hardware-in-the-loop (HIL) simulation
Research Elements articles are brief, peer-reviewed articles that complement full research papers and describe output that has come about as a result of following the research cycle – this includes things like data, methods and protocols, software, code, hardware and more.
What do you think about their acceptance and future scope?
I am running various Abaqus simulations with a model size of about 1.5 million degrees of freedom in total. To speed up the calculations, I am trying to decide what number of CPUs would be optimal and what the influencing factors are (model size, steps, time steps, outputs, hardware, etc.). I'm interested in the question of at what core count the writing and merging of partial results and data between the different cores outweighs the benefit of using multiple CPUs.
I have drawn a sample using Proportionate stratified random sampling technique.
The variables include name, designation, group, division, pay-scale
I would like to know what metrics I could use to check whether the sample represents the population. Since the variables are categorical, I am not sure what I can use here. Any resources will be useful. Thank you!
Dear All,
Is anyone aware of any benchmarks that pit pmemd.cuda (the GPU-accelerated AMBER simulation software) against GROMACS in an "apples to apples" comparison?
Same protein, water model, salt concentration, temperature, time step, and most importantly, same Hardware configuration.
How do the suites compare in that regard? Are they both 100% efficient and hardware-bound? Or does one or the other have an edge in its efficiency and use of hardware to perform the SAME simulation?
Thanks in advance
ps. I've failed to find any 'head-to-head' benchmarks of my own.
I wanted to know how states in the US manage their traffic signal hardware - their maintenance schedules and standards.
I suppose peak-cancellation CFR (PC-CFR) is now a feasible method for hardware implementation. But when evaluating the TM2.0A example in the MATLAB 2020b 5G Toolbox, in which the frequency content of the waveform switches quickly in each slot (shown in the figure), it seems hard to know the frequency content of the signal in advance, or to recompute new cancellation pulse coefficients in real time, since each set of cancellation pulse coefficients is specific to a certain carrier configuration of the waveform. Is there a good solution to this problem? Or is there any other CFR algorithm that is feasible in this application scenario without much difficulty in hardware implementation?
I want to simulate a biped robot before moving to hardware. I'll send the simulated kinematics from Visual Studio to that platform. It would be great if the platform could give some feedback after applying the kinematics, as in real life (using position sensors), back to Visual Studio so that my code can calibrate itself.
Program execution time depends on the number of instructions as well as on computing power of the machine. Does anyone have some recommendation where to find an analytical model for estimating program execution time according to program instructions and CPU, RAM, and DISK characteristics?
For example, if we know the number of instructions, the CPI (cycles per instruction), and the hardware specifications of the CPU, RAM, and disk, how do we calculate (estimate) the program execution time?
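The standard first-order model here is the classic CPU performance equation, T = N × CPI / f; memory and disk behaviour are usually folded in as extra stall cycles added to the effective CPI. A minimal sketch (the stall-cycle breakdown is a common simplification, not a universal model):

```python
def execution_time(instructions, base_cpi, clock_hz, stall_cpi=0.0):
    """Classic CPU performance equation: T = N * CPI_effective / f.
    Memory/disk effects are modelled as extra stall cycles per
    instruction folded into the effective CPI."""
    return instructions * (base_cpi + stall_cpi) / clock_hz

# e.g. 1e9 instructions, base CPI 1.2, 0.3 stall cycles/instr, 2.5 GHz clock
t = execution_time(1_000_000_000, 1.2, 2.5e9, stall_cpi=0.3)  # -> 0.6 s
```

Anything beyond this (cache hierarchies, out-of-order execution, I/O overlap) needs a dedicated analytical model or a simulator such as gem5.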
I would be interested to know more about the software itself, hardware requirements, features and pricing.
There was already a similar question in 2012, see: https://www.researchgate.net/post/What_is_your_experience_with_electronic_lab_notebooks_software_to_keep_your_experiments_organized
I think new information would be helpful.
Could you help me find articles on hardware and software implementations of collision-warning units for cars?
Answers to the question will greatly contribute to the acquisition of hardware and software for academic and research purposes.
I intend to work on topic modeling in embedding spaces based on the following paper (Dieng, Blei, et al. 2020) https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00325/96463.
and Dynamic Embedding Topic Modeling (Dieng, Ruiz, Blei, et al. 2018)
My question is whether anyone has worked with the Python code and had any hardware issues during implementation. I've been running it for two days, and the algorithm has not yet finished.
I really appreciate any help.
Xilinx System Generator hardware co-simulation compilation completes without any problem, but the block used for simulation does not appear. Does anyone know where or how to find the generated co-simulation block?
Thank you
Can anybody tell me the procedure for fabricating the hardware for a microstrip-line-to-rectangular-waveguide transition, or suggest the name of a person who can make this hardware? It's very urgent...
thanks...
+917983388622
We have a Raman spectrometer (HORIBA/ Jobin-Yvon, LabRAM) purchased around 2005 and uses LabSpec 4.14.01. I was wondering if anyone uses the same system. I have some technical questions about the calibration of the hardware and software.
I tried to design this circuit in Proteus. I am getting my output in software, but I am not getting my output on hardware.
I wrote the code for an on-time of 5 s and an off-time of 1 s.
Blockchain is revolutionizing all information and communications technologies. However, many of its functionalities depend on repetitive and computationally intensive operations. Is there a way to implement a blockchain in hardware? What would be the benefits? What technologies could be employed?
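To make the "repetitive and computationally intensive" part concrete, here is a naive proof-of-work loop (block data and difficulty are made up for illustration). The inner double-SHA-256 computation is exactly what Bitcoin mining ASICs implement in silicon, which is why PoW is the most common blockchain target for hardware acceleration:

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double-SHA-256 hash has at least
    `difficulty_bits` leading zero bits. Naive software reference;
    mining ASICs/FPGAs massively parallelize this same loop."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = proof_of_work(b"example block header", 12)  # low difficulty for demo
```

Other candidates for offload are signature verification (ECDSA) and Merkle-tree hashing, since both are also regular, data-parallel kernels.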
Dear all,
I am currently studying MPC for a temperature controller. I have grasped the concepts of MPC, such as the model, control horizon, prediction horizon, constraints, etc. With this fundamental understanding, I can easily run simulation tests in the MATLAB Simulink toolbox.
However, I need to implement this MPC algorithm in a C# WinForms application, since I created the UI in C# with temperature controllers based on the Modbus ASCII protocol. The hardware configuration is shown in the figure.
I've found someone who has done it with the CasADi toolbox in Python, but it seems quite hard to follow.
I am looking for suggestions on which Python library to use to implement it. Besides, are there any problems with the hardware configuration?
Thank you!
Hello,
I'm trying to run my Abaqus simulation using gpus. I have a PC with AMD Ryzen™ 5 2400G & Radeon™ RX Vega 11 Graphics.
Calling the GPU from CAE and from the command window doesn't work. I have found a possible solution using CUDA, which I have not tried yet since it refers to NVIDIA hardware. Other posts suggest using OpenCL, but I cannot find where to download it.
Any ideas would be helpful!
The paper describes the possibilities of training drivers and students in efficient train-driving modes. Could you suggest similar software and hardware simulators for training combine harvester drivers?
How to implement software and hardware of fractional order controllers on robots?
How can we understand the meaning of "r317, r3_17 or r31_7" at the decompressor stage when using Hardware Data Compression (HDC), where r denotes a run of length on the number 7?
I need to know the computing requirements for SAR time-series analysis, and how many SAR rasters (images) should be analyzed?
A speech-based silence ejection algorithm has been tested in MATLAB. To implement it in hardware, what type of recent microcontroller can be used?
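Since the question is about porting to a microcontroller, it may help to note that the usual frame-energy form of silence removal needs only multiply-accumulate operations, which a DSP-capable MCU (e.g. a Cortex-M4-class part) handles easily. A minimal sketch of that frame-energy form, with made-up frame length and threshold values:

```python
def remove_silence(samples, frame_len=160, threshold=1e-4):
    """Energy-based silence removal: drop frames whose mean energy
    falls below a threshold. Minimal sketch; practical detectors add
    zero-crossing-rate checks and hangover smoothing."""
    voiced = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len  # mean frame energy
        if energy >= threshold:
            voiced.extend(frame)
    return voiced
```

On an MCU the same loop becomes a fixed-point MAC over each frame, so the choice of part mostly depends on sampling rate and available RAM for frame buffers.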
Hello! I am needing some help with HPLC - I had to pick it back up after years of not doing it.
Re-learning has been going well, but I have never had to collect compound fractions and this is what I need help with.
I am performing extractions on microalgae to detect and isolate specific compounds in the UV range. My extract is generated using 100% HPLC-grade methanol, and my eluents are 100% methanol and DDI water (all solvents have been filtered through a 0.22 µm membrane and degassed via vacuum pump) for my flow gradient. I generated great chromatograms, spectra, etc.
BUT, how do I collect specific fractions for analysis? I have multiple peaks at various times that I need to collect. I know that I can manually time it and manually collect the sample, but I need better resolution than a manual collection. I've been looking for manuals and protocols online, but the information for my systems seems to be sporadically available.
Any chemists/biologists/physicists or experienced HPLC users have some tips or resources?
System Information:
Waters 2695 Separations Module w/ Column-Heating Cabinet (set to 40 °C) - Hardware
Waters 2996 Photodiode Array Detector - Hardware
Waters Fraction Collector II - Hardware
C18 Column - Hardware
Empower 2 on a Windows XP OS - Software
There is also a Waters 2424 ELS Detector (Hardware) , but I don't think I need this.
I'm trying to run excitation-emission matrices on some water samples, but I'm running into some software issues that the manuals aren't helpful for. In order to use R studio to extract my data, I need to save my output as a .eem file, but I don't see how to do that.
When I try to save my matrix, the only option I get is .spc, and it doesn't even let me do that - I get an error saying I need to change my matrix into a worksheet, and I can't figure out how to do that either.
Is it saving these eems automatically somewhere I can't find? All I've been able to do is save an ASCII version of my data, which isn't a format the eemR package lets me import.
Hardware: Horiba Fluoromax 4
Software: Horiba FluorEscence, R studio eemR, Matlab drEEM
Thanks!
I have built a piece of hardware that produces random samples from a Gaussian distribution. Now that I have the hardware, I want to empirically evaluate how random the samples are.
I am aware that the NIST and Diehard randomness tests exist but as far as I am aware they are for uniformly distributed random numbers and therefore not directly applicable to Gaussian distributed random numbers. Is there a standard empirical test for the randomness of Gaussian distributed random numbers?
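One standard workaround, assuming the generator's mean and variance are known, is the probability integral transform: if the samples really are Gaussian, pushing them through the Gaussian CDF yields uniformly distributed values, to which the NIST/Diehard uniform suites can then be applied. A minimal sketch (the KS statistic here is only a quick sanity check, not a substitute for the full suites):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_uniform_statistic(samples, mu=0.0, sigma=1.0):
    """Probability integral transform, then the Kolmogorov-Smirnov
    distance of the transformed values from the uniform CDF on (0,1).
    Small values are consistent with the samples being N(mu, sigma^2)."""
    u = sorted(normal_cdf(x, mu, sigma) for x in samples)
    n = len(u)
    # D_n = sup |F_empirical - F_uniform|, evaluated at the order statistics
    return max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))
```

If mean and variance are estimated from the same data rather than known, the plain KS critical values no longer apply (the Lilliefors correction is the usual fix).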
EyeTribe eye trackers are now obsolete, following the shutdown of the company. Yet we think that there may be labs that are still using these devices.
Recently, in our lab at METU Cognitive Science, we tried to make the EyeTribe eye-tracking devices work on Windows 10 Release 1803. However, the trackers were initialized as generic USB devices and were recognized by neither the EyeTribe eye-tracking server nor the EyeTribe eye-tracking UI. A rollback to the Windows 1607 patch solved the problem. With the exact same driver set and hardware in place, the only observed difference between the two systems was the OS-level security patches against the Spectre & Meltdown security vulnerabilities. The Intel microcode-level patches applied via BIOS updates were still in place, yet the devices operated correctly after the OS rollback.
Relevant system settings with EyeTribe eye tracker problem present:
- Intel 7th gen Core i5 7500
- Dell OptiPlex 3050, BIOS ver. 1.10.2
- Windows 10 ver. 1803
System settings with no issues:
- All other settings were kept the same
- Windows 10 ver. 1607
Thanks Efecan Yılmaz for testing and implementation.
--
Cengiz Acarturk, PhD
Hi,
I need to get a quote for oral endoscopy equipment, all the components (hardware and software) going with it. I've been trying to get a quote from different companies, but am not getting any answers or reactions. Would anyone happen to have a recent quote from a reliable company, or recommendations for a company that's likely to communicate with me?
Many thanks!
I would like to know whether VASP and/or Wien2k performance is affected by the type of storage hardware (SSD vs. HDD hard disk). If yes, what do you suggest for a better VASP and Wien2k environment?
Let k, m, and n be positive integers with m ≤ n. I need to calculate the following function as accurately as possible when my hardware is only able to calculate integer values:
f(m, n, k) = k × 2^(m/n)
For example, if k = 2 and m = n, then my system can calculate f(m, n, k) = k × 2 (upper bound);
if m < n, then my system gives f(m, n, k) = k (lower bound).
Can anyone suggest how to calculate the intermediate values using only basic arithmetic operations, e.g. +, -, /, ×?
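One integer-only route, sketched below, uses the identity (k·2^(m/n))^n = k^n·2^m: scale k by a fixed-point factor, then take an integer n-th root by binary search. Only integer multiplication, comparison, and shifts are needed; the function names and the scale factor are my own illustrative choices:

```python
def iroot(x, n):
    """Largest integer r with r**n <= x, via binary search (integer ops only)."""
    lo, hi = 0, 1
    while hi ** n <= x:          # bracket the root by doubling
        hi <<= 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid
    return lo

def f_scaled(m, n, k, scale=1000):
    """Approximate k * 2**(m/n) in fixed point: the return value is the
    true value times `scale`, rounded down. Uses (k * 2**(m/n))**n
    = k**n * 2**m, so only integer arithmetic is required."""
    return iroot((k * scale) ** n * (1 << m), n)

# k=1, m=1, n=2: f_scaled gives floor(1000 * sqrt(2)) = 1414
```

The intermediate value (k·scale)^n·2^m can get large, so on real integer-only hardware one would bound n and the scale, or fall back to iterating x ← (x·2^(1/n)) with a precomputed fixed-point constant.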
What are the differences between 1D, 2D and 3D hydrological model in terms of advantage, disadvantage, function, compatibility and hardware requirement?
I need to implement matrix inversion in hardware. The size of the matrix can be anywhere from 8x8 to 50x50. Are ready-made standard codes or built-in modules available for FPGA boards? Please let me know how feasible this is. If not an FPGA, can anyone suggest another type of hardware for this?
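Before committing to an FPGA architecture, it usually pays to validate against a plain software reference model. Below is a minimal Gauss-Jordan inversion with partial pivoting, intended only as such a golden model for checking fixed- or floating-point hardware results; hardware designs at these sizes more often use blocked Gaussian elimination or QR decomposition, and this sketch is not tied to any vendor IP:

```python
def gauss_jordan_inverse(a):
    """Gauss-Jordan inversion with partial pivoting on the augmented
    matrix [A | I]. Software golden model for validating a hardware
    implementation; raises ZeroDivisionError for singular input."""
    n = len(a)
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # partial pivoting: bring the largest-magnitude entry to the diagonal
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]          # normalize pivot row
        for r in range(n):
            if r != col and aug[r][col] != 0.0:       # eliminate column entry
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]                   # right half is A^{-1}
```

For the stated 8x8 to 50x50 range, resource use on an FPGA is dominated by the divider and the multiply-accumulate array, so a fixed-size systolic design validated against this model is a common path.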
Hi,
We are having issues using Desmond on our workstation. Simulations run fine on one of our GPUs ("gpu0") but crash consistently on the other ("gpu1", which is also set up as the display GPU). I am not sure if this is a hardware issue or a software/setup problem. I have attached screenshots of the errors. Both GPUs are visible to Schrodinger. We see a similar issue on both Schrodinger 2020-3 and 2019-1.
Ubuntu 20.04.1 LTS
2x GeForce RTX 2080
-nvcc returns:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Any suggestions to work out if this is a hardware or set-up problem?
Thanks,
JK
Dear colleagues,
does anybody know of an analytical service company providing measurements of NQR (most preferably 14N) spectra?
Which vendors provide the hardware needed if we want to purchase our own instrument?
For limited hardware resources, which CNN architectures are best? Are those based on depthwise convolution best, or is some other concept better?