Network Performance - Science topic
Explore the latest questions and answers in Network Performance, and find Network Performance experts.
Questions related to Network Performance
What is the impact of varying the number of hidden layers in a deep neural network on its performance for a specific classification task, and how does this impact change when different activation functions are used?
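A minimal sketch of one way to study this question empirically, assuming scikit-learn and a synthetic classification dataset (the layer widths, depths, and activations below are illustrative choices, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Grid over depth and activation; mean CV accuracy shows their interaction.
for hidden in [(32,), (32, 32), (32, 32, 32)]:
    for act in ("relu", "tanh", "logistic"):
        clf = MLPClassifier(hidden_layer_sizes=hidden, activation=act,
                            max_iter=500, random_state=0)
        score = cross_val_score(clf, X, y, cv=3).mean()
        print(len(hidden), "layers,", act, "->", round(score, 3))
```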
Hello,
I want to know: if I change the transmission power or the path loss in 5G networks, will network performance metrics such as delay, jitter, and throughput be affected by this change? Or are only other parameters, such as the signal-to-noise ratio, affected by the transmission power and path loss? What are the main parameters that will be affected?
Thanks.
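A minimal sketch of the usual reasoning, assuming a fixed path loss and Shannon capacity as an upper bound (the constants are illustrative, not taken from any 5G standard): transmit power changes the received SNR, the SNR bounds the achievable throughput, and delay and jitter are then affected indirectly through retransmissions and queueing.

```python
import math

def received_snr_db(tx_power_dbm, path_loss_db, noise_dbm=-94.0):
    """Received SNR in dB for a given transmit power and path loss."""
    return tx_power_dbm - path_loss_db - noise_dbm

def shannon_throughput_bps(bandwidth_hz, snr_db):
    """Upper bound on throughput from the Shannon capacity formula."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Raising tx power by 3 dB at fixed path loss raises the SNR by 3 dB:
for tx in (20.0, 23.0):  # dBm
    snr = received_snr_db(tx, path_loss_db=100.0)
    print(tx, "dBm ->", shannon_throughput_bps(20e6, snr) / 1e6, "Mbps")
```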
I am currently working on a 5G network simulation using a network simulator. For a given scenario, there are many configurable network parameters, such as channel models, scheduler, bands, transmit power, bandwidth, numerology, mobility, etc. My goal is to adjust the tunable parameters to get optimal network performance, such as throughput and delay, based on the user traffic. Any suggestions on how to proceed and what kind of algorithm to choose would be beneficial.
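One common starting point is black-box optimization over the tunable parameters (random search, Bayesian optimization, or a genetic algorithm). A minimal random-search sketch, where `run_simulation` is a hypothetical stand-in for the actual simulator call and the parameter ranges are illustrative:

```python
import random

SEARCH_SPACE = {
    "tx_power_dbm": [20, 23, 26],
    "bandwidth_mhz": [20, 50, 100],
    "numerology": [0, 1, 2],
    "scheduler": ["RR", "PF"],
}

def run_simulation(params):
    # Placeholder: replace with a call to your simulator that returns a
    # score, e.g. throughput minus a delay penalty. The toy score below
    # just makes the sketch runnable end-to-end.
    return params["bandwidth_mhz"] - 0.1 * params["tx_power_dbm"]

def random_search(n_trials=50):
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = run_simulation(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```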
Hi,
I want to know how SDN can improve network performance metrics such as delay, throughput, and packet delivery ratio (PDR). For delay, I understand that the SDN controller has a global view of the network, so it can choose the best path (the least congested one). What about throughput and the packet delivery ratio?
Thanks.
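A minimal sketch of the path-selection idea, assuming networkx and edge weights that represent the congestion reported to the controller; routing around congested links tends to raise throughput and PDR as well, since fewer packets are dropped at saturated queues. The topology and weights are illustrative:

```python
import networkx as nx

# Toy topology: edge weight = current congestion cost seen by the controller.
g = nx.Graph()
g.add_weighted_edges_from([
    ("s1", "s2", 0.9),  # congested direct link
    ("s1", "s3", 0.2),
    ("s3", "s2", 0.3),
])

# The controller's global view lets it pick the least-congested path.
path = nx.shortest_path(g, "s1", "s2", weight="weight")
print(path)  # ['s1', 's3', 's2']
```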
Why does my neural network perform better with more input neurons than features/variables? If I use exactly as many input neurons as features/variables, the network performs much worse. For example, I have 6 dimensions of data, each 200 samples long. Within that data there are groups of 6 data sets; is this why my NN with 36 input neurons performs better than the one with 6 input neurons? The dimensionality suggests I should only need a NN with 6 input neurons. The hidden layers are 12 and 6 for the 6-input network, and 72 and 36 for the 36-input network, respectively.
It's been a long time since I last used NNs, so many thanks for any answers or pointers on this.
Hello,
In my experimental setup, I have different servers that host files of different sizes. The client downloads files continually and estimates the performance of each server from the measured throughput. However, when I download a big file it shows higher throughput compared with small files.
My question: for a given server, how can I correctly aggregate the measured throughputs into a single value that represents its performance?
Thank you very much for your time.
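One common approach, sketched below, is to aggregate as total bytes divided by total transfer time, which is equivalent to a harmonic mean of the per-file throughputs weighted by file size, rather than a plain average; the sample values are illustrative:

```python
def aggregate_throughput(transfers):
    """transfers: list of (bytes_downloaded, seconds_taken) per file.

    Returns total bytes / total time, i.e. the size-weighted harmonic
    mean of the per-file throughputs, so large files do not dominate.
    """
    total_bytes = sum(b for b, _ in transfers)
    total_time = sum(t for _, t in transfers)
    return total_bytes / total_time

# A 100 MB file at 10 MB/s and a 1 MB file at 2 MB/s:
print(aggregate_throughput([(100e6, 10.0), (1e6, 0.5)]))  # ~9.6e6 B/s
```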
How can I customize my convolutional neural network (CNN) to deal with grayscale images (2D ultrasound)? The input layer is something like (3, 256, 256), where the 3 represents the R, G and B channels, but in my case the number of channels is only one.
When I run the code I get the error:
ValueError: Cannot feed value of shape (16, 256, 256, 3) for Tensor 'x:0', which has shape '(?, 256, 256, 1)'.
Is there any workaround for this without affecting the network performance?
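Two common workarounds, sketched with NumPy (assuming channels-last layout, as the error message suggests): either collapse the 3-channel data to one channel to match the (?, 256, 256, 1) placeholder, or replicate the grayscale channel three times so a 3-channel (e.g. pretrained) input layer can be kept unchanged:

```python
import numpy as np

batch_rgb = np.zeros((16, 256, 256, 3), dtype=np.float32)  # your current batch

# Option 1: collapse RGB to a single luminance channel -> (16, 256, 256, 1)
gray = batch_rgb.mean(axis=-1, keepdims=True)

# Option 2 (keeping a 3-channel network): replicate the grayscale channel
gray_1ch = np.zeros((16, 256, 256, 1), dtype=np.float32)
rgb_like = np.repeat(gray_1ch, 3, axis=-1)  # -> (16, 256, 256, 3)

print(gray.shape, rgb_like.shape)
```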
Analyze the working of the token bucket traffic shaper in 5th-generation mobile networks, and the performance of a fuzzy predictor for a dynamic token generation rate in 5th-generation mobile networks.
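For reference, a minimal token-bucket shaper sketch (the rate and bucket depth are illustrative; a fuzzy predictor would adjust `rate` dynamically instead of keeping it fixed):

```python
import time

class TokenBucket:
    def __init__(self, rate_tokens_per_s, capacity):
        self.rate = rate_tokens_per_s  # token generation rate
        self.capacity = capacity       # bucket depth (max burst size)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True   # packet conforms; forward it
        return False      # non-conforming; queue or drop
```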
In a set of connected nodes (routers), how can I measure the distance between them so that I can predict how much time a packet takes to reach the next node?
Thank you.
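A minimal sketch of the standard per-hop decomposition (assuming the link length, propagation speed, link rate, and packet size are known; queueing and processing delays would have to be measured or modeled separately):

```python
def one_hop_delay(distance_m, link_rate_bps, packet_bits,
                  prop_speed_mps=2e8, queueing_s=0.0):
    """Per-hop delay = transmission + propagation (+ queueing)."""
    transmission = packet_bits / link_rate_bps  # time to push bits onto link
    propagation = distance_m / prop_speed_mps   # time for signal to travel
    return transmission + propagation + queueing_s

# 1500-byte packet over a 10 km, 100 Mbps link:
print(one_hop_delay(10_000, 100e6, 1500 * 8))  # ~0.00017 s
```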
Nowadays there are different types of access networks available that end users use to connect to the Internet. Users must therefore be provided with seamless network connectivity to stay connected while moving from one place to another. This seamless connectivity is achieved by connecting different types of networks, which is called heterogeneous internetworking. By integrating different network technologies into one common heterogeneous network architecture, they can coexist, interoperate with each other, and improve network performance in terms of Quality of Service (QoS).
The use of pre-trained CNN models to perform transfer learning from one task to another one is widely utilized nowadays. Most of the models publicly available are trained on natural images, whose values fall in the range between 0 and 255. However, this is not the case for amplitude (or intensity) SAR images, where the dynamic is high and not comparable to that of a natural image.
The problem is then: how can one adapt a model trained on natural images to effectively process SAR images? Do SAR values have to be clipped above some threshold (let's say threshold = mean + 3*std) and then linearly normalized between 0 and 255, thus resulting in a loss of information? Or is there any smart way to preserve the full statistics of a SAR image and shrink the dynamic to make it comparable to that of a natural image?
For instance, one of the tasks could be edge detection, for which ground truth is scarce for SAR. Let us say we want to train a network to perform edge detection on 1-look noisy SAR images. One possibility is to corrupt natural images with simulated speckle and train the network to detect edges on these images, for which ground truth is available. The problem then would be how to pre-process these simulated noisy images and real SAR images so that the network works effectively.
I hope it is clear enough. Thank you in advance.
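A minimal sketch of the two pre-processing options raised above: clipping plus linear rescaling, and a log transform that compresses the dynamic range without a hard cut-off. The threshold is the asker's mean + 3*std suggestion, not a recommendation:

```python
import numpy as np

def clip_and_rescale(sar, k=3.0):
    """Clip at mean + k*std, then map linearly to [0, 255]."""
    hi = sar.mean() + k * sar.std()
    clipped = np.clip(sar, 0.0, hi)
    return 255.0 * clipped / hi

def log_rescale(sar, eps=1e-6):
    """Log transform preserves the ordering of all values, then rescale."""
    logged = np.log(sar + eps)
    lo, hi = logged.min(), logged.max()
    return 255.0 * (logged - lo) / (hi - lo)
```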
I am working on a project to compare software-defined networking and conventional networking using performance metrics such as throughput, delay, and latency, but I have not been able to use OPNET Modeler to simulate the OpenFlow SDN. I need help, and if possible an instructor, regarding this.
In the complex-valued neural network, the first few layers perform complex-valued operations. After these operations and the activation functions, the output is reshaped and fed into a fully connected layer and normalized before output; the imaginary part is divided by the real part and the arctangent is taken to obtain the phase angle. I densely sample the generated data so that the network performs regression, inverse-mapping the complex-valued signal to the phase angle.
The training process is shown in the figure. The network seems to learn something at first, but it always falls back to the middle of the range and converges strangely. The error is still very large, and the angle cannot be estimated even roughly. Please give some advice.
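One thing worth checking, sketched below: dividing the imaginary part by the real part and applying arctan only recovers angles in (-pi/2, pi/2), so targets in the other two quadrants become ambiguous and a regressor may collapse toward the middle of the range; atan2 keeps the full (-pi, pi] range. This is a hypothesis about the described setup, not a confirmed diagnosis:

```python
import numpy as np

z = -1.0 - 1.0j  # a point in the third quadrant

theta_atan = np.arctan(z.imag / z.real)   # 0.785 rad: wrong quadrant
theta_atan2 = np.arctan2(z.imag, z.real)  # -2.356 rad: correct angle

print(theta_atan, theta_atan2)
```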

I am running a simulation of a multi-hop network to observe the effect of the number of hops on the CBR throughput. I ran the simulation for hops = 1, 2, 3, 4 and 5. I observed that the CBR throughput remains almost the same for 1, 2 and 3 hops, but when I increased the hop count to 4, the CBR throughput decreased slightly. The most surprising result was that when I increased the hop count to 5, the CBR throughput jumped from 67% to 73%.
I fail to understand this upward trend, as I thought that due to factors such as duty cycling and node synchronization, the throughput should decrease as the hop count increases, but the result for 5 hops was surprising. Can someone explain the reason for this behavior?
Peer-to-peer (P2P) is a common architecture for sharing a wide range of media on the Internet. P2P traffic represents about 50% of total Internet traffic, depending on geographical location. The high volume of P2P traffic is due to file sharing, video streaming, and online gaming. Dynamic increases in P2P traffic volume often cause poorer network performance and higher congestion rates, driven by P2P's high bandwidth demand.
Simulators and network traffic generators usually calculate network performance metrics such as latency, jitter, and throughput, and they calculate these metrics using different formulas. I wonder whether there are standard mathematical formulas for calculating these metrics.
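A minimal sketch of the common per-packet definitions, computed from send/receive timestamps for delivered packets (units are seconds and bits; jitter here is taken as the mean variation of delay between consecutive packets, in the sense discussed in the next question):

```python
def metrics(send_times, recv_times, packet_bits):
    """send_times/recv_times: per-packet timestamps of delivered packets."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    latency = sum(delays) / len(delays)                  # mean one-way delay
    jitter = sum(abs(delays[i] - delays[i - 1])          # mean delay variation
                 for i in range(1, len(delays))) / (len(delays) - 1)
    duration = recv_times[-1] - send_times[0]
    throughput = len(delays) * packet_bits / duration    # bits per second
    return latency, jitter, throughput
```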
Can someone please tell me the exact mathematical definition of jitter in packet networks? Some RFCs describe it as packet delay variation. Is this variation calculated between consecutive packets or over all the packets?
Also, does anyone know how to calculate this in the NetSim simulator? It is not directly available in the results window.
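For reference, RFC 3550 (RTP) defines interarrival jitter over consecutive packets as a smoothed mean deviation: with D(i-1, i) the difference in relative transit time between packets i-1 and i, the estimate is updated as J = J + (|D(i-1, i)| - J)/16. A minimal sketch:

```python
def rfc3550_jitter(send_times, recv_times):
    """Interarrival jitter per RFC 3550: smoothed |transit-time difference|."""
    j = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0  # gain of 1/16 smooths out noise
        prev_transit = transit
    return j
```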
Seismic noise level analysis is useful for characterizing network performance, for detecting problems with seismic stations, and for characterizing the frequency-dependent noise levels due to background site conditions.
I am researching the throughput of random access with capture.
I am wondering about the throughput of S-ALOHA when a frequency reuse factor is considered.
In my opinion, the throughput becomes FG exp(-FG), where F is the frequency reuse factor and G is the number of transmission attempts per unit time. From that equation, the maximum throughput does not change even when F changes.
However, I expected that as the frequency reuse factor increases, the maximum throughput would decrease.
Does anyone have any idea how to derive the throughput of S-ALOHA with a frequency reuse factor?
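A quick numeric check of the expression stated above, taking S(G) = F·G·exp(-F·G) at face value: the peak over G is always 1/e, reached at G = 1/F, so under this model reuse shifts the optimal load but not the maximum. If the intuition is that reuse should lower the achievable maximum, the model may need to normalize by the fraction of spectrum available per cell (e.g. divide by F); that normalization is an assumption on my part:

```python
import math

def s_aloha_throughput(g, f):
    return f * g * math.exp(-f * g)

for f in (1, 3, 7):
    g_star = 1.0 / f                         # argmax of F*G*exp(-F*G) over G
    print(f, s_aloha_throughput(g_star, f))  # always 1/e ~ 0.368
```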
In LTE networks, how can we know whether a particular eNB node is overloaded or not? I mean, how do we know that load balancing is required? Is there any unit or metric to measure the load of an eNB?
Kindly reply as soon as possible.
Thanks very much.
The ONE is a simulator designed for DTN routing protocols. I have experience with NS2 and now want to understand how the ONE works.
For example, in an NS2 simulation one first writes a Tcl script, which is run via the terminal. The output is a trace file, which is further processed through an AWK script to get the network performance metrics, i.e. PDR, delay, etc. The output of AWK can be plotted using GNUPLOT.
If I replace NS2 with the ONE simulator, how do I get the network performance metrics?
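For comparison, a sketch of the corresponding ONE workflow as I understand it (to be verified against the ONE documentation): instead of Tcl scripts and AWK post-processing, you enable report modules in the settings file, and the simulator writes the metrics directly to text files in the report directory, e.g.:

```text
# Fragment of a ONE settings file (illustrative values)
Scenario.name = my_dtn_scenario
Report.nrofReports = 1
Report.reportDir = reports/
# MessageStatsReport writes delivery probability, average latency,
# overhead ratio, etc. to reports/<scenario>_MessageStatsReport.txt
Report.report1 = MessageStatsReport
```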
Most common low-cost Bluetooth modules used in microcontroller-based projects are limited to around 1 Mbps through the Serial Port Profile (SPP). In addition, while these modules can transfer short data packets at all baud rates, they halt or fail completely when trying to achieve sustained high data rates. How can one successfully transfer/stream data at all available baud rates for a prolonged period of time? Are there any other feasible options, e.g. a different Bluetooth profile, in this case?
I am a master's student starting my thesis on this title:
"Performance Evaluation of Application-Layer/Middleware Solutions for the IoT"
My task is to test application-layer IoT protocols using a simulator. The protocols to be tested are MQTT, CoAP, XMPP, etc.
Now the question is: which simulator should I use for which protocol? Which simulator fits which protocol?
Thanks in advance.
The Internet of Things is not merely connecting a few LEDs and a temperature controller to some board. Nowadays some people give training under the name of IoT and teach students only the basics of embedded systems. Is that really training in IoT? Kindly comment.
I'm wondering whether I can use a Real Time Digital Simulator (RTDS) to model a distribution network and then perform optimal placement of PV DGs with an economic objective function.
Please, I am looking for a D2D system-level simulator, either free or paid. Can anyone help? I want to set up D2D communication underlaying an LTE network and perform interference analysis.
I have been struggling to find the right model to evaluate campus network performance.
The purpose of this query is to find some of the best open-source monitoring tools for SDN networks, for measuring the controller and switches of an SDN architecture. Tools with reporting and exporting functionality are preferred, for comparative analysis.
Are there any papers comparing automated diagnosis methods for network performance metrics?
The performance of a control system operating over a wired/wireless network may be unstable due to packet loss, which causes system instability. To achieve optimized operation, finding a method to compensate for packet loss is of significant importance.
Recently I have been reading papers [1] and [2], which are related to my interests, but there are some questions about the Traffic Aware Scheduling Algorithm for the Time Slotted Channel Hopping (TSCH) mode of 802.15.4e that I could not understand well.
It is stated that we should "minimize the number of active time slots; such number direct impacts the end-to-end delay, and also the network duty cycle, and, thus, the network lifetime" ([2], introduction, page 2). I can see that the duty cycle is obviously reduced, but on average the end-to-end delay may not be reduced as expected (e.g. if we always let the deeper nodes, far from the root, transmit first, the average end-to-end delay may be shorter). Also, once the traffic load (bit*meter) is determined, without other assumptions such as traffic reduction at aggregators or an additional routing method, we know that nodes that are not transmitting or receiving are in inactive/sleep mode in each time slot; so how can energy be saved compared with other methods (e.g. a method that treats degree or level as the priority when scheduling)? Does the smallest number of active slots necessarily mean the lowest power consumption? Are there any nodes that have to stay active for other reasons even if they have no transmission load at the moment?
[1] Traffic Aware Scheduling Algorithm for reliable low-power multi-hop IEEE 802.15.4e networks.
[2] On Optimal Scheduling in Duty-Cycled Industrial IoT Applications Using IEEE 802.15.4e TSCH.
Please point me to any relevant journal papers if you wish. Thank you.
Inserting distributed generators into a distribution system improves the distribution network performance by minimizing power losses, and simultaneously improves the voltage stability of the system. But to what extent should they be added, and how do they influence the existing system?
I need to do a transition from IPv4 to IPv6. I know how to do it, but I also want to analyse/evaluate the network performance: latency, packet loss rate, throughput, CPU utilization, jitter, TTL, etc. Does anyone know how to do this? Does anyone know of software to do this analysis?
It would be helpful for my project.
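Dedicated tools such as iperf3 (throughput, jitter, loss) and ping are the usual choice. As a minimal do-it-yourself sketch, the snippet below times TCP connection setup to the same host over IPv4 and IPv6; the hostname and port are placeholders:

```python
import socket
import time

def connect_latency(host, port, family):
    """Return TCP connect time in seconds for the given address family."""
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(5.0)
        t0 = time.monotonic()
        s.connect(addr)
        return time.monotonic() - t0

host, port = "example.com", 80  # placeholders
print("IPv4:", connect_latency(host, port, socket.AF_INET))
print("IPv6:", connect_latency(host, port, socket.AF_INET6))
```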
In my setup, the probability density function of the noise is not Gaussian-like, and its PDF changes with the system state.
Thus, I want to calculate the BER by error counting instead of estimating it. Moreover, the experimental BER level is around 10^-5, but the common method of calculating the BER seems to be based on PDF estimation.
How can I calculate the BER by direct error counting? I have set up a simple system model in the attachment.
Could anyone give me some advice?
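A minimal direct-counting sketch, with an illustrative BPSK-over-non-Gaussian-noise model standing in for your system: compare the transmitted and decided bits and divide the error count by the number of bits. At a BER around 10^-5, a common rule of thumb is to keep transmitting until roughly 100 errors have been counted (about 10^7 bits) so that the estimate has reasonable confidence:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_ber(n_bits, noise_sampler):
    """Direct error counting for BPSK with arbitrary (non-Gaussian) noise."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0               # map {0,1} -> {-1,+1}
    received = symbols + noise_sampler(n_bits)
    decided = (received > 0).astype(int)      # threshold detector
    errors = np.count_nonzero(decided != bits)
    return errors / n_bits, errors

# Illustrative heavy-tailed (Laplacian) noise instead of Gaussian:
ber, errors = count_ber(10**7, lambda n: rng.laplace(0.0, 0.0925, n))
print(ber, errors)  # BER near 1e-5, on the order of 100 counted errors
```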
I am using a VGA camera at the input side and a frame grabber for H.264 compression. I get an RTSP stream from the frame grabber over Ethernet. This stream is connected to a server laptop via a point-to-point connection.
When I request the RTSP stream from the client side using GStreamer (sometimes VLC), I get the stream for at most 1 minute. After 1 minute the network on the server side goes down; only the server's WiFi connection gets disturbed. The client stays alive in this case too. I am unable to troubleshoot the exact problem.
I did some Wireshark testing with different inputs: 1. the frame grabber with the VGA camera, 2. a surveillance camera.
It works perfectly fine with the surveillance camera.
One thing I have seen is that even when there is a network breakdown, the frame grabber keeps sending frames to the server. Normally it should stop, but it keeps sending them. I am confused here as well.
Configuration: frame grabber: bitrate 1 Mbps, resolution 720 x 480, frame rate 30 fps (cannot be changed because PAL is used). Same with the surveillance camera, except the frame rate is 25 fps.
Please guide me in solving this network breakdown issue.
Thanks in advance!
Dear experts, I have trained my neural network and simulated the output. It matches the training set completely, but not the testing data set. Why? Can anyone please advise me?

Suppose our TCP SYN packet goes directly to the attacker. Can he generate a fake response (TCP SYN-ACK) packet and send it back to us?
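Yes; if the attacker sees the SYN, nothing in plain TCP prevents forging the reply, since the handshake only checks that the SYN-ACK acknowledges the client's sequence number, which the attacker has observed. A minimal illustrative sketch with Scapy, for lab/educational use only; addresses, ports, and sequence numbers are placeholders:

```python
from scapy.all import IP, TCP, send

# Values the attacker learned from the observed SYN (placeholders):
client_ip, client_port = "10.0.0.5", 54321
server_ip, server_port = "10.0.0.9", 80
client_seq = 1000

# Forge a SYN-ACK that appears to come from the real server.
fake = IP(src=server_ip, dst=client_ip) / TCP(
    sport=server_port, dport=client_port,
    flags="SA", seq=42, ack=client_seq + 1)
send(fake)
```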
I am starting research on organizational design in start-up companies. In particular, I am interested in the relations between organizational design, networking, and the performance of start-up companies. I am relatively new to the field of start-ups and I would like to know what the most important readings in this field are.
I would like to know if any of you have heard about in-band control and out-of-band control in SDN, and whether there is any article that discusses both types of control.
Is it possible to simulate an LTE network on the academic version of the Riverbed (OPNET) Modeler? If not, do I actually need the professional version? P.S. This is urgent.
As we know, the Internet was not originally designed for the applications that became part of it with the passage of time. The evolution of the Internet's design architecture has been much slower than the invention of new applications; for example, content has now become a first-order element.
In your opinion, which approach is more appropriate to meet future Internet challenges: an evolutionary approach (e.g. SDN) or a clean-slate approach (e.g. CCN, NetInf and PSIRP)?
The answer could differ from the standpoints of practicality, effectiveness, efficiency and implementation.
I need a tutorial on making changes to the NS2 source code to support multichannel communication.
As bandwidth on WiFi links is specified as "up to" values, I want to measure the actual throughput on these links to plan how to optimize the traffic in my network. But throughput is not constant and depends on several factors. I want to see how it changes over a defined period (day, week, month).
In developing countries, mobile Internet service is often the only means for people living in rural areas to access web services. 2G Internet service is available in almost all rural areas. But since voice call services return more revenue than Internet service, voice calls are prioritized. So the practically available bandwidth is inconsistent, and service quality is very poor for some operators. With some operators it is very difficult to surf or to upload/download data.
In these circumstances, I would like to perform a survey to identify the quality of the GPRS/EDGE service offered by different operators, and also to identify whether the service quality is good enough to access web services from rural areas.
Which of those simulators performs better and supports mobility, energy, QoS, fairness index, and other metrics?
4G network environments are composite, cooperative, and opportunistic, with multiuser diversity being provided in wireless networks.