Prasad R Satya’s research while affiliated with Acharya Nagarjuna University and other places


Publications (3)


Table 2: Classification of ports based on risk vulnerability
Figure 3: Network traffic at the server under the simulated ICMP Smurf attack after applying Algorithm 6.2 (dropping packets whose ICMP type is echo request or echo reply). At approximately T = 161 s, the number of packets received per second stabilized back to the previously observed normal operating load.
Enhancing the Impregnability of Linux Servers
  • Article
  • Full-text available

March 2014 · 121 Reads · 2 Citations

International Journal of Network Security & Its Applications

Rama Koteswara Rao · Prasad R Satya · [...] · Prasad V Potluri

The worldwide IT industry is experiencing a rapid shift towards Service Oriented Architecture (SOA). In response, IT firms are adopting business models such as cloud-based services, which rely on reliable and highly available server platforms. Linux servers are well ahead of other server platforms in terms of security, which brings network security to the forefront of concerns for any organization offering cloud-based services. The most common form of attack on network security is Denial of Service (DoS). This paper focuses on mechanisms to detect DoS attacks and fortify the Linux server's defences, increasing the reliability and availability of the services offered by Linux server platforms.
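The defence the paper describes for the simulated Smurf attack (its Algorithm 6.2) reduces to one rule: drop any packet whose ICMP type is echo request or echo reply. A minimal sketch of that classification rule is given below; the function name and the tuple-based packet representation are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the ICMP-filtering rule described in the paper:
# drop a packet if its ICMP type is echo request (8) or echo reply (0).
ICMP_ECHO_REPLY = 0
ICMP_ECHO_REQUEST = 8

def should_drop(protocol, icmp_type):
    """Return True if the defence rule would drop this packet."""
    return protocol == "icmp" and icmp_type in (ICMP_ECHO_REQUEST, ICMP_ECHO_REPLY)

# A Smurf flood consists of spoofed echo requests/replies; other traffic
# (TCP, or e.g. ICMP type 3 "destination unreachable") passes through.
packets = [("icmp", 8), ("icmp", 0), ("tcp", None), ("icmp", 3)]
kept = [p for p in packets if not should_drop(*p)]
print(kept)  # [('tcp', None), ('icmp', 3)]
```

In practice the same rule would typically be installed in the kernel's packet filter rather than in user space, which matches the paper's observation that traffic stabilizes once echo request/reply packets are discarded.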


SPC for Software Reliability: Imperfect Software Debugging Model

May 2011 · 105 Reads · 3 Citations

International Journal of Computer Science Issues

The software reliability process can be monitored efficiently using Statistical Process Control (SPC), which helps the software development team identify failures and the actions to be taken during the failure process, thereby assuring better software reliability. In this paper, we consider a software reliability growth model based on a Non-Homogeneous Poisson Process (NHPP) that incorporates imperfect debugging. The proposed model uses failure data collected from software development projects to analyze software reliability. The maximum likelihood approach is derived to estimate the unknown parameters of the model. We investigate the model and demonstrate its applicability in the field of software reliability engineering.
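To make the abstract's ingredients concrete, the sketch below uses the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) as a stand-in NHPP model and fits a and b by a crude grid search over the log-likelihood. The failure times, parameter ranges, and the grid search itself are invented for illustration; they are not the paper's data, its imperfect-debugging extension, or its derived estimators.

```python
import math

# Stand-in NHPP model: Goel-Okumoto mean value function and its intensity
# lambda(t) = a * b * exp(-b * t). Values below are illustrative only.
def mean_value(t, a, b):
    return a * (1.0 - math.exp(-b * t))

def log_likelihood(times, a, b):
    # Standard NHPP log-likelihood: sum(log lambda(t_i)) - m(T)
    T = times[-1]
    ll = sum(math.log(a * b * math.exp(-b * t)) for t in times)
    return ll - mean_value(T, a, b)

times = [9, 21, 32, 36, 43, 45, 50, 58, 63, 70]  # made-up cumulative failure times

# Crude grid search as a stand-in for the paper's maximum-likelihood derivation.
grid = ((a, b) for a in range(5, 60) for b in (i / 1000 for i in range(1, 200)))
a_hat, b_hat = max(grid, key=lambda p: log_likelihood(times, p[0], p[1]))
print(a_hat, b_hat)
```

A real analysis would solve the likelihood equations (or use a numerical optimizer) rather than a grid, but the structure of the objective is the same.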


TABLE 1. Time between failures of a software
TABLE 4. Mean successive differences of GO
TABLE 5. Mean successive differences of Weibull
A Comparative Study of Software Reliability Models Using SPC on Ungrouped Data

146 Reads · 2 Citations

Control charts are widely used for process monitoring. The software reliability process can be monitored efficiently using Statistical Process Control (SPC), which helps the software development team identify failures and the actions to be taken during the failure process, thereby assuring better software reliability. Few researchers have proposed SPC-based monitoring techniques to improve the software reliability process. In this paper we propose a control mechanism based on the cumulative quantity between observations of time-domain failure data, using the mean value functions of the Weibull and Goel-Okumoto distributions, both based on a Non-Homogeneous Poisson Process (NHPP). The Maximum Likelihood Estimation (MLE) method is used to derive the point estimators of the distributions.
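A common SPC convention in this line of work places control limits at the 0.00135 and 0.99865 probability points (the two-sided 3-sigma limits). The sketch below applies that idea to successive differences of the Goel-Okumoto mean value function; the parameter values, failure times, and the exact placement of the limits are assumptions made for illustration, not the paper's results.

```python
import math

# Goel-Okumoto mean value function, used here as one of the two NHPP
# models the paper compares (the other being Weibull).
def m_goel_okumoto(t, a, b):
    return a * (1.0 - math.exp(-b * t))

a, b = 30.0, 0.05                    # assumed point estimates, not MLE output
times = [5, 12, 20, 31, 45, 60, 80]  # illustrative cumulative failure times

# 3-sigma probability limits on the cumulative quantity, scaled by the
# asymptotic fault count 'a' (a common convention in SPC-based reliability work).
ucl, lcl = 0.99865 * a, 0.00135 * a

m_vals = [m_goel_okumoto(t, a, b) for t in times]
diffs = [m2 - m1 for m1, m2 in zip(m_vals, m_vals[1:])]  # successive differences
out_of_control = [d for d in diffs if d > ucl or d < lcl]
print(out_of_control)  # [] for these assumed values: the process is in control
```

Points falling outside the limits would signal an out-of-control reliability process; for a growth model the successive differences shrink over time, so a healthy process drifts towards the lower limit.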

Citations (2)


... The author uses the Rayleigh curve or distribution (Rayleigh, 1880) to predict defects based on peak staff, estimated size and production rate. Ravi et al. (2011) propose a software reliability growth model based on a Non-Homogeneous Poisson Process (NHPP) that incorporates the imperfect debugging problem, where new faults might be introduced when removing faults. They estimate the initial number of faults and the fault detection rate using maximum likelihood, and then calculate the values using the iterative method for the given cumulative times between failures. ...

Reference:

Software Project Management in High Maturity: A Systematic Literature Mapping
SPC for Software Reliability: Imperfect Software Debugging Model

International Journal of Computer Science Issues