
1. INTRODUCTION
1.1 INTRODUCTION
A software bug or defect is a coding mistake that may cause an unintended or
unexpected behaviour of the software component. Upon discovering an abnormal
behaviour of the software project, a developer or a user will report it in a document,
called a bug report or issue report. A bug report provides information that could help
in fixing a bug, with the overall aim of improving the software quality. A large
number of bug reports could be opened during the development life-cycle of a
software product.
For example, there were 3,389 bug reports created for the Eclipse Platform
product in 2013 alone. In a software team, bug reports are extensively used by both
managers and developers in their daily development process. A developer who is
assigned a bug report usually needs to reproduce the abnormal behaviour and perform
code reviews in order to find the cause. However, the diversity and uneven quality of
bug reports can make this process nontrivial. Essential information is often missing
from a bug report.
Bacchelli and Bird surveyed 165 managers and 873 programmers, and
reported that finding defects requires a high-level understanding of the code and
familiarity with the relevant source code files. In the survey, 798 respondents
answered that it takes time to review unfamiliar files. While the number of source
files in a project is usually large, the number of files that contain the bug is usually
very small. Therefore, we believe that an automatic approach that ranked the source
files with respect to their relevance for the bug report could speed up the bug finding
process by narrowing the search to a smaller number of possibly unfamiliar files. If
the bug report is construed as a query and the source code files in the software
repository are viewed as a collection of documents, then the problem of finding
source files that are relevant for a given bug report can be modelled as a standard task
in information retrieval (IR).
As such, we propose to approach it as a ranking problem, in which the source
files (documents) are ranked with respect to their relevance to a given bug report
(query). In this context, relevance is equated with the likelihood that a particular
source file contains the cause of the bug described in the bug report. The ranking
function is defined as a weighted combination of features, where the features draw
heavily on knowledge specific to the software engineering domain in order to measure
relevant relationships between the bug report and the source code file. While a bug
report may share textual tokens with its relevant source files, in general there is a
significant inherent mismatch between the natural language employed in the bug
report and the programming language used in the code.
Ranking methods that are based on simple lexical matching scores have
suboptimal performance, in part due to lexical mismatches between natural language
statements in bug reports and technical terms in software systems. Our system
contains features that bridge the corresponding lexical gap by using project specific
API documentation to connect natural language terms in the bug report with
programming language constructs in the code. Furthermore, source code files may
contain a large number of methods of which only a small number may be causing the
bug. Correspondingly, the source code is syntactically parsed into methods and the
features are designed to exploit method level measures of relevance for a bug report.
It has been previously observed that software process metrics (e.g., change
history) are more important than code metrics (e.g., code size) in detecting
defects. Consequently, we use the change history of source code as a strong signal for
linking fault-prone files with bug reports. Another useful domain specific observation
is that a buggy source file may cause more than one abnormal behaviour, and
therefore may be responsible for similar bug reports. If we equate a bug report with a
user and a source code file with an item that the user may like or not, then we can
draw an analogy with recommender systems and employ the concept of collaborative
filtering. Thus, if previously fixed bug reports are textually similar to the current
bug report, then files that have been associated with the similar reports may also be
relevant for the current report. We expect complex code to be more prone to bugs than
simple code.
Correspondingly, we design query-independent features that capture the code
complexity through proxy properties derived from the file dependency graph, such as
the PageRank score of a source file or the number of file dependencies. The resulting
ranking function is a linear combination of features, whose weights are automatically
trained on previously solved bug reports using a learning-to-rank technique. We have
conducted extensive empirical evaluations on six large-scale, open-source software
projects with more than 22,000 bug reports in total. To avoid contaminating the
training data with future bug-fixing information from previous reports, we created
fine-grained benchmarks by checking out the before-fix version of the project for
every bug report. Experimental results on the before-fix versions show that our
system significantly outperforms a number of strong baselines as well as three recent
state-of-the-art approaches. In particular, when evaluated on the Eclipse Platform UI
dataset containing over 6,400 solved bug reports, the learning-to-rank system is able
to successfully locate the true buggy files within the top 10 recommendations for over
70 per cent of the bug reports, corresponding to a mean average precision (MAP) of
over 40 per cent. Overall, we see our adaptive ranking approach as being generally
applicable to software projects for which a sufficient amount of project specific
knowledge, in the form of version control history, bug-fixing history, API
documentation, and syntactically parsed code, is readily available.
The main contributions of this paper include: a ranking approach to the
problem of mapping source files to bug reports that enables the seamless integration
of a wide diversity of features; exploiting previously fixed bug reports as training
examples for the proposed ranking model in conjunction with a learning-to-rank
technique; using the file dependency graph to define features that capture a measure
of code complexity; fine-grained benchmark datasets created by checking out a
before-fix version of the source code package for each bug report; extensive
evaluation and comparisons with existing state-of-the-art methods; and a thorough
evaluation of the impact that features have on the ranking accuracy.

1.2 EXISTING SYSTEM:

 Recently, researchers have developed methods that concentrate on ranking source files for given bug reports automatically.
 Saha et al. syntactically parse the source code into four document fields: class,
method, variable, and comment. The summary and the description of a bug
report are considered as two query fields.
 Kim et al. propose both a one-phase and a two-phase prediction model to
recommend files to fix. In the one-phase model, they create features from
textual information and metadata (e.g., version, platform, priority, etc.) of bug
reports, apply Naive Bayes to train the model using previously fixed files as
classification labels, and then use the trained model to assign multiple source
files to a bug report.

 Rao and Kak apply various IR models to measure the textual similarity
between the bug report and a fragment of a source file.

1.3 DISADVANTAGES OF EXISTING SYSTEM:

 Their one-phase model uses only previously fixed files as labels in the training
process, and therefore cannot be used to recommend files that have not been
fixed before when being presented with a new bug report.
 Existing methods require runtime executions.

1.4 PROPOSED SYSTEM:

The main contributions of this paper include: a ranking approach to the problem of mapping source files to bug reports that enables the seamless integration
of a wide diversity of features; exploiting previously fixed bug reports as training
examples for the proposed ranking model in conjunction with a learning-to-rank
technique; using the file dependency graph to define features that capture a measure
of code complexity; fine-grained benchmark datasets created by checking out a
before-fix version of the source code package for each bug report; extensive
evaluation and comparisons with existing state-of-the-art methods; and a thorough
evaluation of the impact that features have on the ranking accuracy.

1.5 ADVANTAGES OF PROPOSED SYSTEM:

 Our approach can locate the relevant files within the top 10 recommendations
for over 70 per cent of the bug reports in Eclipse Platform and Tomcat.
 Furthermore, the proposed ranking model outperforms three recent state-of-
the-art approaches.
 Feature evaluation experiments employing greedy backward feature
elimination demonstrate that all features are useful.

2. LITERATURE SURVEY
1) Feature identification: A novel approach and a case study
AUTHORS: G. Antoniol and Y.-G. Gueheneuc

Feature identification is a well-known technique to identify subsets of a program source code activated when exercising functionality. Several approaches
have been proposed to identify features. We present an approach to feature
identification and comparison for large object-oriented multi-threaded programs using
both static and dynamic data. We use processor emulation, knowledge filtering, and
probabilistic ranking to overcome the difficulties of collecting dynamic data, i.e.,
imprecision and noise. We use model transformations to compare and to visualise
identified features. We compare our approach with a naive approach and a concept
analysis-based approach using a case study on a real-life large object-oriented multi-
threaded program, Mozilla, to show the advantages of our approach. We also use the
case study to compare processor emulation with statistical profiling.

2) Feature identification: An epidemiological metaphor
AUTHORS: G. Antoniol and Y.-G. Gueheneuc
Feature identification is a technique to identify the source code constructs
activated when exercising one of the features of a program. We propose new
statistical analyses of static and dynamic data to accurately identify features in large
multithreaded object-oriented programs. We draw inspiration from epidemiology to
improve previous approaches to feature identification and develop an epidemiological
metaphor. We build our metaphor on our previous approach to feature identification,
in which we use processor emulation, knowledge-based filtering, probabilistic
ranking, and metamodeling. We carry out three case studies to assess the usefulness
of our metaphor, using the "save a bookmark" feature of Web browsers as an
illustration. In the first case study, we compare our approach with three previous
approaches (a naive approach, a concept analysis-based approach, and our previous
probabilistic approach) in identifying the feature in MOZILLA, a large, real-life,
multithreaded object-oriented program. In the second case study, we compare the
implementation of the feature in the FIREFOX and MOZILLA Web browsers. In the
third case study, we identify the same feature in two more Web browsers, Chimera (in
C) and ICE Browser (in Java), and another feature in JHOTDRAW and XFIG, to
highlight the generalizability of our metaphor.

3) DebugAdvisor: A recommender system for debugging
AUTHORS: B. Ashok, J. Joy, H. Liang, S. K. Rajamani, G. Srinivasa, and V.
Vangala
In large software development projects, when a programmer is assigned a bug
to fix, she typically spends a lot of time searching (in an ad-hoc manner) for instances
from the past where similar bugs have been debugged, analysed and resolved.
Systematic search tools that allow the programmer to express the context of the
current bug, and search through diverse data repositories associated with large
projects can greatly improve the productivity of debugging. This paper presents the
design, implementation and experience from such a search tool called DebugAdvisor.
The context of a bug includes all the information a programmer has about the bug,
including natural language text, textual rendering of core dumps, debugger output etc.
Our key insight is to allow the programmer to collate this entire context as a query to
search for related information. Thus, DebugAdvisor allows the programmer to search
using a fat query, which could be kilobytes of structured and unstructured data
describing the contextual information for the current bug. Information retrieval in the
presence of fat queries and variegated data repositories, all of which contain a mix of
structured and unstructured data, is a challenging problem. We present novel ideas to
solve this problem.
We have deployed DebugAdvisor to over 100 users inside Microsoft. In addition to
standard metrics such as precision and recall, we present extensive qualitative and
quantitative feedback from our users.

4) Expectations, outcomes, and challenges of modern code review
AUTHORS: A. Bacchelli and C. Bird
Code review is a common software engineering practice employed both in
open source and industrial contexts. Review today is less formal and more lightweight
than the code inspections performed and studied in the 70s and 80s. We empirically
explore the motivations, challenges, and outcomes of tool-based code reviews. We
observed, interviewed, and surveyed developers and managers and manually
classified hundreds of review comments across diverse teams at Microsoft. Our study
reveals that while finding defects remains the main motivation for review, reviews are
less about defects than expected and instead provide additional benefits such as
knowledge transfer, increased team awareness, and creation of alternative solutions to
problems. Moreover, we find that code and change understanding is the key aspect of
code reviewing and that developers employ a wide range of mechanisms to meet their
understanding needs, most of which are not met by current tools. We provide
recommendations for practitioners and researchers.

5) Leveraging usage similarity for effective retrieval of examples in code repositories
AUTHORS: S. K. Bajracharya, J. Ossher, and C. V. Lopes
Developers often learn to use APIs (Application Programming Interfaces) by
looking at existing examples of API usage. Code repositories contain many instances
of such usage of APIs. However, conventional information retrieval techniques fail to
perform well in retrieving API usage examples from code repositories. This paper
presents Structural Semantic Indexing (SSI), a technique to associate words to source
code entities based on similarities of API usage. The heuristic behind this technique is
that entities (classes, methods, etc.) that show similar uses of APIs are semantically
related because they do similar things. We evaluate the effectiveness of SSI in code
retrieval by comparing three SSI based retrieval schemes with two conventional
baseline schemes. We evaluate the performance of the retrieval schemes by running a
set of 20 candidate queries against a repository containing 222,397 source code
entities from 346 jars belonging to the Eclipse framework. The results of the
evaluation show that SSI is effective in improving the retrieval of examples in code
repositories.

3. SYSTEM ANALYSIS

3.1 SOFTWARE REQUIREMENT SPECIFICATION


Software Requirement Specification (SRS) is the starting point of the software
development activity. As systems grew more complex, it became evident that the goals
of the entire system could not be easily comprehended. Hence the need for the
requirements phase arose. A software project is initiated by the client's needs. The SRS
is the means of translating the ideas in the minds of the clients (the input) into a formal
document (the output of the requirements phase).

The SRS consists of two basic activities:

1. Problem/Requirement Analysis:

This process is the harder and more nebulous of the two; it deals with understanding
the problem, the goals and the constraints.

2. Requirement Specification:

Here, the focus is on specifying what has been found during analysis. Issues such as
representation, specification languages and tools, and checking the specification are
addressed during this activity. The requirements phase terminates with the production
of the validated SRS document. Producing the SRS document is the basic goal of this
phase.

Role of SRS:

The purpose of the Software Requirement Specification is to reduce the
communication gap between the clients and the developers. The Software Requirement
Specification is the medium through which the client and user needs are
accurately specified. It forms the basis of software development. A good SRS should
satisfy all the parties involved in the system.

Scope:

This document is the only one that describes the requirements of the system. It is meant
for use by the developers, and will also be the basis for validating the final delivered
system. Any changes made to the requirements in the future will have to go through a
formal change approval process.

3.2 SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:
• System : Pentium IV 2.4 GHz.
• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15 VGA Colour.
• Mouse : Logitech.
• RAM : 512 MB.

SOFTWARE REQUIREMENTS:
 Operating system : Windows XP/7.
 Coding Language : JAVA/J2EE
 Data Base : MYSQL

3.3 INPUT DESIGN

The input design is the link between the information system and the user. It
comprises the specifications and procedures developed for data preparation, that is,
the steps necessary to put transaction data into a usable form for processing. This can
be achieved by instructing the computer to read data from a written or printed
document, or by having people key the data directly into the system. The design of
input focuses on controlling the amount of input required, controlling errors, avoiding
delay, avoiding extra steps and keeping the process simple. The input is designed in
such a way that it provides security and ease of use while retaining privacy. Input
Design considered the following things:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors
occur.

OBJECTIVES

1. Input Design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the data
input process and to show the correct direction to the management for getting correct
information from the computerized system.

2. It is achieved by creating user-friendly screens for the data entry to handle large
volume of data. The goal of designing input is to make data entry easier and to be free
from errors. The data entry screen is designed in such a way that all the data
manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered it will be checked for validity. Data can be entered with the
help of screens. Appropriate messages are provided as and when needed so that the user
is never left confused. Thus the objective of input design is to create an input
layout that is easy to follow.

3.4 SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC)

Our document plays a vital role in the software development life cycle (SDLC) as it
describes the complete requirements of the system. It is meant for use by developers and
will be the basis during the testing phase. Any changes made to the requirements in the
future will have to go through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, “A Spiral
Model of Software Development and Enhancement”. This model was not the first
to discuss iterative development, but it was the first to explain why the iteration
matters.

As originally envisioned, the iterations were typically 6 months to 2 years
long. Each phase starts with a design goal and ends with a client reviewing the
progress thus far. Analysis and engineering efforts are applied at each phase of the
project, with an eye toward the end goal of the project.

The entire project can be aborted if the risk is deemed too great. Risk factors
might involve development cost overruns, operating-cost miscalculation.

The steps for Spiral Model can be generalized as follows:

 The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.

 A preliminary design is created for the new system.

 A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.

 A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths, weakness, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is deemed too
great. Risk factors might involve development cost overruns, operating-cost
miscalculation, or any other factor that could, in the customer’s judgment, result
in a less-than-satisfactory final product.

 The existing prototype is evaluated in the same manner as was the previous
prototype, and if necessary, another prototype is developed from it according to
the fourfold procedure outlined above.

 The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.

 The final system is constructed, based on the refined prototype.

 The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to minimize
downtime.

The following diagram shows how the spiral model works:

Fig 3.1: Spiral Model

3.5 SYSTEM STUDY

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and a business proposal is
put forth with a very general plan for the project and some cost estimates. During
system analysis the feasibility study of the proposed system is to be carried out. This
is to ensure that the proposed system is not a burden to the company. For feasibility
analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will
have on the organization. The amount of funds that the company can pour into the
research and development of the system is limited. The expenditures must be justified.
Thus the developed system is well within the budget, and this was achieved because
most of the technologies used are freely available. Only the customized products had
to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed on
the client. The developed system must have modest requirements, as only minimal or
no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the
user. This includes the process of training the user to use the system efficiently. The
user must not feel threatened by the system, instead must accept it as a necessity. The
level of acceptance by the users solely depends on the methods that are employed to
educate the user about the system and to make him familiar with it. His level of
confidence must be raised so that he is also able to make some constructive criticism,
which is welcomed, as he is the final user of the system.

4. DESIGN METHODOLOGY

4.1 INTRODUCTION TO DESIGN METHODOLOGY


The most creative and challenging phase of the life cycle is system design.
The term design describes a final system and the process by which it is developed. It
refers to the technical specifications that will be applied in implementations of the
candidate system. The design may be defined as “the process of applying various
techniques and principles for the purpose of defining a device, a process or a system
with sufficient details to permit its physical realization”.

The designer’s goal is to determine how the output is to be produced and in what
format. Samples of the output and input are also presented. Second, input data and database
files have to be designed to meet the requirements of the proposed output. The
processing phases are handled through the program Construction and Testing.
Finally, details related to justification of the system and an estimate of the impact of
the candidate system on the user and the organization are documented and evaluated
by management as a step toward implementation.

The importance of software design can be stated in a single word: “Quality”.
Design provides us with representations of software that can be assessed for quality.
Design is the only way we can accurately translate a customer’s requirements into a
complete software product or system. Without design we risk building an unstable
system, which might fail if small changes are made. It may also be difficult to test, or
may be one whose quality cannot be assessed. So design is an essential phase in the
development of a software product.

4.1.1 Scope and purpose:

The purpose of the design phase is to develop a clear understanding of what
the developer wants people to gain from his/her project.
As the developer works on the project, the test for every design decision should be
"Does this feature fulfil the ultimate purpose of the project?"
A purpose statement affects the design process by explaining what the developer
wants the project to do, rather than describing the project itself.
The Design Document will verify that the current design meets all of the explicit
requirements contained in the system model as well as the implicit requirements
desired by the customer.
4.1.2 Overall System Design Objectives:
The overall system design objective is to provide an efficient, modular design
that will reduce the system’s complexity, facilitate change and result in an easy
implementation. This will be accomplished by designing a strongly cohesive system
with minimal coupling. In addition, this document will provide interface design
models that are consistent and user friendly, and will provide a straightforward
transition through the various system functions.
4.1.3 Structure of Design Document
System Architecture Design – The system architecture section has a detailed diagram
of the system, and of the server and client architecture.
Data Design – The data design includes an ERD as well as the database design.
Functional Design Description – This section takes the functional partitioning from the
SRS, and goes into great detail to describe each function.

4.2 MODULES:

 System Construction Module


 Ranking Function
 Feature Representation
 Collaborative Filtering Score

MODULES DESCRIPTION:

System Construction Module:

In the first module, we develop the system with the entities required to
evaluate our proposed model. When a new bug report is received, developers usually
need to reproduce the bug and perform code reviews to find the cause, a process that
can be tedious and time consuming. This paper introduces an adaptive ranking
approach that leverages project knowledge through functional decomposition of
source code, API descriptions of library components, the bug-fixing history, the code
change history, and the file dependency graph. Given a bug report, the ranking score
of each source file is computed as a weighted combination of an array of features,
where the weights are trained automatically on previously solved bug reports using a
learning-to-rank technique.

We propose to approach it as a ranking problem, in which the source files
(documents) are ranked with respect to their relevance to a given bug report (query).
In this project we use three entities, namely User, Developer, and Admin. If a User
finds an error in the source code, the User sends an error message to the Admin. The
Admin then analyses the errors, ranks the reports, and sends them to the Developers,
who find solutions to the errors.

Ranking Function:

The ranking function is defined as a weighted combination of features, where
the features draw heavily on knowledge specific to the software engineering domain
in order to measure relevant relationships between the bug report and the source code
file. While a bug report may share textual tokens with its relevant source files, in
general there is a significant inherent mismatch between the natural language
employed in the bug report and the programming language used in the code.

Ranking methods that are based on simple lexical matching scores have
suboptimal performance, in part due to lexical mismatches between natural language
statements in bug reports and technical terms in software systems. Our system
contains features that bridge the corresponding lexical gap by using project specific
API documentation to connect natural language terms in the bug report with
programming language constructs in the code.
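
As a concrete illustration of such a lexical matching score, the sketch below computes
a cosine similarity over raw term counts. The tokenization and the use of unweighted
term frequencies (no IDF weighting, no stemming) are simplifying assumptions made
here for illustration; this kind of baseline is exactly what the features above are
designed to improve on.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative lexical-similarity baseline between a bug report and a source file.
    public class LexicalSimilarity {

        // Split text into lower-case alphanumeric tokens and count occurrences.
        static Map<String, Integer> termCounts(String text) {
            Map<String, Integer> counts = new HashMap<>();
            for (String token : text.toLowerCase().split("[^a-z0-9]+")) {
                if (!token.isEmpty()) {
                    counts.merge(token, 1, Integer::sum);
                }
            }
            return counts;
        }

        // Cosine similarity between two term-count vectors.
        static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Integer> e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
                normA += e.getValue() * e.getValue();
            }
            for (int v : b.values()) {
                normB += (double) v * v;
            }
            return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }

        public static void main(String[] args) {
            // camelCase identifiers rarely match the report's words:
            // this prints a score of 0, illustrating the lexical gap discussed above.
            double score = cosine(termCounts("NullPointerException when saving editor layout"),
                                  termCounts("public void saveEditorLayout() { /* ... */ }"));
            System.out.println("lexical score = " + score);
        }
    }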

Source code files may contain a large number of methods of which only a
small number may be causing the bug. Correspondingly, the source code is
syntactically parsed into methods and the features are designed to exploit method
level measures of relevance for a bug report. It has been previously observed that
software process metrics (e.g., change history) are more important than code metrics
(e.g., code size) in detecting defects.

Feature Representation:

The proposed ranking model requires that a bug report - source file pair (r,s)
be represented as a vector of k features. We distinguish between two major categories
of features.

Query dependent: These are features that depend on both the bug report r and
the source code file s. A query dependent feature represents a specific relationship
between the bug report and the source file, and thus may be useful in determining
directly whether the source code file s contains a bug that is relevant for the bug
report r.

Query independent: These are features that depend only on the source code
file, i.e., their computation does not require knowledge of the bug report query. As
such, query independent features may be used to estimate the likelihood that a source
code file contains a bug, irrespective of the bug report.

We hypothesize that both types of features are useful when combined in an
overall ranking model.
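
As a minimal sketch of how these pieces fit together, the following Java fragment
represents one (bug report, source file) pair as a feature vector and scores it with a
weighted linear combination, f(r, s) = w1*phi1(r, s) + ... + wk*phik(r, s). The feature
names and weight values below are illustrative assumptions only; in the actual system
the weights are trained on previously solved bug reports by a learning-to-rank
technique.

    // Minimal sketch of the ranking model: one (bug report, source file) pair is
    // represented as a feature vector phi and scored as f(r, s) = sum_i w_i * phi_i(r, s).
    public class RankingModel {

        private final double[] weights; // in the real system, trained by learning-to-rank

        RankingModel(double[] weights) {
            this.weights = weights;
        }

        // Weighted linear combination of the features of one (report, file) pair.
        double score(double[] features) {
            double s = 0;
            for (int i = 0; i < weights.length; i++) {
                s += weights[i] * features[i];
            }
            return s;
        }

        public static void main(String[] args) {
            // Hypothetical feature vector: [lexical similarity, API-description similarity,
            // collaborative-filtering score, class-name similarity, change-history score,
            // dependency-graph (query-independent) score].
            double[] phi = {0.42, 0.31, 0.55, 0.00, 0.27, 0.18};
            double[] w   = {2.0, 1.1, 1.7, 0.9, 1.3, 0.4}; // stand-in weights
            System.out.println("ranking score = " + new RankingModel(w).score(phi));
        }
    }

Source files would then be sorted by this score and the top-ranked files recommended
to the developer.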

Collaborative Filtering Score:


It has been observed that a file that has been fixed before may be
responsible for similar bugs. This collaborative filtering effect has been used before in
other domains to improve the accuracy of recommender systems, consequently it is
expected to be beneficial in our retrieval setting, too. Given a bug report r and a
source code file s, let br(r, s) be the set of bug reports for which file s was fixed
before r was reported. The feature computes the textual similarity between the text of
the current bug report r and the summaries of all the bug reports in br(r, s). This
feature is query-dependent.
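
A sketch of how this feature could be computed is given below. The aggregation by
summation and the stand-in token-overlap similarity are assumptions made for
illustration; the report does not fix the exact similarity measure here.

    import java.util.Arrays;
    import java.util.List;

    // Sketch of the collaborative-filtering feature for one (report, file) pair.
    public class CollaborativeFiltering {

        // summariesFixedBefore stands for br(r, s): summaries of reports for which
        // file s was fixed before report r was created.
        static double cfScore(String report, List<String> summariesFixedBefore) {
            double score = 0;
            for (String summary : summariesFixedBefore) {
                score += overlap(report, summary); // summation is an assumption
            }
            return score;
        }

        // Stand-in similarity: fraction of report tokens that occur in the summary.
        static double overlap(String report, String summary) {
            String[] tokens = report.toLowerCase().split("\\W+");
            String s = summary.toLowerCase();
            int hits = 0;
            for (String t : tokens) {
                if (!t.isEmpty() && s.contains(t)) hits++;
            }
            return tokens.length == 0 ? 0 : (double) hits / tokens.length;
        }

        public static void main(String[] args) {
            System.out.println(cfScore("editor layout lost on restart",
                    Arrays.asList("editor layout corrupted", "crash on restart")));
        }
    }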

For the class name similarity feature, a bug report may directly mention a class name in
the summary, which provides a useful signal that the corresponding source file
implementing that class may be relevant for the bug report. Our hypothesis is that the
signal becomes stronger when the class name is longer and thus more specific.
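
A minimal sketch of this hypothesis, using a hypothetical class name, is given below:
a hit on a longer, more specific class name yields a stronger signal.

    // Sketch of the class-name similarity feature. The class name is hypothetical.
    public class ClassNameSimilarity {

        // Score is the length of the class name if the summary mentions it, else 0.
        static double classNameScore(String summary, String className) {
            return summary.contains(className) ? className.length() : 0;
        }

        public static void main(String[] args) {
            System.out.println(classNameScore(
                "NullPointerException in WorkbenchLayoutManager on restart",
                "WorkbenchLayoutManager")); // prints 22.0
        }
    }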

4.3 SYSTEM ARCHITECTURE

A system architecture is a conceptual model that defines
the structure, behaviour, and multiple views of a system. An architecture description is a
formal description and representation of a system, organized in a way that supports
reasoning about the structures and behaviors of the system. A system architecture can
comprise system components that will work together to implement the overall system.
There have been efforts to formalize languages to describe system architecture;
collectively these are called architecture description languages.

System architecture is abstract, conceptualization-oriented, global, and focused to
achieve the mission and life cycle concepts of the system. It also focuses on high-
level structure in systems and system elements. It addresses the architectural
principles, concepts, properties, and characteristics of the system-of-interest.

Fig 4.3.1 Software Architecture

BLOCK DIAGRAM:

Fig 4.3.2 Block Diagram

4.4 INPUT AND OUTPUT DESIGNS

INPUT DESIGN

The input design is the link between the information system and the user. It
comprises the specifications and procedures developed for data preparation, that is,
the steps necessary to put transaction data into a usable form for processing. This can
be achieved by instructing the computer to read data from a written or printed
document, or by having people key the data directly into the system. The design of
input focuses on controlling the amount of input required, controlling errors, avoiding
delay, avoiding extra steps and keeping the process simple. The input is designed in
such a way that it provides security and ease of use while retaining privacy. Input
Design considered the following things:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.

 Methods for preparing input validations and steps to follow when errors
occur.
OBJECTIVES

1. Input Design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the data
input process and to show the correct direction to the management for getting correct
information from the computerized system.

2. It is achieved by creating user-friendly screens for the data entry to handle large
volume of data. The goal of designing input is to make data entry easier and to be free
from errors. The data entry screen is designed in such a way that all the data
manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered it will be checked for validity. Data can be entered with the
help of screens. Appropriate messages are provided as and when needed so that the user
is never left confused. Thus the objective of input design is to create an input
layout that is easy to follow.

OUTPUT DESIGN

A quality output is one, which meets the requirements of the end user and
presents the information clearly. In any system results of processing are
communicated to the users and to other systems through outputs. In output design it is
determined how the information is to be displayed for immediate need and also the
hard copy output. It is the most important and direct source information to the user.
Efficient and intelligent output design improves the system’s relationship to help user
decision-making.

1. Designing computer output should proceed in an organized, well-thought-out
manner; the right output must be developed while ensuring that each output element is
designed so that people will find the system easy to use and effective. When
analysts design computer output, they should identify the specific output that is
needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the
system.

The output form of an information system should accomplish one or more of the
following objectives.

 Convey information about past activities, current status, or projections of the
future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.
4.5 DATA FLOW DIAGRAM:
1. The DFD is also called a bubble chart. It is a simple graphical formalism that
can be used to represent a system in terms of input data to the system, various
processing carried out on this data, and the output data generated by this
system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the system
process, the data used by the process, an external entity that interacts with the
system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that depicts
information flow and the transformations that are applied as data moves from
input to output.
4. A DFD may be used to represent a system at any level of abstraction. A DFD
may be partitioned into levels that represent increasing information flow and
functional detail.

Fig 4.5 Data flow diagram

4.6 UML DIAGRAMS


UML stands for Unified Modelling Language. UML is a standardized general-
purpose modelling language in the field of object-oriented software engineering. The
standard is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of
object oriented computer software. In its current form UML comprises two
major components: a Meta-model and a notation. In the future, some form of method
or process may also be added to, or associated with, UML.
The Unified Modelling Language is a standard language for specifying,
Visualization, Constructing and documenting the artifacts of software system, as well
as for business modelling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modelling of large and complex systems.

The UML is a very important part of developing object-oriented software
and the software development process. The UML uses mostly graphical notations to
express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modelling Language so that
they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
process.
4. Provide a formal basis for understanding the modelling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
4.6.1 USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a Use-case analysis. Its purpose is to
present a graphical overview of the functionality provided by a system in terms of
actors, their goals (represented as use cases), and any dependencies between those use
cases. The main purpose of a use case diagram is to show what system functions are
performed for which actor. Roles of the actors in the system can be depicted.

Fig 4.6.1 Use case Diagram

4.6.2 CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modelling


Language (UML) is a type of static structure diagram that describes the
structure of a system by showing the system's classes, their attributes,
operations (or methods), and the relationships among the classes. It
explains which class contains information.

Fig 4.6.2 Class Diagram

4.6.3 SEQUENCE DIAGRAM:


A sequence diagram in Unified Modelling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.

Fig 4.6.3 Sequence Diagram

4.6.4 ACTIVITY DIAGRAM:


Activity diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the Unified
Modelling Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity diagram
shows the overall flow of control.

[Activity diagram: the manager and team log in, assign developers, and split bugs; a
developer logs in, accepts assigned bugs, views bug details, marks them rectified or
unrectified, and records solutions in the bug history.]

Fig 4.6.4 Activity Diagram

5. TECHNOLOGY IMPLEMENTATION AND RESULTS

5.1 SOFTWARE ENVIRONMENT

5.1.1 Java Technology:


Java technology is both a programming language and a platform.

The Java Programming Language


The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure
With most programming languages, you either compile or interpret a program so that
you can run it on your computer. The Java programming language is unusual in that a
program is both compiled and interpreted. With the compiler, first you translate a
program into an intermediate language called Java byte codes, the platform-
independent codes interpreted by the interpreter on the Java platform. The interpreter
parses and runs each Java byte code instruction on the computer. Compilation
happens just once; interpretation occurs each time the program is executed. The
following figure illustrates how this works.

Fig 5.1.11 Compilation and interpretation of a Java program
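
For concreteness, here is a minimal program together with the two commands (shown
as comments) that exercise this compile-then-interpret cycle:

    // HelloWorld.java -- minimal example of the compile-then-interpret cycle:
    //
    //   javac HelloWorld.java   (compile once into platform-independent byte codes)
    //   java HelloWorld         (interpret the byte codes on any Java VM)
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello from the Java platform");
        }
    }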

You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool
or a Web browser that can run applets, is an implementation of the Java VM. Java
byte codes help make “write once, run anywhere” possible. You can compile your
program into byte codes on any platform that has a Java compiler. The byte codes can
then be run on any implementation of the Java VM. That means that as long as a
computer has a Java VM, the same program written in the Java programming
language can run on Windows 2000, a Solaris workstation, or on an iMac.

Fig 5.1.12 Java Virtual Machine

The Java Platform


A platform is the hardware or software environment in which a program runs.
We’ve already mentioned some of the most popular platforms like Windows 2000,
Linux, Solaris, and MacOS. Most platforms can be described as a combination of the
operating system and hardware. The Java platform differs from most other platforms
in that it’s a software-only platform that runs on top of other hardware-based
platforms.

The Java platform has two components:
 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform
and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that
provide many useful capabilities, such as graphical user interface (GUI) widgets. The
Java API is grouped into libraries of related classes and interfaces; these libraries are
known as packages. The next section, “What Can Java Technology Do?”, highlights
the functionality that some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform.
As the figure shows, the Java API and the virtual machine insulate the program from
the hardware.

Fig 5.1.13 Program running on java platform

Native code is code that, once compiled, runs on a
specific hardware platform. As a platform-independent environment, the Java
platform can be a bit slower than native code. However, smart compilers, well-tuned
interpreters, and just-in-time byte code compilers can bring performance close to that
of native code without threatening portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming
language are applets and applications. If you’ve surfed the Web, you’re probably
already familiar with applets. An applet is a program that adheres to certain
conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute,
entertaining applets for the Web. The general-purpose, high-level Java programming
language is also a powerful software platform. Using the generous API, you can write
many types of programs.
An application is a standalone program that runs directly on the Java platform.
A special kind of application known as a server serves and supports clients on a
network. Examples of servers are Web servers, proxy servers, mail servers, and print
servers. Another specialized program is a servlet. A servlet can almost be thought of
as an applet that runs on the server side. Java Servlets are a popular choice for
building interactive web applications, replacing the use of CGI scripts. Servlets are
similar to applets in that they are runtime extensions of applications. Instead of
working in browsers, though, servlets run within Java Web servers, configuring or
tailoring the server.
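
As an illustration, the sketch below shows a minimal servlet of the kind described
here. The class name is hypothetical, and the deployment descriptor (web.xml) that
maps it to a URL is omitted, since it depends on the web server used.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal servlet: runs inside a Java web server and answers HTTP GET requests.
    public class HelloServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
        }
    }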
How does the API support all these kinds of programs? It does so with
packages of software components that provide a wide range of functionality. Every
full implementation of the Java platform gives you the following features:
 The essentials: Objects, strings, threads, numbers, input and output,
data structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol), UDP (User
Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
 Internationalization: Help for writing programs that can be localized
for users worldwide. Programs can automatically adapt to specific
locales and be displayed in the appropriate language.
 Security: Both low level and high level, including electronic
signatures, public and private key management, access control, and
certificates.
 Software components: Known as JavaBeans, these can plug into existing
component architectures.
 Object serialization: Allows lightweight persistence and
communication via Remote Method Invocation (RMI).
 Java Database Connectivity (JDBC): Provides uniform access to a
wide range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility,
servers, collaboration, telephony, speech, animation, and more. The following figure
depicts what is included in the Java 2 SDK.

Fig 5.1.14 Java 2 SDK
How Will Java Technology Change My Life?
We can’t promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better and to require
less effort than other languages. We believe that Java technology will help you do the
following:
 Get started quickly: Although the Java programming language is a
powerful object-oriented language, it’s easy to learn, especially for
programmers already familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts,
method counts, and so on) suggest that a program written in the Java
programming language can be four times smaller than the same
program in C++.
 Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps you avoid memory
leaks. Its object orientation, its JavaBeans component architecture, and
its wide-ranging, easily extendible API let you reuse other people’s
tested code and introduce fewer bugs.
 Develop programs more quickly: Development in the Java programming
language can be as much as twice as fast as in C++. Why?
You write fewer lines of code and it is a simpler programming
language than C++.

 Avoid platform dependencies with 100% Pure Java: You can keep
your program portable by avoiding the use of libraries written in other
languages. The 100% Pure Java Product Certification Program has a
repository of historical process manuals, white papers, brochures, and
similar materials online.
 Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently
on any Java platform.
 Distribute software more easily: You can upgrade applets easily from
a central server. Applets take advantage of the feature of allowing new
classes to be loaded “on the fly,” without recompiling the entire
program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database systems,
programmers had to use proprietary languages for each database they wanted to
connect to. Now, ODBC has made the choice of the database system almost irrelevant
from a coding perspective, which is as it should be. Application developers have
much more important things to worry about than the syntax that is needed to port their
program from one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the
particular database that is associated with a data source that an ODBC application
program is written to use. Think of an ODBC data source as a door with a name on it.
Each door will lead you to a particular database. For example, the data source named
Sales Figures might be a SQL Server database, whereas the Accounts Payable data
source could refer to an Access database. The physical database referred to by a data
source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95.
Rather, they are installed when you setup a separate database application, such as
SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control
Panel, it uses a file called ODBCINST.DLL.

From a programming perspective, the beauty of ODBC is that the application
can be written to use the same set of function calls to interface with any data source,
regardless of the database vendor. The source code of the application doesn’t change
whether it talks to Oracle or SQL Server. We only mention these two as an example.
There are ODBC drivers available for several dozen popular database systems. Even
Excel spreadsheets and plain text files can be turned into data sources. The operating
system uses the Registry information written by ODBC Administrator to determine
which low-level ODBC drivers are needed to talk to the data source (such as the
interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to
the ODBC application program. In a client/server environment, the ODBC API even
handles many of the network issues for the application programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn’t as efficient
as talking directly to the native database interface. ODBC has had many detractors
make the charge that it is too slow. Microsoft has always claimed that the critical
factor in performance is the quality of the driver software that is used. In our humble
opinion, this is true. The availability of good ODBC drivers has improved a great deal
recently. And anyway, the criticism about performance is somewhat analogous to
those who said that compilers would never match the speed of pure assembly
language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write
cleaner programs, which means you finish sooner. Meanwhile, computers get faster
every year.

JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMSs. This consistent interface is achieved through the use of “plug-in”
database connectivity modules, or drivers. If a database vendor wishes to have JDBC
support, he or she must provide the driver for each platform that the database and Java
run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on
ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a
variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC
drivers to market much faster than developing a completely new connectivity
solution.
JDBC was announced in March of 1996. It was released for a 90 day public
review that ended June 8, 1996. Because of user input, the final JDBC v1.0
specification was released soon after.
The remainder of this section will cover enough information about JDBC for you to
know what it is about and how to use it effectively. This is by no means a complete
overview of JDBC. That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind, and JDBC is no
exception: its many goals drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library into a
solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight
as to why certain classes and functionalities behave the way they do. The eight design
goals for JDBC are as follows:

SQL Level API


The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough level
for higher-level tools and APIs to be created. Conversely, it is at a high enough level
for application programmers to use it confidently. Attaining this goal allows for future
tool vendors to “generate” JDBC code and to hide many of JDBC’s complexities from
the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement to
be passed through it to the underlying database driver. This allows the connectivity
module to handle non-standard functionality in a manner that is suitable for its users.

1. JDBC must be implementable on top of common database interfaces


The JDBC SQL API must “sit” on top of other common SQL level APIs.

This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
2. Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers
feel that they should not stray from the current design of the core Java system.

3. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.

4. Use strong, static typing wherever possible


Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.

5. Keep the common cases simple


Because more often than not, the usual SQL calls used by the programmer are
simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should
be simple to perform with JDBC. However, more complex SQL statements should
also be possible.
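
To illustrate such a common case, the following sketch issues a simple parameterized
SELECT through JDBC. The connection URL, table name, and credentials are
placeholders for illustration, not values taken from this project.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // A "common case" in JDBC: a simple parameterized SELECT.
    public class SimpleSelect {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:mysql://localhost:3306/bugtracker"; // hypothetical database
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id, summary FROM bug_reports WHERE status = ?")) {
                ps.setString(1, "OPEN"); // parameters are bound, not concatenated
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + ": " + rs.getString("summary"));
                    }
                }
            }
        }
    }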

J2ME (Java 2 Micro edition):-

Sun Microsystems defines J2ME as "a highly optimized Java run-time
environment targeting a wide range of consumer products, including pagers, cellular
phones, screen-phones, digital set-top boxes and car navigation systems." Announced
in June 1999 at the JavaOne Developer Conference, J2ME brings the cross-platform
functionality of the Java language to smaller devices, allowing mobile wireless
devices to share applications. With J2ME, Sun has adapted the Java platform for
consumer products that incorporate or are based on small computing devices.

1. General J2ME architecture

Fig 5.1.23 J2ME Architecture

J2ME uses configurations and profiles to customize the Java Runtime Environment (JRE). As a complete JRE, J2ME is comprised of a configuration, which determines the JVM used, and a profile, which defines the application by adding domain-specific classes. The configuration defines the basic run-time environment as a set of core classes and a specific JVM that run on specific types of devices; we discuss configurations in detail in the Configurations overview section below. The profile defines the application; specifically, it adds domain-specific classes to the J2ME configuration to define certain uses for devices. We cover profiles in depth in the J2ME profiles section below. The figure above (Fig 5.1.23) depicts the relationship between the different virtual machines, configurations, and profiles, and draws a parallel with the J2SE API and its Java virtual machine. While the J2SE virtual machine is generally referred to as a JVM, the J2ME virtual machines, KVM and CVM, are subsets of the JVM. Both KVM and CVM can be thought of as a kind of Java virtual machine; they are simply shrunken versions of the J2SE JVM that are specific to J2ME.

2. Developing J2ME applications

In this section, we will go over some considerations you need to keep in mind when developing applications for smaller devices. We'll take a look at the way the compiler is invoked when using J2SE to compile J2ME applications. Finally, we'll explore packaging and deployment and the role preverification plays in this process.

3. Design considerations for small devices

Developing applications for small devices requires you to keep certain strategies in
mind during the design phase. It is best to strategically design an application for a
small device before you begin coding. Correcting the code because you failed to
consider all of the "gotchas" before developing the application can be a painful
process. Here are some design strategies to consider:

 Keep it simple. Remove unnecessary features, possibly making those features a separate, secondary application.
 Smaller is better. This consideration should be a "no brainer" for all
developers. Smaller applications use less memory on the device and require
shorter installation times. Consider packaging your Java applications as
compressed Java Archive (jar) files.
 Minimize run-time memory use. To minimize the amount of memory used at run time, use scalar types in place of object types. Also, do not depend on the garbage collector: manage memory efficiently yourself by setting object references to null when you are finished with them. Another way to reduce run-time memory is lazy instantiation, allocating objects only on an as-needed basis (see the sketch after this list). Other ways of reducing overall and peak memory use on small devices are to release resources quickly, reuse objects, and avoid exceptions.
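As a minimal sketch of the last two points above, the hypothetical class below allocates its buffer lazily on first use and releases the reference explicitly when done; the class name and buffer size are illustrative only.

class ImageCache {
    private byte[] buffer; // not allocated until actually needed

    byte[] getBuffer() {
        // Lazy instantiation: allocate only on the first request.
        if (buffer == null) {
            buffer = new byte[4096];
        }
        return buffer;
    }

    void release() {
        // Setting the reference to null lets the garbage collector
        // reclaim the memory promptly on a memory-constrained device.
        buffer = null;
    }
}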

4. Configurations overview

The configuration defines the basic run-time environment as a set of core classes and
a specific JVM that run on specific types of devices. Currently, two configurations
exist for J2ME, though others may be defined in the future:

 Connected Limited Device Configuration (CLDC) is used specifically with the KVM for 16-bit or 32-bit devices with limited amounts of memory. This is the configuration (and the virtual machine) used for developing small J2ME applications. Its size limitations make CLDC more interesting and challenging (from a development point of view) than CDC. An example of a small wireless device running small applications is a Palm hand-held computer.
 Connected Device Configuration (CDC) is used with the C virtual machine
(CVM) and is used for 32-bit architectures requiring more than 2 MB of
memory. An example of such a device is a Net TV box.

5. J2ME profiles

What is a J2ME profile?

As mentioned earlier, a profile defines the type of device supported.
The Mobile Information Device Profile (MIDP), for example, defines classes for
cellular phones. It adds domain-specific classes to the J2ME configuration to define
uses for similar devices. Two profiles have been defined for J2ME and are built upon
CLDC: KJava and MIDP. Both KJava and MIDP are associated with CLDC and
smaller devices. Profiles are built on top of configurations. Because profiles are
specific to the size of the device (amount of memory) on which an application runs,
certain profiles are associated with certain configurations.

A skeleton profile upon which you can create your own profile, the Foundation
Profile, is available for CDC.

Profile 1: KJava

KJava is Sun's proprietary profile and contains the KJava API. The KJava profile is
built on top of the CLDC configuration. The KJava virtual machine, KVM, accepts
the same byte codes and class file format as the classic J2SE virtual machine. KJava
contains a Sun-specific API that runs on the Palm OS. The KJava API has a great deal
in common with the J2SE Abstract Windowing Toolkit (AWT). However, because it
is not a standard J2ME package, its main package is com.sun.kjava.

Profile 2: MIDP

MIDP is geared toward mobile devices such as cellular phones and pagers. The MIDP, like KJava, is built upon CLDC and provides a standard run-time environment that allows new applications and services to be deployed dynamically on end user devices. MIDP is a common, industry-standard profile for mobile devices that is not dependent on a specific vendor. It is a complete and supported foundation for mobile application development. MIDP contains the following packages, the first four of which are core CLDC packages, followed by three MIDP-specific packages:

 java.lang
 java.io
 java.util
 javax.microedition.io
 javax.microedition.lcdui
 javax.microedition.midlet
 javax.microedition.rms
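To illustrate how these packages fit together, here is a minimal MIDlet sketch; the class name and displayed text are arbitrary. Every MIDlet extends javax.microedition.midlet.MIDlet and implements the three lifecycle methods shown.

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMIDlet extends MIDlet {
    protected void startApp() {
        // Called when the application manager starts (or resumes) the MIDlet.
        Form form = new Form("Hello");
        form.append("Hello from MIDP");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() {
        // Called when the device must pause the MIDlet (e.g., an incoming call).
    }

    protected void destroyApp(boolean unconditional) {
        // Called before the MIDlet is terminated; release resources here.
    }
}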

5.1.2 MySQL:
MySQL is a relational database management system (RDBMS) that runs as a
server providing multi-user access to a number of databases. MySQL is a popular
choice of database for use in web applications and is an open source product.
The process of setting up a MySQL database varies from host to host; however, we will end up with a database name, a user name, and a password. Before using the database, we must create a table. Creating a table in phpMyAdmin is simple: we just type the name, select the number of fields, and click the ‘Go’ button. We are then taken to a setup screen where we create the fields for the table. Another way of creating databases and tables in phpMyAdmin is by executing simple SQL statements. We have used this method to create our database and tables.
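As an illustration of this method, the sketch below issues a CREATE TABLE statement through JDBC; the same SQL could equally be pasted into phpMyAdmin's SQL tab. The connection details and the bug_reports schema are placeholders, not our actual table definition.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTable {
    public static void main(String[] args) throws Exception {
        // Placeholder host, database name, and credentials.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/bugtracker", "user", "password");
        Statement st = con.createStatement();

        // id serves as the primary key of the illustrative table.
        st.executeUpdate(
                "CREATE TABLE bug_reports ("
                + "  id INT AUTO_INCREMENT PRIMARY KEY,"
                + "  title VARCHAR(255) NOT NULL,"
                + "  status VARCHAR(32) DEFAULT 'OPEN'"
                + ")");

        st.close();
        con.close();
    }
}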
What is a Database?
A database is a separate application that stores a collection of data. Each
database has one or more distinct APIs for creating, accessing, managing, searching
and replicating the data it holds.

Other kinds of data stores can be used, such as files on the file system or large hash tables in memory, but data fetching and writing would not be so fast and easy with those types of systems. So nowadays, we use relational database management systems (RDBMS) to store and manage huge volumes of data. This is called a relational database because all the data is stored in different tables and relations are established using primary keys or other keys known as foreign keys.

A Relational Database Management System (RDBMS) is software that:

 Enables you to implement a database with tables, columns and indexes.

 Guarantees the Referential Integrity between rows of various tables.

 Updates the indexes automatically.

 Interprets an SQL query and combines information from various tables.

RDBMS Terminology:
Before we proceed to explain the MySQL database system, let's review a few definitions related to databases.

 Database: A database is a collection of tables, with related data.

 Table: A table is a matrix with data. A table in a database looks like a simple
spreadsheet.

 Column: One column (data element) contains data of one and the same kind,
for example the column postcode.

 Row: A row (= tuple, entry or record) is a group of related data, for example
the data of one subscription.

 Redundancy: Storing data twice, redundantly, to make the system faster.

 Primary Key: A primary key is unique. A key value cannot occur twice in
one table. With a key, you can find at most one row.

 Foreign Key: A foreign key is the linking pin between two tables.

 Compound Key: A compound key (composite key) is a key that consists of
multiple columns, because one column is not sufficiently unique.

 Index: An index in a database resembles an index at the back of a book.

MySQL Database:
MySQL is a fast, easy-to-use RDBMS being used for many small and big businesses.
MySQL is developed, marketed, and supported by MySQL AB, which is a Swedish
company. MySQL is becoming so popular because of many good reasons:

 MySQL is released under an open-source license, so you have nothing to pay to use it.

 MySQL is a very powerful program in its own right. It handles a large subset
of the functionality of the most expensive and powerful database packages.

 MySQL uses a standard form of the well-known SQL data language.

 MySQL works on many operating systems and with many languages including PHP, PERL, C, C++, JAVA, etc.

 MySQL is very friendly to PHP, the most appreciated language for web
development.

 MySQL supports large databases, up to 50 million rows or more in a table. The default file size limit for a table is 4GB, but you can increase this (if your operating system can handle it) to a theoretical limit of 8 million terabytes (TB).

Backup software:

mysqldump is a logical backup tool included with both community and enterprise editions of MySQL. It supports backing up from all storage engines. MySQL Enterprise Backup is a hot backup utility included as part of the MySQL Enterprise subscription from Oracle, offering native InnoDB hot backup, as well as backup for other storage engines.
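As a hedged example, a typical logical backup with mysqldump looks like the following, where the database name and output file are placeholders; the resulting file contains SQL statements that recreate the database contents.

mysqldump -u user -p bugtracker > bugtracker_backup.sql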

6. TESTING
6.1 Test Case description
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.

6.2 TYPES OF TESTS


Functional test

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

System Test

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

6.3 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in
detail.

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.

 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
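As a minimal sketch of a unit test for objectives like these, the JUnit 4 test below checks a hypothetical entry-format validator; the validator is inlined so the sketch is self-contained, and neither it nor the test data comes from the project's real code.

import org.junit.Assert;
import org.junit.Test;

public class FieldValidatorTest {

    // Hypothetical validator used only for illustration.
    static boolean isValidEmail(String s) {
        return s != null && s.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
    }

    @Test
    public void acceptsWellFormedEntry() {
        // Valid input: identified classes of valid input must be accepted.
        Assert.assertTrue(isValidEmail("dev@example.com"));
    }

    @Test
    public void rejectsMalformedEntry() {
        // Invalid input: identified classes of invalid input must be rejected.
        Assert.assertFalse(isValidEmail("not-an-email"));
    }
}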
6.4 Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to expose failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g., components in a software system or – one step up – software applications at the company level, interact without error.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

6.5 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

7. OUTPUT SCREENS

SCR 7.1 Main page
SCR 7.2 Developer Application Form
SCR 7.3 Team lead login
SCR 7.4 Manager Login
SCR 7.5 Manager Home
SCR 7.6 Recruit Employee
SCR 7.7 Recruit Employee (Changed Values)
SCR 7.8 Team lead login
SCR 7.9 Team lead Home
SCR 7.10 Affix New Bug Report
SCR 7.11 Data Assigned based on Feature Selection
SCR 7.12 Data Assigned based on Instance Selection
SCR 7.13 Bug Report Analysis
SCR 7.14 Assign Developer
SCR 7.15 Developer Login
SCR 7.16 Bug Report Status
SCR 7.17 Rectification Status
SCR 7.18 Rectified Bugs
SCR 7.19 Team lead login
SCR 7.20 Checking Bug Rectified status
SCR 7.21 Data Reduction based on Instance Selection
SCR 7.22 Manager login
SCR 7.23 Trace History of Bug
SCR 7.24 Checking Bug Rectified Status
SCR 7.25 Data Reduction based on Instance Selection
8. CONCLUSION
To locate a bug, developers use not only the content of the bug report but also
domain knowledge relevant to the software project. We introduced a learning-to-rank
approach that emulates the bug finding process employed by developers. The ranking
model characterizes useful relationships between a bug report and source code files by
leveraging domain knowledge, such as API specifications, the syntactic structure of
code, or issue tracking data. Experimental evaluations on six Java projects show that
our approach can locate the relevant files within the top 10 recommendations for over
70 percent of the bug reports in Eclipse Platform and Tomcat. Furthermore, the
proposed ranking model outperforms three recent state-of-the-art approaches. Feature
evaluation experiments employing greedy backward feature elimination demonstrate
that all features are useful. When coupled with runtime analysis, the feature
evaluation results can be utilized to select a subset of features in order to achieve a
target trade-off between system accuracy and runtime complexity. The proposed
adaptive ranking approach is generally applicable to software projects for which there
exists a sufficient amount of project specific knowledge, such as a comprehensive
API documentation (Section 3.1.2) and an initial number of previously fixed bug
reports (Section 6.1). Furthermore, the ranking performance can benefit from
informative bug reports and well documented code leading to a better lexical
similarity (Section 3.1.1), and from source code files that already have a bug-fixing
history (Section 3.2). In future work, we will leverage additional types of domain
knowledge, such as the stack traces submitted with bug reports and the file change
history, as well as features previously used in defect prediction systems. We also plan
to use the ranking SVM with nonlinear kernels and further evaluate the approach on
projects in other programming languages.
