
CHAPTER- I

INTRODUCTION

Introduction

Recent developments in network and computing technology enable many people to
easily share their data with others using online external storage. People can share their lives with
friends by uploading private photos or messages to online social networks such as
Facebook and Myspace, or upload highly sensitive personal health records to online data
servers such as Microsoft HealthVault and Google Health for ease of sharing with their primary
doctors or for cost saving. As people enjoy the advantages of these new technologies and
services, their concerns about data security and access control also arise. Improper use of the
data by the storage server or unauthorized access by outside users could be potential threats to
their data. People would like to make their sensitive or private data accessible only to the
authorized people holding the credentials they specified.

Overview of Project

Cloud storage service offers significant advantages in both convenient data sharing and cost
reduction. Thus, more and more enterprises and individuals outsource their data to the cloud to
benefit from this service. However, this new paradigm of data storage poses new challenges
to data confidentiality preservation. As the cloud service separates the data from the cloud service
client (an individual or an entity), depriving the client of direct control over the data, the data owner
cannot trust the cloud server to conduct secure data access control. Therefore, secure access
control has become a challenging issue in public cloud storage.
Ciphertext-policy attribute-based encryption (CP-ABE) is a useful cryptographic method
for data access control in cloud storage. CP-ABE based schemes enable data owners to
realize fine-grained and flexible access control over their own data. However, CP-ABE determines
users' access privilege based only on their inherent attributes, without any other critical factors
such as the time factor. In reality, the time factor usually plays an important role in dealing with
time-sensitive data (e.g., publishing the latest issue of an electronic magazine, or exposing a company's
future business plan). In these scenarios, both timed release of access privileges and
fine-grained access control should be taken into account together. Consider enterprise data
exposure as an example: a company usually prepares some important files for different intended
users, and these users gain their access privileges at different time points. For example, the
future plan of the company may contain some business secrets.

Thus, at an early time, the access privilege can be released to the CEO only. Then the
managers of some relevant departments can gain the access privilege at a later time point, when
they take responsibility for executing the plan. Finally, other employees in some specific
departments of the company can access the data to evaluate the completeness of this enterprise
plan. When uploading time-sensitive data to the cloud, the data owner wants different users to
be able to access the content only after different time points. For outsourced data storage, CP-ABE can
characterize different users and provide fine-grained access control. However, to the best of our
knowledge, existing schemes cannot support such gradual releasing of access privileges.

Motivation

 A new time and attribute factors combined CP-ABE (Ciphertext-Policy Attribute-Based
Encryption) based access control scheme is proposed that allows privacy-ensured fine-
grained access control over time-sensitive data stored in the cloud.
 To enable personalized sharing of individuals' data with others in the cloud.
 To support fine-grained, time-sensitive data publishing.

Aim & Objective


A new multi-authority CP-ABE scheme with an efficient decryption and attribute
revocation method that achieves both forward security and backward security (DAC-MACS).
The problem is solved by combining the CP-ABE and TRE mechanisms in the public cloud, so
that different users can be assigned different releasing time points.

Challenges of Present System

Cloud storage service offers significant advantages in both convenient data sharing and cost
reduction. Thus, more and more enterprises and individuals outsource their data to the cloud to
benefit from this service. However, this new paradigm of data storage poses new challenges
to data confidentiality preservation. As the cloud service separates the data from the cloud service
client (an individual or an entity), depriving the client of direct control over the data, the data owner
cannot trust the cloud server to conduct secure data access control. Therefore, secure access
control has become a challenging issue in public cloud storage. Ciphertext-policy
attribute-based encryption (CP-ABE) is a useful cryptographic method for data access control in
cloud storage. CP-ABE based schemes enable data owners to realize fine-grained and
flexible access control over their own data. However, CP-ABE determines users' access privilege
based only on their inherent attributes, without any other critical factors such as the time factor.
In reality, the time factor usually plays an important role in dealing with time-sensitive data
(e.g., publishing the latest issue of an electronic magazine).

Problem statement

 Most deniable public-key schemes are bitwise, which means these schemes can only
process one bit at a time; therefore, bitwise deniable encryption schemes are inefficient for
real use, especially in the cloud storage service case.
 To solve this problem, a hybrid encryption scheme was designed that simultaneously uses
symmetric and asymmetric encryption (a minimal sketch of the hybrid idea follows this list).
 A deniably encrypted, plan-ahead symmetric data encryption key is used, while the real data
are encrypted by a symmetric-key encryption mechanism.
 Most deniable encryption schemes have decryption error problems. These errors come
from the designed decryption mechanisms.
 The subset decision mechanism is used for decryption: the receiver determines the
decrypted message according to the subset decision result.
 If the sender chooses an element from the universal set but, unfortunately, the element is
located in the specific subset, then an error occurs.
 The same error occurs in all translucent-set-based deniable encryption schemes.
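As a reference point for the hybrid idea above, here is a minimal, generic hybrid-encryption sketch in Java using only the standard JCA (class and variable names are illustrative, and this is not the deniable scheme itself): the bulk data is encrypted under a fast symmetric AES key, and only that short key is protected by the slower public-key layer.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridEncryptionDemo {
    public static void main(String[] args) throws Exception {
        KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        SecretKey aesKey = KeyGenerator.getInstance("AES").generateKey();

        // 1) Encrypt the real data under the symmetric (AES) key.
        Cipher dataCipher = Cipher.getInstance("AES");
        dataCipher.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] dataCiphertext = dataCipher.doFinal("large file contents".getBytes());

        // 2) Protect only the short AES key with the asymmetric (RSA) layer.
        Cipher keyCipher = Cipher.getInstance("RSA");
        keyCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
        byte[] wrappedKey = keyCipher.doFinal(aesKey.getEncoded());

        // Receiver side: unwrap the AES key with the private key, then decrypt the data.
        keyCipher.init(Cipher.DECRYPT_MODE, rsa.getPrivate());
        SecretKey recovered = new SecretKeySpec(keyCipher.doFinal(wrappedKey), "AES");
        dataCipher.init(Cipher.DECRYPT_MODE, recovered);
        System.out.println(new String(dataCipher.doFinal(dataCiphertext)));
    }
}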

Project Description:

By integrating TRE and CP-ABE in public cloud storage, an efficient scheme is proposed to
realize secure fine-grained access control for time-sensitive data. In the proposed scheme, the data
owner can autonomously designate the intended users and their relevant access-privilege releasing
time points. Besides realizing this function, it is proved that only a negligible burden is placed upon
owners, users and the trusted CA. The work also presents how to design an access structure for any
potential timed-release access policy, especially one embedding multiple releasing time points for
different intended users. To the best of our knowledge, this is the first study of how to design access
structures for general time-sensitive access requirements. Furthermore, a rigorous security proof is
given to validate that the proposed scheme is secure and effective.
CHAPTER- II

LITERATURE SURVEY

OVERVIEW OF SECURITY IN ABE


ABE comes in two flavors, called key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE).
In KP-ABE, attributes are used to describe the encrypted data and policies are built into users'
keys, while in CP-ABE, attributes are used to describe users' credentials, and an encryptor
determines a policy on who can decrypt the data.
Removing Escrow:
Most of the existing ABE schemes are constructed on an architecture where a single
trusted authority or KGC has the power to generate all users' private keys with its
master secret information. Thus, the key escrow problem is inherent, such that the KGC can
decrypt every ciphertext addressed to users in the system by generating their secret keys at any
time.
Chase and Chow presented a distributed KP-ABE scheme that solves the key escrow
problem in a multi-authority system. In this approach, all attribute authorities participate in
the key generation protocol in a distributed way, such that they cannot pool their data and link
multiple attribute sets belonging to the same user. One disadvantage of this kind of fully
distributed approach is performance degradation. Since there is no centralized authority with
master secret information, all attribute authorities must communicate with the other authorities
in the system to generate a user's secret key.

Recently, Chow proposed an anonymous private key generation protocol in the identity-
based literature, such that the KGC can issue a private key to an authenticated user without
knowing the list of users' identities. It seems that this anonymous private key generation protocol
would work properly in ABE systems if we treat an attribute as an identity in this construction.
However, we found that it cannot be adapted to ABE systems, mainly for two reasons. First,
in Chow's protocol, identities of users are no longer public, at least to the KGC, because otherwise
the KGC could generate users' secret keys. Since public keys are no longer "public", additional
secure protocols are needed for users to obtain the attribute information from attribute
authorities. Second, since collusion among users is the main security threat in ABE,
the KGC issues different personalized key components to users by blinding them with a random
secret, even if they are associated with the same set of attributes. The random secret is unique and
should remain consistent for the same user across any possible attribute change of that user. However,
it is impossible for the KGC to issue a personalized key component with the same random secret as
that of the attribute key components to a user.
Bethencourt et al. and Boldyreva et al. proposed the first key revocation mechanisms in CP-
ABE and KP-ABE settings, respectively. These schemes enable attribute key revocation by
encrypting the message to the attribute set with its validation time. These attribute-revocable
ABE schemes have a security degradation problem in terms of backward and forward
secrecy. They revoke the attribute itself using a timed rekeying mechanism, which is realized by
setting an expiration time on each attribute. In ABE systems, it is a realistic scenario that
membership may change frequently in the attribute group. Then, a new user might be able to
access previous data encrypted before his joining, until the data are re-encrypted with the newly
updated attribute keys by periodic rekeying. On the other hand, a revoked user would still be able
to access the encrypted data, even if he no longer holds the attribute, until the next
expiration time. Such an uncontrolled period is called the window of vulnerability.
Recently, the importance of immediate user revocation has been recognized in
many practical ABE-based systems. User revocation can be done by using ABE that supports
negative clauses, as proposed by Ostrovsky et al. To do so, one just conjunctively adds the AND of
the negation of revoked user identities. One drawback of this scheme is that the private key size
increases by a multiplicative factor of log n, where n is the maximum number
of attributes. Lewko et al. proposed more efficient instantiations of the Ostrovsky et al. framework for
nonmonotonic ABE, where the public parameters consist of only O(1) group elements, and private keys for
access structures involving t leaf attributes are of size O(t). However, these user-revocable
schemes also have a limitation with regard to availability.

A user-revocable ABE scheme addresses this problem by combining broadcast encryption
schemes with ABE schemes. However, in this scheme, the data owner must take full charge of
maintaining the entire membership list for each attribute group to enable direct user
revocation. This scheme is not applicable to the data sharing system, because data owners
are no longer directly in control of the data after storing it on the external storage server.
Yu et al. also recently addressed user revocation in the ABE-based data sharing system. In
this scheme, user revocation is realized using proxy re-encryption by the data server.
However, in order to revoke users, the KGC must generate all secret keys, including the proxy
key, on behalf of the data server. Then, the server re-encrypts the ciphertext under the proxy
key received from the KGC to prevent revoked users from decrypting the ciphertext. Thus, the
key escrow problem is also inherent in this scheme, since the KGC manages all secret keys of
users as well as the proxy keys of the data server.

LITERATURE SURVEY

Title 1: SPOC: A Secure and Privacy-Preserving Opportunistic Computing Framework


for Mobile-Healthcare Emergency
Author: Rongxing Lu, Xiaodong Lin.
Year: 2013
Description:
 The Mobile Healthcare (m-Healthcare) system has been envisioned as an important
application of computing to improve health care quality and save lives.
 The opportunistic computing paradigm can be applied in m-Healthcare emergencies to
resolve the challenging reliability issue in PHI processing.
 A new secure and privacy-preserving opportunistic computing framework, called SPOC,
is proposed to address this challenge.

Advantages:
 Shift from a clinic-oriented, centralized healthcare system to a patient-oriented,
distributed healthcare system.
 Reduce healthcare expenses through more efficient use of clinical resources and earlier
detection of medical conditions.

Disadvantages:
 Performance, Reliability, Scalability, QoS, Privacy, Security.
 More prone to failures, caused by power exhaustion, software and hardware faults,
natural disasters, malicious attacks, and human errors etc.

Title 2: Privacy-Preserving Multi-Keyword Ranked Search over Encrypted Cloud Data.

Author: Ning Cao, Cong Wang.


Year: 2014
Description:
 Cloud computing is the long dreamed vision of computing as a utility, where cloud
customers can remotely store their data into the cloud so as to enjoy the on-demand high-
quality applications and services from a shared pool of configurable computing resources.

 Its great flexibility and economic savings are motivating both individuals and enterprises
to outsource their local complex data management system into the cloud.

 To protect data privacy and combat unsolicited accesses in the cloud and beyond,
sensitive data has to be encrypted before outsourcing.

Advantages:
 Low overhead on computation and communication cost.
 A ranked search mechanism to support extra search semantics and dynamic data
operations.
 It is more secure and efficient mechanism.

Disadvantages:
 Given the large number of data users and documents in the cloud, it is crucial for the search service
to allow multi-keyword queries and provide result similarity ranking to meet the effective
data retrieval need.
 Single-keyword search without ranking.
 Boolean- keyword search without ranking.

Title 3: Dominating Set and Network Coding-Based Routing in Wireless Mesh Networks.
Author: Jing Chen, Ruiying Du,
Year: 2015

Description:
 Wireless mesh networks are widely applied in many fields such as industrial control,
environmental monitoring, and military operations.
 Network coding is a promising technology that can improve the performance of wireless
mesh networks.
 Network coding is suitable for wireless mesh networks, as the fixed backbone of a
wireless mesh usually has unlimited energy.
 It effectively deals with the coding collision problem of flows by introducing the
information process, which effectively decreases the failure rate of decoding.
Advantages:
 Optimum combination of coding opportunity and coding validity.
 Improve Network Performance.
 To distribute the flow of data along different routes to make sure energy consumption is
balanced.
 A Connected Dominating Set (CDS) can efficiently cover the network topology, so
dominating nodes are a good choice to converge data flows.

Disadvantages:
 Coding collision is a severe problem affecting network performance.
 Increase packet loss ratio.
CHAPTER- III

SYSTEM ANALYSIS

Existing System

In the existing system, ciphertext-policy attribute-based encryption (CP-ABE) is a useful
cryptographic method for data access control in cloud storage. These CP-ABE based schemes
enable data owners to realize fine-grained and flexible access control over their own data.
However, CP-ABE determines users' access privilege based only on their inherent
attributes, without any other critical factors such as the time factor. One challenge for
fine-grained access control of time-sensitive data in cloud storage is to simultaneously achieve
both flexible timed release and fine granularity with lightweight overhead, which was not
explored in existing works.

Disadvantages

 The data owner cannot trust the cloud server to conduct secure data access control.
 Users cannot access the data until the corresponding time arrives.
Proposed System

In the proposed system, our scheme seamlessly incorporates the concept of timed-release
encryption into the architecture of ciphertext-policy attribute-based encryption. With
a suite of proposed mechanisms, this scheme provides data owners with the capability to
flexibly release the access privilege to different users at different times, according to a well-
defined access policy over attributes and release time. We further study access policy design
for all potential time-sensitive access requirements, through suitable placement of time
trapdoors. The analysis shows that our scheme can preserve the confidentiality of time-sensitive
data, with a lightweight overhead on both the CA and data owners. It thus suits practical
large-scale access control systems well.

Advantages

 Highly efficient and satisfies the security requirements for time-sensitive data storage in
the public cloud.
 Time-related decryption can be outsourced to the cloud without losing confidentiality.
CHAPTER

SYSTEM STUDY

PRELIMINARY INVESTIGATION

The first and foremost strategy for the development of a project starts from the thought of
designing a mail-enabled platform for a small firm in which it is easy and convenient to send
and receive messages; there is a search engine, an address book, and also some
entertaining games. When it is approved by the organization and our project guide, the first
activity, i.e., preliminary investigation, begins. The activity has three parts:

 Request Clarification

 Feasibility Study

 Request Approval

REQUEST CLARIFICATION

After the approval of the request by the organization and project guide, with an
investigation being considered, the project request must be examined to determine precisely what
the system requires. Here our project is basically meant for users within the company whose
systems can be interconnected by the Local Area Network (LAN). In today's busy schedule,
people need everything to be provided in a ready-made manner. So, taking into consideration the
vast use of the Internet in day-to-day life, the corresponding portal was developed.

FEASIBILITY ANALYSIS

An important outcome of preliminary investigation is the determination that the system


request is feasible. This is possible only if it is feasible within limited resources and time. The
different feasibilities that have to be analyzed are

 Operational Feasibility
 Economic Feasibility
 Technical Feasibility
Operational Feasibility
Operational feasibility deals with the study of the prospects of the system to be developed.
This system operationally eliminates all the tensions of the Admin and helps him in effectively
tracking the project progress. This kind of automation will surely reduce the time and energy
that were previously consumed in manual work. Based on the study, the system is proved to be
operationally feasible.

Economic Feasibility

Economic feasibility, or cost-benefit analysis, is an assessment of the economic justification for a
computer-based project. As the hardware was installed from the beginning and serves many
purposes, the hardware cost of the project is low. Since the system is network-based, any number of
employees connected to the LAN within the organization can use this tool at any time. The
Virtual Private Network is to be developed using the existing resources of the organization. So
the project is economically feasible.

Technical Feasibility
According to Roger S. Pressman, Technical Feasibility is the assessment of the technical
resources of the organization. The organization needs IBM compatible machines with a graphical
web browser connected to the Internet and Intranet. The system is developed for a platform-
independent environment. Java Server Pages, JavaScript, HTML, SQL Server and WebLogic
Server are used to develop the system. The technical feasibility has been carried out. The system
is technically feasible for development and can be developed with the existing facility.

REQUEST APPROVAL

Not all requested projects are desirable or feasible. Some organizations receive so many
project requests from client users that only a few of them can be pursued. However, those projects that
are both feasible and desirable should be put into the schedule. After a project request is approved, its
cost, priority, completion time and personnel requirements are estimated and used to determine
where to add it to the project list. Strictly speaking, only after the approval of those factors
can development work be launched.
SYSTEM DESIGN AND DEVELOPMENT

INPUT DESIGN

Input design plays a vital role in the life cycle of software development; it requires very
careful attention from developers. The purpose of input design is to feed data to the application as
accurately as possible, so inputs are supposed to be designed effectively such that the errors
occurring while feeding are minimized. According to software engineering concepts, the input forms
or screens are designed to provide validation control over the input limit, range and other related
validations.

This system has input screens in almost all the modules. Error messages are developed to
alert the user whenever he commits a mistake and guide him in the right way so that
invalid entries are not made. Let us see this in more detail under module design.

Input design is the process of converting user-created input into a computer-based
format. The goal of input design is to make data entry logical and free from errors; errors
in the input are controlled by the input design. The application has been developed in a
user-friendly manner. The forms have been designed in such a way that during processing the
cursor is placed in the position where data must be entered. The user is also provided with an
option to select an appropriate input from various alternatives related to the field in certain cases.

Validations are required for each data item entered. Whenever a user enters erroneous data,
an error message is displayed, and the user can move on to subsequent pages only after completing all
the entries on the current page.
Output Design
The output from the computer is mainly required to create an efficient method of
communication within the company, primarily between the project leader and his team members,
in other words, the administrator and the clients. The output of the VPN is a system which allows
the project leader to manage his clients in terms of creating new clients and assigning new
projects to them, maintaining a record of the project validity and providing folder-level access to
each client on the user side depending on the projects allotted to him. After completion of a
project, a new project may be assigned to the client. User authentication procedures are
maintained at the initial stages itself. A new user may be created by the administrator himself, or
a user can register as a new user, but the task of assigning projects and validating a new
user rests with the administrator only.

The application starts running when it is executed for the first time. The server has to be started,
and then Internet Explorer is used as the browser. The project will run on the local area
network, so the server machine will serve as the administrator while the other connected systems
act as the clients. The developed system is highly user-friendly and can be easily understood
by anyone using it, even for the first time.
CHAPTER- IV

SYSTEM SPECIFICATION
System specification

Hardware specification:

Processor - Pentium IV

Speed - 1.1 GHz

RAM - 256 MB (min)

Hard Disk - 20 GB

Floppy Drive - 1.44 MB

Key Board - Standard Windows Keyboard

Mouse - Two or Three Button Mouse

Monitor - SVGA

Software specification:

Operating System : Windows 95/98/2000/XP

Application Server : Tomcat 5.0/6.x

Front End : HTML, Java, JSP

Scripts : JavaScript

Server-side Script : Java Server Pages (JSP)

Database : MySQL 5.0

Database Connectivity : JDBC


SOFTWARE REQUIREMENT SPECIFICATIONS

JAVA OVERVIEW

Java is a high-level language that can be characterized by all of the following buzzwords:

 Simple

 Object Oriented

 Distributed

 Multithreaded

 Dynamic

 Architecture Neutral

 Portable

 High performance

 Robust

 Secure

In the Java programming language, all source code is first written in plain text files
ending with the .java extension. Those source files are then compiled into .class files by the Java
compiler (javac). A class file does not contain code that is native to your processor; it instead
contains bytecodes, the machine language of the Java Virtual Machine. The Java launcher tool
(java) then runs your application with an instance of the Java Virtual Machine.
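For instance, a minimal program showing this compile-and-run flow (the class name is arbitrary):

// HelloCloud.java: compile with "javac HelloCloud.java" to produce HelloCloud.class,
// then run the bytecodes on the JVM with "java HelloCloud".
public class HelloCloud {
    public static void main(String[] args) {
        System.out.println("Hello from the Java Virtual Machine");
    }
}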
JAVA PLATFORM:

A platform is the hardware or software environment in which a program runs. The most
popular platforms are Microsoft Windows, Linux, Solaris OS and Mac OS. Most platforms can
be described as a combination of the operating system and underlying hardware. The Java
platform differs from most other platforms in that it is a software-only platform that runs on
top of other hardware-based platforms.

The Java platform has two components:

 The Java Virtual Machine.

 The Java Application Programming Interface (API)

The Java Virtual Machine is the base for the Java platform and is ported onto various
hardware-based platforms.

The API is a large collection of ready-made software components that provide many
useful capabilities, such as graphical user interface (GUI) widgets. It is grouped into libraries of
related classes and interfaces; these libraries are known as packages.

As a platform-independent environment, the Java platform can be a bit slower than native
code. However, advances in compiler and virtual machine technologies are bringing performance
close to that of native code without threatening portability.

Development Tools:

The development tools provide everything you’ll need for compiling, running,
monitoring, debugging, and documenting your applications. As a new developer, the main tools
you’ll be using are the Java compiler (javac), the Java launcher (java), and the Java
documentation (javadoc).

Application programming Interface (API):

The API provides the core functionality of the Java programming language. It offers a
wide array of useful classes ready for use in your own applications. It spans everything from
basic objects, to networking and security.
Deployment Technologies:

The JDK provides standard mechanisms such as Java Web Start and Java Plug-In, for
deploying your applications to end users.

User Interface Toolkits:

The Swing and Java 2D toolkits make it possible to create sophisticated Graphical User
Interfaces (GUIs).

Drag-and-drop support:

Drag-and-drop is one of the seemingly most difficult features to implement in user


interface development. It provides a high level of usability and intuitiveness.

Drag-and-drop is, as its name implies, a two-step operation: code must be written to facilitate
dragging and code to facilitate dropping. Sun provides two classes to help with this, namely
DragSource and DropTarget.
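As a rough sketch of the dropping side (a plain Swing example using the standard java.awt.dnd classes; the component and layout are illustrative only), a DropTarget can be attached to a text area so that text dragged from elsewhere is appended when dropped:

import java.awt.datatransfer.DataFlavor;
import java.awt.dnd.DnDConstants;
import java.awt.dnd.DropTarget;
import java.awt.dnd.DropTargetAdapter;
import java.awt.dnd.DropTargetDropEvent;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

public class DropDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Drop target demo");
        JTextArea area = new JTextArea("Drop text here");

        // Attach a DropTarget that accepts string flavors dropped onto the text area.
        new DropTarget(area, new DropTargetAdapter() {
            @Override
            public void drop(DropTargetDropEvent event) {
                try {
                    event.acceptDrop(DnDConstants.ACTION_COPY);
                    String text = (String) event.getTransferable()
                            .getTransferData(DataFlavor.stringFlavor);
                    area.append("\n" + text);
                    event.dropComplete(true);
                } catch (Exception ex) {
                    event.dropComplete(false);
                }
            }
        });

        frame.add(new JScrollPane(area));
        frame.setSize(300, 200);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}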

Look and Feel Support:

Swing defines an abstract LookAndFeel class that represents all the information central to a
look-and-feel implementation, such as its name, its description, whether it is a native look-and-
feel, and, in particular, a hash table (known as the "Defaults Table") for storing default values for
various look-and-feel attributes, such as colors and fonts.

Each look-and-feel implementation defines a subclass of LookAndFeel (for example,
swing.plaf.motif.MotifLookAndFeel) to provide Swing with the necessary information to
manage the look-and-feel.

The UIManager is the API through which components and programs access look-and-feel
information (they should rarely, if ever, talk directly to a LookAndFeel instance). UIManager is
responsible for keeping track of which LookAndFeel classes are available, which are installed,
and which is currently the default. The UIManager also manages access to the Defaults Table for
the current look-and-feel.

Dynamically Changing the Default Look-and-Feel:

When a Swing application programmatically sets the look-and-feel, the ideal place to do
so is before any Swing components are instantiated. This is because the
UIManager.setLookAndFeel() method makes a particular LookAndFeel the current default by
loading and initializing that LookAndFeel instance, but it does not automatically cause any
existing components to change their look-and-feel.

Remember that components initialize their UI delegate at construction time; therefore, if the
current default changes after they are constructed, they will not automatically update their UIs
accordingly. It is up to the program to implement this dynamic switching by traversing the
containment hierarchy and updating the components individually.
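A brief sketch of both cases (the UIManager calls shown are standard Swing entry points; the frame contents are illustrative): set the look-and-feel before building the UI, and refresh existing components explicitly if it is changed afterwards.

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class LookAndFeelSwitch {
    public static void main(String[] args) throws Exception {
        // Preferred: choose the look-and-feel before any components are instantiated.
        UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());

        JFrame frame = new JFrame("Look-and-feel demo");
        frame.add(new JButton("Hello"));
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        // Dynamic switch: existing components must be updated explicitly.
        UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        SwingUtilities.updateComponentTreeUI(frame);
    }
}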

Integrated Development Environment (IDE)


IDE Introduction
An Integrated Development Environment (IDE), or interactive development environment,
is a software application that provides comprehensive facilities to computer programmers for
software development. An IDE normally consists of a source code editor, build automation
tools and a debugger. Most modern IDEs have intelligent code completion. Some IDEs
contain a compiler, an interpreter, or both, such as NetBeans and Eclipse. Many modern IDEs
also have a class browser, an object browser, and a class hierarchy diagram, for use in object-
oriented software development. The IDE is designed to limit coding errors and facilitate error
correction with tools such as the NetBeans FindBugs plugin, which locates and fixes common Java
coding problems, and the debugger, which manages complex code with field watches, breakpoints and
execution monitoring.

An Integrated Development Environment (IDE) is an application that facilitates


application development. In general, an IDE is a graphical user interface (GUI)-based
workbench designed to aid a developer in building software applications with an integrated
environment combined with all the required tools at hand. Most common features, such as
debugging, version control and data structure browsing, help a developer quickly execute
actions without switching to other applications. Thus, it helps maximize productivity by
providing similar user interfaces (UI) for related components and reduces the time taken to
learn the language. An IDE supports single or multiple languages.

One aim of the IDE is to reduce the configuration necessary to piece together multiple
development utilities, instead providing the same set of capabilities as a cohesive unit.
Reducing that setup time can increase developer productivity, in cases where learning to use
the IDE is faster than manually integrating all of the individual tools. Tighter integration of
all development tasks has the potential to improve overall productivity beyond just helping
with setup tasks.

IDE Supporting Languages

Some IDEs support multiple languages, such as Eclipse, ActiveState Komodo, IntelliJ
IDEA, MyEclipse, Oracle JDeveloper, NetBeans, Codenvy and Microsoft Visual Studio; GNU
Emacs, based on C and Emacs Lisp; IntelliJ IDEA, Eclipse, MyEclipse and NetBeans, all
based on Java; and MonoDevelop, based on C#. Eclipse and NetBeans have plugins for C/C++,
Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP.

IDE Tools

There are many IDE tools available, including source code editors, build automation tools and
debuggers. Some of the tools are:

 Eclipse
 NetBeans
 Code::Blocks
 Code Lite
 Dialog Blocks

NetBeans IDE 8.0 and new features for Java 8


NetBeans IDE 8.0 has been released, also providing new features for Java 8 technologies. It has code
analyzers and editors for working with Java SE 8, Java SE Embedded 8, and Java ME Embedded
8. The IDE also has new enhancements that further improve its support for Maven and Java EE
with PrimeFaces.

Most important highlights are:

The top 5 features of NetBeans IDE 8 are as follows:

1. Tools for Java 8 Technologies. Anyone interested in getting started with lambdas, method
references, streams, and profiles in Java 8 can do so immediately by downloading NetBeans IDE
8. Java hints and code analyzers help you upgrade anonymous inner classes to lambdas, right
across all your code bases, all in one go. Java hints in the Java editor let you quickly and
intuitively switch from lambdas to method references, and back again.
Moreover, Java SE Embedded support means that you are able to deploy, run, debug or profile
Java SE applications on an embedded device, such as a Raspberry Pi, directly from NetBeans IDE.
No new project type is needed for this; you can simply use the standard Java SE project type for
this purpose.

2. Tools for Java EE Developers. The code generators for which NetBeans IDE is well
known have been beefed up significantly. Where before you could create bits and pieces
of code for various popular Java EE component libraries, you can now generate complete
PrimeFaces applications, from scratch, including CRUD functionality and database
connections.

Additionally, the key specifications of the Java EE 7 Platform now have new and enhanced tools,
such as for working with JPA and CDI, as well as Facelets.

Let’s not forget to mention in this regard that Tomcat 8.0 and TomEE are now supported, too,
with a new plugin for WildFly in the NetBeans Plugin Manager.

3. Tools for Maven. A key strength of NetBeans IDE, and a reason why many developers have
started using it over the past years, is its out of the box support for Maven. No need to install a
Maven plugin, since it’s a standard part of the IDE. No need to deal with IDE-specific files, since
the POM provides the project structure. And now, in NetBeans IDE 8.0, there are enhancements
to the graph layout, enabling you to visualize your POM in various ways, while also being
able to graphically exclude dependencies from the POM file, without touching the XML.
4. Tools for JavaScript. Thanks to powerful new JavaScript libraries and frameworks over the
years, JavaScript as a whole has become a lot more attractive for many developers. For some
releases already, NetBeans IDE has been available as a pure frontend environment, that is, minus
all the Java tools for which it is best known. This lightweight IDE, including Git versioning
tools, provides a great environment for frontend devs. In particular, for users of AngularJS,
Knockout, and Backbone, the IDE comes with deep editor tools, such as code completion and
cross-artifact navigation. In NetBeans IDE 8.0, there’s a very specific focus on AngularJS, since
this is such a dominant JavaScript solution at the moment. From AngularJS controllers, you can
navigate, via hyperlinks embedded in the JavaScript editor, to the related HTML views. You can
also use code completion inside the HTML editor to access controllers, and even the properties
within the controllers, to help you accurately code the related artifacts in your AngularJS
applications.

Also, remember that there’s no need to download the AngularJS Seed template, since it’s built
into the NetBeans New Project wizard.

5. Tools for HTML5. JavaScript is a central component of the HTML5 Platform, a collective
term for a range of tools and technologies used in frontend development. Popular supporting
technologies are Grunt, a build tool, and Karma, a test runner framework. Both of these are now
supported out of the box in NetBeans IDE 8.0

JDBC (JAVA DATABASE CONNECTIVITY)

In an effort to set an independent database standard API for Java, Sun Microsystems
developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access
mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface
is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database
vendor wishes to have JDBC support, he or she must provide the driver for each platform that the
database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than
developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that
ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon
after.

The remainder of this section covers enough information about JDBC for you to know
what it is about and how to use it effectively. This is by no means a complete overview of JDBC;
that would fill an entire book.
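As a small illustration of the generic access mechanism described above (the JDBC URL, table name and credentials below are placeholders, not values taken from this report):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        // The driver translates these generic JDBC calls into MySQL-specific ones.
        String url = "jdbc:mysql://localhost:3306/clouddb";
        try (Connection con = DriverManager.getConnection(url, "root", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, filename FROM files WHERE owner = ?")) {
            ps.setString(1, "alice");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("filename"));
                }
            }
        }
    }
}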

JDBC Goals
Few software packages are designed without goals in mind. JDBC is no exception; its many
goals drove the development of the API. These goals, in conjunction with early
reviewer feedback, have finalized the JDBC class library into a solid framework for building
database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why
certain classes and functionalities behave the way they do. The eight design goals for JDBC are
as follows:

1. SQL Level API:


The designers felt that their main goal was to define a SQL interface for Java. Although
not the lowest database interface level possible, it is at a low enough level for higher-level tools
and APIs to be created. Conversely, it is at a high enough level for application programmers to
use it confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code
and to hide many of JDBC’s complexities from the end user.

2. SQL Conformance:

SQL syntax varies as you move from database vendor to database vendor. In an effort to
support a wide variety of vendors, JDBC will allow any query statement to be passed through it
to the underlying database driver. This allows the connectivity module to handle non-standard
functionality in a manner that is suitable for its users.

3. JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal
allows JDBC to use existing ODBC level drivers by the use of a software interface. This
interface would translate JDBC calls to ODBC and vice versa.

4. Provide a Java interface that is consistent with the rest of the Java system

Because of Java's acceptance in the user community thus far, the designers felt that
they should not stray from the current design of the core Java system.
SQL Server 2008

Microsoft SQL Server is a relational database management system developed by
Microsoft. As a database server, it is a software product with the primary function of storing and
retrieving data as requested by other software applications, which may run either on the same
computer or on another computer across a network (including the Internet).

SQL (Structured Query Language) is a computer language for storing, manipulating
and retrieving data stored in a relational database. SQL is the standard language for relational
database systems. All relational database management systems such as MySQL, MS Access, Oracle,
Sybase, Informix, PostgreSQL and SQL Server use SQL as the standard database language. However,
they also use different dialects, such as:

 MS SQL Server using T-SQL,


 Oracle using PL/SQL,
 MS Access version of SQL is called JET SQL (native format) etc.

History

The history of Microsoft SQL Server begins with the first Microsoft SQL Server product
- SQL Server 1.0, a 16-bit server for the OS/2 operating system in 1989 - and extends to the
current day. As of December 2016 the following versions are supported by Microsoft:

 SQL Server 2008


 SQL Server 2008 R2
 SQL Server 2012
 SQL Server 2014
 SQL Server 2016

The current version is Microsoft SQL Server 2016, released June 1, 2016. The RTM
version is 13.0.1601.5. SQL Server 2016 is supported on x64 processors only.

SQL Process
When you execute an SQL command for any RDBMS, the system determines the
best way to carry out your request, and the SQL engine figures out how to interpret the task. There
are various components included in the process, such as the Query Dispatcher,
Optimization Engines, the Classic Query Engine and the SQL Query Engine. The classic query engine
handles all non-SQL queries, but the SQL query engine does not handle logical files.

Data storage

The data storage unit is a database, which is a collection of tables with typed columns. SQL
Server supports different data types, including primary types such as Integer, Float, Decimal,
Char (including character strings), Varchar (variable-length character strings), Binary (for
unstructured blobs of data) and Text (for textual data), among others. The rounding of floats to
integers uses either Symmetric Arithmetic Rounding or Symmetric Round Down (fix) depending
on arguments: SELECT Round(2.5, 0) gives 3.

Microsoft SQL Server also allows user-defined composite types (UDTs) to be defined
and used. It also makes server statistics available as virtual tables and views (called Dynamic
Management Views or DMVs). In addition to tables, a database can also contain other objects
including views, stored procedures, indexes and constraints, along with a transaction log. A SQL
Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files
with a maximum file size of 2^60 bytes (1 exabyte). The data in the database are stored in primary
data files with an .mdf extension. Secondary data files, identified with an .ndf extension, are used
to allow the data of a single database to be spread across more than one file, and optionally
across more than one file system. Log files are identified with the .ldf extension.

Storage space allocated to a database is divided into sequentially numbered pages, each 8
KB in size. A page is the basic unit of I/O for SQL Server operations. A page is marked with a
96-byte header which stores metadata about the page including the page number, page type, free
space on the page and the ID of the object that owns it. Page type defines the data contained in
the page: data stored in the database, index, allocation map which holds information about how
pages are allocated to tables and indexes, change map which holds information about the
changes made to other pages since last backup or logging, or contain large data types such as
image or text.

Buffer management

SQL Server buffers pages in RAM to minimize disk I/O. Any 8 KB page can be buffered
in-memory, and the set of all pages currently buffered is called the buffer cache. The amount of
memory available to SQL Server decides how many pages will be cached in memory. The buffer
cache is managed by the Buffer Manager. Either reading from or writing to any page copies it to
the buffer cache. Subsequent reads or writes are redirected to the in-memory copy, rather than
the on-disc version. The page is updated on the disc by the Buffer Manager only if the in-
memory cache has not been referenced for some time. While writing pages back to disc,
asynchronous I/O is used whereby the I/O operation is done in a background thread so that other
operations do not have to wait for the I/O operation to complete. Each page is written along with
its checksum when it is written.

Concurrency and locking

SQL Server allows multiple clients to use the same database concurrently. As such, it
needs to control concurrent access to shared data, to ensure data integrity when multiple clients
update the same data, or clients attempt to read data that is in the process of being changed by
another client. SQL Server provides two modes of concurrency control: pessimistic concurrency
and optimistic concurrency. When pessimistic concurrency control is being used, SQL Server
controls concurrent access by using locks. Locks can be either shared or exclusive. An exclusive
lock grants the user exclusive access to the data; no other user can access the data as long as the
lock is held. Shared locks are used when some data is being read; multiple users can read from
data locked with a shared lock, but not acquire an exclusive lock. The latter would have to wait
for all shared locks to be released.

SQLCMD

SQLCMD is a command line application that comes with Microsoft SQL Server, and
exposes the management features of SQL Server. It allows SQL queries to be written and
executed from the command prompt. It can also act as a scripting language to create and run a set
of SQL statements as a script. Such scripts are stored as a .sql file, and are used either for
management of databases or to create the database schema during the deployment of a database.

SQLCMD was introduced with SQL Server 2005 and continues with SQL Server
2012 and 2014. Its predecessors for earlier versions were OSQL and ISQL, which are functionally
equivalent as far as T-SQL execution is concerned, and many of the command-line parameters are
identical, although SQLCMD adds extra versatility.

THE SQL SERVER

Microsoft SQL Server is a relational database management system produced by
Microsoft. It supports a superset of SQL (Structured Query Language), the most common database
language. It is commonly used by businesses for small to medium-sized databases, but the past
five years have seen greater adoption of the product for larger enterprise databases.
FEATURES OF SQL SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL
Server Analysis Services. The term OLAP Services has been replaced with the term Analysis
Services. Analysis Services also includes a new data mining component. The Repository
component available in SQL Server version 7.0 is now called Microsoft SQL Server Meta
Data Services. References to the component now use the term Meta Data Services. The term
repository is used only in reference to the repository engine within Meta Data Services.

An SQL Server database consists of five types of objects.

They are:

1. TABLE

2. QUERY
3. FORM

4. REPORT

5. MACRO

1) TABLE:

A database is a collection of data about a specific topic.

We can view a table in two ways:

a) Design View

b) Datasheet View

A) Design View

To build or modify the structure of a table, we work in the table design view. We can specify
what kind of data will be held.

B) Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view mode.

2) QUERY:

A query is a question that has to be asked to get the required data. Access gathers data
that answers the question from one or more tables. The data that makes up the answer is either a
dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the
latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.

3) FORMS:

A form is used to view and edit information in the database record. A form displays only the
information we want to see, in the way we want to see it. Forms use familiar controls such as
textboxes and checkboxes, which makes viewing and entering data easy. We can work with forms
in several views. Primarily there are two views. They are:
a) Design View

b) Form View

To build or modify the structure of a form, we work in the form's design view. We can add controls
to the form that are bound to fields in a table or query, including textboxes, option buttons, graphs
and pictures.

4) REPORT:

A report is used to view and print information from the database. The report can
group records into many levels and compute totals and averages by checking values from many
records at once. Also, the report is attractive and distinctive because we have control over its size
and appearance.

5) MACRO:

A macro is a set of actions. Each action in a macro does something, such as opening a form or
printing a report. We write macros to automate common tasks easily and save time.

FEATURES OF SQL PROCEDURES

SQL procedures are characterized by many features. SQL procedures:

 Can contain SQL Procedural Language statements and features, which support the
implementation of control-flow logic around traditional static and dynamic SQL statements.

 Are supported across the entire DB2 family of database products, in which many if not
all of the features supported in DB2 Version 9 are available.

 Are easy to implement, because they use a simple high-level, strongly typed language.

 SQL procedures are more reliable than equivalent external procedures.

 Adhere to the SQL99 ANSI/ISO/IEC SQL standard.


 Support input, output, and input-output parameter passing modes.

 Support a simple, but powerful condition and error-handling model.

 Allow you to return multiple result sets to the caller or to a client application.

 Allow you to easily access the SQLSTATE and SQLCODE values as special variables.

 Reside in the database and are automatically backed up and restored.

 Can be invoked wherever the CALL statement is supported (a JDBC calling sketch follows this list).

 Support nested procedure calls to other SQL procedures or procedures implemented in

other languages.

 Support recursion.
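For illustration, an SQL procedure can be called from this project's Java side through JDBC's CallableStatement; the procedure name, parameters and connection details below are hypothetical, not part of the report.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallProcedureDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/clouddb"; // placeholder connection
        try (Connection con = DriverManager.getConnection(url, "root", "password");
             // CALL syntax works wherever the driver supports stored procedures.
             CallableStatement call = con.prepareCall("{call count_user_files(?, ?)}")) {
            call.setString(1, "alice");                           // IN parameter
            call.registerOutParameter(2, java.sql.Types.INTEGER); // OUT parameter
            call.execute();
            System.out.println("Files owned: " + call.getInt(2));
        }
    }
}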
CHAPTER V

SYSTEM DESIGN

The DFD is also called a bubble chart. It is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, the various processing carried out
on these data, and the output data generated by the system.

Data Flow Diagram

[Data flow diagram: Sender - User Login -> Check (No: User Registration; Yes: proceed) -> Send Message -> Public Key -> View Request -> End Process.
Receiver - User Login -> Check (No: Registration; Yes: proceed) -> View Message -> Request Key -> End Process.]
UML DIAGRAM
UML stands for Unified Modelling Language. UML is a standardized general-purpose
modelling language in the field of object-oriented software engineering. The standard is
managed, and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of object-
oriented computer software. In its current form, UML comprises two major components: a
meta-model and a notation. In the future, some form of method or process may also be added to,
or associated with, UML.
The Unified Modelling Language is a standard language for specifying, visualizing,
constructing and documenting the artefacts of a software system, as well as for business modelling
and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modelling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of
software projects.
Sequence Diagram

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or
timing diagrams.

[Sequence diagrams: Sender - User1 logs in with username and password, sends a message, generates a key, and views key requests.
Receiver - User2 logs in with username and password, views the encrypted message, sends a key request, and logs out.]
The most creative and challenging phase of the system life cycle is system design. The
term design describes a final system and the process by which it is developed. It refers to the
technical specifications that will be applied in implementing the candidate system. It also
includes the construction of programs and program testing. The key question involved here is
“How the problem should be solved”.

System design is a solution for the question of how to approach to the creation of a new
system. This important phase is composed of several steps. It provides the understanding and
procedural details necessary for implementing the system recommended in the feasibility study.
Emphasis is on translating the performance requirements into design specifications. Design goes
through logical and physical stages of development. Logical design reviews the present physical system;
prepares input and output specifications; makes edit, security, and control specifications; details
the implementation plan; prepares a logical design walkthrough. Physical design maps out the
details of the physical system, plans the system implementation, devises a test and
implementation plan and specifies any new hardware and software.

The first step is to determine how the output is to be produced and in what format.
Samples of output and input are presented. Second, input data and master files have to be
designed to meet the requirement of the proposed output. The operational phases are handled
through program construction and testing, including a list of programs needed to meet the
system’s objectives and complete documentation. Finally details related to justification of the
system and estimate of the impact of the candidate system on the user and organization are
documented and evaluated by management as a step toward implementation.

The final report prior to the implementation phase includes procedural flowcharts, record
layouts and a workable plan for implementing the candidate system. Information on personnel,
money, hardware, facilities, and their estimated cost must also be available. At this point,
projected costs must be close to actual costs of implementation.
Component diagram:

[Component diagram: Sender - enter username and password -> Login (No: Register; Yes: proceed) -> Send Message -> Generate Key -> Send Key for user request.
Receiver - enter username and password -> Login (No: Register; Yes: proceed) -> Send Message -> Generate key / send key request.]

Activity diagram

[Activity diagram: Sender - User login -> Send Message, View Request, Send Key, Logout.
Receiver - User2 login -> Receive Message, View Message, Send Key Request, Logout.]


CHAPTER- VI

SYSTEM IMPLEMENTATION

Module description
1. Cloud Access Control
 The central authority (CA)
 The data owner (Owner)
 The data user (User)
 Cloud service provider (Cloud)
2. Security Assumption
3. Timed-Release Encryption
4. Access Policy and Time-Related Components
 Unexposed
 Exposed

1. Cloud Access Control


 The central authority (CA)
The CA is responsible for managing the security protection of the whole system: it publishes
system parameters and distributes security keys to each user. In addition, it acts as a time agent
to maintain the timed-releasing function.

 The data owner (Owner)

The owner decides the access policy, based on a specific attribute set and one or
more releasing time points for each file, and then encrypts the file under the decided policy
before uploading it.

 The data user (User)

Each user is assigned a security key by the CA. He/she can query any ciphertext stored
in the cloud, but is able to decrypt it only if both of the following constraints are satisfied (a
sketch of this check is given after the list of entities):
1) His/her attribute set satisfies the access policy;
2) The current access time is later than the specified releasing time.

 Cloud service provider (Cloud)

The cloud service provider includes the administrator of the cloud and the cloud servers. The cloud
undertakes the storage task for the other entities, and executes the access-privilege releasing algorithm
under the control of the CA.
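A minimal sketch of the user-side decryption precondition described above (this is only a logical check with an AND-of-attributes policy, not the cryptographic construction; all names are illustrative):

import java.time.Instant;
import java.util.Set;

public class AccessCheckSketch {
    // A ciphertext can be decrypted only if the user's attributes satisfy the
    // policy AND the current time is later than the releasing time point.
    static boolean canDecrypt(Set<String> userAttributes,
                              Set<String> requiredAttributes,
                              Instant releasingTime,
                              Instant now) {
        boolean attributesSatisfied = userAttributes.containsAll(requiredAttributes);
        boolean timeReached = now.isAfter(releasingTime);
        return attributesSatisfied && timeReached;
    }

    public static void main(String[] args) {
        Set<String> user = Set.of("Manager", "Finance");
        Set<String> policy = Set.of("Manager");
        Instant release = Instant.parse("2020-01-01T00:00:00Z");
        System.out.println(canDecrypt(user, policy, release, Instant.now())); // true once released
    }
}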

2. Security Assumption
In our access control system, the cloud is assumed to be honest-but-curious, which is
similar to the assumption made in most of the related literature on secure cloud storage. On the one
hand, it offers reliable storage service and correctly executes every computation mission for other
entities; on the other hand, it may try to gain unauthorized information for its own benefit. The
proposed scheme is considered compromised if either of the following two types of users can
successfully decrypt the ciphertext: 1) a user whose attribute set does not satisfy the access
policy of the corresponding ciphertext; 2) a user who tries to access the data before the specified
releasing time, even if he/she has a satisfying attribute set.

3. Timed-Release Encryption
The concept of timed-release encryption (TRE) addresses scenarios in which someone wants to
securely send a message that can only be read in the future. In detail, the owner encrypts his/her
message so that the intended users can decrypt it only after a designated time. From the security
aspect, TRE satisfies that: 1) Except for the intended users, no one is able to get any information
about the message; 2) Even the intended user cannot get the plaintext of the message before the
designated releasing time. In order to support an accurate timed-release mechanism, a trusted
time agent is required to manage the clock of the system. At each time point T, the agent releases
a time token TK_T, which is an important notion in TRE.
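
As a rough illustration of the time agent's role, the following Java sketch derives a
per-time-point token from a master secret using an HMAC. The TimeAgent class, the HMAC
construction, and the time-point labels are assumptions made only for illustration; the scheme
itself realizes the token TK_T with pairing-based cryptography.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch only: the trusted time agent keeps a master secret and,
// at each time point T, releases a token TK_T that cannot be computed earlier
// by anyone who does not hold the master secret.
public class TimeAgent {

    private final byte[] masterSecret;

    public TimeAgent(byte[] masterSecret) {
        this.masterSecret = masterSecret;
    }

    // Release the time token for a given time point label, e.g. "2024-06-01T00:00".
    public String releaseToken(String timePoint) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
        byte[] token = mac.doFinal(timePoint.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(token);
    }

    public static void main(String[] args) throws Exception {
        TimeAgent agent = new TimeAgent("demo-master-secret".getBytes(StandardCharsets.UTF_8));
        System.out.println("TK_T for 2024-06-01T00:00 = " + agent.releaseToken("2024-06-01T00:00"));
    }
}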

4. Access Policy and Time-Related Components


In TAFC, an access policy is defined over a set of attributes and one or more releasing time
points.
Time trapdoors and time tokens: A time trapdoor (TS) can be embedded in an access structure,
such that the corresponding user's access permission is restricted by the status of TS.
 Unexposed: A trapdoor (TS) is unexposed if the intended users cannot access the
corresponding secret through the trapdoor with their security keys.
 Exposed: A trapdoor is exposed if the intended users can get the corresponding secret
through this trapdoor. An exposed trapdoor is denoted as TS.
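
The two trapdoor states can be pictured with a small state holder attached to a node of the
access structure. The TimeTrapdoor class below is hypothetical: in TAFC, exposure is carried
out cryptographically by the cloud under the CA's control, not by flipping a flag.

// Hypothetical sketch of a time trapdoor's two states. In the actual scheme the
// "exposure" happens when the cloud, using the CA's time token, transforms the
// trapdoor so that authorized users can recover the secret behind it.
public class TimeTrapdoor {

    public enum Status { UNEXPOSED, EXPOSED }

    private Status status = Status.UNEXPOSED;   // every trapdoor starts unexposed
    private final String hiddenSecretShare;     // share of the policy secret guarded by this trapdoor
    private final String releasingTimePoint;    // time point bound to this trapdoor

    public TimeTrapdoor(String hiddenSecretShare, String releasingTimePoint) {
        this.hiddenSecretShare = hiddenSecretShare;
        this.releasingTimePoint = releasingTimePoint;
    }

    // Called by the cloud when the CA's token for the bound time point becomes available.
    public void expose(String timeTokenForPoint) {
        // In the real construction this step verifies and applies the cryptographic token.
        this.status = Status.EXPOSED;
    }

    // Intended users can obtain the guarded share only through an exposed trapdoor.
    public String recoverShare() {
        if (status != Status.EXPOSED) {
            throw new IllegalStateException(
                    "Trapdoor for time point " + releasingTimePoint + " is still unexposed.");
        }
        return hiddenSecretShare;
    }
}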

CHAPTER- VII

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests, and each test type addresses a specific testing requirement.

7.1 TYPES OF TESTING


7.1.1 UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test Strategy and approach:

Field testing will be performed manually and functional tests will be written in detail.

Test objectives:

 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
 Features to be tested

7.1.2 INTEGRATION TESTING

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

Software integration testing is the incremental integration testing of two or more


integrated software components on a single platform to produce failures caused by interface
defects.

The task of the integration test is to check that components or software applications (e.g.
components in a software system or, one step up, software applications at the company level)
interact without error.

7.1.3 FUNCTIONAL TESTING


Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identifying business process
flows, data fields, predefined processes, and successive processes must be considered for testing.
Before functional testing is complete, additional tests are identified and the effective value of
current tests is determined.

7.1.4 SYSTEM TESTING


System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

7.1.5 WHITE BOX TESTING


White box testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used to test areas
that cannot be reached from a black-box level.

7.1.6 BLACK BOX TESTING


Black box testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, as most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
“see” into it. The test provides inputs and responds to outputs without considering how the
software works.

7.1.7 ACCEPTANCE TESTING


User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

7.2 OTHER TESTING METHODOLOGIES

7.2.1 User Acceptance Testing

User Acceptance of a system is the key factor for the success of any system. The system
under consideration is tested for user acceptance by constantly keeping in touch with the
prospective system users at the time of developing and making changes wherever required. The
system developed provides a friendly user interface that can easily be understood even by a
person who is new to the system.

7.2.2 Output Testing

After performing the validation testing, the next step is output testing of the proposed
system, since no system can be useful if it does not produce the required output in the specified
format. The outputs generated or displayed by the system under consideration are tested by
asking the users about the format they require. Hence the output format is considered in two
ways: one is on screen and the other is in printed format.

7.2.3 Validation Checking

Validation checks are performed on the following fields.

7.2.4 Text Field:


The text field can contain only a number of characters less than or equal to its size.
The text fields are alphanumeric in some tables and alphabetic in other tables. An incorrect entry
always flashes an error message.

7.2.5 Numeric Field:


The numeric field can contain only numbers from 0 to 9. An entry of any other character
flashes an error message. The individual modules are checked for accuracy and for what they
have to perform. Each module is subjected to a test run along with sample data. The individually
tested modules are integrated into a single system. Testing involves executing the program with
real data; the existence of any program defect is inferred from the output. The testing should be
planned so that all the requirements are individually tested.

A successful test is one that brings out the defects for inappropriate data and produces
an output revealing the errors in the system.
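
The two field-level rules above can be expressed as simple checks. The helper methods below
are a hypothetical Java sketch of such server-side validation (the project itself performs similar
checks in JavaScript on the login and message pages).

// Hypothetical server-side counterparts of the text-field and numeric-field checks.
public class FieldValidation {

    // Text field: at most 'size' characters; alphabetic or alphanumeric depending on the table.
    public static boolean isValidTextField(String value, int size, boolean alphanumeric) {
        if (value == null || value.isEmpty() || value.length() > size) {
            return false;
        }
        return alphanumeric ? value.matches("[A-Za-z0-9]+") : value.matches("[A-Za-z]+");
    }

    // Numeric field: only the digits 0 to 9 are accepted; any other character is rejected.
    public static boolean isValidNumericField(String value) {
        return value != null && value.matches("[0-9]+");
    }

    public static void main(String[] args) {
        System.out.println(isValidTextField("admin", 20, true));   // true
        System.out.println(isValidNumericField("9800218218"));     // true
        System.out.println(isValidNumericField("98002abc"));       // false
    }
}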

7.3 UNIT TESTING


Test case 1: Admin login

Step 1: Validate the admin login page.
Test condition: The user name and password are available in the database.
Test data: User name = admin, Password = admin.
Expected result: Successfully logged in to the account.

Step 2: Enter the user name and password.
Test condition: The user name and password are not available in the database.
Test data: User ID = admin1, Password = user.
Expected result: An error message will be displayed.

Step 3: Click the login button.
Test condition: Authentication verification (Fail / Pass).
Expected result: Result of the database verification.

Step 4: Cancel.
Test condition: The password entry will be terminated (Fail / Pass).
Expected result: Process terminated.

Test Result : Success
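
The admin login case above could be automated along the following lines. LoginService and its
authenticate method are hypothetical stand-ins for the project's actual credential check, and
JUnit 4 is assumed.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Rough JUnit 4 sketch of the admin-login unit test case.
// LoginService is a hypothetical wrapper around the credential check
// that the project performs against the database.
public class AdminLoginTest {

    // Minimal stand-in: accepts only the admin/admin pair used in the test data.
    static class LoginService {
        boolean authenticate(String username, String password) {
            return "admin".equals(username) && "admin".equals(password);
        }
    }

    @Test
    public void validCredentialsLogInSuccessfully() {
        assertTrue(new LoginService().authenticate("admin", "admin"));
    }

    @Test
    public void unknownCredentialsAreRejected() {
        assertFalse(new LoginService().authenticate("admin1", "user"));
    }
}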

7.4 INTEGRATION TESTING


Test case 1: Data entry

Step 1: Add new customer details.
Test condition: Enter valid customer details on the database.
Test data: User name = pavi@gmail.com, Password = pavi, Re-type password = pavi, Hint question = pet,
Answer = dog, First name = pavi, Last name = devi, DOB = 03/22/1994, Gender = Female,
Contact no = 9800218218, Email id = pavi@gmail.com, Address = Thimiri, Country = India,
State = Tamil Nadu, City = Vellore, Zip code = 632512.
Expected result: New customer added successfully.

Step 2: Add new category details.
Test condition: Enter valid category details in the database.
Test data: Category name = man, Description = shirt, Upload image = c:/shopping/img, Brand name = Raymond.
Expected result: New category added successfully.

Step 3: Add new product details.
Test condition: Enter valid product details on the database.
Test data: Category name = Men, Product name = Shirt, Quantity = 1, Price = 2000, Sale price = 1500,
Description = shirt quality, Upload image = shirt.jpeg.
Expected result: New product added successfully.

Step 4: Click the submit button.
Test condition: Check the details on the database (Fail / Pass).
Expected result: The valid details are checked.

Test Result : Success

CHAPTER- VIII
CONCLUSION

APPENDIX
SAMPLE CODING
Login:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Data Sharing Index</title>
<meta name="keywords" content="" />
<meta name="description" content="" />
<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>
<script type="text/javascript" src="js/jquery.slidertron-1.3.js"></script>
<link href="http://fonts.googleapis.com/css?family=Open+Sans:400,300,600,700,800"
rel="stylesheet" />
<link href="css/default.css" rel="stylesheet" type="text/css" media="all" />
<link href="css/fonts.css" rel="stylesheet" type="text/css" media="all" />
<link href="css/style3.css" rel="stylesheet" type="text/css" media="all" />
<script type="text/javascript">
function valid()
{
var a = document.fun.uname.value;
if(a=="")
{
alert("Enter The Username");
document.fun.uname.focus();
return false;
}
var b = document.fun.pass.value;
if(b=="")
{
alert("Enter The Password");
document.fun.pass.focus();
return false;
}
}
</script>
</head>
<body>
<div id="header-wrapper">
<div id="header" class="container">
<div id="logo">
<h1 align="center">Improving Security and Efficiency in Attribute-Based
Data Sharing</h1>
</div>
<div id="menu" style="width:480px;margin-top: -178px;">
<ul style="margin-top: 117px;">
<li Style="width:auto;"class=""><a href="index.html"
accesskey="1" title="">Homepage</a></li>
<li Style="width:auto;"class=""><a href="login.jsp"
accesskey="2" title="">User Login</a></li>
<li Style="width:auto;"class=""><a href="register.jsp"
accesskey="3" title="">Registration</a></li>

</ul>
</div>
</div>
<section class="login">
<div class="titulo">User Login</div>
<form action="logvalid.jsp" method="post" name="fun" onsubmit="return valid();">
<input type="text" required title="Username" name="uname" placeholder="Username">
<input type="password" required title="Password" name="pass" placeholder="Password">
<div class="olvido">
<div class="col"><a href="register.jsp" title="Registration">Register</a></div>
</div>
<input type="submit" class="enviar" name="sub"style="font-size:16px;"value="Login" />
</form>
</section>
<div style="margin-left: 110px; margin-top: -437px;">
<img src="images/idbased_e.jpg" width="567px" height="488px"/>
</div>
</div>
</body>
</html>
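
The login form above posts to logvalid.jsp, which is not reproduced in this appendix. The
snippet below is only a hypothetical sketch of such a page, assuming a users table in the same
datashare database; it is not the project's actual file.

<%@ page import="java.sql.*" %>
<%
    // Hypothetical logvalid.jsp: checks the submitted credentials against the
    // datashare database and stores the name in the session on success.
    String uname = request.getParameter("uname");
    String pass = request.getParameter("pass");
    try {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/datashare", "root", "admin");
        PreparedStatement pst = con.prepareStatement(
                "select name from users where username=? and password=?");
        pst.setString(1, uname);
        pst.setString(2, pass);
        ResultSet rst = pst.executeQuery();
        if (rst.next()) {
            session.setAttribute("name", rst.getString("name"));
            response.sendRedirect("home.jsp");
        } else {
            response.sendRedirect("login.jsp");
        }
        con.close();
    } catch (Exception e) {
        out.print(e);
    }
%>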

Message view:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<%@ page import="java.sql.*" %>
<%@ page import="java.util.*" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Data Sharing Index</title>
<meta name="keywords" content="" />
<meta name="description" content="" />
<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>
<script type="text/javascript" src="js/jquery.slidertron-1.3.js"></script>
<link href="http://fonts.googleapis.com/css?family=Open+Sans:400,300,600,700,800"
rel="stylesheet" />
<link href="css/default.css" rel="stylesheet" type="text/css" media="all" />
<link href="css/fonts.css" rel="stylesheet" type="text/css" media="all" />
<script type="text/javascript">
function valid(){
var b=document.fun2.pubkey.value;
if(b=="")
{
alert("Enter The key");
document.fun2.pubkey.focus();
return false;
}
}
</script>
</head>
<body>
<div id="header-wrapper">
<div id="header" class="container">
<div id="logo">
<h1 align="center">Improving Security and Efficiency in Attribute-Based
Data Sharing</h1>
</div>
<div id="menu" style="width:480px;margin-top: -178px;">
<ul style="margin-top: 117px;width: 520px;">
<li style="width:auto;">
<h3 style="color:#FFF">Welcome<font color="#FFFFFF" size="4">
<% String name = (String)session.getAttribute("name");
out.print(name); %></font></h3></li>
<li style="width:160px;"class=""><a href="home.jsp" accesskey="2"
title="">Home</a></li>
<li style="width:160px;"class=""><a href="index.html"
accesskey="2" title="">Logout</a></li>
</ul>
</div>
<img src="images/idbased_e.jpg" width="567px" height="488px" style="margin-left:
110px; margin-top: 156px;float:left;"/>
<h3 style="margin-top: 198px;margin-right: 355px;float:right;"><font
color="#FFFFFF">Received Message</font></h3>
<div class="repeat">
<%
String samp = request.getQueryString();
String sample[] = samp.split("/");
String id = sample[0];
int i=1;
String trim = sample[1];
if(trim.equals("pub"))
{
Connection con;
String sentby = null;
String encptmsg=null;
String pubkey=null;
try{
Class.forName("com.mysql.jdbc.Driver");
con =
DriverManager.getConnection("jdbc:mysql://localhost:3306/datashare","root","admin");
PreparedStatement pst = con.prepareStatement("select * from msgpub where id=?");
pst.setString(1, id);
ResultSet rst = pst.executeQuery();
while(rst.next())
{
sentby = rst.getString("sentby");
encptmsg = rst.getString("encptmsg");
pubkey = rst.getString("pubkey");
%>
<h3 style="width:auto;"><font
color="#FFFFFF"><%out.println("Sent By : "+sentby);%></font></h3>
<div class="repeat1">
<h3 style="width:auto;"><font color="#FFFFFF"><%out.println("Key :
"+pubkey);%></font></h3>
</div>
<h3 style="width:auto;"><font color="#FFFFFF"><%out.print("Message :
"+encptmsg);%></font></h3><br />
<img src="images/cryptocoded.gif"width="440px" heigth="220px" >
<h5 style="width:auto;"><font color="#FFFFFF">Enter The Key To View The
Original Message : </font></h5>
<div>
<form action="showmsg.jsp?<%=id+"/"+trim%>" method="post" name="fun2"
onsubmit="return valid();">
<input type="text" title="Public Key" placeholder="Key"name="pubkey" />
<input type="submit" name="sub"
value="Submit"/>
</form>
</div>
<%
}
}
catch(Exception e)
{
out.print(e);
}
}
if(trim.equals("name"))
{
Connection con;
String sentby = null;
String encptmsg=null;
String pubkey=null;
try{
Class.forName("com.mysql.jdbc.Driver");
con =
DriverManager.getConnection("jdbc:mysql://localhost:3306/datashare","root","admin");
PreparedStatement pst = con.prepareStatement("select * from msgname where id=?");
pst.setString(1, id);
ResultSet rst = pst.executeQuery();
while(rst.next())
{
sentby = rst.getString("sentby");
pubkey = rst.getString("pubkey");
%>
<h3 style="width:auto;"><font
color="#FFFFFF"><%out.println("Sent By : "+sentby);%></font></h3>
<div class="repeat1">
<h3 style="width:auto;"><font color="#FFFFFF"><%out.println("Key :
"+pubkey);%></font></h3>
</div>
<br />
<h5 style="width:auto;"><font color="#FFFFFF">Enter The Key To View The
Message : </font></h5>
<div>
<form action="showmsg.jsp?<%=id+"/"+trim%>" method="post" name="fun2"
onsubmit="return valid();">
<input type="text" title="Public Key" placeholder="Key"name="pubkey" />
<input type="submit" name="sub"
value="Submit"/>
</form>
</div>
<%
}
}
catch(Exception e)
{
out.print(e);
}
}
if(trim.equals("attri"))
{
Connection con;
String sentby = null;
String encptmsg=null;
String pubkey=null;
try{
Class.forName("com.mysql.jdbc.Driver");
con =
DriverManager.getConnection("jdbc:mysql://localhost:3306/datashare","root","admin");
PreparedStatement pst = con.prepareStatement("select * from msgattri where id=?");
pst.setString(1, id);
ResultSet rst = pst.executeQuery();
while(rst.next())
{
sentby = rst.getString("sentby");
pubkey = rst.getString("pubkey");
%>
<h3 style="width:auto;"><font
color="#FFFFFF"><%out.println("Sent By : "+sentby);%></font></h3>
<div class="repeat1">
<h3 style="width:auto;"><font color="#FFFFFF"><%out.println("Key :
"+pubkey);%></font></h3>
</div>
<br />
<h5 style="width:auto;"><font color="#FFFFFF">Enter The Key To View The
Message : </font></h5>
<div>
<form action="showmsg.jsp?<%=id+"/"+trim%>" method="post" name="fun2"
onsubmit="return valid();">
<input type="text" title="Public Key" placeholder="Key"name="pubkey" />
<input type="submit" name="sub"
value="Submit"/>
</form>
</div>
<%
}
}
catch(Exception e)
{
out.print(e);
}
}
%>
</div>
</div>
</div>
</body>
</html>
FUTURE WORK
