
Giani Zail Singh Punjab Technical University Campus, Bathinda

4 Months Industrial Training Project Report
on
HUMAN RESOURCE MANAGEMENT

Department of Computer Science & Engineering

SUBMITTED BY:
Name: Gurwinder Kaur
Univ. Roll No.: 1145344
Course/Branch: B.Tech (CSE)

SUBMITTED TO:
Prof. Dinesh Kumar, Training Incharge, CSE Department
Project Incharge: Mr. Hemant Singla

ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of any task would
be incomplete without the mention of the people whose ceaseless cooperation
made it possible, and whose constant guidance and encouragement crown all
efforts with success.
I am grateful to our project guide, Mr. Hemant Singla, for his guidance,
inspiration and constructive suggestions that helped us in the preparation of this
project. He has helped us at every moment. I would like to thank all the staff
members of GHTP, Lehra Mohabbat for their kind co-operation.
We would also like to thank Mr. Dinesh Kumar for giving us such a great
opportunity to gain experience of a real industrial environment.
We would like to thank Mrs. Abhilasha Jain, the head of the department, for
providing us such a great opportunity to show our hidden talent under the given
syllabus.
Last but not least, we would also like to thank our friends, who helped us in one
way or another in the completion of this project.

Gurwinder Kaur
Cse-2K11
1145344

ABSTRACT
As the name specifies, HUMAN RESOURCE MANAGEMENT is software
developed for managing various activities of employees in thermal plants. It is
one of the modules of the ENERGIES software being developed by TCS Ltd.
Identification of the drawbacks of the existing system led to the design of a
computerized system that is compatible with the existing system while being
more user-friendly and more GUI-oriented. The new system improves the
efficiency of the system and overcomes the drawbacks of the existing system.
In particular, the project offers:

Less human error
Reduced strength and strain of manual labor
High security
Data redundancy avoided to some extent
Data consistency
Easy handling
Easy data updating
Easy record keeping
Easily generated data backups

TABLE OF CONTENTS

1. INTRODUCTION
2. FEASIBILITY STUDY
3. ENVIRONMENT & TOOLS USED FOR THE SYSTEM
4. INTRODUCTION TO SQL SERVER
5. COMPANY PROFILE
6. DATAFLOW DIAGRAMS
7. DATABASE TABLES
8. SNAPSHOTS
9. CODE EFFICIENCY
10. SYSTEM TESTING & DEBUGGING
11. SECURITY
12. CONCLUSION
13. REFERENCES

1. INTRODUCTION
1.1 PROBLEM STATEMENT
All the work in thermal plants to date has been done with pen and paper, but a
process of computerizing all the thermal plants is now in progress. TCS Ltd. is
developing the software, named ENERGIES, to connect various thermal plants
together over Ethernet. It uses an MIS reporting system in which information
gathered from various sources is stored in a database. There are 18 modules in
ENERGIES, of which Human Resource Management is one. Identification of
the drawbacks of the existing system led to the design of a computerized
system that is compatible with the existing system and more user-friendly.

1.2 OBJECTIVE OF PROJECT


The Human Resource Management project improves services for all the
employees of the thermal plants. Through it, one can check the personal profile
of any current employee within a few minutes from the system's database. The
system helps to check the entries, training details, attendance, salary and
complaints of every employee. Employees are identified by a unique allocated
Employee ID.

1.3 EXISTING SYSTEM


The existing system is a manual system. It requires a lot of file work to be done
and is time consuming. All customer information is maintained manually, and
any search requires a lot of manual effort. The departments operate under a
manual system to deal with employee records. The primary function of the
department is to retrieve information about its employees' leaves, training and
complaints.

1.4 DRAWBACKS OF EXISTING SYSTEM


Data redundancy and formatting: The various files are likely to have
different formats, which leads to redundancy and inconsistency.
Maintaining registers is costly: Traditionally, documents have been stored in
batches and filed in file cabinets and boxes, with a numbering system
(specifically, an assigned consumer number) used to organize the files.
Error prone: Existing systems are error prone, since manual work is required.
More time is consumed, and errors may propagate due to human mistakes.
Low security: Because records are maintained manually, they can be viewed
easily by anyone. There is also the possible loss of data and confidential
information due to disasters such as fire or theft.

1.5 VARIOUS MENUS & STAGES OF DEVELOPMENT:


i. PROBLEM DEFINITION
ii. SYSTEM REQUIREMENTS
iii. FEASIBILITY STUDY
iv. ANALYSIS
v. DESIGN
vi. IMPLEMENTATION
vii. POST IMPLEMENTATION
viii. MAINTENANCE

1.6 SYSTEM ANALYSIS & DESIGN


System development, a process consisting of the two major steps of systems
analysis and design, starts when management, or sometimes system
development personnel, feel that a new system or an improvement in the
existing system is required. The SDLC is classically thought of as the set of
activities that analysts, designers and users carry out to develop and implement
an information system.

1.7 FUTURE ENHANCEMENTS


In the software development process, the implementation stage is not the last
stage of the development process. As time progresses, user requirements
increase, and at some point the current software becomes unable to cope with
them. Hence regular updating of the software is needed.

1.8 PLATFORMS
Front end: We have used MS Visual Studio 2010 (C#.NET) as the front end.
C# is a Graphical User Interface (GUI) oriented programming language whose
coding style is similar to C++.
Back end: We have used SQL Server as the back end.
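To make the front-end/back-end split concrete, the sketch below shows how the C#.NET front end might query the SQL Server back end through ADO.NET. It is only an illustration: the connection string, the Employees table, and its column names are assumptions for the example, not taken from the actual project database.

```csharp
using System;
using System.Data.SqlClient;

class EmployeeLookup
{
    // Look up an employee's name by the unique Employee ID.
    // Table and column names (Employees, EmployeeID, Name) are assumed.
    public static string GetEmployeeName(string connectionString, int employeeId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT Name FROM Employees WHERE EmployeeID = @id", conn))
            {
                // A parameterized query avoids SQL injection.
                cmd.Parameters.AddWithValue("@id", employeeId);
                return (string)cmd.ExecuteScalar();
            }
        }
    }
}
```

A form's button handler would call GetEmployeeName with the plant's connection string and display the result in a text box.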

1.9 ADVANTAGES OF THE PROJECT


i. It facilitates paper-free work.
ii. It provides instant information.
iii. It opens an account very easily.
iv. It replaces all required files and registers.
v. It can modify and remove a customer's account very easily.
vi. It is very easy to run and reduces paper complexity.

PROJECT REQUIREMENTS
HARDWARE USED

1. Intel 845 GVSR Motherboard
2. 250 GB HDD
3. 2 GB RAM
4. Sony 52x Writer
5. 3.5" FDD
6. 17" LG Color Monitor
7. Tech-com Keyboard
8. HP 6L Laser Printer
9. Pentium Dual-Core 1.6 GHz Processor

SOFTWARE USED

1. Windows Vista
2. Microsoft Visual Studio 2008 / SQL Server 2005
3. Microsoft Word

2. Feasibility Study
2.1 INTRODUCTION
Feasibility is the measure of how beneficial or practical the development of the
system will be to the organization. It is a preliminary survey for the systems
investigation, and aims to provide information to facilitate a later in-depth
investigation. The report produced at the end of the feasibility study contains
suggestions and reasoned arguments to help management decide whether to
commit further resources to the proposed project. Within the scheduled duration
we were assigned to study both the positive and negative aspects of the current
manual system, and we came up with a number of drawbacks that would
prevent the progress of the organization if it continued to function manually.

2.2 TYPES OF FEASIBILITY


TECHNICAL FEASIBILITY: Based on the outline design of system
requirements in terms of inputs, outputs, files, procedures and staff, the
technical issues raised during technical feasibility include:
Does the necessary technology exist to do what is proposed?
Does the proposed equipment have the technical capacity to hold the data
required by the new system?
Will the proposed system provide adequate responses?
Is the system flexible enough to facilitate expansion?
OPERATIONAL FEASIBILITY: A system often fails if it does not fit within
existing operations and if users resist the change. Important issues a systems
developer must look into are:
Will the new system be used if implemented in the organization?
Are there major barriers to implementation, or will the proposed system be
accepted without destructive resistance?
ECONOMICAL FEASIBILITY: The proposed system must be justifiable in
terms of cost and benefit, to ensure that the investment in a new/changed
system provides a reasonable return. For the computerized system we propose,
the costs can be broken down into two categories:
Costs associated with developing the system.
Costs associated with operating the system.

2.3 SOFTWARE DEVELOPMENT LIFE CYCLE


Every software development process consists of several phases, each with
certain predefined work, and at the end of each phase a document is prepared.
These phases are based on a particular software development model.
Software Development Model: Software engineering is a discipline that
integrates process, methods, and tools for the development of computer
software. To solve actual problems in an industry setting, a software engineer or
a team of software engineers must adopt a development strategy that
encompasses methods and tools. This strategy is often referred to as a process
model or a software-engineering paradigm. A number of different process
models for software engineering have been proposed, each exhibiting strengths
and weaknesses, but all having a series of generic phases in common.
Software Requirement Analysis: The requirements gathering process is
intensified and focused specifically on software. To understand the nature of the
program(s) to be built, the software engineer (analyst) must understand the
information domain for the software, as well as the required function, behavior,
performance, and interfacing. Requirements for both the system and the
software are documented and reviewed with the customer.
Design: Software design is actually a multi-step process that focuses on four
distinct attributes of a program: data structures, software architecture, interface
representations, and procedural (algorithmic) detail. The design process
translates requirements into a representation of the software that can be
assessed for quality before code generation begins.
Code Generation: The design must be translated into a machine-readable
form. The code generation step performs this task.
Testing: Once code has been generated, program testing begins. The testing
process focuses on the logical internals of the software, ensuring that all
statements have been tested, and on the functional externals, that is, conducting
tests to uncover errors and ensure that defined inputs will produce actual results
that agree with required results.
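The idea that defined inputs must produce actual results agreeing with required results can be sketched as a small self-checking C# program. The business rule and values here are invented for the illustration, not taken from the project:

```csharp
using System;

public class PayrollTests
{
    // Illustrative rule (an assumption, not the project's rule):
    // gross pay = hours * rate, with time-and-a-half beyond 40 hours.
    public static decimal GrossPay(decimal hours, decimal rate)
    {
        if (hours <= 40) return hours * rate;
        return 40 * rate + (hours - 40) * rate * 1.5m;
    }

    public static void Main()
    {
        // Defined inputs and their required results.
        if (GrossPay(40, 10m) != 400m) throw new Exception("base pay wrong");
        if (GrossPay(45, 10m) != 475m) throw new Exception("overtime wrong"); // 400 + 5 * 15
        Console.WriteLine("All tests passed.");
    }
}
```

Each check pairs a defined input with its required result; any disagreement surfaces immediately as an exception.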
Maintenance: Software will undoubtedly undergo change after it is delivered
to the customer. Change will occur because errors have been encountered,
because the software must be adapted to accommodate change in its external
environment or because the customer requires functional or performance
enhancements.

3. ENVIRONMENT & TOOLS USED FOR THE SYSTEM


3.1 INTRODUCTION OF THE .NET FRAMEWORK
The .NET Framework is an integral Windows component that supports building
and running the next generation of applications and XML Web services. The
key components of the .NET Framework are the common language runtime and
the .NET Framework class library, which includes ADO.NET, ASP.NET, and
Windows Forms. The .NET Framework provides a managed execution
environment, simplified development and deployment, and integration with a
wide variety of programming languages.
Fig 3.1: The .NET Framework in context

The .NET Framework is designed to fulfill the following objectives:
To provide a consistent object-oriented programming environment.
To provide a code-execution environment that minimizes software deployment
and versioning conflicts.
To provide a code-execution environment that guarantees safe execution of
code.
To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.

Fig 3.2: The architectural layout of the .NET Framework

3.2 An overview of the .NET architecture: The .NET Framework has two
main components:
1. The common language runtime
2. The .NET Framework class library
The Common Language Runtime
The CLR is the foundation of the .NET Framework. You can think of the
runtime as an agent that manages code at execution time, providing core
services such as memory management and thread management while also
enforcing strict type safety and other forms of code accuracy that ensure
security and robustness. In fact, the concept of code management is a
fundamental principle of the runtime. Code that targets the runtime is known as
managed code, while code that does not target the runtime is known as
unmanaged code. The class library, the other main component of the .NET
Framework, is a comprehensive, object-oriented collection of reusable types
that you can use to develop applications ranging from traditional command-line
or graphical user interface applications to applications based on the latest
innovations provided by ASP.NET, such as Web Forms and XML Web
services. The .NET Framework can be hosted by unmanaged components that
load the common language runtime into their processes and initiate the
execution of managed code, thereby creating a software environment that can
exploit both managed and unmanaged features.
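The managed/unmanaged split can be illustrated with P/Invoke, a standard interop pattern in which managed C# calls into an unmanaged Windows API (this is a generic sketch, not project code, and runs only on Windows):

```csharp
using System;
using System.Runtime.InteropServices;

class InteropDemo
{
    // GetTickCount lives in unmanaged kernel32.dll; the CLR
    // marshals the call across the managed/unmanaged boundary.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();

    static void Main()
    {
        // Managed code (JIT-compiled, garbage-collected) calling unmanaged code.
        Console.WriteLine("Milliseconds since boot: " + GetTickCount());
    }
}
```

Everything inside Main is managed by the runtime; the body of GetTickCount is not, which is exactly the distinction drawn above.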

.Net framework class library


The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object
oriented, providing types from which your own managed code can derive
functionality. This not only makes the .NET Framework types easy to use, but
also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly with
classes in the .NET Framework. For example, the .NET Framework collection
classes implement a set of interfaces that you can use to develop your own
collection classes. Your collection classes will blend seamlessly with the classes
in the .NET Framework. As you would expect from an object-oriented class
library, the .NET Framework types enable you to accomplish a range of
common programming tasks, including tasks such as string management, data
collection, database connectivity, and file access. One can use the .NET
Framework to develop the following types of applications and services:
i. Console applications
ii. Windows GUI applications (Windows Forms)
iii. ASP.NET applications
iv. XML Web services
v. Windows services
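As a hedged illustration of the point above about collection interfaces (the type here is invented for the example, not part of the project), a custom collection only needs to implement IEnumerable&lt;T&gt; to blend seamlessly with the framework's iteration machinery:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A minimal custom collection that plugs into foreach and collection
// initializers simply by implementing IEnumerable<T> and Add.
public class EmployeeList : IEnumerable<string>
{
    private readonly List<string> names = new List<string>();

    public void Add(string name) { names.Add(name); }

    public IEnumerator<string> GetEnumerator() { return names.GetEnumerator(); }

    // Non-generic overload required by the IEnumerable<T> contract.
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public class Demo
{
    public static void Main()
    {
        var list = new EmployeeList { "Gurwinder", "Hemant" };
        foreach (var n in list)          // works because of IEnumerable<T>
            Console.WriteLine(n);       // prints Gurwinder, then Hemant
    }
}
```

Because EmployeeList implements the standard interface, it behaves like any framework collection in foreach loops and LINQ queries.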

3.3 VISUAL STUDIO 2012

The final build of Visual Studio 2012 was announced on August 1, 2012, and
the official launch event was held on September 12, 2012.[117]

Unlike prior versions, Visual Studio 2012 cannot record and play macros and
the macro editor has been removed.[118]
A major new feature is support for WinRT and C++/CX (Component
Extensions). Support for C++ AMP (GPGPU programming) is also
included.[119]
On 16 September 2011 a complete 'Developer Preview' of Visual Studio 11 was
published on Microsoft's website. Visual Studio 11 Developer Preview requires
Windows 7, Windows Server 2008 R2, Windows 8, or later operating
systems.[120] Versions of Microsoft Foundation Class Library (MFC) and C
runtime (CRT) included with this release cannot produce software that is
compatible with Windows XP or Windows Server 2003 except by using native
multi-targeting and foregoing the newest libraries, compilers, and
headers.[121] However, on June 15, 2012, a blog post on the VC++ Team blog
announced that based on customer feedback, Microsoft would re-introduce
native support for Windows XP targets (though not for XP as a development
platform) in a version of Visual C++ to be released later in the fall of
2012.[122] "Visual Studio 2012 Update 1" (Visual Studio 2012.1) was released in
November 2012. This update added support for Windows XP targets and also
added other new tools and features (e.g. improved diagnostics and testing
support for Windows Store apps).[123]
On 24 August 2011, a blog post by Sumit Kumar, a Program Manager on the
Visual C++ team, listed some of the features of the upcoming version of the
Visual Studio C++ IDE:[124]

Semantic Colorization: Improved syntax coloring, with various user-defined or
default colors for C++ syntax such as macros, enumerations, typenames,
functions, etc.[124]
Reference Highlighting: Selecting a symbol highlights all of the references to
that symbol within scope.[124]
New Solution Explorer: The new solution explorer allows for visualization of
class and file hierarchies within a solution/project. Searching for calls to
functions and uses of classes will be supported.[124]
Automatic Display of IntelliSense list: IntelliSense will automatically be
displayed whilst typing code, as opposed to previous versions where it had to
be explicitly invoked through use of certain operators (i.e. the scope operator
(::)) or shortcut keys (Ctrl-Space or Ctrl-J).[124]
Member List Filtering: IntelliSense uses fuzzy logic to determine which
functions/variables/types to display in the list.[124]
Code Snippets: Code snippets are included in IntelliSense to automatically
generate relevant code based on the user's parameters; custom code snippets
can be created.[124]

The source code of Visual Studio 2012 consists of approximately 50 million
lines of code.[125]
Interface controversies
During Visual Studio 11 beta, Microsoft eliminated the use of color within tools
except in cases where color is used for notification or status change purposes.
However, the use of color was returned after feedback demanding more
contrast, differentiation, clarity and "energy" in the user interface.[126][127]
In Visual Studio 2012 RC, a major change to the interface is the use of all-caps
menu bar, as part of the campaign to keep Visual Studio consistent with the
direction of other Microsoft user experiences, and to provide added structure to
the top menu bar area.[128] The redesign was criticized for being hard to read,
and going against the trends started by developers to use CamelCase to make
words stand out better.[129] Some speculated that the root cause of the redesign
was to incorporate the simplistic look and feel of Metro apps.[130] However,
there exists a Windows Registry option to allow users to disable the all-caps
interface.[131]

FEATURES OF VISUAL STUDIO 2012


Text editor: Using this editor, you can write your C# code. This text editor is
quite sophisticated. For example, as you type, it automatically lays out your
code by indenting lines, matching start and end brackets of code blocks, and
color-coding keywords. It also performs some syntax checks as you type, and
it underlines code that causes compilation errors, also known as design-time
debugging. In addition, it features IntelliSense, which automatically displays
the names of classes, fields, or methods as you begin to type them. As you start
typing parameters to methods, it will also show you the parameter lists for the
available overloads. The figure shows the IntelliSense feature in action with
one of the .NET base classes, ListBox.

Design view editor: This editor enables you to place user-interface and data-
access controls in your project; Visual Studio automatically adds the necessary
C# code to your source files to instantiate these controls in your project. (This
is possible because all .NET controls are instances of particular base classes.)
Supporting windows: These windows allow you to view and modify aspects
of your project, such as the classes in your source code, as well as the available
properties (and their startup values) for Windows Forms and Web Forms
classes. You can also use these windows to specify compilation options, such
as which assemblies your code needs to reference.
The ability to compile from within the environment: Instead of needing to
run the C# compiler from the command line, you can simply select a menu
option to compile the project, and Visual Studio will call the compiler for you
and pass all the relevant command-line parameters to the compiler, detailing
such things as which assemblies to reference and what type of assembly you
want to be emitted.
Integrated debugger: It is in the nature of programming that your code will
not run correctly the first time you try it, or the second time, or the third time.
Visual Studio seamlessly links up to a debugger for you, allowing you to set
breakpoints and watches on variables from within the environment.
Integrated MSDN help: Visual Studio enables you to access the MSDN
documentation from within the IDE. For example, if you are not sure of the
meaning of a keyword while using the text editor, simply select the keyword
and press the F1 key, and Visual Studio will access MSDN to show you related
topics. Similarly, if you are not sure what a certain compilation error means,
you can bring up the documentation for that error by selecting the error
message and pressing F1.
Access to other programs: Visual Studio can also access a number of other
utilities that allow you to examine and modify aspects of your computer or
network without your having to leave the developer environment. Among the
tools available, you can check running services and database connections, look
directly into your SQL Server tables, and even browse the Web using an
Internet Explorer window.
Creating a Project
Once you have installed Visual Studio 2010, you will want to start your first
project. With Visual Studio, you rarely start with a blank file and then add C#
code, in the way that you have been doing in the previous chapters in this book.
(Of course, the option of asking for an empty application project is there if you
really do want to start writing your code from scratch or if you are going to
create a solution that will contain a number of projects.) Instead, the idea is that
you tell Visual Studio roughly what type of project you want to create, and it
will generate the files and C# code that provide a framework for that type of
project. You then work by adding your code to this outline. For example, if you
want to build a Windows GUI - interface - based application, Visual Studio will
start you off with a file containing C# source code that creates a basic form.
This form is capable of talking to Windows and receiving events. It can be
maximized, minimized, or resized; all you need to do is add the controls and
functionality you want. If your application is intended to be a command line
utility (a console application), Visual Studio will give you a basic namespace,
class, and a Main() method to start you off. Finally, when you create your
project, Visual Studio also sets up the compilation options that you are likely to
supply to the C# compiler: whether it is to compile to a command-line
application, a library, or a Windows application. It will also tell the compiler
which base class libraries you will need to reference. You can, of course,
modify all these settings as you are editing, if you need to. The first time you
start Visual Studio, you will be presented with a blank IDE. The Start Page is an

HTML page that contains various links to useful Web sites and enables you to
open existing projects or start a new project altogether.

Selecting a Project Type


You can create a new project by selecting File > New > Project from the Visual
Studio menu. From there you get the New Project dialog box (see Fig 3.3) and
your first inkling of the variety of different projects you can create. Using this
dialog box, you effectively select the initial framework files and code you want
Visual Studio to generate for you, the type of compilation options you want,
and the compiler you want to compile your code with: either the Visual C# or
the Visual Basic 2008 compiler.
Fig 3.3: This particular example uses a C# console application.

We do not have space to cover all the various options for different types of
projects here. On the C++ side, all the old C++ project types are there: MFC
application, ATL project, and so on. On the Visual Basic 2010 side, the options
have changed somewhat. For example, you can create a Visual Basic 2010
command-line application (Console Application), a .NET component (Class
Library), a .NET control (Windows Control Library), and more. However, you
cannot create an old-style COM-based control (the .NET control is intended to
replace such ActiveX controls). The following table lists all the options that are
available to you under Visual C# Projects. Note that some other, more
specialized C# template projects are available under the Other Projects option.
Solutions and Projects
A project is a set of all the source code files and resources that will compile into
a single assembly. For example, a project might be a class library or a Windows
GUI application.
A solution is the set of all the projects that make up a particular software
package (application). To understand this distinction, look at what happens
when you ship an application that consists of more than one assembly. For
example, you might have a user interface, custom controls, and other
components that ship as libraries of parts of the application. You might have a
different user interface for administrators. Each of these parts of the application
might be contained in a separate assembly, and hence they are regarded by
Visual Studio as separate projects. However, it is quite likely that you will be
coding these projects in parallel and in conjunction with each other. Thus, it is
quite useful to be able to edit them all as one single unit in Visual Studio.
Fig 3.4

Fig 3.4 shows that the project contains your source file, Program, as well as
another C# source file, AssemblyInfo (found in the Properties folder), which
allows you to provide information that describes the assembly as well as the
ability to specify versioning information. The Solution Explorer also indicates
the assemblies that your project references according to namespace. You can
see this by expanding the References folder in the Solution Explorer. If you
have not changed any of the default settings in Visual Studio, you will probably
find the Solution Explorer in the top right corner of your screen.
Adding Another Project to the Solution
As you work through the following sections, you will see how Visual Studio
works with Windows applications as well as with console applications. To that
end, you create a Windows project called Basic Form that you will add to your
current solution, ConsoleApplication1. This means that you will end up with a
solution containing a Windows application and a console application. You
might, however, create a solution like this if, for example, you are writing a
utility that you want to run either as a Windows application or as a command
line utility. You can create the new project in two ways. You can select New
Project from the File menu or you can select Add New Project from the File
menu. If you select New Project from the File menu, this will bring up the
familiar New Project dialog box; this time, however, you will notice that Visual
Studio wants to create the new project in the preexisting ConsoleApplication1

project location .If you select this option, a new project is added so that the
ConsoleApplication1 solution now contains a console application and a
Windows application. In accordance with the language - independence of
Visual Studio, the new project does not need to be a C# project. It is perfectly
acceptable to put a C# project, a Visual Basic 2008 project, and a C++ project
in the same solution. To change the name, you can right - click the name of the
solution and select Rename from the context menu. Call the new solution Demo
Solution. The Solution Explorer window now looks like Figure .You can see
from this that Visual Studio has made your newly added Windows project
automatically reference some of the extra base classes that are important for
Windows Forms functionality. You will notice if you look in Windows
Explorer that the name of the solution file has changed to DemoSolution.sln. In
general, if you want to rename any files, the Solution Explorer window is the
best place to do so, because Visual Studio will then automatically update any
references to that file in the other project files. If you rename files using just
Windows Explorer, you might break the solution because Visual Studio will not
be able to locate all the files it needs to read in.
Fig 3.5

Setting the Startup Project
Bear in mind that if you have multiple projects in a solution only one of them
can be run at a time! When you compile the solution, all the projects in it will
be compiled. However, you must specify which one you want Visual Studio to
start running when you press F5 or select Start. If you have one executable and
several libraries that it calls, this will clearly be the executable. In this case,
where you have two independent executables in the project, you would simply
need to debug each in turn. You can tell Visual Studio which project to run by
right-clicking that project in the Solution Explorer window and selecting Set as
Startup Project from the context menu.
Windows Application Code
A Windows application contains a lot more code right from the start than a
console application when Visual Studio first creates it. That is because creating
a window is an intrinsically more complex process. For now, look at the code
in the Form1 class in the WindowsApplication1 project to see for yourself how
much is auto-generated.
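As a rough sketch of the kind of code Visual Studio auto-generates for a Windows application (simplified here into one file; the real template splits it across Form1.cs and Form1.Designer.cs, and the exact property values are assumptions):

```csharp
using System;
using System.Windows.Forms;

namespace WindowsApplication1
{
    public class Form1 : Form
    {
        public Form1()
        {
            // Normally produced by the designer in InitializeComponent().
            this.Text = "Form1";
            this.ClientSize = new System.Drawing.Size(284, 261);
        }

        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            // Starts the message loop: the form can now receive Windows
            // events and be maximized, minimized, or resized.
            Application.Run(new Form1());
        }
    }
}
```

All of the window-management plumbing comes from the Form base class; the developer only adds controls and event handlers.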

4. INTRODUCTION TO SQL SERVER

4.1 INTRODUCTION
SQL Server 2008 R2 (10.50.1600.1, formerly codenamed "Kilimanjaro") was
announced at TechEd 2009, and was released to manufacturing on April 21,
2010.[27] SQL Server 2008 R2 adds certain features to SQL Server 2008,
including a master data management system branded as Master Data Services,
a central management of master data entities and hierarchies. It also adds
Multi-Server Management, a centralized console to manage multiple SQL
Server 2008 instances and services, including relational databases, Reporting
Services, Analysis Services and Integration Services.[28]
SQL Server 2008 R2 includes a number of new services,[29] including
PowerPivot for Excel and SharePoint, Master Data Services, StreamInsight,
Report Builder 3.0, Reporting Services Add-in for SharePoint, a Data-tier
function in Visual Studio that enables packaging of tiered databases as part of
an application, and a SQL Server Utility named UC (Utility Control Point),
part of AMSM (Application and Multi-Server Management) that is used to
manage multiple SQL Servers.[30]
The first SQL Server 2008 R2 service pack (10.50.2500, Service Pack 1) was
released on July 11, 2011.[31]
The second SQL Server 2008 R2 service pack (10.50.4000, Service Pack 2) was
released on July 26, 2012.[32]
The final SQL Server 2008 R2 service pack (10.50.6000, Service Pack 3) was
released on September 26, 2014.[33]
Microsoft SQL Server is a relational model database server produced by
Microsoft. Its primary query languages are T-SQL and ANSI SQL. SQL Server
2008 was released (RTM) on August 6, 2008, and aims to make data
management self-tuning, self-organizing, and self-maintaining with the
development of SQL Server Always On technologies, to provide near-zero
downtime. SQL Server 2008 also includes support for structured and
semi-structured data, including digital media formats for pictures, audio, video
and other multimedia data. In current versions, such multimedia data can be
stored as BLOBs (binary large objects), but they are generic bit streams.
Intrinsic awareness of multimedia data will allow specialized functions to be
performed on them. According to Paul Flessner, Senior Vice President, Server
Applications, Microsoft Corp., SQL Server 2008 can be a data storage backend
for different varieties of data: XML, email, time/calendar, file, document,
spatial, etc., as well as perform search, query, analysis, sharing, and
synchronization across all data types.
Other new data types include specialized date and time types and a Spatial data
type for location-dependent data. Structured data and metadata about a file are
stored in the SQL Server database, whereas the unstructured component is
stored in the file system. Such files can be accessed both via Win32
file-handling APIs and via SQL Server using T-SQL; the latter accesses the file
data as a BLOB. Backing up and restoring the database backs up or restores the
referenced files as well. SQL Server 2008 also natively supports hierarchical
data, and includes T-SQL constructs to deal with it directly, without using
recursive queries.
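SQL Server's hierarchy support is T-SQL-specific, but the recursive-query approach it is meant to replace can be sketched in portable SQL. The following is a minimal sketch only, using Python's sqlite3 module as a stand-in database; the table, column, and employee names are invented for the example.

```python
import sqlite3

# Stand-in database: sqlite3 in memory instead of a SQL Server instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    (1, "Director", None),   # root of the hierarchy
    (2, "HR Head", 1),
    (3, "Clerk", 2),
    (4, "Engineer", 1),
])

# The classic recursive-CTE way of walking a manager -> subordinate tree,
# which SQL Server 2008's hierarchical constructs are designed to replace.
rows = conn.execute("""
    WITH RECURSIVE subordinates(id, name) AS (
        SELECT id, name FROM emp WHERE manager_id = 1
        UNION ALL
        SELECT e.id, e.name
        FROM emp e JOIN subordinates s ON e.manager_id = s.id
    )
    SELECT id, name FROM subordinates ORDER BY id
""").fetchall()
names = [name for _, name in rows]  # everyone under the Director
```

The recursive part of the CTE joins each newly found employee back against the table until no further subordinates remain.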
The Full-Text Search functionality has been integrated with the database
engine, which simplifies management and improves performance. Spatial data
can be stored as one of two types. A "Flat Earth" data type represents geospatial
data which has been projected from its native, spherical coordinate system onto
a plane. A "Round Earth" data type models the Earth as a single continuous
entity which does not suffer from singularities such as the international
dateline, the poles, or map projection zone "edges". Approximately 70 methods
are available to represent spatial operations for the Open Geospatial
Consortium Simple Features for SQL, Version 1.1. SQL Server includes better
compression features, which also help in improving scalability. It also includes
a Resource Governor that allows reserving resources for certain users or
workloads, as well as capabilities for transparent encryption of data and
compression of backups. SQL Server 2008 R2 supports the ADO.NET Entity
Framework, and the reporting tools, replication, and data definition will be
built around the Entity Data Model. SQL Server Reporting Services will gain
charting capabilities from the integration of the data visualization products of
Dundas Data Visualization Inc., which were acquired by Microsoft. On the

management side, SQL Server 2008 R2 includes the Declarative Management
Framework, which allows configuring policies and constraints declaratively, on
the entire database or on certain tables. The version of SQL Server
Management Studio included with SQL Server 2008 supports IntelliSense for
SQL queries against a SQL Server 2008 database.

4.2 DATABASE DESIGN


CREATING A DATABASE IN SQL SERVER
The following steps demonstrate how to create a database in SQL Server using Enterprise Manager.
1. Right click on the "Databases" icon and select "New Database".

Fig 4.1 Creating new Database


2. Ensuring you have the right database expanded, right click on the
"Tables" icon and select "New Table."

Fig 4.2 Creating new table


While you have this screen open, do the following:
a) Using the values in the screenshot, complete the details in the "Column
Name" column, the "Data Type" column, the "Length" column, and the
"Allow Nulls" column.
b) Make the appropriate column an "identity column" by setting its "Identity" property.

Fig 4.3 Design of a table


3. To open the table, right click on the table you wish to open, and select
"Open Table -> Return all rows".

Fig 4.4 Opening of a table
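The Enterprise Manager steps above are point-and-click, but the same result can be produced with plain SQL. The sketch below uses Python's sqlite3 module as a lightweight stand-in for SQL Server (AUTOINCREMENT plays the role of an identity column); the table and column names are illustrative assumptions, not taken from the report's screenshots.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a newly created database

# Equivalent of designing a table in the GUI: an identity-style
# primary key plus columns with data types and NULL-ability.
conn.execute("""
    CREATE TABLE employee (
        emp_id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- identity column
        emp_name TEXT NOT NULL,                      -- "Allow Nulls" unchecked
        dept     TEXT                                -- "Allow Nulls" checked
    )
""")
conn.execute("INSERT INTO employee (emp_name, dept) VALUES (?, ?)",
             ("Gurwinder", "CSE"))
conn.execute("INSERT INTO employee (emp_name) VALUES (?)", ("Hemant",))

# Equivalent of "Open Table -> Return all rows".
rows = conn.execute("SELECT * FROM employee").fetchall()
```

Note how the identity column numbers the rows automatically, just as SQL Server does when "Identity" is set in the table designer.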

5. INTRODUCTION TO COMPANY
The ever-widening gap between power demand and its availability in the state
of Punjab was one of the basic reasons for envisaging the thermal plant at
Lehra Mohabbat, Distt. Bathinda. The other factors favoring the installation of
this thermal plant were its low initial cost and shorter gestation period as
compared to hydroelectric generating stations, and its good railway
connections and proximity to the load center. Guru Hargobind Singh thermal
plant is a Government undertaking (under P.S.E.B.). Initially it was to be set
up at Bathinda under GNDTP, but Air Force personnel restricted its setup at
Bathinda; hence the plant site was shifted to Lehra Mohabbat, about 22 km
from Bathinda city. Later this plant was approved as a separate autonomous
body under the name Guru Hargobind Thermal Plant.
The construction of the plant commenced in 1992 and its first unit started
working in December 1997. Its second unit commenced in August 1998. The
main companies whose technology paved the way for this plant are TATA
Honeywell and BHEL, in turbine and boiler control. The total setup cost of
the plant is 1200 crores and the capacity of the plant is 2 × 210 MW =
420 MW. The overall efficiency of the plant is 95%.

PROJECT OF STUDY
ENERGIES is the main project that is being developed by Tata Consultancy
Services Ltd. It contains 18 different modules, out of which HUMAN
RESOURCE MANAGEMENT is the module that we developed during our
training period. HRM is the functional area of an organization responsible for
all aspects of hiring and employee benefits, like training, leaves, promotions,
separations, etc. Our module too includes all these aspects that are required in
every organization.

6. DATA FLOW DIAGRAMS


ADMIN LOGIN: When the admin tries to log in, a validation check is
performed. If the id and password match those in the database, then after
confirmation the admin is allowed to access the Admin Page.

USER LOGIN: When a user logs in with a correct user id and password, he is
authenticated and allowed to access the permitted functions.
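The login validation described above boils down to a single parameterized lookup against the login table. A minimal sketch follows, with Python's sqlite3 module as a stand-in database; the uid/pwd column names come from the table list in section 7, while everything else (the function name, the sample credentials) is an assumption for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login (uid TEXT PRIMARY KEY, pwd TEXT)")
conn.execute("INSERT INTO login VALUES (?, ?)", ("emp01", "secret"))

def authenticate(conn, uid, pwd):
    """Return True if the id/password pair matches a row in the database.
    Parameterized placeholders keep the check safe from SQL injection."""
    row = conn.execute(
        "SELECT 1 FROM login WHERE uid = ? AND pwd = ?", (uid, pwd)
    ).fetchone()
    return row is not None

ok = authenticate(conn, "emp01", "secret")   # valid credentials accepted
bad = authenticate(conn, "emp01", "wrong")   # wrong password rejected
```

The same check applies to the admin table; only the table name differs.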

Details maintained by Admin

Leave Application

Approval of Leave Application

7. DATABASE TABLES
1. Admin table, containing the admin user id (uid) and password (pwd).

2. Login table for members under admin, containing user id (uid), password
(pwd) and sign-up details.

3. Table entry, containing basic details of employees.

4. Table t_institute, containing records of all institutes providing training.

5. Table complain, containing complaints of all employees.

6. Table l_apply, containing records of leaves applied for by various employees.
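Based on the descriptions above, the tables might be declared as in the sketch below. The report does not give the exact column lists, so the columns beyond uid/pwd are assumptions; sqlite3 is again used as a stand-in for SQL Server.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema mirroring the six tables described in the report.
conn.executescript("""
    CREATE TABLE admin (uid TEXT PRIMARY KEY, pwd TEXT NOT NULL);
    CREATE TABLE login (uid TEXT PRIMARY KEY, pwd TEXT NOT NULL,
                        name TEXT, email TEXT);            -- sign-up details
    CREATE TABLE entry (emp_id INTEGER PRIMARY KEY, name TEXT,
                        dept TEXT, date_of_joining TEXT);  -- employee basics
    CREATE TABLE t_institute (inst_id INTEGER PRIMARY KEY, inst_name TEXT);
    CREATE TABLE complain (c_id INTEGER PRIMARY KEY,
                           emp_id INTEGER REFERENCES entry(emp_id),
                           details TEXT);
    CREATE TABLE l_apply (l_id INTEGER PRIMARY KEY,
                          emp_id INTEGER REFERENCES entry(emp_id),
                          from_date TEXT, to_date TEXT, status TEXT);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
```

Foreign keys from complain and l_apply back to entry model the fact that complaints and leave applications always belong to an employee.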

8. SNAPSHOTS

1. Login Form

2. Admin Form

3. MDI Form after successful login

4. New employee entry form

5. Entry of new training institute

6. Details of all applied leaves

7. Form to check the complaints by admin

8. Form to update training by admin

9. Form to view records by admin

10. Form to calculate salary by admin

11. Form to change password by admin

12. Form for user login

13. MDI Form after successful user login

14. Form to register a problem or complaint

15. Form to apply for leave

16. Form to change the password by employee

9. CODE EFFICIENCY
When we produce large programs, it often happens that extra code is left in the
project. This extra code can be unused subs and functions, old variables,
unnecessary constants, even Types. Extra code takes up disk space, slows down
the program and it also makes it harder to read. Since this code is not needed, it
is also called dead code. The opposite of dead code is live code.
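The distinction between dead and live code can be shown with a small sketch. Python is used here purely for illustration and all the names are invented; the actual project's code was in C#.

```python
def compute_salary(basic, hra_rate=0.2):
    """Live code: this function is actually called below."""
    return basic + basic * hra_rate

def old_compute_salary(basic):
    """Dead code: an obsolete formula that is never called anywhere,
    left behind after a change. It wastes space and confuses readers."""
    return basic * 1.15

UNUSED_RATE = 0.1  # dead data: a constant that is defined but never read

pay = compute_salary(30000)  # only the live path executes
```

A dead-code checker would flag old_compute_salary and UNUSED_RATE for removal, which is exactly the cleanup described above.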
Problem categories: Problems fall into four categories:
i. Optimization: these problems adversely affect the speed and size of the
resulting program.
ii. Style: these problems are related to programming style. They do not
necessarily lead to problems in the short run, but they often lead to
confusion and further errors in the future.
iii. Metrics (a sub-category of Style): you can set target values for
different metrics and monitor them to find out if some part of your program
exceeds the limits.
iv. Functionality: these problems affect the run-time functionality of the
program. The reason is often that somebody forgot to do something!

In this project no dead code, variable, procedure, or module remains. Problems
were reported to the guide and, with his help, all of them were solved.

OPTIMIZATION OF CODE
Factors considered in improving the system's performance are:
Physical I/O: data is read from and written to I/O devices. This can be
a potential bottleneck. A well-configured system always runs 'I/O-bound':
the performance of the I/O dictates the overall performance.
Memory consumption of database resources, e.g. buffers.
CPU consumption on the database and application servers.
Network communication: not critical for small data volumes, but it becomes a
bottleneck when large volumes are transferred.

CLASSIC GOOD 4GL PROGRAMMING CODE-PRACTICE GUIDELINES ARE:
Remove unnecessary code and redundant processing
Spend time documenting and adopt good change control practices
Spend adequate time analyzing business requirements, Process flows,
data-structures and data-model
Quality assurance is key: plan and execute a good test plan and testing
methodology
Experience counts
All these points were kept in mind when the project was formulated. In order
to keep the hit set (the amount of data relevant to the query) small, all known
conditions are specified in the WHERE clause wherever possible. The database
system is also made to use a database index wherever possible, resulting in
fewer loads on the database server and considerably less network traffic as
well. Usage of "OR" and "NOT" in Open SQL statements is checked
appropriately. Wherever the maximum, minimum, sum, average, or count of a
database column is required, aggregate functions are used instead of computing
the aggregates within the program. The RDBMS is then responsible for
aggregate computations instead of transferring large amounts of data to the
application, so the overall network, application-server, and database load is
considerably lower.
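The point about aggregate functions can be seen directly: instead of fetching every row and summing in application code, the query asks the database for the aggregate, so only one small result row crosses the network. A minimal sketch (sqlite3 as a stand-in database; the salary table is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE salary (emp_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO salary VALUES (?, ?)",
                 [(1, 30000.0), (2, 45000.0), (3, 25000.0)])

# Inefficient pattern: transfer every row, then aggregate in the program.
rows = conn.execute("SELECT amount FROM salary").fetchall()
total_in_app = sum(amount for (amount,) in rows)

# Preferred pattern: let the RDBMS compute the aggregates.
total_in_db, avg_in_db, n = conn.execute(
    "SELECT SUM(amount), AVG(amount), COUNT(*) FROM salary").fetchone()
```

Both approaches give the same total, but the second ships three numbers to the application instead of the whole table.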

VALIDATION CHECKS
Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. A test plan outlines the classes of
tests to be conducted and a test procedure defines specific test cases that will be
used to demonstrate conformity with requirements.
After each validation test case has been conducted, one of two possible
conditions exists: (1) The function or performance characteristics conform to
specification and are accepted or (2) a deviation from specification is uncovered
and a deficiency list is created.

CONFIGURATION REVIEW
An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration
have been properly developed, are cataloged, and have the necessary detail to
bolster the support phase of the software life cycle. The configuration review
is sometimes called an audit.

ALPHA AND BETA TESTING


If software is developed as a product to be used by many customers, it is
impractical to perform formal acceptance tests with each one. Most software
product builders use a process called alpha and beta testing to uncover errors
that only the end-user seems able to find. A customer conducts the alpha test at
the developer's site. The software is used in a natural setting with the developer
looking over the shoulder of the user and recording errors and usage problems.
The beta test is conducted at one or more customer sites by the end-users of the
software. Unlike alpha testing, the developer is generally not present.
Therefore, the beta test is a live application of the software in an environment
that cannot be controlled by the developer. The customer records all problems
that are encountered during beta testing and reports these to the developer at
regular intervals.

10. SYSTEM TESTING & DEBUGGING


INTRODUCTION:
In the testing process, demo versions of the software, i.e. an actual replica of
the existing system, will be installed so that the users can use it as they like and
give their valuable suggestions and advice. Thereafter, security can be
incorporated into the system. In this phase we will be using both alpha and beta
tests, which will enable the users to check the whole system thoroughly. The
said demo version can be used for a period of 15 days to 1 month, and during
this period training on the proposed software will be imparted. This phase will
allow all the users to use the system in a much more efficient way.
Designing tests for software and other engineered products can be as
challenging as the initial design of the product itself. The objective of testing
is to find errors with a minimum amount of time and effort.
Any engineered product can be tested in one of two ways:
1. Knowing the specified function that a product has been designed to perform,
tests can be conducted that demonstrate each function is fully operational while
at the same time searching for errors in each function
2. Knowing the internal workings of a product, tests can be conducted to ensure
that "all gears mesh", that is, internal operations are performed according to
specifications and all internal components have been adequately exercised.
During the course of the module, two kinds of testing were done, namely unit
testing and integration testing.
Unit Testing: Each individual module has been tested using two techniques,
namely:
i. Client-side testing using JavaScript.
ii. Server-side testing using the validation framework provided by Struts.

Each individual form has been validated (client side) so that the user enters
only valid data every time, e.g. type checks, dependency checks, and mandatory
field checks. Data has been verified for consistency at the server side, e.g.
primary key and foreign key constraints.
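The client-side checks named above (type checks, dependency checks, mandatory field checks) were done in JavaScript in the module; the same idea can be sketched language-neutrally. Below is a hypothetical validator for a leave-application form; the field names and rules are assumptions for illustration, not taken from the project's actual forms.

```python
def validate_leave_form(form: dict) -> list:
    """Return a list of validation error messages (empty list = valid).
    Demonstrates mandatory-field, type, and dependency checks."""
    errors = []
    # Mandatory field checks: these fields must be present and non-empty.
    for field in ("uid", "from_date", "to_date"):
        if not form.get(field):
            errors.append(f"{field} is required")
    # Type check: number of days must be a positive integer.
    days = form.get("days")
    if not isinstance(days, int) or days <= 0:
        errors.append("days must be a positive integer")
    # Dependency check: the end date must not precede the start date.
    if form.get("from_date") and form.get("to_date"):
        if form["to_date"] < form["from_date"]:
            errors.append("to_date must not be before from_date")
    return errors

good = validate_leave_form({"uid": "emp01", "from_date": "2024-01-10",
                            "to_date": "2024-01-12", "days": 3})
bad = validate_leave_form({"uid": "", "from_date": "2024-01-12",
                           "to_date": "2024-01-10", "days": 0})
```

Returning a list of messages rather than a single boolean lets the form show the user everything that needs fixing at once.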
Integration Testing: Here, integration testing was implemented. Whenever
new units were added to the module, all the related units were tested for the
effect.
Test plan:
Each unit (sub module) was tested for its functionality.
Each unit was validated for inputs.
Once a unit was developed and deployed with the module, each related
unit was checked for side effects and overall functionality.
Debugging: Debugging includes certain activities performed by programmers
during execution of the system. Special programmers run each program one by
one in a simulated environment, using simulated data, in order to detect bugs in
the program. Debugging is similar to the process of verification testing. The
only difference between verification testing and debugging is that verification
is performed to test the entire system.

11. SECURITY
INTRODUCTION:
System security refers to the technical innovations and procedures applied to
the hardware and operating system to protect against deliberate or accidental
damage from a defined threat. In other words, data security is the protection of
data from loss, disclosure, modification and destruction. Every candidate
system must provide built-in features for security and integrity of data. Without
safeguards against unauthorized access, fraud, embezzlement, fire and natural
disasters, a system could be so vulnerable as to threaten the survival of the
agency. The end user is always concerned about security along with
dependence on the computer. In system development, the developer and the
system analyst must consider measures for maintaining data integrity and
controlling security at all times. This involves built-in hardware features,
programs and procedures to protect the candidate system from unauthorized
access.

PHYSICAL SECURITY
Physical security means making sure that the facility is physically secure,
provides a recovery/restart capability and has access to backup files. The most
costly software loss is program error. It is possible to eliminate such errors
through a proper testing routine. Parallel runs should be implemented whenever
possible. Physical security provides safeguards against the destruction of
hardware, database and documentation; fire, flood, theft, sabotage and
eavesdropping; and loss of power, through proper backup.

OPERATING SYSTEM LEVEL SECURITY


In most operating environments there is a lack of audit trails in off-the-shelf
software packages, so it is difficult to reconstruct transactions for audit
purposes. As more personal computers are linked to company mainframes so
that remote users can access data, the potential for altering the data,
deliberately or by mistake, increases. It is becoming obvious that the personal
computer is adding security problems to system installations. With the use of
microcomputers in the corporate environment, the potential for misuse of
information becomes enormous. Many of today's operating systems contain
passwords that a would-be thief can copy at will. A person with a
microcomputer at a remote location, who knows how to bypass the codes and
passwords, can use a phone line and illegally retrieve information without
leaving any clues.

DATABASE SECURITY
The proper use of the file library is another important security feature. This
involves adequate file backup and personnel to handle file documentation when
needed. File backup means keeping duplicate copies of master and other key
files and storing them in suitable environmental conditions.

APPLICATION SECURITY
The complexity of the system makes automatic auditing necessary. Since
neither the auditor nor the user can verify everything, the system must check
itself. The internal controls required mean that programmers and analysts build
controls into every system. Developing a corporate auditing policy will ensure
that future systems meet the minimum requirements for security and control
against fraud and embezzlement.

TRANSACTION ENTRY
A logical failure occurs when activity on the database is interrupted with no
chance of completing the currently executing transaction. When the system is
up and running again, it is not known whether modifications were still in
memory or were made to the actual data. As for the proposed system, there
would be automatic updating. Though still readable, the database may be
inaccurate.

12. CONCLUSION
From the initial study we concluded that the organization's management was
facing various kinds of problems, like delay in information, extra manpower
required for information circulation, and the cost associated with it. The
proposed system is helpful in solving them. The project eliminates a lot of
manual work and helps in completing work fast by consuming less time. It
provides total security to the data stored in the database, as only authorized
personnel can access the data. Thus we can say that it is very beneficial to the
existing organization as well as to future ones.

13. BIBLIOGRAPHY
Application Development Using C# and .NET, Prentice-Hall.
Microsoft SQL Server 2005: Database Design and Implementation,
Prentice-Hall of India.
Roger S. Pressman, Software Engineering: A Practitioner's Approach,
Fifth Edition, The McGraw-Hill Companies, Inc.
Pankaj Jalote, An Integrated Approach to Software Engineering, Second
Edition.
Elias M. Awad, System Analysis and Design.
