

91. Can you test a website or a web application manually without using any automation tool?

In my view we can test a web application manually without automation, but it is time-consuming and error-prone; to make the task easier and more reliable we use automation tools like QTP.

As far as manual testing is concerned, we can cover usability, functionality, and security testing, but performance cannot be measured accurately by manual testing.

92. What tools are used in Manual testing for bug tracking and reporting?

For bug tracking and reporting there are many tools, like:

Rational ClearQuest
PVCS
Bugzilla

93. At what stage in the SDLC testing should be started?

Testing starts at the very first stage of the SDLC, the requirements stage, where we prepare the SRS or URS document.

94. What is meant by designing the application and coding the application?

Designing and Testing are two different phases in the software development process (SDLC):
1. Information Gathering
2. Analysis
3. Designing
4. Coding
5. Testing
6. Implementation and Maintenance.

If you want the answer in testing terms (STLC): designing tests includes preparing the Test Strategy, Test Plan, and Test Case documents, and testing means executing the test cases and generating test reports.

Designing the application means designing it as per the requirements: deriving the functional flow and alternative flows, how many modules we are handling, the data flow, etc.
There are two types of design:

HLD:

In high-level design the design team prepares the functional architecture, i.e. the functional flow.

LLD:

In low-level design the design team divides the total application into modules and derives the logic for each module.

Coding: writing the source code as per the LLD to meet the customer requirements.

95. What is meant by client and server?

96. I. A code inspection is held for new code. II. A code inspection is held for
reused code. III. A code inspection is held after the first compilation. IV. A code
inspection is held after the first error-free compilation. Which of the statements
above are true of code inspections?
1. I and IV 2. I, II, and IV 3. I, II, and III 4. II and IV 5. II and III?

1. I and IV

96. What is the best way to choose an automation tool?

We use automation mainly for versioned projects, i.e. when the project comes out in different versions: once we write the scripts for one version, we can reuse them for multiple versions with minor changes. So the main advantages of automation are:
1. It saves time.
2. It saves money.

97. What is the point of reference by which something can be measured?


1. Benchmark 2. Baseline 3. Metric 4. Measure 5. Indicator

Baseline

98. What is concurrency testing?

Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. Identifies and measures the level of
locking, deadlocking and use of single-threaded code and locking semaphores.
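As a minimal illustration, here is a Python sketch of multi-user access to a shared resource, where a lock prevents lost updates; the counter, thread count, and update count are invented for the example:

```python
import threading

# Hypothetical shared resource: a counter that many "users" update at once.
counter = 0
lock = threading.Lock()

def increment(times):
    """Simulate one user repeatedly updating a shared record."""
    global counter
    for _ in range(times):
        with lock:  # without this lock, concurrent updates can interleave and be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With proper locking the total is deterministic: 5 threads x 10000 updates.
print(counter)
```

Concurrency testing would probe exactly this kind of behavior: whether the application's own locking keeps the result correct under simultaneous access.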

99. When does metrics validation occur? 1. Throughout the life cycle 2. During the test 3. After the test 4. During requirements definition 5. After the final software release. Justify your answer with a simple explanation.

Throughout the life cycle, to identify lags early and overcome them.

100. The scenario is: while reviewing requirement docs (SRS), if you find or feel that a requirement does not meet the client's requirements, to whom do you report, and what is your action?

When the System Requirement Specification does not meet the client's requirements, it should be reported to the PL (who prepares the SRS) and documented in the test log and analysis of data, which should be discussed in the Management Review Meeting. The action is that the SRS should undergo a revision, updating it to match the CRS.

101. How to choose a test automation tool?

We have to choose depending on the application's complexity and the delivery time.

102. Did you come across STUBS and DRIVERS? How did you use these in your project?

Stub : A piece of code that simulates the activity of missing components.

Driver : A piece of code that passes test cases to another piece of code.

A general example: suppose you have three modules, A, B, and C. A and B are 100% complete, but C is only 50% complete, and you are under pressure to finish within a time frame. You know that building module C properly needs at least 15 days, so you build a dummy module C that takes only a day or two: this is a STUB. Once modules A, B, and the dummy C are ready, you integrate them and pass test cases through them to see how they work together: the code that does this is a DRIVER.
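The stub/driver idea can be sketched in Python; the module names, logic, and return values below are invented for illustration:

```python
# Hypothetical modules: A and B are complete; C is still under development.

def module_a(x):
    return x + 1

def module_b(x):
    return x * 2

def module_c_stub(x):
    """STUB: a dummy stand-in for the unfinished module C.
    It returns a fixed, predictable value so that A and B can be
    integration-tested before the real C exists."""
    return 42

def driver(test_inputs):
    """DRIVER: passes test cases through the integrated modules
    and collects the results for checking."""
    results = []
    for x in test_inputs:
        results.append(module_c_stub(module_b(module_a(x))))
    return results

print(driver([1, 2, 3]))  # the stubbed C returns 42 for every input
```

The stub simulates the missing component; the driver feeds test cases through the integrated chain, matching the two definitions above.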

103. How to determine if a test environment is appropriate?

81. On what basis you are fixing up the time for project completion?

Based on the test strategy and the testing approach.

82. How do you break down the project among team members?

It can depend on the following factors:

1) Number of modules
2) Number of team members
3) Complexity of the project
4) Time duration of the project
5) Team members' experience
etc.

83. Usually customers won't give all the requirements. How will you manage and collect all the necessary information?

Sometimes the customer may not provide all the requirements. In this situation the business analyst and project manager use their experience from handling this type of project; otherwise, we go through some reference sites, observe the functionality, and prepare use cases and requirements from them.

or

I agree with the above answer.


If we really face such a problem, then it is better to get information from the development team so that we know the exact details, or else use ad-hoc testing for the required job.

84. What are the qualities needed by a software tester?

A software tester must have the intent of showing where the product is not working as specified, and the basic attitude of showing the presence of errors. He must have the perspective of the customer, i.e. he has to use the system as if he were the client, and he has to strive for quality.

Or

A software tester must have these qualities:

1) He/she must observe the problem from both sides, user and programmer.
2) Must have a good understanding with the other team members.
3) Able to understand the programmer's view.
4) Once testing starts, do not leave it unfinished.
5) First test the requirements of the user.
6) Before starting testing, first analyze the project: the technology used in the project, all the flows, etc.

85. Did you write test cases in the design phase?

Yes, we can write test cases in the design phase; by the time designing is complete, we should be ready with the test cases.


86. While testing 10 test cases you found 20 bugs; how will you know which bug belongs to which test case?

Each bug will have a unique bug ID that is linked to the particular test case. We also use a traceability matrix to keep track of bugs and test cases.

87. What is the path in TestDirector where the test cases are stored?

c:\TDcomdir\TD_projectname\tests\test no

Usually test cases are stored under the Test Plan tab in TestDirector.

88. What is meant by a test suite?

A test suite is a "set of test cases".

A group of test cases covering functional (positive and negative) and GUI cases.
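As a sketch, Python's unittest module groups test cases into a suite in exactly this sense; the login function and its credentials are invented for the example:

```python
import unittest

# Hypothetical function under test.
def login(user, password):
    return user == "admin" and password == "secret"

class LoginFunctionalTests(unittest.TestCase):
    def test_valid_credentials(self):   # positive case
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password(self):    # negative case
        self.assertFalse(login("admin", "wrong"))

# A test suite is simply a grouped set of test cases run together.
suite = unittest.TestSuite()
suite.addTest(LoginFunctionalTests("test_valid_credentials"))
suite.addTest(LoginFunctionalTests("test_invalid_password"))

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(suite)
print(result.testsRun, result.wasSuccessful())
```

Running the suite executes both the positive and the negative case in one go, which is the whole point of bundling test cases into a suite.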

89. What is the difference between a baseline and a traceability matrix?

Baseline: the point at which some deliverable produced during the software engineering process is put under formal change control.

Traceability matrix: used to check whether any test cases have been left out, in both manual and automated testing.

A baseline is a software specification or functionality that has been reviewed or accepted for development. Once the functionality is baselined, we can start developing it.

Whereas a traceability matrix lists all the functionality or features and the test cases for each feature. Using the traceability matrix we can determine when to stop testing the project or application.
Generally a traceability matrix contains:
1. Use case ID (functionality/feature).
2. Description of the feature.
3. Priority of the feature.
4. Test case IDs for the feature. (Once the mapped test cases for each feature meet the success criteria, we can stop testing the project.)
5. The phase the feature belongs to (Unit, Component, Integration, System).
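A minimal sketch of such a matrix as Python data; the use-case IDs, features, and test-case IDs are invented for illustration:

```python
# A traceability matrix as a list of records, one per use case.
matrix = [
    {"usecase_id": "UC-01", "description": "User login",
     "priority": "High", "test_cases": ["TC-01", "TC-02"], "phase": "System"},
    {"usecase_id": "UC-02", "description": "Password reset",
     "priority": "Medium", "test_cases": ["TC-03"], "phase": "Integration"},
]

# Coverage check: every use case must map to at least one test case.
uncovered = [row["usecase_id"] for row in matrix if not row["test_cases"]]
print(uncovered)  # an empty list means every feature is covered
```

The coverage check is the mechanical version of "when to stop testing": testing can stop only when no feature is left without mapped test cases.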

90. What methodologies are you following?

Methodologies are considered from two angles:

1) There are a few life-cycle methodologies like the V-model, spiral (most common), waterfall, hybrid, prototype, etc., depending on the company.
2) It also depends on the clients and the requirements.

No. 2 is definitely related to no. 1.

Methodology can also mean the way we write test cases. There are different techniques, like:
1. Functional test cases
2. Equivalence partitioning test cases
3. Boundary value analysis
4. Cause-effect graphing and decision tables



73. Suppose you have raised a bug and posted it to the concerned developer, but he doesn't accept that it is a bug. What will you do next?

If the developer won't accept the bug we reported, we show it to our team leader or another senior person, who will discuss it with the developer or arrange a meeting about it.

or

Sometimes a bug is not reproducible in the development environment, and in that situation the developer doesn't accept it. We then give him screenshots; if the debate continues, we raise the issue in the bug triage meeting.

74. What is the role of a software test engineer in a software company?

The role of a software test engineer is to find defects. He/she should have a "test-to-break" attitude and should test the application with the customer in mind. He should work hard for quality.

75. Suppose you are testing a calculator application and you find problems like 1/1=2, 2/2=1, 3/3=6, 4/4=1, 5/5=10. How will you write the bug title and the bug description?

Bug title: Calculation errors in division

Description: The calculator returns wrong results for divisions, e.g. 1/1=2, 3/3=6, 5/5=10.
Severity: Critical
Priority: High/Medium (depends on how urgently it must be fixed)

Bug title: calculator_functionality_Division

Description: Division returns incorrect results when both operands are equal and odd (the result is doubled: 1/1=2, 3/3=6, 5/5=10), while equal even operands (2/2, 4/4) return 1.

76. Explain equivalence partitioning with an example?

When we have a requirement with a large class of input data, we split the large class into subsets.
For example, suppose a salary field accepts values from 10000 to 45000.
Equivalence partitioning takes inputs from the following subclasses:
less than 10000 (invalid)
between 10000 and 45000 (valid)
greater than 45000 (invalid)
Instead of choosing values from the whole large set, we split the inputs into valid and invalid subsets and pick representatives from each; this technique is equivalence partitioning.
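The salary example can be sketched in Python, with one representative value per partition; the classifier is a hypothetical implementation of the requirement:

```python
# Equivalence partitioning for the salary requirement above:
# valid range 10000-45000; anything below or above is invalid.

def classify_salary(sal):
    """Return which equivalence class a salary input falls into."""
    if sal < 10000:
        return "invalid-low"
    if sal <= 45000:
        return "valid"
    return "invalid-high"

# One representative value per partition is enough to cover that class.
representatives = [9999, 25000, 45001]
print([classify_salary(s) for s in representatives])
```

Three test values cover all three partitions, instead of testing thousands of salaries individually.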

77. Explain traceability matrix with an example?

A traceability matrix is a table that maps each requirement to its functionality in the FDS, its internal design in the IDS, its code, and its test cases.
The format is a table along these lines (column names reconstructed from the description above):

Requirement ID | FDS reference | IDS reference | Code reference | Test case IDs


78. What is the difference between Integration Testing and System Testing?

Integration testing is done after the completion of unit- or module-level testing.

System testing checks whether the application meets the required specifications.

Or

In integration testing, individual components are combined with other components to make sure the necessary communication, links, and data sharing occur properly. It is not system testing, because the components are not yet implemented in the operating environment.

System testing begins once the modules are integrated enough to perform tests in the whole system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

79. How would you present a test strategy for product testing?

A test strategy is a document prepared by the quality analyst/project manager. It specifies how the testing team should approach testing, depending on requirements gathering, the risks involved for the company, and customer requirements.

80. You may have worked on many projects. Do all the projects match up with customers' expectations?

No project ever matches the requirements 100%. We consider it acceptable only when it reaches a certain extent.


66. After inserting a record in the front end, how will you check the back end manually? Please explain.

Back-end checking is what we call DATABASE TESTING; you have to know SQL queries very well, because without queries you cannot test the database. As testers we are responsible for checking whether the data is stored in the back end or not; we don't have permission to change anything. So a simple query of the form "SELECT * FROM <table> WHERE <condition>" is enough for testing the back end.
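A minimal sketch of such a back-end check, using Python's built-in sqlite3 as a stand-in for the real database; the table and record are invented:

```python
import sqlite3

# Stand-in for the application's database (sqlite here; the real
# back end could be Oracle, SQL Server, etc.).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# Step 1: the front end inserts a record (simulated here directly).
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.commit()

# Step 2: the tester runs a SELECT to verify the record really
# reached the back end.
row = conn.execute("SELECT * FROM users WHERE id = 1").fetchone()
print(row)  # a matching row confirms the insert was stored
```

The tester's job ends at the SELECT: confirm the data is there, without modifying anything.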
67. Do you write separate test cases for regression testing? If yes, explain how to write them.

We do not write separate test cases for regression testing. On the newly modified build we execute the same test cases that failed previously.

OR

We do not write new test cases. We select some test cases from the test case document and execute them to check the bug fixes, choosing them so that all the basic functionality test cases and the test cases affected by the bugs are covered.

68. How do you do performance testing manually? Do you have test cases for that?

We can test it manually, but we don't get accurate results, and we don't have separate test cases for it; in practice we do it with a tool, i.e. LoadRunner, ACT, or WebLOAD.

69. What is the difference between Functional testing and Functionality testing?

Functional Testing:
The portion of security testing in which the advertised features of a system are tested
for correct operation.
OR

Quality assurance that a web site performs properly. All aspects of the user interface,
navigation between pages and off-site, multilingual navigation, etc. are tested. Testing is
required in all the current browsers and on the major operating systems and platforms.

OR
Functional testing is nothing but checking whether a given function works as per the specifications.
Ex: field validation, navigation, etc.

Functionality testing is checking whether our application meets the customer requirements.

Here we do many more tests.

Ex: Intersystem testing
Error-handling testing

70. What is Middleware? Can anybody explain me?

In the computer industry, middleware is a general term for any programming that serves to "glue together" or mediate between two separate and often already existing programs. A common application of middleware is to allow programs written for access to a particular database to access other databases. The systematic tying together of disparate applications, often through the use of middleware, is known as enterprise application integration.

Or

Software that mediates between an applications program and a network. It manages the
interaction between disparate applications across the heterogeneous computing
platforms. The Object Request Broker (ORB), software that manages communication
between objects, is an example of a
middleware program

71. Suppose you and your team member are working together. Your team member has raised a bug, but you know neither the application nor that functionality. Your TL gives you the task of assigning the severity and priority. How can you do it?

I would use ad-hoc testing for this type of bug: depending on past experience, I try to execute the test case and then write the severity and priority of that bug.

72. What are JOINS and the REGISTRY in SQL?

Joins: using SQL joins, you can retrieve data from more than one table or view, using keys etc. to define the join condition.
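A small sketch of a join, again using Python's sqlite3 as a stand-in database; the table names and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders    (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders    VALUES (10, 1), (11, 1);
""")

# An inner join retrieves data from more than one table at once,
# matching rows on the customer key.
rows = conn.execute("""
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
""").fetchall()
print(rows)  # each order row is paired with its customer's name
```

The ON clause is the key-based condition the definition refers to: it decides which rows from the two tables are combined.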

Registry : A Windows repository that stores configuration information for a computer.

For all the terms on SQL, please visit:


http://www.utexas.edu/its/unix/reference/oracledocs/v92/B10-
01_01/win.920/a9-490/glossary.htm

65. What kind of testing to be done in client server application and web
application? Explain

Web Testing

When testing websites, the following scenarios should be considered.


Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality:

In testing the functionality of the web sites the following should be tested.
Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on the database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.

Performance:

Performance testing can be applied to understand the web site's scalability, or to benchmark performance in the environment of third-party products, such as servers and middleware being considered for purchase.
Connection speed:
Tested over various networks like dial-up, ISDN, etc.
Load:
What is the number of users per unit time?
Check peak loads and how the system behaves.
Large amounts of data accessed by the user.
Stress:
Continuous load.
Performance of memory, CPU, file handling, etc.

Usability :

Usability testing is the process by which the human-computer interaction characteristics


of a system are measured, and weaknesses are identified for correction. Usability can
be defined as the degree to which a given piece of software assists the person sitting at
the keyboard to accomplish a task, as opposed to becoming an additional impediment
to such accomplishment. The broad goal of usable systems is often assessed using
several criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface:

In web testing the server-side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network, and database should be tested. Client-side compatibility is also tested on various platforms, using various browsers, etc.

Security:

The primary reason for testing the security of a web application is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection

Performance Testing

Performance testing is a rigorous usability evaluation of a working system under


realistic conditions to identify usability problems and to compare measures such as
success rate, task time and user satisfaction with requirements. The goal of
performance testing is not to find bugs, but to
eliminate bottlenecks and establish a baseline for future regression testing.

To conduct performance testing is to engage in a carefully controlled process of


measurement and analysis. Ideally, the software under test is already stable enough so
that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
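Those two expectations can be checked in a minimal Python sketch; the request handler, user count, and response-time threshold are invented stand-ins for a real load tool:

```python
import threading
import time

def handle_request():
    """Hypothetical request handler; the sleep stands in for server work."""
    time.sleep(0.01)

# Expectations: 20 concurrent users, each response under 1 second.
MAX_RESPONSE_TIME = 1.0
durations = []
lock = threading.Lock()

def simulated_user():
    start = time.perf_counter()
    handle_request()
    with lock:
        durations.append(time.perf_counter() - start)

threads = [threading.Thread(target=simulated_user) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

slow = [d for d in durations if d > MAX_RESPONSE_TIME]
print(len(durations), len(slow))  # total requests, and how many missed the target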

Load testing:

Load testing is usually defined as the process of exercising the system under test by
feeding it the largest tasks it can operate with. Load testing is sometimes called volume
testing, or longevity/endurance testing

Examples of volume testing:


Testing a word processor by editing a very large document
Testing a printer by sending it a very large job
Testing a mail server with thousands of user mailboxes

Examples of longevity/endurance testing:

Testing a client-server application by running the client in a loop against the server over
an extended period of time

Goals of load testing:

Expose bugs that do not surface in cursory testing, such as memory management bugs,
memory leaks, buffer overflows, etc.
Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. Performance testing uses load-testing techniques and tools for measurement and benchmarking purposes, at various load levels, whereas load testing operates at a predefined load level: the highest load that the system can accept while still functioning properly.

Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given
system or entity. This is designed to test the software with abnormal situations. Stress
testing attempts to find the limits at which the system will fail through abnormal quantity
or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this is to make sure that the system fails and recovers gracefully; this quality is known as recoverability. The point is not to break the system for its own sake, but to observe how the system reacts to failure. Stress testing observes the following:

Does it save its state or does it crash suddenly?


Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?
Etc.

Compatibility Testing

Testing to ensure compatibility of an application or Web site with different browsers, OSes, and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments; that is, testing how well the system performs in the particular software, hardware, or network environment. Compatibility testing can be performed manually or driven by an automated functional or regression test suite.

The purpose of compatibility testing is to reveal issues related to the product's interaction with other software as well as hardware. The product's compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.

Some typical compatibility tests include testing your application:


On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (i.e. zip drives, USBs, etc.)

1. I-soft
What should be done after writing test cases?

2. Covansys
Testing

1. What is bidirectional traceability, and how is it implemented?


2. What is an automation test framework?
3. Define the components present in a test strategy.
4. Define the components present in a test plan.
5. Define database testing.
6. What is the difference between QA and QC?
7. What is the difference between V&V?
8. What are the different types of test cases that you have written in your project?
9. Have you written a test plan?

SQL

1. What are joins? Define all the joins.


2. What is a foreign key?
3. Write an SQL query if you want to select data from one block which in turn reflects in another block.
Unix

1. Which command is used to run an interface?

2. How will you see hidden files?
3. What is the command used to set the date and time?
4. Some basic commands like copy, move, delete?
5. Which command is used to go back to the home directory?
6. Which command is used to view the current directory?

3. Virtusa

Testing

1. Tell me about Yourself?


2. Testing process followed in your company
3. Testing methodology
4. Where do you maintain the repositories?
5. What is CVS?
6. Bug tool used?
7. How will you prepare a traceability matrix if there is no business doc and functional doc?
8. How will you validate the functionality of the test cases if there is no business requirement document or user requirement document as such?
9. Testing process followed in your company?
10. Tell me about CMM Level 4. What are the steps to be followed to achieve the CMM Level 4 standards?
11. What is back-end testing?
12. What is unit testing?
13. How will you write test cases for a given scenario, i.e. main page, login screen, transaction, report verification?
14. How will you write a traceability matrix?
15. What is CVS and why is it used?
16. What will be specified in the defect report?
17. What is a test summary report?
18. What is a test closure report?
19. Explain the defect life cycle.
20. What will be specified in the test case?
21. What are the testing methodologies that you have followed in your project?
22. What kind of testing have you been involved in? Explain it.
23. What is UAT testing?
24. What are joins, and what are the different types of joins in SQL? Explain them.
25. What is a foreign key in SQL?

KLA Tencor

1. Bug life cycle?


2. Explain about the project, and draw the architecture of your project.
3. What are the different types of severity?
4. Defect tracking tools used?
5. What are the responsibilities of a tester?
6. Give an example of how you would write test cases for a scenario involving a login screen.

Aztec

1. What are the different types of testing followed?


2. What are the different levels of testing used during testing the application?
4. What type of testing will be done in installation testing or system testing?
5. What is meant by CMMI? What are the different CMM levels?
6. Explain the components involved in CMM Level 4.
7. Explain performance testing.
8. What is a traceability matrix and how is it done?
9. How can you differentiate severity and priority from technical and business points of view?
10. What is the difference between the test life cycle and the defect life cycle?
11. How will you ensure that you have covered all the functionality while writing test cases if there is no functional spec and no KT about the application?

Kinds of Testing

WHAT KINDS OF TESTING SHOULD BE CONSIDERED?

1. Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing: the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
4. Incremental integration testing: continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
6. Integration testing: testing of combined parts of an application to determine if they function together correctly; the 'parts' can be code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
7. Functional testing: black-box testing geared to the functional requirements of an application; testers should do this type of testing. This does not mean that programmers should not check that their code works before releasing it (which of course applies to any stage of testing).
8. System testing: black-box testing based on overall requirements specifications; covers all combined parts of the system.
9. End-to-end testing: similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
10. Sanity testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, it may not warrant further testing in its current state.
11. Regression testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
12. Acceptance testing: final testing based on specifications of the end-user or customer, or based on use by end users/customers over some limited period of time.
13. Load testing: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
14. Stress testing: term often used interchangeably with 'load' and 'performance' testing. Also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
15. Performance testing: term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or test plans.
16. Usability testing: testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used; programmers and testers are usually not appropriate as usability testers.
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
18. Recovery testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
19. Security testing: testing how well a system protects against unauthorized internal or external access, damage, etc.; it may require sophisticated testing techniques.
20. Compatibility testing: testing how well software performs in a particular hardware/software/operating-system/network environment.
21. Exploratory testing: often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
23. User acceptance testing: determining if software is satisfactory to an end-user or customer.
24. Comparison testing: comparing software weaknesses and strengths to competing products.
25. Alpha testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
26. Beta testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
27. Mutation testing: a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

Difference between client server testing and web server testing.


Web systems are one type of client/server. The client is the browser; the server is whatever is on the back end (database, proxy, mirror, etc). This differs from so-called "traditional" client/server in a few ways, but both systems are a type of client/server: a certain client connects via some protocol with a server (or set of servers).

Also understand that, in a strict difference based on how the question is worded, "testing a Web server" specifically means testing the functionality and performance of the Web server itself. (For example, I might test whether HTTP Keep-Alives are enabled and work, whether the logging feature is working, certain filters such as ISAPI, or general characteristics such as the load the server can take.) In the case of "client server testing", as worded, you might be doing the same general things to some other type of server, such as a database server. Also note that in some cases you can test the server directly, and in others you test it via the interaction of a client.

You can also test connectivity in both. (Any time you have a client and a server there has to be connectivity between them, or the system would be less than useful as far as I can see.) On the Web you are looking at HTTP protocols, and perhaps FTP depending on your site and whether your server is configured for FTP connections, as well as general TCP/IP concerns. In "traditional" client/server you may be looking at sockets, Telnet, NNTP, etc.
