
Theoretical Background

Brajnik (2001) defines quality as a property of a product (i.e. it applies to some entity, like the website, some prototype, or its information architecture) defined in terms of a system of attributes, like readability or coupling. The ISO 9126 definition of quality for software products is the totality of features and characteristics of a software product that bear on its ability to satisfy stated or implied needs. This is specified further by Brajnik (2001) as a composite property involving a set of interdependent factors: functionality, reliability, usability, maintainability, and portability.


The efficiency of a website is grounded mainly in its software quality: whether the software's algorithms work and whether the program runs smoothly with a minimal number of bugs. This study is anchored on Online Assignment theory, which emphasizes the efficiency of online-based assignments for students. This theory points to the problems students face with the accessibility and reliability of the software algorithm coded into the website.

Evaluations of students, according to Airasian, should provide information that identifies both strengths and weaknesses, so that strengths can be built upon and problem areas addressed. Such evaluations show students' performance in relation to specified standards, aptitude or expected growth, the amount of improvement, and the amount learned. Evaluations also help students develop to their fullest potential, which is why it is important to point out to students their strengths and weaknesses so as to shape them for the better.
However, even with these evaluations in place, students continue to complain about the results for certain reasons. As mentioned in the Rationale, the target of this study is to eliminate two specific reasons: the poor accessibility and the poor reliability of the website. In connection with this problem, the theory explains why students still offer evident excuses for failing the subject.
Web accessibility can be viewed and defined in different ways (Brajnik, 2008). One way is to consider whether a web page/website is conformant to a set of requirements such as those defined by WCAG 2.0 or by Section 508 (Vigo, Brajnik, and Connor, 2012).
Measuring the level of web accessibility is essential for assessing accessibility implementation and improvement over time, but finding appropriate measurements is non-trivial (Abou-Zahra, 2011). For instance, conformance to the Web Content Accessibility Guidelines (WCAG) is based on four ordinal levels of conformance (none, A, AA, and AAA), but these levels are too far apart to allow granular comparison and progress monitoring; if a website satisfied many success criteria in addition to all Level A success criteria, the website would still only conform to Level A of WCAG 2.0, and the additional effort would not be visible.
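As a toy illustration of this ordinal scheme, the following sketch (using abbreviated, stand-in criteria lists rather than the full WCAG catalogue) shows how conformance is computed as the highest level whose criteria are all satisfied, so that extra effort beyond Level A stays invisible:

```python
# Toy sketch of ordinal WCAG conformance: the reported level is the
# highest one whose success criteria are ALL satisfied, so additional
# criteria beyond that level do not change the result.

def conformance_level(satisfied, criteria_by_level):
    level = "none"
    for name in ("A", "AA", "AAA"):
        if set(criteria_by_level[name]) <= set(satisfied):
            level = name
        else:
            break
    return level

# Abbreviated stand-ins for the real success-criterion lists.
criteria = {
    "A":   ["1.1.1", "2.1.1"],
    "AA":  ["1.4.3", "2.4.7"],
    "AAA": ["1.4.6"],
}

# A site meeting all Level A criteria plus one AA criterion still
# reports plain Level A; the extra effort on 1.4.3 is not visible.
print(conformance_level(["1.1.1", "2.1.1", "1.4.3"], criteria))  # "A"
```

This granularity problem is exactly what the numerical metrics discussed next try to address.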
Using numerical metrics potentially allows a more continuous scale for measuring accessibility and, to the extent that the metrics are reliable, could be used for comparisons. However, it is unclear how metrics can be developed that fulfill requirements such as validity, reliability, and suitability. For example, is a web page with two faulty text alternatives out of ten images more accessible than another page with only one faulty text alternative out of five images? While such a count may be a fairly simple and reliable metric, it is generally not a valid reflection of accessibility without additional information about the context in which the faults occur, but identifying this context may introduce complexity, reduce reliability, and raise other challenges.
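The two hypothetical pages above can be compared with a minimal failure-rate sketch; the function name and page data are illustrative assumptions, not a metric taken from the cited literature:

```python
# Minimal count-based accessibility metric: fraction of images whose
# text alternatives are faulty. Hypothetical example data.

def failure_rate(faulty_images: int, total_images: int) -> float:
    """Fraction of images with faulty text alternatives."""
    return faulty_images / total_images if total_images else 0.0

page_a = failure_rate(2, 10)  # two faulty alternatives out of ten images
page_b = failure_rate(1, 5)   # one faulty alternative out of five images

# Both pages score 0.2: the simple count cannot tell them apart, even
# though the context of each fault may matter very differently.
print(page_a, page_b)
```

This is precisely the validity problem described above: the ratio is simple and reliable, but without context it says little about actual accessibility.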
On the other hand, other metrics can be defined if one assumes that accessibility is a quality that differs from conformance. An example of this is the United States federal procurement policy known as Section 508, which defines accessibility as the extent to which a technology can be used as effectively by people with disabilities as by those without. Such a definition would still measure web accessibility and provide results, but the results would be different from the conformance-based ones.
Research on web accessibility was conducted by Thomson, Burgstahler, and Comden (2003), who developed a reasonable system by which website accessibility could be manually evaluated, with the aim of determining the correlation between the results of this system and those of Bobby, the most common web accessibility evaluator tool. The data obtained using the manual evaluation was compared to that obtained automatically using Bobby. The researchers observed that universities with higher AWI (Accessible Web Index) ratings tended to have higher Bobby scores than universities with lower AWI ratings.
Another similar study about measuring web accessibility was made by Hudson (2011), using the same method as Thomson, Burgstahler, and Comden (2003). It involved manual evaluation and a test of the actual quality of the website code itself. The findings showed that one must not focus only on the code itself to assess the overall quality and accessibility of a website; the user must also be considered, and the different users' needs must be met.


In another study about web accessibility by Brajnik, Vigo, and Connor (2012), the researchers introduced a different way of evaluating a website's accessibility: the use of quality accessibility metrics. They developed a framework for such metrics by evaluating their most important factors, namely validity, reliability, sensitivity, adequacy, and complexity.


In yet another study, Brajnik (2002) evaluated the quality of a website in a rather unique way. Brajnik (2001) defines a quality model as a set of criteria that are used to determine whether a website reaches certain levels of quality. Quality models also require ways to assess whether such criteria hold for a website, for example user testing, heuristic evaluation, or automatic web-testing systems. Brajnik used the Goal, Question, Metric (GQM) approach in defining the quality model.
The GQM approach can be followed in any analysis that requires data collection. Quality assessment is such an activity, since the webmaster needs to acquire data about the site to determine its quality level. The GQM approach prescribes the following steps, described here in the context of website analysis:
a. Establish the goals of the analysis. Possible goals include: to detect and remove usability obstacles that hinder an online sale procedure; to learn whether the site conveys trust to the intended user population; to compare two designs of a site to determine which one is more usable; to determine performance bottlenecks in the website and its backend implementation.
b. Develop questions of interest whose answers would be sufficient to decide on the appropriate course of action. For example, a question related to the online sale obstacles goal mentioned above could be "how many users are able to buy product X in less than 3 minutes and with no mistakes?". Another question, related to the same goal, might be "are we using too many font faces on those forms?"
c. Establish measurement methods (i.e. metrics) to be used
to collect and organize data to answer the questions of
interest. A measurement method (see [Fenton and Pfleeger,
1997] for additional details) should include at least:
1. A description of how the data have to be elicited and what the sources of data are. For example, via user testing with a given task description, or by automatically inspecting online pages to learn how HTML links are organized.
2. A description of how data are identified. For example, what counts as a "mistake" or "success" in a user testing session; how "time to complete the task" is going to be measured; what constitutes a link in a webpage (i.e. which HTML tags should be considered: A, AREA, IMG, ...); what constitutes a local link to a website (e.g. same server, same subdomain, same path).
3. A description of how data are categorized. For example, what the different kinds of mistakes that users might make are going to be; how the "time to complete" can be broken down into different phases; whether there are different categories of local links: links to images, videos, HTML files, etc.
4. Measurement methods sharpen and refine the questions of interest. They also identify and clarify other attributes that enter a quality model. Since they are related to one or more questions, they also inherit the importance of those questions.
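The identification and categorization rules in items 2 and 3 above could be sketched as follows. The choice of tags (A, AREA, IMG) and the "same server" rule for local links follow the examples given in the text; the class and function names, and the sample page, are illustrative assumptions:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

# Tags the text suggests may count as "links" (A, AREA, IMG, ...),
# mapped to the attribute that carries the target URL.
LINK_TAGS = {"a": "href", "area": "href", "img": "src"}

class LinkCollector(HTMLParser):
    """Identify link-like elements in a page, per the definition
    chosen in step c.2 (which HTML tags count as links)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attr = LINK_TAGS.get(tag)
        if attr:
            for name, value in attrs:
                if name == attr and value:
                    self.links.append(value)

def is_local(link, base_url):
    """Categorize a link as local using the 'same server' rule
    mentioned in step c.2."""
    base = urlparse(base_url)
    target = urlparse(urljoin(base_url, link))
    return target.netloc == base.netloc

# Hypothetical page fragment for illustration.
page = ('<a href="/about.html">About</a>'
        '<img src="logo.png">'
        '<a href="http://other.example/x">Out</a>')
collector = LinkCollector()
collector.feed(page)
local = [l for l in collector.links if is_local(l, "http://site.example/")]
print(collector.links)  # all identified links
print(local)            # only the local ones
```

Different answers to the step c.2 questions (e.g. counting only A tags, or treating "same subdomain" as local) would change the collected data, which is exactly why the GQM approach asks for these definitions up front.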
