
Ajax (programming)

Ajax, shorthand for Asynchronous JavaScript and XML, is a web development
technique for creating interactive web applications. The intent is to make web pages
feel more responsive by exchanging small amounts of data with the server behind the
scenes, so that the entire web page does not have to be reloaded each time the user
makes a change. This is meant to increase the web page's interactivity, speed, and
usability.

The Ajax technique uses a combination of:

• XHTML (or HTML) and CSS, for marking up and styling information.
• The DOM accessed with a client-side scripting language, especially
ECMAScript implementations such as JavaScript and JScript, to dynamically
display and interact with the information presented.
• The XMLHttpRequest object is used to exchange data asynchronously with
the web server (a minimal sketch follows this list). In some Ajax frameworks
and in certain situations, an IFrame object is used instead of the
XMLHttpRequest object to exchange data with the web server; in other
implementations, dynamically added <script> tags may be used.
• XML is sometimes used as the format for transferring data between the server
and client, although any format will work, including preformatted HTML,
plain text, JSON and even EBML. These files may be created dynamically by
some form of server-side scripting.
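
As a minimal, illustrative sketch of this combination (not drawn from any particular
framework), the following JavaScript issues an asynchronous request and updates part
of a page. The URL /data.txt and the element id "content" are hypothetical
placeholders.

    // Create a request object; the ActiveXObject branch covers
    // Internet Explorer 5 and 6, which predate native XMLHttpRequest.
    function ajaxGet(url, callback) {
        var xhr = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            // readyState 4: the response has been fully received.
            if (xhr.readyState === 4 && xhr.status === 200) {
                callback(xhr.responseText);
            }
        };
        xhr.open("GET", url, true); // true = asynchronous
        xhr.send(null);
    }

    // Update one element without reloading the whole page.
    ajaxGet("/data.txt", function (text) {
        document.getElementById("content").innerHTML = text;
    });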

Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that
refers to the use of a group of technologies together.

History
The first use of the term in public was by Jesse James Garrett in February
2005. Garrett thought of the term while in the shower, when he realized the need for a
shorthand term to represent the suite of technologies he was proposing to a client.

Although the term "Ajax" was coined in 2005, most histories of the
technologies that enable Ajax start a decade earlier with Microsoft's initiatives in
developing Remote Scripting. Techniques for the asynchronous loading of content on
an existing Web page without requiring a full reload date back as far as the IFRAME
element type (introduced in Internet Explorer 3 in 1996) and the LAYER element
type (introduced in Netscape 4 in 1997, abandoned during early development of
Mozilla). Both element types had a src attribute that could take any external URL,
and by loading a page containing JavaScript that manipulated the parent page, Ajax-
like effects could be attained. This set of client-side technologies was usually grouped
together under the generic term of DHTML. Macromedia's Flash could also, from
version 4, load XML and CSV files from a remote server without requiring a browser
refresh.

Microsoft's Remote Scripting (or MSRS, introduced in 1998) acted as a more elegant
replacement for these techniques, with data being pulled in by a Java applet with
which the client side could communicate using JavaScript. This technique worked on
both Internet Explorer version 4 and Netscape Navigator version 4 onwards.
Microsoft then created the XMLHttpRequest object in Internet Explorer version 5 and
first took advantage of these techniques using XMLHttpRequest in Outlook Web
Access supplied with the Microsoft Exchange Server 2000 release.

The Web development community, first collaborating via the
microsoft.public.scripting.remote newsgroup and later through blog aggregation,
subsequently developed a range of techniques for remote scripting in order to enable
consistent results across different browsers. In 2002, a user-community modification
to Microsoft Remote Scripting was made to replace the Java applet with
XMLHttpRequest.

Remote Scripting frameworks such as ARSCIF surfaced in 2003, not long before
Microsoft introduced Callbacks in ASP.NET.

Since XMLHttpRequest is now implemented across the majority of browsers in use,
alternative techniques are used infrequently. However, they are still used where
compatibility with older Web sites or legacy applications is required.

In addition, the World Wide Web Consortium has several Recommendations that also
allow for dynamic communication between a server and user agent, though few of
them are well supported. These would include:

• The object element defined in HTML 4 for embedding arbitrary content types
into documents (it replaces inline frames under XHTML 1.1)
• The Document Object Model (DOM) Level 3 Load and Save Specification

Pros and cons
Pros

Bandwidth utilization

By generating the HTML locally within the browser, and only bringing down
JavaScript calls and the actual data, Ajax web pages can appear to load quickly since
the payload coming down is much smaller in size. An example of this technique is a
large result set where multiple pages of data exist. With Ajax, the HTML of the page
(e.g., a table control and its related TD and TR tags) can be produced locally in the
browser rather than brought down with the first page of data. If the user clicks other
pages, only the data is brought down and populated into the HTML generated in the
browser.

Interactivity

Ajax applications are mainly executed on the user's machine, by manipulating the
current page within their browser using document object model methods. Ajax can be
used for a multitude of tasks such as updating or deleting records; expanding web
forms; returning simple search queries; or editing category trees—all without the
requirement to fetch a full page of HTML each time a change is made. Generally only
small requests need to be sent to the server, and relatively short responses are sent
back. This permits the development of more interactive applications featuring more
responsive user interfaces due to the use of DHTML techniques.

While the Ajax platform is more restricted than the Java platform, current Ajax
applications effectively fill part of the niche first served by Java applets: extending the
browser with lightweight mini-applications....

Cons

Usability

Web applications that utilize Ajax may break the expected behavior of the browser's
back button. The difference between returning to a previous state of the current,
dynamically modified page versus going back to a previous static page might be a
subtle one, but users generally expect that clicking the back button in web
applications will move their browser to the last page it loaded, and in Ajax
applications this might not be the case.

Developers have implemented various solutions to this problem. These solutions can
involve using invisible IFRAMEs to invoke changes that populate the history used by
a browser's back button. Google Maps, for example, performs searches in an invisible
IFRAME and then pulls results back into an element on the visible web page. The
World Wide Web Consortium (W3C) did not include an iframe element in its
XHTML 1.1 Recommendation; the Consortium recommends the object element
instead.

Another issue is that dynamic web page updates make it difficult for a user to
bookmark a particular state of the application. Solutions to this problem exist, many
of which use the URL fragment identifier (the portion of a URL after the '#') to keep
track of, and allow users to return to, the application in a given state. This is possible
because many browsers allow JavaScript to update the fragment identifier of the URL
dynamically, so that Ajax applications can maintain it as the user changes the
application's state. This solution also improves back-button support. It is, however,
not a complete solution.
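
A hedged sketch of this fragment-identifier approach follows; the "#view=..."
format and the showView function are hypothetical illustrations, not a standard.

    // Record the application state in the URL fragment. Updating
    // location.hash does not reload the page, and in many browsers it
    // also creates a history entry, which aids back-button support.
    function saveState(viewName) {
        window.location.hash = "view=" + encodeURIComponent(viewName);
    }

    // Restore a bookmarked state when the page is loaded.
    function restoreState() {
        var hash = window.location.hash.replace(/^#/, "");
        if (hash.indexOf("view=") === 0) {
            // showView is a hypothetical function that re-renders
            // the page for the named state.
            showView(decodeURIComponent(hash.substring(5)));
        }
    }

    window.onload = restoreState;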

Response-time concerns

Network latency — or the interval between user request and server response — needs
to be considered carefully during Ajax development. Without clear feedback to the
user, smart preloading of data and proper handling of the XMLHttpRequest object,
users might experience delay in the interface of the web application, something which
users might not expect or understand. Additionally, when an entire page is rendered
there is a brief moment of re-adjustment for the eye when the content changes. The
lack of this re-adjustment with smaller portions of the screen changing makes the
latency more apparent. The use of visual feedback (such as throbbers) to alert the user
of background activity and/or preloading of content and data are often suggested
solutions to these latency issues.
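
For illustration, a minimal sketch of such visual feedback, assuming a hidden page
element with the hypothetical id "throbber":

    // Show the throbber while a request is in flight, hide it when
    // the response arrives.
    function loadWithFeedback(url, callback) {
        var throbber = document.getElementById("throbber");
        throbber.style.display = "inline"; // signal background activity
        var xhr = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                throbber.style.display = "none"; // activity finished
                if (xhr.status === 200) {
                    callback(xhr.responseText);
                }
            }
        };
        xhr.open("GET", url, true);
        xhr.send(null);
    }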

In general, the potential impact of latency, such as the effect of latency variance over
time, has not been "solved" by any of the open-source Ajax toolkits and frameworks
available today.

Accessibility
Using Ajax technologies in web applications poses many challenges for developers
interested in adhering to WAI accessibility guidelines. In addition, there are numerous
development groups working on US government projects which require strict
adherence to Section 508 compliance standards. Failure to comply with these
standards can often lead to cancellation of contracts or to lawsuits intended to ensure
compliance.

Because of this, developers need to provide fallback options for users on other
platforms or browsers, as most methods of Ajax implementation rely on features only
present in desktop graphical browsers.

Web developers use Ajax in some instances to provide content only to specific
portions of a web page, allowing data manipulation without incurring the cost of re-
rendering the entire page in the web browser. Non-Ajax users would ideally continue
to load and manipulate the whole page as a fallback, allowing the developers to
preserve the experience of users in non-Ajax environments (including all relevant
accessibility concerns) while giving those with capable browsers a much more
responsive experience.

Ajax framework

Ajax is a technique for building dynamic web pages on the client side. Data is read
from the server or sent to the server by JavaScript requests. However, some
processing at the server side is required to handle requests, i.e., finding and storing
the data. This is accomplished more easily with a framework dedicated to processing
Ajax requests. In the article that coined the "Ajax" term, J.J. Garrett describes the
technology as "an intermediary...between the user and the server." This Ajax engine
is intended to eliminate waiting by the user when the page attempts to access the
server. The goal of the framework is to provide this Ajax engine and the associated
server- and client-side functions.

Benefit of a framework
A framework eases the work of the Ajax programmer at two levels: on the client side,
it offers JavaScript functions to send requests to the server; on the server side, it
processes the requests, searches for the data, and transmits it to the browser. Some
frameworks are very sophisticated and provide a complete library to build web
applications.

JavaScript and Server Technology Independent Frameworks
Many AJAX frameworks and libraries rely solely upon JavaScript, contain no server
components, and therefore have no server technology dependencies. Such AJAX
libraries and frameworks include:

• Prototype, the basis of several other frameworks.
• Dojo Toolkit, which uses packages and reusable components.
• SmartClient, commercial since 2001, with an SOA architecture.
• qooxdoo, which includes advanced cross-browser layout managers and an
advanced build system.
• Clean AJAX, a simple open-source AJAX engine.

Such frameworks are agnostic as to which server-side technology you choose.
Usually such frameworks are optimized to consume XML, though JSON is becoming
popular as well. Even Microsoft Atlas is, in part, positioned as having JavaScript
libraries that can run without dependence on .NET.

PHP frameworks
A PHP framework is able to query the database, search data, build pages or parts of
pages, and publish the page or return data to the XMLHttpRequest object. PHP 5
specifically, thanks to its SimpleXML class, is able to create XML files that may be
returned to the object (a client-side sketch of consuming such a response follows the
list below). However, it is better to use a predefined library that performs the various
tasks.

• Sajax, a very simple library.
• Xajax, which uses only the XML format on the server side.
• PHPAjaxTags, a port of AjaxTags.
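
As a client-side counterpart, here is a hedged sketch of consuming an XML response
such as one a PHP script might emit via SimpleXML. The URL /search.php and the
<result> element name are hypothetical.

    function fetchResults(query) {
        var xhr = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // responseXML is parsed into a DOM document when the
                // server sends a Content-Type of text/xml.
                var results =
                    xhr.responseXML.getElementsByTagName("result");
                for (var i = 0; i < results.length; i++) {
                    // ... insert each result into the page here ...
                }
            }
        };
        xhr.open("GET", "/search.php?q=" + encodeURIComponent(query),
            true);
        xhr.send(null);
    }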

Java frameworks
Such frameworks make it possible to use Java web services interactively from a web
page. The most common Ajax-specific frameworks are:

• DWR, a remoting toolkit (see also Reverse Ajax) and DHTML library
• Google Web Toolkit, a widget library with a Java-to-JavaScript compiler

• SmartClient, a commercial, complete AJAX framework with a Java integration
server

More conventional frameworks are also incorporating Ajax features:

• Spring, Struts and RIFE all use DWR internally
• JavaServer Faces is often extended using Ajax4Jsf
• JSP-based applications can be extended using taglibs from AjaxTags and
AjaxAnywhere

.NET frameworks
These tools run only on the .NET platform, and thus under Windows and perhaps
under compatible ports such as Mono.

• Microsoft Atlas (now renamed Microsoft AJAX Library) works best with
.NET services behind it, even if it has JavaScript libraries that can run
without .NET.
• Ajax.NET Professional serializes .NET data to the JSON format.

ColdFusion frameworks
Libraries include:

• ajaxCFC. Object oriented framework.

Python frameworks
These frameworks require the Python programming language to run, and their goal is
to bring together several Python components designed to help in building web
applications.
An example of such a framework:

• TurboGears

Comet (programming)

Comet is a programming technique that enables web servers to send data to the client
without the client having to request it. It allows the creation of event-driven web
applications, hosted in the browser.

Motivation
Traditionally, web pages have been delivered to the client only when the client
requested them. For every client request, the browser initiates an HTTP connection to the
web server, which then returns the data and the connection is closed. The drawback of
this approach is that the page displayed is updated only when the user explicitly
refreshes the page or moves to a new page. Since transferring entire pages takes a
long time, refreshing pages introduces a long latency.

To solve this problem, Ajax can be used, allowing the web browser to request
only the part of the web page that has changed and to update that portion accordingly.
Since the overall data transferred is reduced, latency is also reduced, and overall
responsiveness of the web site hosting the application increases. Further, by using
asynchronous background data transfer, where the user works with partly received
data as the rest of the data is being retrieved, the responsiveness of the web
application can be further increased.

But this practice also suffers from the problem that the client has to request data
before the server will send it. This becomes a major hurdle when designing
applications which have to wait for some event to occur at the server side, such as
another user sending data to the server, before they can proceed, but have no way of
knowing when the event will occur.

A solution would be to design the application such that it will intermittently poll the
server to find out if the event has occurred. But this is not an elegant solution as the
application will waste a lot of time querying for the completion of the event, thereby
directly impacting the responsiveness of the application. In addition, a lot of network
bandwidth will be wasted.

A better solution would be for the server to send a message to the client when the
event occurs, without the client having to ask for it. Such a client will not have to
check with the server periodically; rather it can continue with other work and work on
the data generated by the event when it has been pushed by the server. This is exactly
what Comet sets out to achieve.

Technology
Unlike normal data transfer between web servers and browsers, connections to a
server employing Comet require special constructs. The client-side application must
keep working while asynchronously initiating a connection to the server. At the
server, even though the request is not serviced at once, the connection remains
unbroken (and unused) until the event occurs. When the event occurs, the server
pushes the data generated by the event to the client over the already established
connection.

The connection can be severed after that, or may be kept open if there can be further
occurrences of the event. If there are more occurrences, the server can send the event
data over the connection, without the client having to request each of them explicitly.
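
A minimal long-polling sketch of this pattern, assuming a hypothetical /events
endpoint that the server holds open until an event occurs, and a hypothetical
handleEvent function:

    function waitForEvents() {
        var xhr = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) {
                    handleEvent(xhr.responseText); // hypothetical handler
                }
                // Re-open the connection at once, so the server can
                // push the next event when it occurs.
                waitForEvents();
            }
        };
        xhr.open("GET", "/events", true);
        xhr.send(null);
    }

    waitForEvents();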

Scalability and Reliability

There are some potential concerns regarding how well a web server implementing
Comet can scale. Because connections are kept alive until events occur, the server has
to cope with many open connections if events occur infrequently. The problem worsens
if different connections wait for different events. Managing this large number of
connections poses a considerable load on the system. But servers and other supporting
applications are being developed which have better support for such conditions.

Jetty (Java) and Apache 2.2 provide intrinsic support for such situations, using
asynchronous network operations to reduce the overhead of managing the
connections. The web server logic then has to deal only with sending messages back
to the user once the event has occurred. Since almost all operating systems have an
event-driven I/O subsystem, this is not a problem.

The problem arises with guaranteed delivery of the message to the client. The client
may be busy doing something else, and if it cannot temporarily suspend that job to
receive the results, the message will be lost. To get around this, some Message Oriented
Middleware is required that will allow the client to retrieve the received message at
its own pace. Also, browsers, which host the client application, need to be designed to
allow the client to connect to as many servers as it requires, and stay connected as
long as required, to receive messages from them.

Alternative

There are several alternative approaches to server push in Ajax-like paradigms.

• Steve and Jay McDonald's Fjax
• Pjax (Push technology for Ajax)
• Server-Sent Events, a standards proposal specified by the WHATWG

HTTP streaming

HTTP streaming is a mechanism for sending data from a Web server to a Web
browser in response to an event. HTTP Streaming is achieved through several
common mechanisms.

In one such mechanism the web server does not terminate the response to the client
after data has been served. This differs from the typical HTTP cycle in which the
response is closed immediately following data transmission.

The web server leaves the response open so that if an event is received, it can
immediately be sent to the client. Otherwise the data would have to be queued until
the client's next request is made to the web server. The act of repeatedly queuing and
re-requesting information is known as a polling mechanism.
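
As a hedged sketch of how a client might consume such an open response: in
browsers that fire readyState 3 repeatedly as data arrives (historically not Internet
Explorer, which would not expose responseText until the response completed, one
reason hidden-IFRAME variants of this technique also existed), responseText can be
read incrementally. The /stream endpoint and handleChunk function are hypothetical.

    var received = 0; // how much of responseText has been processed

    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        // readyState 3 fires repeatedly while the response stays open.
        if (xhr.readyState === 3 || xhr.readyState === 4) {
            var chunk = xhr.responseText.substring(received);
            received = xhr.responseText.length;
            if (chunk) {
                handleChunk(chunk); // hypothetical handler for new data
            }
        }
    };
    xhr.open("GET", "/stream", true);
    xhr.send(null);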

Typical uses for HTTP streaming include market data distribution (stock tickers), live
chat/messaging systems, online betting and gaming, sports results, monitoring consoles
and sensor network monitoring.

Progressive enhancement

Progressive enhancement is a label for a particular strategy of Web design that
emphasizes accessibility, semantic markup, and external stylesheet and scripting
technologies, in a layered fashion that allows everyone to access the basic content and
functionality of a Web page, using any browser or Internet connection, while also
enabling those with better bandwidth or more advanced browser software to
experience an enhanced version of the page.

History
"Progressive Enhancement" was coined by Steven Champeon, of Web design firm
hesketh.com, in a series of articles and presentations for Webmonkey and the Sxsw
Interactive conference between March and June of 2003.

Introduction and background


The strategy is an attempt to subvert the traditional Web design strategy known as
"graceful degradation", wherein designers would attempt to create Web pages for the
latest browsers that would also work well in older versions of browser software.
Graceful degradation was supposed to allow the page to "degrade", or remain
presentable even if certain technologies assumed by the design were not present,
without being jarring to the user of such older software (hence "gracefully"). In
practice, graceful degradation has been supplanted by an attitude that the end user
should "just upgrade". This attitude is due to time and budget constraints, limited
access to testing alternate browser software, as well as the widespread belief that
"browsers are free". Unfortunately, upgrading is often not possible due to IT
department policies, older hardware, and other reasons. The "just upgrade" attitude
also ignores deliberate end user choices and the existence of a variety of browser
platforms, many of which run on handhelds or in other contexts where available
bandwidth is paltry, or where support for sound or color, limited screen size, and so
forth are far different from the typical graphical desktop browser.

In Progressive Enhancement (PE) the strategy is deliberately reversed: a basic markup
document is created, geared towards the lowest common denominator of browser
software functionality, and then the designer adds in functionality or enhancements to
the presentation and behavior of the page, using modern technologies such as
Cascading Style Sheets or JavaScript (or other advanced technologies, such as Flash
or Java applets or SVG, etc.). All such enhancements are to be externally linked, in
order to avoid forcing browsers of lesser capability to "eat" data they do not
understand and cannot handle, or which would swamp their Internet connection.

The PE approach is derived from Champeon's early experience (c. 1993-4) with
SGML, before working with HTML or any Web presentation languages, as well as
from later experiences working with CSS to work around browser bugs. In those early
SGML contexts, semantic markup was of key importance, whereas presentation was
nearly always considered separately, rather than being embedded in the markup itself.
This concept is variously referred to in markup circles as the rule of separation of
content and presentation, separation of content and style, or of separation of
semantics and presentation. As the Web evolved in the mid-nineties, but before CSS
was introduced and widely supported, this cardinal rule of SGML was repeatedly
violated by HTML's extenders. As a result, web designers were forced to adopt new,
disruptive technologies and tags in order to remain relevant. With a nod to graceful
degradation, in recognition that not everyone had the latest browser, many began to
simply adopt design practices and technologies only supported in the most recent and
perhaps the single previous major browser releases. For several years, much of the
Web simply did not work in anything but the most recent, most popular browsers.
This remained true until the rise and widespread adoption of and support for CSS, as
well as many populist, grassroots educational efforts (from Eric Costello, Owen
Briggs, Dave Shea, and others) showing Web designers how to use CSS for layout
purposes.

PE is based on a recognition that the core assumption behind "graceful degradation"
— that browsers always got faster and more powerful — was proving itself false with
the rise of handheld and PDA devices with low-functionality browsers and serious
bandwidth constraints. In addition, the rapid evolution of HTML and related
technologies in the early days of the Web has slowed, and very old browsers have
become obsolete, freeing designers to use powerful technologies such as CSS to
manage all presentation tasks and JavaScript to enhance complex client-side behavior.

First proposed as a somewhat less unwieldy catchall phrase to describe the delicate art
of "separating document structure and contents from semantics, presentation, and
behavior", and based on the then-common use of CSS hacks to work around rendering
bugs in specific browsers, the PE strategy has taken on a life of its own as new
designers have embraced the idea and extended and revised the approach.

Core principles
Progressive Enhancement consists of the following core principles:

• all basic content should be accessible to all browsers
• all basic functionality should be accessible to all browsers
• sparse, semantic markup contains all content
• enhanced layout is provided by externally linked CSS
• enhanced behavior is provided by unobtrusive, externally linked JavaScript (see
the sketch after this list)
• end user browser preferences are respected
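
As a sketch of the unobtrusive-JavaScript principle, the following externally linked
script enhances an ordinary link, <a href="help.html" id="help-link">Help</a>, only
when the needed objects exist; the ids are hypothetical, and without scripting the
plain link still works.

    window.onload = function () {
        var link = document.getElementById("help-link");
        if (!link || !window.XMLHttpRequest) {
            return; // no enhancement; the basic link keeps working
        }
        link.onclick = function () {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    // "content" is a hypothetical target element.
                    document.getElementById("content").innerHTML =
                        xhr.responseText;
                }
            };
            xhr.open("GET", this.href, true);
            xhr.send(null);
            return false; // cancel the normal page navigation
        };
    };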

Support and adoption


Jim Wilkinson created a page for Progressive Enhancement on the CSS mailing list
Wiki to collect some tricks and tips and to explain the overall strategy. Designers such
as Jeremy Keith have shown how the approach can be used harmoniously with still
other approaches to modern Web design (such as Ajax) to provide flexible, but
powerful, user experiences. Others, including Dave Shea, have helped to spread the
adoption of the term to refer to CSS-based design strategies. Organizations such as the
Web Standards Project have embraced PE as a basis for their educational efforts. Nate
Koechley at Yahoo! makes extensive reference to PE in his own approach to Web
design and browser support, Graded Browser Support (GBS). Steve Chipman at AOL
has referred to PE as a basis for his Web design strategy. Chris Heilmann discusses
the importance of targeted delivery of CSS so that each browser only gets the content
(and enhancements) it can handle. Many Web design agencies have begun to
advertise that they provide progressive enhancement as a core service.

Benefits for accessibility


Web pages created according to the principles of PE are by their nature more
accessible, because the strategy demands that basic content always be available, not
obstructed by commonly unsupported or easily disabled scripting. Additionally, the
sparse markup principle makes it easier for tools that read content aloud to find that
content. It is unclear as to how well PE sites work with older tools designed to deal
with table layouts, "tag soup," and the like.

Benefits for search engine optimization (SEO)


Improved results with respect to search engine optimization are another side effect of
a PE-based Web design strategy. Because the basic content is always accessible, and
the markup is clean and easily parsed for structure and intent, it becomes much easier
to tune the content to improve SEO results.

Criticism and responses


Some skeptics, such as Garret Dimon, have expressed their concern that PE is not
workable in situations that rely heavily on JavaScript to achieve certain user interface
presentations or behaviors. Jeremy Keith is to present Hijax: Progressive
Enhancement with Ajax at XTech06, suggesting that the two are compatible. Others
have countered with the point that informational pages should be coded using PE in
order to be indexed by spiders, and that even Flash-heavy pages should be coded
using PE. In a related area, many have expressed their doubts concerning the principle
of the separation of content and presentation in absolute terms, pushing instead for a
realistic recognition that the two are (and some would say should be) inextricably
linked.

Reverse Ajax

Reverse Ajax, shorthand for Reverse Asynchronous JavaScript and XML, is a web
development technique for creating interactive web applications. The intent is to make
web pages feel more responsive by providing real time information in a web page.
This is meant to increase the web page's interactivity, speed, and usability.

Like DHTML, LAMP, Ajax and SPA, Reverse Ajax is not a technology in itself, but
a term that refers to the use of a group of technologies together. These technologies
include:

• AJAX, for handling the data on the client side in a smooth and interactive way,
and for passing data between server and client.
• A technology for pushing server data to a browser:
o Comet: a connection between server and client is kept open, by
slowly loading a page in a hidden frame.
o Piggyback: extra data is added (piggybacked) onto a normal client-
server interaction.
o Polling: the client repeatedly queries (polls) the server (a minimal
sketch follows this list).
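
A minimal sketch of the polling option, where the /messages endpoint and the
displayMessages function are hypothetical:

    function poll() {
        var xhr = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                displayMessages(xhr.responseText); // hypothetical
            }
        };
        xhr.open("GET", "/messages", true);
        xhr.send(null);
    }

    // Query every five seconds; unlike Comet, requests are made even
    // when nothing has changed.
    setInterval(poll, 5000);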

Reverse Ajax is different from Ajax, as Reverse Ajax is a suite of technologies for
pushing data from a server to a client. These technologies are built upon an Ajax
framework.

Rich Internet application

Rich Internet applications (RIAs) are web applications that have the features and
functionality of traditional desktop applications. RIAs typically transfer the
processing necessary for the user interface to the web client but keep the bulk of the
data (i.e., the program state and the data itself) back on the application
server.

RIAs typically:

• run in a web browser, and do not require software installation
• run locally in a secure environment called a sandbox
• can be "occasionally connected", wandering in and out of hot-spots or from
office to office.

History of RIAs
The term "Rich Internet application" was introduced in a Macromedia whitepaper in
March 2002, though the concept had been around for a number of years before that
under different names such as:

• Remote Scripting, by Microsoft, circa 1998
• X Internet, by Forrester Research, in October 2000
• Rich (Web) clients
• Rich web application

Comparison to standard web applications


Traditional web applications centered all activity around a client-server architecture
with a thin client. Under this system all processing is done on the server, and the
client is only used to display static (in this case HTML) content. The biggest
drawback with this system is that all interaction with the application must pass
through the server, which requires data to be sent to the server, the server to respond,
and the page to be reloaded on the client with the response. By using a client side
technology which can execute instructions on the client's computer, RIAs can
circumvent this slow and synchronous loop for many user interactions. This
difference is somewhat analogous to the difference between "terminal and
mainframe" and Client-server/Fat client approaches.

Internet standards have evolved slowly and continually over time to accommodate
these techniques, so it is hard to draw a strict line between what constitutes an RIA
and what does not. But all RIAs share one characteristic: they introduce an
intermediate layer of code, often called a client engine, between the user and the
server. This client engine is usually downloaded at the beginning of the application,
and may be supplemented by further code downloads as the application progresses.
The client engine acts as an extension of the browser, and usually takes over
responsibility for rendering the application's user interface and for server
communication.

What can be done in an RIA may be limited by the capabilities of the system used on
the client. But in general, the client engine is programmed to perform application
functions that its designer believes will enhance some aspect of the user interface, or
improve its responsiveness when handling certain user interactions, compared to a
standard Web browser implementation. Also, while simply adding a client engine
does not force an application to depart from the normal synchronous pattern of
interactions between browser and server, in most RIAs the client engine performs
additional asynchronous communications with servers.

Benefits

Because RIAs employ a client engine to interact with the user, they are:

• Richer. They can offer user-interface behaviors not obtainable using only the
HTML widgets available to standard browser-based Web applications. This
richer functionality may include anything that can be implemented in the
technology being used on the client side, including drag and drop, using a
slider to change data, calculations performed only by the client and which do
not need to be sent back to the server (e.g. an insurance rate calculator), etc.

• More responsive. The interface behaviors are typically much more responsive
than those of a standard Web browser that must always interact with the
server.

The most sophisticated examples of RIAs exhibit a look and feel approaching that of a
desktop environment. Using a client engine can also produce other performance
benefits:

• Client/Server balance. The demand for client and server computing resources
is better balanced, so that the Web server need not be the workhorse that it is
with a traditional Web application. This frees server resources, allowing the
same server hardware to handle more client sessions concurrently.

• Asynchronous communication. The client engine can interact with the server
asynchronously -- that is, without waiting for the user to perform an interface
action like clicking on a button or link. This option allows RIA designers to
move data between the client and the server without making the user wait.
Perhaps the most common application of this is prefetching, in which an
application anticipates a future need for certain data, and downloads it to the
client before the user requests it, thereby speeding up a subsequent response.
Google Maps uses this technique to move adjacent map segments to the client
before the user scrolls their view.

• Network efficiency. The network traffic may also be significantly reduced
because an application-specific client engine can be more intelligent than a
standard Web browser when deciding what data needs to be exchanged with
servers. This can speed up individual requests or responses because less data is
being transferred for each interaction, and overall network load is reduced.
However, use of asynchronous prefetching techniques can neutralize or even
reverse this potential benefit. Because the code cannot anticipate exactly what
every user will do next, it is common for such techniques to download extra
data, not all of which is actually needed, to many or all clients.

Shortcomings and restrictions

Shortcomings and restrictions associated with RIAs are:

• Sandbox. Because RIAs run within a sandbox, they have restricted access to
system resources. When access to expected/assumed resources is disabled
RIAs may fail to operate correctly.

• Disabled scripting. JavaScript or another scripting language is often required.
If the user has disabled active scripting in their browser, the RIA will not
function properly, if at all.

• Client processing speed. To achieve platform independence, some RIAs use
client-side scripts written in interpreted languages such as JavaScript, with a
consequential performance hit. This is not an issue with compiled client
languages such as Java, where performance is comparable to that of traditional
compiled languages, or with Flash movies, in which the bulk of the operations
are performed by the native code of the Flash player.

• Script download time. Although it does not have to be installed, the additional
client-side intelligence (or client engine) of RIA applications needs to be
delivered by the server to the client. While much of this is usually
automatically cached it needs to be transferred at least once. Depending on the
size and type of delivery, script download time may be unpleasantly long. RIA
developers can lessen the impact of this delay by compressing the scripts, and
by staging their delivery over multiple pages of an application.

• Loss of integrity. If the application-base is X/HTML, conflicts arise between
the goal of an application (which naturally wants to be in control of its
presentation and behaviour) and the goals of X/HTML (which naturally wants
to give away control). The DOM interface for X/HTML makes it possible to
create RIAs, but by doing so makes it impossible to guarantee correct
function. Because an RIA client can modify the RIA's basic structure and
override presentation and behaviour, it can cause an irrecoverable client
failure or crash. Eventually, this problem could be solved by new client-side
mechanisms that granted an RIA client more limited permission to modify
only those resources within the scope of its application. (Standard software
running natively does not have this problem because by definition a program
automatically possesses all rights to all its allocated resources).

Management complications

The advent of RIA technologies has introduced considerable additional complexity
into Web applications. Traditional Web applications built using only standard HTML,
having a relatively simple software architecture and being constructed using a limited
set of development options, are relatively easy to design and manage. For the person
or organization using RIA technology to deliver a Web application, this additional
complexity makes applications harder to design, test, measure, and support.

Use of RIA technology poses several new Service Level Management ("SLM")
challenges, not all of which are completely solved today. SLM concerns are not
always the focus of application developers, and are rarely if ever perceived by
application users, but they are vital to the successful delivery of an online application.
Aspects of the RIA architecture that complicate management processes are:

• Greater complexity makes development harder. The ability to move code to
the client gives application designers and developers far more creative
freedom. But this in turn makes development harder, increases the likelihood
of defects (bugs) being introduced, and complicates software testing activities.
These complications lengthen the software development process, regardless of
the particular methodology or process being employed. Some of these issues
may be mitigated through the use of a Web application framework to
standardize aspects of RIA design and development. However, introducing
greater complexity into a software solution will always complicate and
lengthen the testing process, because of the greater number of use cases to be
tested. And incomplete testing lowers the application's quality and its
reliability during use.

• RIA architecture breaks the Web page paradigm. Traditional Web applications
can be viewed as a series of Web pages, each of which requires a distinct
download, initiated by an HTTP GET request. This model has been
characterized as the Web page paradigm. RIAs invalidate this model,
introducing additional asynchronous server communications to support a more
responsive user interface. In RIAs, the time to complete a page download may
no longer correspond to something a user perceives as important, because (for
example) the client engine may be prefetching some of the downloaded
content for future use. New measurement techniques must be devised for
RIAs, to permit reporting of response time quantities that reflect the user's
experience. In the absence of standard tools that do this, RIA developers must
instrument their application code to produce the measurement data needed for
SLM.

• Asynchronous communication makes it harder to isolate performance
problems. Paradoxically, actions taken to enhance application responsiveness
also make it harder to measure, understand, report on, and manage
responsiveness. Some RIAs do not issue any further HTTP GET requests from
the browser after their first page, using asynchronous requests from the client
engine to initiate all subsequent downloads. The RIA client engine may be
programmed to continually download new content and refresh the display, or
(in applications using the Comet approach) a server-side engine can keep
pushing new content to the browser over a connection that never closes. In
these cases, the concept of a "page download" is no longer applicable. These
complications make it harder to measure and subdivide application response
times, a fundamental requirement for problem isolation and service level
management. Tools designed to measure traditional Web applications may --
depending on the details of the application and the tool -- report such
applications either as a single Web page per HTTP request, or as an unrelated
collection of server activities. Neither conclusion reflects what is really
happening at the application level.

• The client engine makes it harder to measure response time. For traditional
Web applications, measurement software can reside either on the client
machine or on a machine that is close to the server, provided that it can
observe the flow of network traffic at the TCP and HTTP levels. Because
these protocols are synchronous and predictable, a packet sniffer can read and
interpret packet-level data, and infer the user’s experience of response time by
tracking HTTP messages and the times of underlying TCP packets and
acknowledgements. But the RIA architecture reduces the power of the packet
sniffing approach, because the client engine breaks the communication
between user and server into two separate cycles operating asynchronously --
a foreground (user-to-engine) cycle, and a background (engine-to-server)
cycle. Both cycles are important, because neither stands alone; it is their
relationship that defines application behavior. But that relationship depends
only on the application design, which cannot (in general) be inferred by a
measurement tool, especially one that can observe only one of the two cycles.
Therefore the most complete RIA measurements can only be obtained using
tools that reside on the client and observe both cycles.

The current status of RIA development and adoption


RIAs are still in the early stages of development and user adoption. There are a
number of restrictions and requirements that remain:

• Browser adoption: Many RIAs require modern web browsers in order to run.
Advanced JavaScript engines must be present in the browser as RIAs use
techniques such as XMLHttpRequest for client-server communication, and
DOM Scripting and advanced CSS techniques to enable the rich user interface.

• Web standards: Differences between web browsers can make it difficult to
write an RIA that will run across all major browsers. The consistency of the
Java platform, particularly after Java 1.1, makes this task much simpler for
RIAs written as Java applets.

• Development tools: Some Ajax Frameworks provide an integrated
environment in which to build RIA and B2B web applications.

• Accessibility concerns: Additional interactivity may require technical
approaches that limit applications' accessibility.

• User adoption: Users expecting standard web applications may find that some
accepted browser functionality (such as the "Back" button) may have
somewhat different or even undesired behaviour.

Justifications
Although developing applications to run in a web browser is a much more limiting,
difficult, and intricate process than developing a regular desktop application, the
efforts are often justified because:

• installation is not required -- updating and distributing the application is an
instant, automatically handled process
• users can use the application from any computer with an internet connection,
and usually regardless of what operating system that computer is running
• web-based applications are generally less prone to viral infection than running
an actual executable
• as web usage increases, computer users are becoming less willing to go to the
trouble of installing new software if a browser-based alternative is available

This last point is often true even if this alternative is slower or not as feature-rich. A
good example of this phenomenon is webmail.

Methods and techniques


JavaScript

The first major client-side technology able to run code that was installed on a
majority of web clients was JavaScript. Although its uses were relatively limited at
first, combined with layers and other developments in DHTML it has become
possible to piece together an RIA system without the use of a unified client-side
solution. Ajax is a newer term coined to refer to this combination of techniques,
recently used most prominently by Google for projects such as Gmail and Google
Maps. However, creating a large application in this framework is very difficult, as
many different technologies must interact to make it work, and browser
compatibility requires a lot of effort. In order to make the process easier,
several AJAX Frameworks have been developed.

The "rich" in "rich Internet applications" may also suffer from an all-JavaScript
approach, because you are still bound by the media types predictably supported by the
world's various deployed browsers -- video will display in different ways in different
browsers with an all-JavaScript approach, audio support will be unpredictable,
realtime communications, whiteboarding, outbound webcams, opacity compositing,
socket support, all of these are implemented in different ways in different browsers,
so all-JavaScript approaches tend to cluster their "richness" around text refreshes and
image refreshes.

Adobe Flash Player

Adobe Systems is one vendor in this area. Their Shockwave browser extension, and
later their Flash Player, have provided browsers with text-refresh abilities since the
mid-1990s, and Adobe (then Macromedia) invented the term "rich Internet
applications" in 2002.

Adobe Flash technology includes Flash Media Server, Flash Remoting, Central,
Breeze, and Flex, all of which are rendered in viewers' differing browsers via Adobe
Flash Player. Adobe points to research carried out by NPD Online which shows that
98% of Internet users have some version of the Flash plug-in, and major services such
as Google Video, Hollywood, YouTube and automakers all use these abilities as a
matter of course. The Flash Player is supported by many browsers (including Internet
Explorer, Mozilla Firefox and Opera), and is distributed as a free browser plug-in.
The Player runs across various versions of the Microsoft Windows, Mac OS, and
GNU/Linux operating systems, on set-top boxes and mobile phones, on ambient
signage and automobile consoles. The Flash Player is essentially a widely-supported
virtual machine that executes byte-code found in files following the SWF format.

ActiveX Controls

Embedding ActiveX controls into HTML is a very powerful way to develop rich
Internet applications; however, they are only guaranteed to run properly in Internet
Explorer. At the time of this writing, the Macromedia Flash Player for Internet
Explorer is implemented as an ActiveX control for Microsoft environments, as well as
in multi-platform Netscape Plugin wrappers for the wider world. ActiveX per se is a
good choice for building corporate applications only if the corporation has
standardized on Internet Explorer as its primary web browser.

Java applets

Java applets run in standard HTML pages and generally start automatically when their
web page is opened with a modern web browser. Java applets have access to the
screen (inside an area designated in its page's HTML), speakers, keyboard and mouse
of any computer their web page is opened on, as well as access to the Internet, and
provide a sophisticated environment capable of real time applications.

Java applications

Java Web Start allows desktop Java applications to utilize the client workstation. It
also allows the application to break free of the web browser. This approach offers the
richest functionality without the limitations of HTML or the specific web browser in
use.

User Interface languages

As an alternative to HTML/XHTML, new user interface markup languages can be
used in RIAs. For instance, the Mozilla Foundation's XML-based user interface
markup language XUL could be used, though it would be restricted to Mozilla-based
browsers, since it is not a de facto or de jure standard. The W3C's Rich Web Clients
Activity[1] has initiated a Web Application Formats Working Group whose mission
includes the development of such standards[2].

RIA user interfaces can also become richer through the use of scriptable SVG
(though not all browsers support native SVG rendering yet), as well as SMIL.

Other techniques

RIAs could use XForms to enhance their functionality.

XML and XSLT, along with some XHTML, CSS and JavaScript, can also be used to
generate richer client-side UI components such as data tables that can be re-sorted
locally on the client without going back to the server. Both Mozilla and Internet
Explorer support this.
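
A hedged sketch of such a client-side transform follows; xmlDoc and xslDoc are
assumed to be already-loaded XML DOM documents (for example, obtained via
XMLHttpRequest.responseXML), and targetId is a hypothetical container element.

    function renderTable(xmlDoc, xslDoc, targetId) {
        var target = document.getElementById(targetId);
        if (window.XSLTProcessor) {
            // Mozilla and other standards-based browsers.
            var processor = new XSLTProcessor();
            processor.importStylesheet(xslDoc);
            var fragment =
                processor.transformToFragment(xmlDoc, document);
            target.innerHTML = "";
            target.appendChild(fragment);
        } else if (xmlDoc.transformNode) {
            // Internet Explorer (MSXML) returns the result as a string.
            target.innerHTML = xmlDoc.transformNode(xslDoc);
        }
    }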

RIA with real-time push

Traditionally, web pages have been delivered to the client only when the client
requested them. For every client request, the browser initiates an HTTP connection to
the web server, which then returns the data and the connection is closed. The
drawback of this approach was that the page displayed was updated only when the
user explicitly refreshed the page or moved to a new page. Since transferring entire
pages can take a long time, refreshing pages can introduce a long latency.

Demand for localised usage of RIA

With the increasing adoption and improvement of broadband technologies, remote
latency is becoming less of an issue. Furthermore, one of the critical reasons for using
an RIA is that many developers are looking for a language to serve up desktop
applications that is not only desktop-OS neutral but also free of installation and
system issues.

RIA running in the ubiquitous web browser is a potential candidate even when used
standalone or over a LAN, with the required webserver functionalities hosted locally.

Client-side functionalities and development tools for RIA needed

With client-side functionalities like JavaScript and DHTML, an RIA can operate on
top of a range of OS and webserver functionalities. What's more, an RIA can double
as a desktop and a web application.

Single page application

A single page application (SPA) is a web application that runs entirely in the client
web browser, typically using a combination of HTML, JavaScript, and CSS. The
application modifies the web page's own data structures through its DOM tree, and
makes its changes persistent when the user invokes the browser's Save Page
command to save the (modified) web page to disk.

Like DHTML, LAMP, or Ajax, SPA is not a technology in itself, but a term that
refers to the use of a group of technologies together.

Web 2.0

Web 2.0, a phrase coined by O'Reilly Media in 2004, refers to a supposed second-
generation of Internet-based services — such as social networking sites, wikis,
communication tools, and folksonomies — that let people collaborate and share
information online in previously unavailable ways. O'Reilly Media, in collaboration
with MediaLive International, used the phrase as a title for a series of conferences and
since 2004 it has become a popular (though ill-defined and often criticized) buzzword
amongst certain technical and marketing communities.

Introduction
Alluding to the version numbers that commonly designate software upgrades, the
phrase "Web 2.0" hints at an improved form of the World Wide Web, and some
people have used the term for several years.

In the opening talk of the first Web 2.0 conference, Tim O'Reilly and John Battelle
summarized key principles they believed characterized Web 2.0 applications:

• the Web as a platform
• data as the driving force
• network effects created by an architecture of participation
• innovation in assembly of systems and sites composed by pulling together
features from distributed, independent developers (a kind of "open source"
development)
• lightweight business models enabled by content and service syndication
• the end of the software adoption cycle ("the perpetual beta")
• software above the level of a single device, leveraging the power of The Long
Tail.

Earlier users of the phrase "Web 2.0" employed it as a synonym for "semantic web,"
and indeed, the two concepts complement each other. The combination of social-
networking systems such as FOAF and XFN with the development of tag-based
folksonomies, delivered through blogs and wikis, sets up a basis for a semantic
environment. Although the technologies and services that make up Web 2.0 lack the
effectiveness of an internet in which the machines can understand and extract
meaning (as proponents of the Semantic Web envision), Web 2.0 represents a step in
its direction.

As used by its proponents, the phrase "Web 2.0" refers to one or more of the
following:

• The transition of websites from isolated information silos to sources of content
and functionality, thus becoming computing platforms serving web
applications to end users
• A social phenomenon embracing an approach to generating and distributing
Web content itself, characterized by open communication, decentralization of
authority, freedom to share and re-use, and "the market as a conversation"
• More organized and categorized content, with a far more developed
deep-linking web architecture than hitherto
• A shift in the economic value of the Web, possibly surpassing that of the dot-com
boom of the late 1990s
• A marketing term used to differentiate new web businesses from those of the
dot-com boom, which, due to the bust, subsequently seemed discredited
• The resurgence of excitement around the implications of innovative web-
applications and services that gained a lot of momentum around mid-2005

Many[citation needed] find it easiest to define Web 2.0 by associating it with companies or
products that embody its principles. Tim O'Reilly gave examples in his description of
his "four plus one" levels in the hierarchy of Web 2.0-ness:[1]

• Level-3 applications, the most "Web 2.0", which could only exist on the
Internet, deriving their power from the human connections and network effects
Web 2.0 makes possible, and growing in effectiveness the more people use
them. O'Reilly gives as examples: eBay, craigslist, Wikipedia, del.icio.us,
Skype, dodgeball, and Adsense.
• Level-2 applications, which can operate offline but which gain advantages
from going online. O'Reilly cited Flickr, which benefits from its shared photo-
database and from its community-generated tag database.
• Level-1 applications, also available offline but which gain features online.
O'Reilly pointed to Writely (gaining group-editing capability online) and
iTunes (because of its music-store portion).
• Level-0 applications would work as well offline. O'Reilly gave the examples
of MapQuest, Yahoo! Local, and Google Maps. Mapping applications using
contributions from users to advantage can rank as level 2.
• non-web applications like email, instant-messaging clients and the telephone.

Examples of Web 2.0 other than those cited by O'Reilly include digg, Shoutwire,
last.fm, and Technorati.

Time bar of Web 2.0 buzz words
[Image: a time bar showing the appearance of buzzwords assigned to Web 2.0 and its
dependencies.]

Commentators see many recently-developed concepts and technologies as
contributing to Web 2.0, including weblogs, linklogs, wikis, podcasts, RSS feeds and
other forms of many-to-many publishing; social software, web APIs, web standards,
online web services, and others.

Proponents of the Web 2.0 concept say that it differs from early web development
(retrospectively labeled Web 1.0) in that it moves away from static websites, the use
of search engines, and surfing from one website to the next, towards a more dynamic
and interactive World Wide Web. Others argue that later developments have not
actually superseded the original and fundamental concepts of the WWW. Skeptics
may see the term "Web 2.0" as little more than a buzzword; or they may suggest that
it means whatever its proponents want it to mean in order to convince their customers,
investors and the media that they have begun building something fundamentally new,
rather than continuing to develop and use well-established technologies[2].

On September 30, 2005, Tim O'Reilly wrote a seminal piece neatly summarizing the
subject. A mind-map constructed by Markus Angermeier on November 11, 2005 sums
up the memes of Web 2.0, with example sites and services attached.

Earlier web applications or "Web 1.0" (so dubbed after the event by proponents of
Web 2.0) often consisted of static HTML pages, rarely (if ever) updated. They
depended solely on HTML, which a new Internet content-provider could learn fairly
easily. The success of the dot-com era depended on a more dynamic Web (sometimes
labeled Web 1.5), where content-management systems served dynamic HTML web
pages generated on the fly from a content database that was more amenable to change
than raw HTML code. In both cases, marketers regarded so-called "eyeballing" as
intrinsic to the Web experience, thus making page hits and visual aesthetics important
factors.

Proponents of the Web 2.0 approach believe that Web usage has increasingly moved
towards interaction and rudimentary social networks, which can serve content that
exploits network effects with or without creating a visual, interactive web page. In one
view, Web 2.0 sites act more as points of presence, or user-dependent web portals,
than as traditional websites. They have become so internally complex that new
Internet users cannot create analogous websites, but remain mere users of web
services provided by specialist professionals.

Access to consumer-generated content facilitated by Web 2.0 brings the web closer to
Tim Berners-Lee's original concept of the web as a democratic, personal, and DIY
medium of communication.

Characteristics of Web 2.0
While interested parties continue to debate the definition of a Web 2.0 application,
some suggest that a Web 2.0 website may exhibit some basic characteristics. These
might include:

• "Network as platform" — delivering (and allowing users to use) applications


entirely through a web-browser[3] [4].
• Users owning the data on the site and exercising control over that data[5][3].
• An architecture of participation and democracy that encourages users to add
value to the application as they use it[3][6].
• A rich, interactive, user-friendly interface based on Ajax[3][6].
• Some social-networking aspects[5][3].

Technology overview
The complex and evolving technology infrastructure of Web 2.0 includes server
software, content syndication, messaging protocols, standards-based browsers with
plugins and extensions, and various client applications. These differing but
complementary approaches provide Web 2.0 with information storage, creation, and
dissemination capabilities that go beyond what the public formerly expected of
websites.

A Web 2.0 website typically features a number of the following techniques:

• Unobtrusive rich Internet application techniques (such as Adobe Flash/Flex)
• CSS
• Semantically valid XHTML markup and/or the use of Microformats
• Syndication and aggregation of data in RSS/Atom
• Clean and meaningful URLs
• Extensive use of folksonomies (in the form of tags or tag clouds, for example)
• Weblog publishing
• Mashups
• REST or XML Webservice APIs

Innovations associated with "Web 2.0"
Web-based communities

Some websites that potentially sit under the Web 2.0 umbrella have built new online
social networks amongst the general public. Some of the websites run social software
where people work together. Other websites reproduce several individuals' RSS feeds
on one page. Others provide deep-linking between individual websites.

The syndication and messaging capabilities of Web 2.0 have fostered, to a greater or
lesser degree, a tightly-woven social fabric among individuals. Arguably, the nature
of online communities has changed in recent months and years. The meaning of these
inferred changes, however, has pundits divided. The ideological lines run roughly
thus: Web 2.0 either empowers the individual and provides an outlet for the "voice
of the voiceless", or it elevates the amateur to the detriment of professionalism,
expertise and clarity.

Web-based applications and desktops

The richer user experience afforded by Ajax has prompted the development of
websites that mimic personal computer applications, such as word processing, the
spreadsheet, and slide-show presentation. WYSIWYG wiki sites replicate many
features of PC authoring applications. Still other sites perform collaboration and
project management functions. Java enables sites that provide computation-intensive
video capability. Google, Inc. acquired one of the best-known sites of this broad class,
Writely, in early 2006.

Several browser-based "operating systems" or "online desktops" have also appeared.
They essentially function as application platforms, not as operating systems per se.
These services mimic the user experience of desktop operating-systems, offering
features and applications similar to a PC environment. They have as their
distinguishing characteristic the ability to run within any modern browser.

Numerous web-based application services appeared during the dot-com bubble of
1997–2001 and then vanished, having failed to gain a critical mass of customers. In
2005 WebEx acquired the best-known of these, Intranets.com, for slightly more than
the total it had raised in venture capital after six years of trading.

Rich Internet applications

Main article: Rich Internet application

Recently, rich Internet application techniques such as Ajax, Adobe Flash and Flex
have evolved that can improve the user experience in browser-based web
applications. Ajax, for example, involves a web page requesting an update for some
part of its content, and altering that part in the browser, without refreshing the whole
page at the same time.

Server-side software

The functionality of Web 2.0 rich Internet applications builds on the existing web
server architecture, but puts much greater emphasis on back-end software.
Syndication differs only nominally from the methods of publishing using dynamic
content management, but web services typically require much more robust database
and workflow support, and become very similar to the traditional intranet
functionality of an application server. Vendor approaches to date fall under either a
universal server approach, which bundles most of the necessary functionality in a
single server platform, or a web-server plugin approach, which uses standard
publishing tools enhanced with API interfaces and other tools.

Client-side software

The extra functionality provided by Web 2.0 depends on the ability of users to work
with the data stored on servers. This can come about through forms in an HTML
page, through a scripting language such as JavaScript, or through Flash or Java. These
methods all make use of the client computer to reduce the server workload.

RSS

Main article: RSS (file format)

The first and, in one view, the most important step in the evolution towards
Web 2.0 involves the syndication of website content, using standardized protocols
which permit end-users to make use of a site's data in another context, ranging from
another website, to a browser plugin, or to a separate desktop application. Protocols
which permit syndication include RSS (Really Simple Syndication — also known as
web syndication), RDF (as in RSS 1.1), and Atom, all of them flavors of XML.
Specialized protocols such as FOAF and XFN (both for social networking) extend
functionality of sites or permit end-users to interact without centralized websites. See
microformats for more specialized data formats.

Due to the recent development of these trends, many of these protocols remain de
facto (rather than formal) standards.

Web protocols

Web communication protocols provide a key element of the Web 2.0 infrastructure.
Major protocols include REST and SOAP.

• REST (Representational State Transfer) indicates a way to access and
manipulate data on a server using the HTTP verbs GET, POST, PUT, and
DELETE.
• SOAP involves POSTing XML messages and requests to a server that may
contain quite complex, but pre-defined, instructions for the server to follow.

In both cases, an API defines access to the service. Often servers use proprietary
APIs, but standard web-service APIs (for example, for posting to a blog) have also
come into wide use. Most (but not all) communications with web services involve
some form of XML (eXtensible Markup Language).
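
To make the contrast concrete, here is a minimal sketch of both styles using the
XMLHttpRequest object discussed below; the endpoint URLs, operation name and
XML structure are purely hypothetical:

// REST style: the HTTP verb itself expresses the intent (hypothetical URL).
var restReq = new XMLHttpRequest();
restReq.open( "GET", "/items/42", false );        // synchronous, for brevity only
restReq.send( null );

// SOAP style: the intent travels inside a POSTed XML envelope
// (hypothetical endpoint and operation name).
var soapReq = new XMLHttpRequest();
soapReq.open( "POST", "/soap/endpoint", false );
soapReq.setRequestHeader( "Content-Type", "text/xml" );
soapReq.send(
    '<?xml version="1.0"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body><getItem><id>42</id></getItem></soap:Body>' +
    '</soap:Envelope>' );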

See also WSDL (Web Services Description Language), the standard way of
publishing a SOAP API, and the list of Web service specifications for links to many
other web service standards, including the many whose names begin with 'WS-'.

Criticism
Given the lack of set standards as to what "Web 2.0" actually means, implies, or
requires, the term can mean radically different things to different people. For instance,
many people pushing Web 2.0 talk about well-formed, validated HTML; however, not
many production sites actually adhere to this standard.[citation needed] Many people will
also talk about web sites "degrading gracefully" (designing a website so that its
fundamental features remain usable by people who access it with software that does
not support every technology employed by the site); however, the addition of Ajax
scripting to websites can render the website completely unusable to anyone browsing
with JavaScript turned off, or using a slightly older browser. Many have complained
that the proliferation of Ajax scripts, in combination with unknowledgeable
webmasters, has increased the instances of "tag soup": websites where coders have
apparently thrown <script> tags and other semantically useless tags about the
HTML-file with little organization in mind, in a way that occurred more commonly
during the dot-com boom, and which many standards-proponents have tried to
eschew.[citation needed] Some critics also object to cluttered, arcane navigation structures in
Web 2.0 websites.[citation needed]

Many of the ideas of Web 2.0 had already appeared on networked systems well before
the term "Web 2.0" emerged; Amazon.com, for instance, has allowed users to write reviews
and consumer guides since its inception, in a form of self-publishing; and opened up
its API to outside developers in 2002[7]. Prior art also comes from research in
Computer Supported Collaborative Learning and Computer Supported Cooperative
Work.

Conversely, when a website proclaims itself "Web 2.0" for the use of some trivial
feature (such as blogs or gradient boxes) observers may generally consider it more an
attempt at self-promotion than an actual endorsement of the ideas behind Web 2.0.
"Web 2.0" in such circumstances has sometimes sunk simply to the status of a
marketing buzzword, like 'synergy', that can mean whatever a salesperson wants it to
mean, with little connection to most of the worthy but (currently) unrelated ideas
originally brought together under the "Web 2.0" banner. The argument also exists that
"Web 2.0" does not represent a new version of World Wide Web at all, but merely
continues to use "Web 1.0" technologies and concepts.

Other criticism has included the term "a second bubble", suggesting that too many
Web 2.0 companies attempt to develop the same product with a lack of business
models. The Economist magazine has written of "Bubble 2.0".

Some venture capitalists have noted that the second generation of web applications
has too few users to make them an economically-viable target for consumer
applications. Josh Kopelman noted that Web 2.0 excited only 53,651 people (the then
number of subscribers to TechCrunch, a weblog covering Web 2.0 matters).

Trademark controversy
In November 2003, CMP Media applied to the USPTO for a service mark on the use
of the term "WEB 2.0" for live events[8]. On the basis of this application, CMP Media
sent a cease-and-desist demand to the Irish non-profit organization IT@Cork on May
24, 2006[9], but retracted it two days later[10]. The "WEB 2.0" service mark registration
passed final PTO Examining Attorney review on May 10, 2006, but as of June 12,
2006 the PTO has not published the mark for opposition. The European Union
application (which would confer unambiguous status in Ireland) remains pending (app
no 004972212) after its filing on March 23, 2006.

XMLHttpRequest


XMLHttpRequest (XHR) is an API that can be used by JavaScript, JScript,
VBScript and other web browser scripting languages to transfer and manipulate XML
data to and from a web server using HTTP, establishing an independent connection
channel between a web page's client side and the server side.

The data returned from XMLHttpRequest calls will often be provided by back-end
databases. Besides XML, XMLHttpRequest can be used to fetch data in other
formats, e.g. JSON or even plain text.

XMLHttpRequest is an important part of the Ajax web development technique, and it
is used by many websites to implement responsive and dynamic web applications.
Examples of XMLHttpRequest applications include Google's Gmail service, Google
Suggest dynamic look-up interface, Meebo, MSN's Virtual Earth and the MapQuest
dynamic map interface.

Methods

• abort(): Cancels the current request.
• getAllResponseHeaders(): Returns the complete set of HTTP headers as a
string.
• getResponseHeader( headerName ): Returns the value of the specified HTTP
header.
• open( method, URL ), open( method, URL, async ), open( method, URL,
async, userName ), open( method, URL, async, userName, password ):
Specifies the method, URL, and other optional attributes of a request. The
method parameter can have a value of "GET", "POST", or "HEAD". Other
HTTP methods, such as "PUT" and "DELETE" (primarily used in REST
applications), may be possible.[1] The URL parameter may be either a relative
or complete URL. The async parameter specifies whether the request should
be handled asynchronously or not: "true" means that script processing carries
on after the send() method without waiting for a response, and "false" means
that the script waits for a response before continuing script processing.
• send( content ): Sends the request.
• setRequestHeader( label, value ): Adds a label/value pair to the HTTP headers
to be sent.
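
As a minimal sketch of how these methods combine in a simple request (the URL
and header value are illustrative, and a native XMLHttpRequest object is assumed;
see the cross-browser notes below):

var req = new XMLHttpRequest();
req.open( "GET", "/data.xml", true );              // method, URL, asynchronous
req.setRequestHeader( "Accept", "text/xml" );      // optional extra request header
req.send( null );                                  // a GET request carries no body
// req.abort();                                    // would cancel the request mid-flight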

Properties

• onreadystatechange: An event handler for an event that fires at every state
change.
• readyState: Returns the state of the object as follows: 0 = uninitialized,
1 = open, 2 = sent, 3 = receiving and 4 = loaded.
• responseText: Returns the response as a string.
• responseXML: Returns the response as XML. This property returns an XML
document object, which can be examined and parsed using W3C DOM node
tree methods and properties.
• status: Returns the status as a number (e.g. 404 for "Not Found" and 200 for
"OK").
• statusText: Returns the status as a string (e.g. "Not Found" or "OK").
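
A minimal sketch of how these properties are typically read together (the URL is
hypothetical; open is called before the callback is attached, for the reuse reasons
discussed under Known problems below):

var req = new XMLHttpRequest();
req.open( "GET", "/data.xml", true );
req.onreadystatechange = function() {
    if ( req.readyState == 4 ) {                   // 4 = loaded: response complete
        if ( req.status == 200 ) {                 // 200 = "OK"
            alert( req.responseText );             // the body as a string
            // var doc = req.responseXML;          // or as a parsed XML document
        } else {
            alert( "Error: " + req.status + " " + req.statusText );
        }
    }
};
req.send( null );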

History and support


The XMLHttpRequest concept was originally developed by Microsoft as part of
Outlook Web Access 2000. The Microsoft implementation is called XMLHTTP and,
as an ActiveX object, it differs from the published standard in a few small ways. It has
been available since Internet Explorer 5.0, and is accessible via JScript, VBScript and
other scripting languages supported by IE browsers.

The Mozilla project incorporated the first compatible native implementation of
XMLHttpRequest in Mozilla 1.0 in 2002. It was later followed by Apple (since
Safari 1.2), Konqueror, Opera (since Opera 8.0) and iCab (since 3.0b352).

The World Wide Web Consortium published a Working Draft specification for the
XMLHttpRequest object's API on 5 April 2006[1]. While this is still a work in
progress, its goal is "to document a minimum set of interoperable features based on
existing implementations, allowing Web developers to use these features without
platform-specific code". The draft specification is based upon existing popular
implementations, to help improve and ensure interoperability of code across web
platforms.

Web pages that use XMLHttpRequest or XMLHTTP can mitigate the current minor
differences in the implementations either by encapsulating the XMLHttpRequest
object in a JavaScript wrapper, or by using an existing framework that does so. In
either case, the wrapper should detect the abilities of the current implementation and
work within its requirements.
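
A minimal sketch of such a wrapper, assuming only the two well-known families of
implementation; the function name createXHR is invented for illustration, not a
standard API:

function createXHR() {
    if ( window.XMLHttpRequest ) {                 // Mozilla, Safari, Opera, IE7
        return new XMLHttpRequest();
    }
    if ( window.ActiveXObject ) {                  // IE5 and IE6
        try {
            return new ActiveXObject( "Msxml2.XMLHTTP" );
        } catch ( e ) {
            return new ActiveXObject( "Microsoft.XMLHTTP" );
        }
    }
    return null;                                   // no known implementation available
}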

Traditionally, there have been other methods of achieving similarly dynamic server
communication, using scripting languages and/or plugin technology:

• Invisible inline frames
• Netscape's LiveConnect
• Microsoft's ActiveX
• Microsoft's XML Data Islands
• Macromedia Flash Player
• Java Applets

In addition, the World Wide Web Consortium has several Recommendations that also
allow for dynamic communication between a server and user agent, though few of
them are well supported. These would include:

• The object element defined in HTML 4 for embedding arbitrary content types
into documents (replacing inline frames under XHTML 1.1)
• The Document Object Model (DOM) Level 3 Load and Save Specification

Known problems
Microsoft Internet Explorer cache issues

Internet Explorer implements caching for GET requests. Authors who are not familiar
with HTTP caching expect GET requests not to be cached, or for the cache to be
avoided as with the refresh button. In some situations, failing to circumvent caching is
a bug. One solution to this is to use the POST request method, which is never cached;
however, it is intended for non-idempotent operations.

Setting the "Expires" header to reference a date in the past will avoid caching of the
response. Here is an example in PHP.

header( "Expires: Mon, 26 Jul 1997 05:00:00 GMT" ); // disable IE


caching
header( "Last-Modified: " . gmdate( "D, d M Y H:i:s" ) . " GMT" );
header( "Cache-Control: no-cache, must-revalidate" );
header( "Pragma: no-cache" );

Caching may also be disabled, as in this Java Servlet example.

response.setHeader( "Pragma", "no-cache" );
response.addHeader( "Cache-Control", "must-revalidate" );
response.addHeader( "Cache-Control", "no-cache" );
response.addHeader( "Cache-Control", "no-store" );
response.setDateHeader("Expires", 0);

Alternatively, it is possible to force the XMLHttpRequest object to retrieve the
content anyway, as shown in this example.

req.open( "GET", "xmlprovider.php" );


req.setRequestHeader( "If-Modified-Since", "Sat, 1 Jan 2000 00:00:00
GMT" );
req.send( null );

Another method is to add a random string to the end of the URL in the query string:

req.open( "GET", "xmlprovider.php?sid=" + Math.random());

This will ensure that the browser always gets a fresh copy.

It is important to note that these techniques should be used only where caching is
genuinely inappropriate; applied indiscriminately, they can result in poor application
performance. A better approach is often to send an appropriate expiry date/time
header (or other relevant headers) to the client, which retains the performance
benefits of caching, reducing processing time and bandwidth consumption, while still
letting the client know when new data may be available.

Problems with accented and non-ASCII characters

If the server response does not encapsulate the result in an XML format,
responseText may not work correctly with non-ASCII characters, for example
accented characters such as é. Although Firefox copes with such data, Internet
Explorer only handles it properly for the first request (and even then there may be
display problems). If the request is repeated and Internet Explorer uses a cached
result, it will generate a JavaScript error.

XML-formatted results, accessed through the slightly more involved responseXML
property, work with any UTF-8 characters.
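
For example, the following sketch reads an accented value through responseXML
rather than responseText; the file name and element name are hypothetical, and the
server is assumed to declare the XML as UTF-8:

var req = new XMLHttpRequest();
req.open( "GET", "message.xml", false );           // synchronous, for brevity
req.send( null );
var doc = req.responseXML;                         // parsed as an XML document
// e.g. <message>café</message> arrives intact as UTF-8:
var text = doc.getElementsByTagName( "message" )[0].firstChild.nodeValue;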

Reusing XMLHttpRequest Object in IE

In IE, if the open method is called after setting the onreadystatechange callback,
there is a problem when trying to reuse the XHR object. To reuse the XHR object
properly, call the open method first and set onreadystatechange afterwards. This
happens because IE resets the object implicitly in the open method if its status is
'completed'. The downside to calling the open method after setting the callback is a
loss of cross-browser support for readystates; see the quirksmode article.
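
A minimal sketch of the safe ordering when reusing a single object; createXHR is the
hypothetical factory sketched under History and support above, and the URLs are
illustrative:

var req = createXHR();

function load( url, handler ) {
    req.open( "GET", url, true );                  // open first: IE resets the object here
    req.onreadystatechange = handler;              // attach the callback afterwards
    req.send( null );
}

load( "first.xml", function() { /* handle first response */ } );
// later, once complete, the very same object can be reused safely:
// load( "second.xml", function() { /* handle second response */ } );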

Cross-browser support

Microsoft developers were the first to include the XMLHttp object in their MSXML
ActiveX control. Developers at the open-source Mozilla project saw this invention
and implemented their own version, not as an ActiveX control but as a native browser
object called XMLHttpRequest. Opera and Safari have since implemented similar
functionality, along the lines of Mozilla's XMLHttpRequest. Some Ajax
developer and run-time frameworks only support one implementation of XMLHttp
while others support both. Developers building Ajax functionality from scratch can
provide if/else logic within their client-side JavaScript to use the appropriate
XMLHttp object as well. Internet Explorer 7 added native support for the
XMLHttpRequest object.

Frameworks
Because of the complexity of handling cross-browser distinctions between
XMLHttpRequest implementations, a number of frameworks have emerged to
abstract these differences into a set of reusable programming constructs. Examples of
frameworks include Dojo Toolkit and the Atlas Framework.
