
Web 1.0
Web 1.0 is simply an information portal where users passively receive 
information without being given the opportunity to post reviews, comments, 
or feedback.

Web 1.0 has static websites.

Web 2.0
Unlike Web 1.0, Web 2.0 facilitates interaction between web users and sites, 
allowing users to interact more freely with each other.

Examples: Wikipedia, Facebook, Flickr

Web 2.0 has dynamic websites.

Web 3.0
In Web 3.0, computers can interpret information like humans and intelligently 
generate and distribute useful content tailored to the needs of users.

1. Development and maintenance of a website for the library and the 
research journal “PAKISTAN”

Except for the Centre of Excellence in Geology library and the Area Study Centre 
library, no other library of Peshawar University has its own website.

2. Development of Online Catalogue

Users may search the library collection online, even from a remote location.
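A minimal sketch of how an online catalogue answers a remote keyword search; the records and field names here are illustrative, not an actual library database schema:

```python
# Minimal in-memory catalogue search: a keyword is matched against the
# title and author fields of each record (records are illustrative).
records = [
    {"title": "Introduction to Computers", "author": "Norton, P."},
    {"title": "Library Automation Basics", "author": "Khan, A."},
    {"title": "Research Methods", "author": "Norton, J."},
]

def search_catalogue(keyword):
    """Return records whose title or author contains the keyword."""
    kw = keyword.lower()
    return [r for r in records
            if kw in r["title"].lower() or kw in r["author"].lower()]

matches = search_catalogue("norton")
print([m["title"] for m in matches])
# → ['Introduction to Computers', 'Research Methods']
```

A real OPAC would query a database and index many more fields (subject, ISBN, call number), but the match-and-return shape is the same.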

3. Implementation of Extreme lib Library database
Automation of the collection including books, theses and journals; digitization;
an online catalogue feature; DDC integration; title pages can be added;
circulation; multiple reports can be generated; and it can be customized
according to requirements.

4. Development of Digital Collection


Comprising relevant e-books, research articles and theses of international
universities.

JSTOR research papers (24,000 full-text international e-journals)

Library Genesis (millions of e-books)

SOAS theses (School of Oriental and African Studies, University of London; more
than 18,938 items on various subjects)

Shodhganga (thesis repository of Indian universities; more than 23,000 items)

How you can help out a researcher in their research work


A librarian is a friend of the researcher because a librarian is a liaison
between the book and the researcher.

1- Helps in the selection of a research topic and assists the researcher in
understanding the problem by providing relevant literature.
2- Searching techniques
a. Using the deep web for searching
b. Accessing restricted websites which may help in research work
c. Using different search engines
3- Arrangement of computer literacy classes
4- Training on EndNote software (citation tool)
5- Helps in finding the similarity index using Turnitin software (before
sending to the QEC, “Quality Enhancement Cell”)
There are two types of repositories in Turnitin:
i. Permanent repository
ii. Non-repository

19 percent similarity is allowed by the HEC, while plagiarism has zero
tolerance in research.

There is a difference between similarity of literature and plagiarism:

 Similarity in literature, as its name indicates, is the similarity of
contents of the research literature.
 Plagiarism is referred to as academic theft.
 Copyright is something different from plagiarism; copyright infringement is
the duplication of someone else’s work, and it applies only to trade purposes.
o A library may make a single copy of a research work or book.
6- Providing required information

7- Helps in Literature review

8- Social media groups for researchers (LinkedIn, Twitter)

9- Can guide research scholars in formatting their theses.

Can the Internet be a replacement for a library?

The answer is no, because not all literary materials are available on the Internet.
Very limited publications can be found on the Internet, especially in oriental
languages, so the Internet cannot be a replacement for a library.

A person may use the Internet for a quick reference, but it cannot be an
alternative to a real book.
APA style
Norton, P. (2006). Introduction to computers (5th ed.). London: McGraw-Hill.
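The APA book reference above follows a fixed pattern (author, year, title, edition, place, publisher), so it can be assembled mechanically. A small sketch; the function name and field order are illustrative, not part of any citation library:

```python
def apa_book_reference(author, year, title, edition, place, publisher):
    """Assemble a simple APA-style book reference from its parts."""
    return f"{author} ({year}). {title} ({edition} ed.). {place}: {publisher}."

ref = apa_book_reference("Norton, P.", 2006, "Introduction to computers",
                         "5th", "London", "McGraw-Hill")
print(ref)
# → Norton, P. (2006). Introduction to computers (5th ed.). London: McGraw-Hill.
```

Reference managers such as EndNote do essentially this: they store the parts of each source and render them in whatever citation style is selected.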

Federated Search
Federated Search is an information retrieval technology that allows the
simultaneous search of multiple searchable resources.

In a federated search, a user makes a single query request, which is distributed to
the search engines, databases or other query engines participating in the
federation. The federated search then aggregates the results received
from the search engines for presentation to the user. Federated search can be
used to integrate disparate information resources within a single large
organization ("enterprise") or for the entire web.
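The distribute-then-aggregate flow above can be sketched in a few lines. The two “sources” below are hardcoded lists standing in for real search engines or databases; a real federation would query them over the network:

```python
# Federated search sketch: one query is fanned out to several
# (simulated) sources; results are merged and de-duplicated.
# The source catalogues below are illustrative, not real databases.
source_a = ["Introduction to Computers", "Web Technologies"]
source_b = ["Web Technologies", "Digital Libraries"]

def search_source(catalogue, query):
    """Each participating source answers the same single query."""
    return [title for title in catalogue if query.lower() in title.lower()]

def federated_search(query, sources):
    """Distribute the query, then aggregate results, dropping duplicates."""
    seen, merged = set(), []
    for catalogue in sources:
        for title in search_source(catalogue, query):
            if title not in seen:
                seen.add(title)
                merged.append(title)
    return merged

print(federated_search("web", [source_a, source_b]))
# → ['Web Technologies']
```

Note that the de-duplication step matters: the same item often appears in several of the federated sources, and the user should see it only once.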

Web harvesting or Web scraping


Web scraping, web harvesting, or web data extraction is data scraping used
for extracting data from websites.

Web scraping software may access the World Wide Web directly using the
Hypertext Transfer Protocol, or through a web browser.

While web scraping can be done manually by a software user, the term typically
refers to automated processes implemented using a bot or web crawler. It is a
form of copying, in which specific data is gathered and copied from the web,
typically into a central local database or spreadsheet, for later retrieval or
analysis.
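A minimal scraping sketch using only the Python standard library: it parses an HTML page and collects every hyperlink. The page content is hardcoded here; a real scraper would first fetch it over HTTP (e.g. with `urllib.request`):

```python
# Web-scraping sketch: extract hyperlinks from an HTML page using only
# the standard library's html.parser. The page below is hardcoded.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every anchor tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<html><body><a href="/catalogue">Catalogue</a>'
        '<a href="/journals">Journals</a></body></html>')

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
# → ['/catalogue', '/journals']
```

An automated crawler repeats exactly this step: fetch a page, extract its links and data, store them, then follow the links to the next pages.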

OJS – Open Journal System


Open Journal Systems (OJS) was designed to facilitate the development of open
access, peer-reviewed publishing, providing the technical infrastructure not only
for the online presentation of journal articles, but also an entire editorial
management workflow, including: article submission, multiple rounds of peer-
review, and indexing. OJS relies upon individuals fulfilling different roles, such as
the journal manager, editor, reviewer, author, reader, etc. It also has a module
that supports subscription journals.

Research

Research may be referred to as “a systematic quest for knowledge”.
