
Licata, I. (2009). A dynamical model for information retrieval and emergence of scale-free clusters in a long term memory network. Emergence: Complexity and Organization, 11(1), 48-57.

The following is a summary of the above-listed article by Ignazio Licata. The summary contains
three elements: a description of the purpose of the article, an explanation of the methods used,
and a summary of the results.
Purpose of Article
The purpose of the article is to propose a model of information retrieval that goes beyond the
limitations of classical models. Most expert systems fall within the strong AI tradition and have
limited efficacy. The author sought to develop a model that combines aspects common to classic
symbolic AI approaches and connectionist approaches.
Explanation of Methods
Working from an awareness of the limitations of classic AI representations, the model
considers an ontology as a linguistic theory of being, which assumes the neutrality of the
ontological representation; a knowledge acquisition system must exhibit coherence and
operational closure, and must produce intrinsic emergent processes based on a dialogic relation
between the system and the user.
The model draws on Kintsch and Ericsson's (1999) concept of Long Term Working Memory
(LTWM), which is part of long term memory (LTM) and is generated by the short term part of
working memory (STWM). The process involves retrieval cues that link objects present in the
STWM to objects present in the LTM. This posed two problems: the creation of the LTWM and
the formation of the retrieval cues. There were challenges with both, but the author proposed
overcoming them by defining the retrieval cues directly through an ontology built over WordNet.
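
To make the mechanism concrete, the following minimal Python sketch (all names here are
illustrative, not from the article) shows how retrieval cues can link STWM items to LTM items,
with the reachable LTM items forming the LTWM.

    # Minimal sketch of the retrieval-cue idea: items active in short-term
    # working memory (STWM) are linked by cues to items in long-term memory
    # (LTM); the reachable LTM items constitute the long-term working memory
    # (LTWM). All names are illustrative, not from the article.

    stwm = {"memory", "retrieval"}              # items currently active

    ltm = {                                     # associative LTM store
        "memory": {"cue", "storage", "recall"},
        "retrieval": {"cue", "search"},
        "cue": {"context"},
    }

    def ltwm(stwm_items, ltm_links):
        """LTWM = LTM items reachable from STWM via retrieval cues."""
        reached = set()
        for item in stwm_items:
            reached |= ltm_links.get(item, set())
        return reached

    print(ltwm(stwm, ltm))  # {'cue', 'storage', 'recall', 'search'} (order may vary)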
The actual process was difficult to follow because the article was written for a reader deeply
acquainted with these types of systems. The technique is based on extending the nodes
representing the terms of a text (phrase) to all of the corresponding WordNet synsets.
Essentially, the author developed a program that uses the formalizations of the Semantic Web
languages (RDF, RDF Schema).
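
As a rough illustration of this step, the following Python sketch expands each term of a phrase
to its WordNet synsets using NLTK's WordNet interface. This is only an approximation: the
article works through Semantic Web formalizations (RDF, RDF Schema) rather than NLTK.

    # Sketch: expand the terms of a phrase to all corresponding WordNet
    # synsets. Requires NLTK with the WordNet corpus installed
    # (nltk.download('wordnet')).
    from nltk.corpus import wordnet as wn

    def expand_terms(phrase):
        """Map each word of a phrase to the names of its WordNet synsets."""
        return {word: [s.name() for s in wn.synsets(word)]
                for word in phrase.split()}

    print(expand_terms("memory retrieval"))
    # e.g. {'memory': ['memory.n.01', ...], 'retrieval': ['retrieval.n.01', ...]}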
The system extracts knowledge from textual documents. The part of the document under
analysis is stored in a buffer, and its content is codified on the basis of the context before being
elaborated by the working memory (WM) block. This block is implemented by a simple
scale-free graph model: the process starts with a net of N disconnected nodes, and at every step
t = 1..N each node (associated with one of the N words) links with M other units (M = 5). The
LTM is an associative network that is updated with the content of the WM; whenever a link in
the WM corresponds to a link in the LTM, the weight of that link is increased by 1.
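
The following Python sketch illustrates one plausible reading of this construction. The article
gives only the outline (N nodes, each linking to M = 5 others, LTM weights incremented on
matching WM links), so the preferential-attachment rule used here, a standard way of obtaining
a scale-free graph, is an assumption rather than the paper's exact procedure.

    # Hedged sketch of the WM graph construction and the LTM update.
    # The preferential-attachment linking rule is an assumption.
    import random
    from collections import defaultdict

    def build_wm_graph(words, m=5):
        """Link each word to m earlier words, favouring high-degree nodes."""
        degree = defaultdict(int)
        edges = set()
        for t, word in enumerate(words):
            pool = words[:t]                    # nodes already in the net
            k = min(m, len(pool))
            if k == 0:
                continue
            targets = set()
            while len(targets) < k:             # weight candidates by degree + 1
                targets.add(random.choices(pool,
                                           [degree[w] + 1 for w in pool])[0])
            for target in targets:
                edges.add(frozenset((word, target)))
                degree[word] += 1
                degree[target] += 1
        return edges

    def update_ltm(ltm_weights, wm_edges):
        """Reinforce LTM: +1 on every link that also appears in the WM graph."""
        for edge in wm_edges:
            ltm_weights[edge] += 1              # new links enter with weight 1
        return ltm_weights

    words = ["memory", "retrieval", "cue", "context", "recall", "storage"]
    ltm = update_ltm(defaultdict(int), build_wm_graph(words, m=5))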
Summary of Results
The results from the system were compared with representations obtained from a group of
human subjects. The subjects were asked to read the same document examined by the system
and to rate the relatedness of each pair of words considered by the system. A Pathfinder
analysis was performed on the relatedness matrices of the human subjects and on the LTM
matrix to extract the latent semantics. The results showed that the system acquires knowledge
on the basis of a precise inner schema. The system's representations were similar to the human
ones and still need improvement, but the model could represent the first concrete ontogenetic
approach to the problem of knowledge acquisition.
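
For reference, the following Python sketch shows the standard Pathfinder pruning (PFNET with
r = infinity and q = n - 1), which keeps a link only when no indirect path connects its endpoints
at a lower maximum-edge cost. The article does not specify which Pathfinder variant was used,
so this is illustrative.

    # Standard Pathfinder scaling, PFNET(r = inf, q = n - 1): an edge of the
    # distance matrix survives only if no path achieves a smaller
    # maximum-edge cost between its endpoints.
    def pathfinder(dist):
        """Return a boolean adjacency matrix of the surviving links."""
        n = len(dist)
        # minmax[i][j] = smallest maximum-edge cost over all paths i -> j
        minmax = [row[:] for row in dist]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    via_k = max(minmax[i][k], minmax[k][j])
                    if via_k < minmax[i][j]:
                        minmax[i][j] = via_k
        return [[i != j and dist[i][j] <= minmax[i][j] for j in range(n)]
                for i in range(n)]

    # Toy distance matrix (smaller = more related): the direct 0-2 link (5)
    # is pruned because the path 0-1-2 has maximum edge cost max(1, 2) = 2.
    d = [[0, 1, 5],
         [1, 0, 2],
         [5, 2, 0]]
    print(pathfinder(d))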
