Learning Webs

Hebbian learning can be implemented on the web by changing the strength of links depending on how often they are used.


We are exploring the "brain" metaphor for making the web more intelligent. The basic idea is that web links are similar to associations in the brain, as supported by synapses connecting neurons. The strength of the links, like the connection strength of synapses, can change depending on the frequency of use of the link. This allows the network to "learn" automatically from the way it is used.

Basic algorithms

We have developed a number of algorithms for the self-organization or automatic adaptation of linking patterns in the Web to the pattern of their usage (Bollen & Heylighen, 1996; 1999; Heylighen, 1999). These algorithms are directly inspired by models of individual human cognition or brain functioning. The most basic one is inspired by Hebbian learning, the strengthening of a link between neurons or concepts when these neurons are activated in close succession. The equivalent for the web is the "frequency" rule that reinforces a link from document A to document B each time a user moves from A to B. The complementary rule of "symmetry" will moreover reinforce the inverse link from B to A, albeit with a smaller increment. The rationale is that if users go from A to B, this means that the subject of A is also relevant for people interested in B. The more often users go from A to B (rather than to C, D, E, ...), the more relevant B is, and the stronger the link from A to B becomes. To indicate strength, links can be ordered in a sequence with the strongest ones first, so that the user knows which links are most likely to be relevant for his or her interests.
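
A minimal sketch of how the frequency and symmetry rules might be implemented, assuming a simple weighted adjacency map; the increment names and values (FREQ_INCREMENT, SYMMETRY_INCREMENT) are illustrative assumptions, not the actual experiment's code:

```python
from collections import defaultdict

# Assumed increments: the forward link gets the full reward,
# the inverse link a smaller one (symmetry rule).
FREQ_INCREMENT = 1.0
SYMMETRY_INCREMENT = 0.3

# weights[a][b] is the strength of the link from document a to document b
weights = defaultdict(lambda: defaultdict(float))

def register_transition(a, b):
    """Record that a user followed the link from document a to document b."""
    weights[a][b] += FREQ_INCREMENT      # frequency rule: reinforce a -> b
    weights[b][a] += SYMMETRY_INCREMENT  # symmetry rule: smaller reward for b -> a

def ordered_links(a):
    """Outgoing links of a, strongest first, as they would be shown to the user."""
    return sorted(weights[a].items(), key=lambda item: item[1], reverse=True)
```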

The frequency rule has the limitation that it can only reinforce links that are already there. It is thus unable to create new structures. This problem is tackled by the "transitivity" rule. The principle is simple: when a user goes from A to B and then to C, it is likely that not only B is relevant to A but C as well. Therefore, the rule creates (or strengthens, if it already exists) a link from A to C. The rationale is that it is worthwhile to create shortcuts for paths that are travelled often (or "macros" for commonly used sequences of actions). Thus, a user may now be able to go directly to C from A, without needing to pass through B. From C, the user may now decide to visit D, thus potentially creating a direct link from A to D, and perhaps from A to E, F, G, etc. Thus, if a sufficient number of users follow a given path through the web, the sequence of intermediate documents may eventually be replaced by a single direct link. This makes web browsing much more efficient.
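
Continuing the same sketch, the transitivity rule could be added by remembering the document visited before the current one; TRANS_INCREMENT is again an assumed parameter:

```python
TRANS_INCREMENT = 0.5  # assumed reward for the shortcut link

def register_path_step(path, c):
    """A user whose path so far is [..., a, b] now moves on to document c."""
    if len(path) >= 2:
        a = path[-2]
        # transitivity rule: create or strengthen the shortcut a -> c
        weights[a][c] += TRANS_INCREMENT
    if path:
        register_transition(path[-1], c)  # frequency + symmetry for b -> c
    path.append(c)
```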

Together with the frequency and symmetry rules, transitivity will thus lead to a continuous reorganization of the web, making it ever more efficient in the process, so that users (or their agents) will gradually need to spend less and less effort to find the most relevant documents when browsing the web. Thus, the web constantly "learns" from its users what they hope to find in which place, adapting its structure to their expectations. This is similar to the way a neural network learns from its inputs.

Extended algorithms

We are now experimenting with a further extension of these algorithms (Heylighen, 2001), in which the strengthening of a link would be proportional to the degree of "co-activation" of two nodes. Co-activation would in turn be proportional to the degree of activation (user preference) of each node, and would decay with the time, or the number of steps, between the selection of the two nodes. This would extend the transitivity rule so that paths of more than two links could generate a direct link, while avoiding the problem that a user may select, and thus reward, a link that turns out to lead to an irrelevant node. By taking into account how "interesting" users find nodes, and rewarding links accordingly, such algorithms would integrate the lessons from collaborative filtering.
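
One possible reading of this extension, sketched below: each newly selected node is co-activated with every node visited earlier in the session, with a reward proportional to both activation levels that decays with the number of intermediate steps. The exponential decay, the per-node activation scores and the parameter value are assumptions for illustration only; the sketch reuses the weights map introduced above.

```python
DECAY = 0.7  # assumed decay factor per intermediate step

def register_selection(session, node, activation):
    """
    session: list of (earlier_node, earlier_activation) pairs for the current
    user session, in visiting order. activation is a preference score for node
    (e.g. inferred from reading time or a rating) -- an assumed input here.
    """
    for steps_back, (earlier, earlier_act) in enumerate(reversed(session), start=1):
        # co-activation: product of the two activation levels,
        # decaying with the number of intermediate steps
        coactivation = earlier_act * activation * DECAY ** (steps_back - 1)
        weights[earlier][node] += coactivation
    session.append((node, activation))
```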

Experiments and application

We have implemented such a learning network in our adaptive hypertext experiment (see an early proposal for the experiment for more about the underlying philosophy). The resulting associative network can be used to guide software agents or "thought processes" through spreading activation. New nodes in the network can be generated through a process of knowledge structuring, where clusters of similarly linked nodes are integrated or identified. An extension of such algorithms to the World-Wide Web as a whole might produce an intelligent, self-organizing distributed network, similar to a global brain.
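
As an illustration of how activation might spread through such an associative network to guide an agent or "thought process", a minimal sketch of spreading activation; the normalization, threshold and number of steps are assumptions, not the parameters of the actual experiment:

```python
def spread_activation(weights, sources, steps=3, threshold=0.01):
    """
    Spread activation from the source nodes through the weighted network.
    weights: nested dict, weights[a][b] = strength of the link a -> b.
    sources: dict mapping initially activated nodes to activation levels.
    Returns the activation reached by other nodes after `steps` iterations.
    """
    activation = dict(sources)
    for _ in range(steps):
        spread = {}
        for node, act in activation.items():
            if act < threshold:
                continue  # ignore nodes that are only weakly activated
            outgoing = weights.get(node, {})
            total = sum(outgoing.values()) or 1.0
            for neighbour, strength in outgoing.items():
                # pass activation on in proportion to relative link strength
                spread[neighbour] = spread.get(neighbour, 0.0) + act * strength / total
        activation = spread
    return activation
```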



Copyright © 2005 Principia Cybernetica

Author
J. Bollen & F. Heylighen

Date
Dec 21, 2005 (modified)
Sep 10, 1996 (created)

