The Text Encoding Initiative Guidelines, used for the electronic encoding of literary and linguistic texts and based on XML syntax since 2001, are being adopted by an ever-growing number of digitization projects. As the number of digital texts and collections grows, the availability of TEI-aware software tools is also increasing, both tools developed in academic and library contexts and tools adapted from mainstream Information Technology frameworks. The choice is now much wider than it was just a few years ago, but this can also cause confusion when selecting the right tool, because each piece of software has its own peculiarities, which should be evaluated and compared against the characteristics of the texts being encoded and the general needs and aims of the project of which the digital library is part. What is the main aim of a project? Is it visualization, perhaps with a multiple-output feature? Is general full-text search enough, or is some form of advanced textual analysis also required? Is it possible to combine all these aspects? And if somebody has already found a solution to our problems, how can we find it on the net? This paper analyzes the possibility of classifying the currently available TEI management systems using Topic Maps technology, an ISO standard for the management and representation of knowledge. A Topic Map is based on the definition of a general topic, the particular and concrete occurrences of that topic, and the associations between different topics; it is therefore well suited to producing a complete classification covering the various aspects, from the most technical, such as the programming languages used or the technical requirements, to the functionalities of visualization, text search, and analysis.
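The topic/occurrence/association model described above can be sketched as a small data structure. The following is a minimal, illustrative sketch only, not an implementation of the ISO 13250 standard; the tool names and feature topics used here are hypothetical placeholders, not tools evaluated in the paper.

```python
# Minimal sketch of the Topic Maps model (topics, occurrences,
# associations) applied to classifying TEI tools.
# All tool names and feature topics below are hypothetical examples.
from collections import defaultdict


class TopicMap:
    def __init__(self):
        # topic -> list of concrete occurrences (here, tool names)
        self.occurrences = defaultdict(list)
        # topic -> set of associated topics
        self.associations = defaultdict(set)

    def add_occurrence(self, topic, resource):
        self.occurrences[topic].append(resource)

    def associate(self, a, b):
        # associations link topics in both directions
        self.associations[a].add(b)
        self.associations[b].add(a)

    def tools_for(self, topic):
        # collect occurrences of the topic and of its associated topics
        topics = {topic} | self.associations[topic]
        return sorted(r for t in topics for r in self.occurrences[t])


tm = TopicMap()
tm.add_occurrence("full-text search", "ToolA")   # hypothetical tool
tm.add_occurrence("visualization", "ToolB")      # hypothetical tool
tm.associate("full-text search", "textual analysis")

print(tm.tools_for("textual analysis"))  # → ['ToolA']
print(tm.tools_for("visualization"))     # → ['ToolB']
```

A query for "textual analysis" also returns tools filed under the associated topic "full-text search", which illustrates why associations make a Topic Map more useful for tool selection than a flat feature list.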