In this blog, I am going to reflect briefly on the last few sessions of DITA. These sessions offered a more in-depth look at how information specialists manage ever-growing masses of data, with a particular focus on metadata, and at how data are then searched and retrieved on request. It was interesting to see the evolution of the methods and interfaces designed to tackle this growth. Metadata and linked data were, for me, the most interesting topics covered in these sessions, and this blog post will be devoted mostly to these two aspects.
Metadata, as defined by Bawden and Robinson (2012, p.108), is data about data: best understood as short, structured and standardised descriptions of information resources. It is essentially a summary of the key facts about a resource, designed to help others find and manage information far more efficiently (Coyle, 2005). Without metadata, it would be almost impossible to find the specific data required on request; we would simply be presented with masses of data and few ways to filter through them. It would also be very difficult to credit the authors of particular works, which is one of the more important objectives of metadata. In class, we touched upon how a drawing had been redistributed so many times that no one knew who its original author was. In response, specialists are trying to build a protocol that combines an image with the information relevant to it, so that the information is not lost when the file is shared with other recipients. It will be exciting to see how the integration of metadata and images in a centralised database develops over the next few years.
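To make the idea of "short, structured and standardised descriptions" concrete, here is a minimal sketch in Python. The field names loosely follow the Dublin Core style and the records are invented for illustration; the point is simply that structured metadata lets us filter a collection by a field rather than wade through the data itself.

```python
# Two invented metadata records: short, structured, standardised
# descriptions of information resources (Dublin-Core-style field names).
records = [
    {"title": "Introduction to Information Science",
     "creator": "Bawden, D. and Robinson, L.",
     "date": "2012", "type": "book"},
    {"title": "Understanding Metadata and Its Purposes",
     "creator": "Coyle, K.",
     "date": "2005", "type": "article"},
]

def search(records, field, value):
    """Return every record whose given metadata field contains the value."""
    return [r for r in records if value.lower() in r.get(field, "").lower()]

# Filtering on a structured field, rather than scanning raw data:
print([r["title"] for r in search(records, "creator", "coyle")])
```

Because every record uses the same standardised fields, the same one-line filter works for any attribute (creator, date, type), which is exactly the time saving the paragraph above describes.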
We also discussed library cataloguing as a form of metadata. As Panizzi put it, ‘The first and chief object of a catalogue… is to give an easy access to the works which form part of the collection’ (Miller, 1979, p.5). Library cataloguing (or bibliographic metadata) has evolved significantly since Panizzi’s ninety-one rules of 1839. The second edition of the Anglo-American Cataloguing Rules (AACR2), published in 1978, became the most widely used cataloguing code in history, as it offered both non-English language versions and an expandable framework for new media.
The importance of metadata is even greater when we consider how linked data could embrace and extend metadata services. Linked data, introduced by Berners-Lee in 2006, involves connecting related data across the web using a common identification system, so that links between datasets can be understood by both humans and machines. It is one of the most successful and visible parts of the Semantic Web and has been advancing since 2009 (Zeng & Qin, 2016, p.278). Berners-Lee set out four principles for publishing linked data: use URIs as names for things; use HTTP URIs so that people can look up those names; when someone looks up a URI, provide useful information, using the standards (RDF, SPARQL); and include links to other URIs so that people can discover more things.
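The four principles can be sketched as a toy example. Everything below is invented for illustration (the example.org URIs, the statements about them, and the `look_up` function standing in for an HTTP lookup); real linked data would be published as RDF and queried with SPARQL. The sketch only shows the shape of the idea: things are named with URIs, looking a URI up yields useful statements, and those statements link onward to further URIs.

```python
# Invented "datasets": each URI names a thing, and dereferencing it
# returns (subject, predicate, object) statements about that thing.
DATA = {
    "http://example.org/book/metadata": [
        ("http://example.org/book/metadata", "title", "Metadata"),
        ("http://example.org/book/metadata", "author",
         "http://example.org/person/zeng"),
    ],
    "http://example.org/person/zeng": [
        ("http://example.org/person/zeng", "name", "Marcia Lei Zeng"),
    ],
}

def look_up(uri):
    """Stand-in for an HTTP lookup: return the statements published at a URI."""
    return DATA.get(uri, [])

# Principle four in action: an object that is itself a URI is a link
# we can follow to discover more things.
for subj, pred, obj in look_up("http://example.org/book/metadata"):
    if obj in DATA:                    # the object is another named thing
        print(look_up(obj))            # follow the link into the next dataset
```

The machine-readable part is simply that objects which are themselves URIs can be dereferenced in turn, so a program can hop from a book record to its author record without any human guidance.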
But how is linked data important, or even relevant, to libraries? Firstly, it lets people find library resources on the Web more easily. Google has already begun using linked data to improve reference-style searches (for example, searching for “movies” on Google returns a list of films playing near you). Linked data has also opened opportunities for cataloguing efficiency and innovation in libraries; the German National Library, for example, began publishing linked data in 2010.
Bawden, D. and Robinson, L. (2012). Introduction to Information Science. Facet Publishing: London.
Berners-Lee, T. (2006). Linked Data. [Online] Available at: https://www.w3.org/DesignIssues/LinkedData.html
Coyle, K. (2005). Understanding Metadata and Its Purposes. The Journal of Academic Librarianship, Vol. 31, No. 5.
Miller, E. (1979). Antonio Panizzi and the British Museum. The British Library Journal, Vol. 5, No. 1.
Zeng, M.L. and Qin, J. (2016). Metadata, 2nd ed. Facet Publishing: London.