An overview of Semantic Web activities in the OMRAS2 project†

György Fazekas‡1, Yves Raimond2, Kurt Jacobson1, and Mark Sandler1
1 Queen Mary University of London, Centre for Digital Music
2 BBC Future Media & Technology
Abstract
The use of cultural information is becoming increasingly important in music information research, especially in music retrieval and recommendation.
While this information is widely available on the Web, it is most commonly
published using proprietary Web APIs. The Linked Data community aims to resolve the
incompatibilities between these diverse data sources by building a Web of data using
Semantic Web technologies. The OMRAS2 project has made several important contributions
to this effort by developing an ontological framework and numerous software tools, as well
as publishing music-related data on the Semantic Web. These data and tools have found
uses beyond their originally intended scope. In this paper, we first
provide a broad overview of the Semantic Web technologies underlying this
work. We describe the Music Ontology, an open-ended framework for communicating musical information on the Web, and show how this framework
can be extended to describe specific sub-domains such as music similarity,
content-based audio features, musicological data and studio production. We
describe several data-sets that have been published and data sources that
have been adapted using this framework. Finally, we provide application
examples ranging from software libraries to end user Web applications.
Introduction
From the management of personal media collections to the construction of large content
delivery services, information management is a primary concern for multimedia-related technologies. However, until recently, solutions that emerged to address these concerns
existed in isolation. For example, large online databases such as Musicbrainz1, encyclopedic
sources like Wikipedia2, and personal music collection management tools such as iTunes3
† Manuscript published in the Journal of New Music Research (JNMR) Vol. 39, No. 4, pp. 295–310.
‡ Correspondence should be addressed to [email protected].
1 http://musicbrainz.org/
2 http://www.wikipedia.org/
3 http://www.apple.com/itunes/
or Songbird4 do not interact with each other, although they can deal with similar kinds of
data. Consequently, information managed by one of these solutions may not benefit from
information held by any of the other solutions.
This problem becomes acute when narrowing our view to the exchange of results between
music technology researchers. While providing access to content-based feature extraction through
web services is a first step (McEnnis, McKay, & Fujinaga, 2006; Ehmann, Downie, & Jones,
2007), the results these services produce must be interlinked with other data sources in order for
them to be meaningful. Returning a set of results about a particular digital audio item is of little
use unless we know what has been processed and how (Raimond & Sandler, 2008).
The OMRAS2 project has made several key contributions in creating a distributed music
information environment linking music-related data held by previously independent systems.
In order to achieve this goal, we used a set of Web standards, often referred to as Semantic
Web technologies.
In the rest of this paper we first give an overview of Web standards mentioned above, and
show how they can be used to create a ‘Web of data’—a distributed, domain-independent,
web-scale database. We then describe the Music Ontology, which enables the publishing of music-related information on the Web within a unified framework. Next, we show how this framework can be applied to problems raised by the music information retrieval (MIR) research
community, and describe how it is used to produce more interoperable and more reproducible
research data. We also describe several music-related datasets published and interlinked
within this framework, and discuss a number of software tools developed within OMRAS2
for these purposes. Finally, we describe applications making use of these data and provide
examples of queries some of these tools can answer.
An Introduction to Semantic Web Technologies
One of the key enabling concepts in the success of the World Wide Web is the Uniform
Resource Identifier or URI. It solves the problem of identifying and linking resources (web
pages, data, or services) in a simple and efficient manner. Together with the access mechanism
of HTTP5 , it enables the formation of a large interlinked network of documents: the Web as
we know and use it today. However, this infrastructure is not yet used as widely and effectively
as it could be. In particular, the flow of data and access to networked services are cluttered by
incompatible formats and interfaces. The vision of the Semantic Web (Berners-Lee, Hendler,
& Lassila, 2001) is to resolve this issue, in the wider context of bringing intelligence to the
Web, by creating a “Giant Global Graph” of machine-interpretable data.
Since information on the Web can stand for just about anything, developers of the Semantic Web are faced with a major challenge: How to represent and communicate diverse
information so that it can be understood by machines? The answer lies in standardising how
information is published rather than trying to arrange all human knowledge into rigid data
structures. Based on this recognition, several remarkable yet simple technologies emerged
promoting the development of the Semantic Web.
Creating interoperable online services for semantic audio analysis and the recognition
of various musical factors are among the main objectives of the OMRAS2 project. We believe that this musical information is just as diverse as information expressed on the general
Web. Moreover, we cannot presume to plan for all possible uses of our system components.
4 http://www.getsongbird.com/
5 The Hypertext Transfer Protocol provides basic methods for obtaining resources identified by HTTP URIs.
Therefore, we need data structures that are interoperable with other systems, and extensible
even by end-users. This poses problems similar to building the Semantic Web itself, hence
the development of Semantic Web ontologies and the use of technologies produced in that
community became key elements in the project.
RDF, Ontologies and Linked Data
Anything can be identified using a Web URI, not just a document: a person, a particular
performance, a composer, a digital signal, and so on. These Web resources may have multiple
associated representations. For example, a URI identifying a particular composer can be
associated with an HTML page giving some information about them in English, another page
giving the same information in French, or a page suited for a mobile device. The web aspect
comes into play when other web identifiers are mentioned within such representations. For
example, the HTML representation of a composer might link to a URI identifying one of
their compositions. When these representations are structured, they provide explicit machine-processable information.
The Resource Description Framework6 (RDF) allows for such structured representations to
be made. It expresses statements about web resources in the form of triples: subject, predicate
and object. When these representations quote other resource identifiers, they enable access to
corresponding structured representations, creating a Web of structured data. For example, the
URI http://www.bbc.co.uk/music/artists/10000730-525f-4ed5-aaa8-92888f060f5f#artist identifies a particular artist. Two possible representations of that URI are available. One is in
HTML and can be rendered for human consumption through a traditional web browser; the
other is structured and machine readable, holding explicit statements about this artist. The
RDF representation, requested via the HTTP Accept header, holds among others the RDF
statements shown in the example of listing 1.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix mo: <http://purl.org/ontology/mo/> .
<http://www.bbc.co.uk/music/artists/10000730-525f-4ed5-aaa8-92888f060f5f#artist>
rdf:type mo:MusicArtist ;
owl:sameAs <http://dbpedia.org/resource/Bat_for_Lashes> .
Listing 1 Linking two resources representing a music artist.
In our example, the explicitly written URI identifies a web resource representing an artist.
We make two statements (‘triples’) about this artist. The first triple, where the predicate
rdf:type and object mo:MusicArtist are written using the namespace prefix notation7 , expresses the fact that this resource is a music artist. The second triple after the semicolon8
6 Resource Description Framework specification: http://www.w3.org/RDF/
7 Such URI references are expanded using a namespace declaration after a @prefix directive like the ones in our example. A prefix can also remain empty, in which case it is bound to the local namespace of the file. In the rest of the paper namespace prefixes will be omitted for brevity.
8 The semicolon can be used to group RDF statements about the same resource.
refers to the same resource: our artist. We can then follow the owl:sameAs link to a resource within DBpedia9, which holds structured data extracted from Wikipedia, or follow the
rdf:type link to get more information about what mo:MusicArtist means.
All RDF examples in this paper are written in RDF/Turtle10 . Other common serialisation
formats include RDF/XML11 and the conceptually broader Notation 3 (N3) language12 which
also allows for the representation of logical rules.
In itself, RDF is a conceptual data model which provides the flexibility and modularity
required for publishing diverse semi-structured data—that is, just about anything on the
Semantic Web. RDF is also the basis for publishing extensible data schema through the use
of OWL (Web Ontology Language). For example, when accessing a structured representation
of the mo:MusicArtist resource we get an OWL document detailing different concepts in
the music domain, as well as the relationships between them (e.g. an artist has a name and
can be involved in a performance) and links to concepts defined within other ontologies, for
example, the fact that mo:MusicArtist subsumes foaf:Agent13 .
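To give a flavour of what such an ontology document contains, the following Turtle sketch shows the kind of axioms involved: a class axiom stating that every music artist is also a FOAF agent, and a property relating a performance to the agents performing in it. This is a simplified, illustrative extract rather than the normative specification, which should be consulted for the exact definitions.

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix mo:   <http://purl.org/ontology/mo/> .

# Every music artist is also an agent in the FOAF sense.
mo:MusicArtist a owl:Class ;
    rdfs:label "music artist" ;
    rdfs:subClassOf foaf:Agent .

# Performances are related to the agents performing in them.
mo:performer a owl:ObjectProperty ;
    rdfs:label "performer" ;
    rdfs:domain mo:Performance ;
    rdfs:range foaf:Agent .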
Linking Open Data
The open data movement aims at making data freely available to everyone. The published
data sources cover a wide range of topics: from music (Musicbrainz, Magnatune14 and Jamendo15 ) to encyclopedic information (Wikipedia) or bibliographic information (Wikibooks16 ,
DBLP bibliography17 ).
A growing number of datasets are published by the Linking Open Data community covering a wide range of topics (Figure 1). For instance, the DBpedia (Auer et al., 2007) project
extracts structured information from fact boxes within the Wikipedia community-edited encyclopedia. The Geonames dataset18 exposes structured geographic information. The BBC
datasets (Kobilarov et al., 2009) cover a wide range of information, from programmes19 to
artists and reviews20 . We contributed to the ‘Linking Open Data on the Semantic Web’ community project (Bizer, Heath, Ayers, & Raimond, 2007) of the W3C Semantic Web Education
and Outreach group21 , which aims at making such data sources available on the Semantic Web
and creating links between them using the technologies described in the previous sections.
Creating bridges between previously independent data sources paves the way towards a
large machine-processable data web, gathering interlinked Creative Commons licensed content22 , encyclopedic information, domain-specific databases, taxonomies, cultural archives,
and so on.
9 DBpedia project: http://dbpedia.org/
10 http://www.w3.org/TeamSubmission/turtle/
11 http://www.w3.org/TR/REC-rdf-syntax/
12 Notation 3 language and syntax: http://www.w3.org/DesignIssues/Notation3.html
13 Here, x subsumes y if all elements of x are also elements of y.
14 http://magnatune.com/
15 http://www.jamendo.com
16 http://wikibooks.org/
17 http://dblp.uni-trier.de/
18 http://geonames.org/
19 http://www.bbc.co.uk/programmes
20 http://www.bbc.co.uk/music
21 http://www.w3.org/2001/sw/sweo/
22 http://creativecommons.org/
Figure 1. Datasets published by the Linking Open Data community, September 2010. Each node
corresponds to a particular dataset. Diagram by Richard Cyganiak, and available on-line at
http://richard.cyganiak.de/2007/10/lod/
Accessing and Publishing Linked Data
Besides the representation of data in RDF, a standard way of accessing information is an
important aspect of Linked Data. A recent W3C recommendation, the SPARQL23 Protocol and
RDF Query Language, allows complex joins over disparate RDF databases in a single query. A
Web interface executing these queries is commonly referred to as a SPARQL end-point. The
language can be used in a multitude of ways. In the simplest case, the query — consisting
of triple patterns — is matched against a database. Results are then composed of variable
bindings of matching statements based on a select clause specified by the user. For example,
the query shown in listing 2 retrieves all triples about the DBpedia resource Bill Evans. In
this example, the HTTP URI identifies the artist in DBpedia’s database. The terms starting
with a question mark are free variables that are matched when the query is evaluated.
SELECT ?predicate ?object
WHERE { <http://dbpedia.org/resource/Bill_Evans> ?predicate ?object . }
Listing 2 A simple SPARQL query.
23 http://www.w3.org/TR/rdf-sparql-query/
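Queries may also join several triple patterns that share variables. The sketch below, written against the DBpedia end-point, lists musical artists together with their names and birth places; the class and property names (dbo:MusicalArtist, dbo:birthPlace) are taken from the DBpedia ontology and may need adjusting to the version of the dataset being queried.

PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?artist ?name ?birthPlace
WHERE {
  ?artist a dbo:MusicalArtist ;
          foaf:name ?name ;
          dbo:birthPlace ?birthPlace .
}
LIMIT 10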
Using SPARQL is the easiest way of accessing the Semantic Web from an application,
while creating an end-point is a standard way of publishing data. Most modern programming
languages have SPARQL libraries, and several open-source RDF stores24 are available for
creating an end-point. The standardisation and increasing support of SPARQL promotes the
adoption of RDF itself as a prevailing metadata model and language.
Using the Music Ontology Framework
In this section, we review the Music Ontology framework. Rather than focusing on specifications and design decisions already provided elsewhere (Raimond, Abdallah, Sandler, &
Giasson, 2007), we argue for common ontology-based information management in MIR, and
show how our framework can be used to describe diverse music related information. This
includes the composition and music publishing workflow, as well as editorial information, musicological data, and content-based features of audio. The penultimate section of the paper
discusses various musical applications.
Utilities of an ontology-based data model in music applications
Bridging the semantic gap, integrating computational tools and frameworks, and a stronger
focus on the user can be cited among the most important future challenges of music information research (Casey et al., 2008). Prevailing machine learning tools based on statistical
models25 provide good solutions to particular problems in MIR; however, they give little insight into our understanding of the musical phenomena they capture. In other words, they
do not easily allow us to close the semantic gap between features and computational models
on one hand, and musicological descriptors or human music perception on the other. While
cognitive modelling, the use of contextual metadata in MIR algorithms, and the use of high-level reasoning are promising future directions (for a recent example see Wang, Chen, Hu, &
Feng, 2010), some common agreement on how knowledge and information are represented in
different systems is a prerequisite for building on previous work by other researchers, or for deploying
complex systems. For example, the use of various socio-cultural factors such as geographical
location, cultural background, gender, faith, political or sexual orientation in problems like
music recommendation, artist similarity, popularity measurement (hotness) or playlist generation
is important for answering problems where solutions based solely on editorial metadata,
social tags, or content-based features are insufficient26.
Collins (2010) presents a musicological study where influences on composition
are discovered through the use of the types of socio-cultural information mentioned above,
combined with content-based audio similarity. While using traditional techniques such as Web
scraping27, proprietary APIs of online sources like Last.fm28 or EchoNest29, and content-based
feature extractor tools such as Marsyas30 (Tzanetakis & Cook, 2000) or jAudio31 (McEnnis,
McKay, Fujinaga, & Depalle, 2005) is feasible for performing such studies, building
robust MIR systems on these principles could be made easier through agreement on
24 Commonly used RDF stores include Openlink's Virtuoso, Garlik's 4Store, Joseki or the D2R Server.
25 e.g. Support Vector Machines, Gaussian Mixture Models, Hidden Markov Models, or Bayesian Networks.
26 For more details, see sections on personal collection management and music recommendation in this paper.
27 Extracting information from Web pages using for example natural language processing.
28 Last.fm API: http://www.last.fm/api
29 The EchoNest: http://echonest.com/
30 Marsyas: http://marsyas.info/
31 jAudio: http://jmir.sourceforge.net/jAudio.html
how diverse music-related data is represented and communicated. As a result of such an
agreement, the laborious process of aggregating information would be reduced to making
simple queries to a widely distributed resource of Linked Data on the Semantic Web.
The integration of computational tools through the use of interoperable software components and extensible metadata management was the main focus of the OMRAS2 project. We
describe this framework through examples, and show how it supports reproducible research
as well as combining different music analysis tools and data resources.
Finally, the collection and management of metadata in music production tools can be mentioned in the context of user focus or enhanced usability, and as another use case of the Music
Ontology. Primary reasons for collecting metadata in music production include the effective
organisation of musical assets, such as sounds in a sample library, or previously recorded takes
of multitrack master recordings. There are several existing metadata standards and formats
which could be used in these applications. MPEG-732, SDIF33, ACE XML34 (McKay, Burgoyne, Thompson, & Fujinaga, 2009) and AAF35 are prominent examples. These existing
formats, however, were designed for different purposes covering overlapping musical domains,
without a common higher-level metadata model. This seriously impairs the exploitation of
metadata in ubiquitous creative applications for music production, as none of the standards
cover editorial data, workflow data and content description in a uniform and extensible framework. For a more detailed analysis see (Fazekas & Sandler, 2009). A part of the problem
can be identified in the use of non-normative development and publishing techniques when
creating new metadata formats, rather than flaws in design. For example, using XML to
standardise the syntax does not provide sufficient grounds for interoperability between musical applications or research tools, as further adaptation to the specific data model used in
each application is required.
The Music Ontology Framework
The power of RDF lies in the simplicity of its underlying data model: the decomposition
of knowledge into statements consisting of subject, predicate and object terms. However,
the components of these statements (resources and literals naming concepts or relationships)
may be selected in an ad-hoc manner. In order for our data to be meaningful for others,
and to avoid ambiguities, we need to be able to define and later refer to concepts such as a
song, a composer, or an audio processing plugin and its parameters. We also have to specify
relationships that are pertinent in our application, such as the association of a musical piece
with a composer. Creating an extensible ontological framework is among the first steps required
for producing interoperable research data sets and algorithmic components (for example, to
support reproducible research), as well as for interlinking existing musical data on the Web.
The Music Ontology serves this purpose (Raimond et al., 2007).
From this perspective, there are two important aspects of this ontology. The main parts
can be used to describe the music production workflow in a broad sense—that is, the process of composition, performance and the creation of a particular recording. The second
important aspect is its extensibility. Several additions were produced in OMRAS2 including
sub-ontologies for describing music similarity, audio analysis algorithms and their output features, and musicological data such as chord progressions, keys or musical temperament. In
32 Overview of MPEG-7: http://mpeg.chiariglione.org/standards/mpeg-7/mpeg-7.htm
33 Sound Description Interchange Format: http://sdif.sourceforge.net/
34 ACE XML: http://jmir.sourceforge.net/index ACE XML.html
35 Advanced Authoring Format: http://www.amwa.tv/publications.shtml
this section we briefly review the Music Ontology and some of its extensions relevant to the
MIR community. The formal specification of the ontology is available online.36
Core Elements of the Music Ontology
The Music Ontology is built on several small ontologies specific to well-bounded domains.
This includes the Timeline Ontology37 , the Event Ontology38 , the Functional Requirements
for Bibliographic Records (FRBR) Ontology39 , and the Friend Of A Friend (FOAF) Ontology40 .
The Timeline Ontology is used to express instants and intervals on multiple concurrent
timelines, for example, backing an audio signal, a performance, or a musical score. The Event
Ontology is used to classify space–time regions. Using both the Event and the Timeline
ontologies, we are able to express information such as ‘at that particular time, the piano player
was playing that particular chord’. The FRBR Ontology defines a layering in abstraction from
intellectual work to its physical manifestation. Finally, the FOAF Ontology covers people,
groups and organisations.
The Music Ontology extends these ontologies with music-specific terms. These terms
can be used for a wide range of use-cases, from expressing simple editorial data such as tracks
and album releases to the description of complex music creation workflows, from composition
to the recording of a particular performance of a work. On the highest level, it also provides
for the detailed description of complex events. However, the Music Ontology does not cover
everything we can say about music; rather, it provides extension points onto which more specific
domain models can be plugged. It provides a framework for describing audio signals and
temporal segmentations for example, but more specific ontologies extend it to describe audio
analysis results in detail. In the following, we give an account of the extensions most relevant to
common MIR use cases and applications.
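To give a flavour of the core terms, the following Turtle sketch describes a simple editorial scenario: an artist who made a record containing a single track. The terms are drawn from the Music Ontology, FOAF and Dublin Core; resources written with a leading colon are local placeholders rather than real web identifiers.

@prefix mo:   <http://purl.org/ontology/mo/> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

:some_artist a mo:MusicArtist ;
    foaf:name "Some Artist" ;
    foaf:made :some_record .

:some_record a mo:Record ;
    dc:title "Some Record" ;
    mo:track :track_1 .

:track_1 a mo:Track ;
    dc:title "Opening Track" .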
Extensions to the Music Ontology Framework
Chords and Musicological Features. A web ontology for expressing musical chords can be
useful in many applications, for example, finding specific chord progressions in a music library
or building an online chord recognition system used in conjunction with the chord symbol
service described in this section. First, however, we describe the chord ontology41. A brief
outline of other extensions for publishing musicological features such as tonality or musical
temperament is provided last.
The chord ontology is grounded in the symbolic representation of musical chords described
by Harte, Sandler, Abdallah, and Gomez (2005). A chord is defined, in the
most general case, by a root note and some constituent intervals. A chord inversion may be
indicated by specifying the interval from the root to the bass pitch class. The ontology also
defines a way to build chords on top of other chords. The concepts and relationships in this
ontology are depicted in Figure 2 (Raimond, 2008). For example, we can use this ontology to
represent a D sharp minor with added ninth and missing minor third with the fifth being the
36 The Music Ontology Specification: http://musicontology.com/
37 The Timeline Ontology Specification: http://purl.org/NET/c4dm/timeline.owl
38 The Event Ontology Specification: http://purl.org/NET/c4dm/event.owl
39 The FRBR Ontology Specification: http://vocab.org/frbr/core
40 The FOAF Ontology Specification: http://xmlns.com/foaf/spec/
41 The Chord Ontology Specification: http://purl.org/ontology/chord/
bass note. This chord is depicted in Figure 3 in standard musical notation, while Figure 4
shows its graph representation using the chord ontology.
Figure 2. Depiction of the main concepts and relationships in the Chord Ontology
Figure 3. D sharp minor over the fifth with added ninth and missing minor third.
We designed a namespace able to provide RDF representations using this ontology for
chords expressed within Harte’s notation. The example depicted in Figure 4 can be accessed at the URI http://purl.org/ontology/chord/symbol/Ds:min7(*b3,9)/5, which
itself corresponds to the chord symbol Ds:min7(*b3,9)/5 in Harte’s shorthand notation.
In this notation, the root note is written first followed by a colon and a shorthand for
chords common in western music. This can be seen as a label associated with pre-set chord
components as defined in Harte’s paper (Harte et al., 2005). Extra or missing intervals can
be contained by parentheses with missing degrees (that would normally be present in a minor
chord for example) denoted using an asterisk. The bass note may optionally be specified after
a forward slash. The ontology also defines a chord event concept subsuming the event concept
in the Event ontology described in the previous section. We may use these events to classify
temporal regions corresponding to particular chords.
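As an illustrative sketch of how these pieces fit together, the snippet below classifies a temporal region on a signal timeline as a chord event and points it at the chord resource served by the namespace above. The class chord:ChordEvent, the chord:chord property and the interval properties (tl:beginsAtDuration, tl:durationXSD) are assumptions based on our reading of the published ontologies and should be checked against their specifications.

@prefix chord: <http://purl.org/ontology/chord/> .
@prefix event: <http://purl.org/NET/c4dm/event.owl#> .
@prefix tl:    <http://purl.org/NET/c4dm/timeline.owl#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .

# A chord event covering four seconds of the signal timeline,
# linked to the D sharp minor chord discussed in the text.
:chord_event_1 a chord:ChordEvent ;
    chord:chord <http://purl.org/ontology/chord/symbol/Ds:min7(*b3,9)/5> ;
    event:time [
        a tl:Interval ;
        tl:timeline :signal_timeline ;
        tl:beginsAtDuration "PT12S"^^xsd:duration ;
        tl:durationXSD "PT4S"^^xsd:duration ;
    ] .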
Besides describing chords, the full framework of ontologies also provides for expressing
tonality42 , symbolic music notation43 and the particularities of instrument tuning44 used
42 The Tonality Ontology: http://motools.sourceforge.net/doc/tonality.html
43 The Symbolic Music Ontology: http://purl.org/ontology/symbolic-music/
44 The Temperament Ontology: http://purl.org/ontology/temperament/
Figure 4. Representation of a D sharp minor over the fifth with added ninth and missing third.
Shaded parts of the graph correspond to RDF statements that may be accessed using the corresponding
web identifiers.
in a recording. Additional ontologies covering content-based features and audio analysis
algorithms are described next.
Content-based Audio Features. The Audio Features ontology45 (AF) can be used to express
both acoustical and musicological data. It allows for publishing content-derived information
about musical recordings. This information can be used to share data sets such as the
ones described in (Mauch et al., 2009), or to make software components such as Sonic
Visualiser46 and Sonic Annotator47 interoperate.
The ontology provides concepts such as Segment or Beat. For instance, it can be used
to describe a sequence of key changes corresponding to key segments in an audio file. The
three main types of features we may express are time instants such as onsets; time intervals, for instance the temporal extent of an audio segment with a specific musical key; and
finally dense features which can be interpreted as signals themselves, such as spectrograms
and chromagrams. The relationships between the core concepts of this ontology are summarised
in Figure 5.
As an example of using the AF ontology, we may describe a note onset. This is shown
in the RDF snippet of listing 3. Here, the first statement declares a timeline which can be
used to represent the temporal extent of an audio signal. We simply state that the resource
:signal_timeline is an instance of the tl:Timeline class.48 Note that in the Turtle syntax
the keyword a is used as a shorthand for the rdf:type predicate.
Figure 5. Core Audio Features Ontology
Next, we declare an onset resource, an instance of af:Onset. Using the event:time predicate
we associate it with a particular time instant placed on the signal timeline. The square bracket
45 The Audio Features Ontology: http://motools.sourceforge.net/doc/audio features.html
46 Sonic Visualiser is a program to display content-based audio features.
47 Sonic Annotator is a batch feature extractor tool. We provide detailed descriptions in Applications.
48 The association of this timeline with an actual signal resource is omitted for brevity.
:signal_timeline a tl:Timeline .
:onset_23 a af:Onset ;
event:time [
a tl:Instant ;
tl:timeline :signal_timeline ;
tl:at "PT1.710S"^^xsd:duration ;
] .
Listing 3 Describing a note onset relative to the signal timeline.
notation represents a blank or unnamed resource which is used here for convenience. The
group of statements in brackets declares the class membership of this instance, associates it
with the signal timeline and states its exact position relative to the beginning of the signal
using the XML Schema Duration datatype.49 Note that onsets are modelled as instantaneous
events for simplicity. See (Bello et al., 2005) for the interpretation of onset times. The Audio
Features ontology does not presently provide exhaustive coverage of all possible features we may wish to
express; however, it subsumes basic concepts from the Timeline and Event ontologies. If we
need to publish features that have no predefined terms in the AF ontology, we can synthesise
a new class within an RDF document as a subclass of an appropriate term in the lower-level
ontologies mentioned above. This ensures that our features can be interpreted correctly as
time-based events, even if specific semantic associations are not yet available.
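A sketch of this pattern is shown below: a hypothetical 'tempo change' feature, for which we assume no dedicated Audio Features term exists, is declared locally as a subclass of the generic event concept and then anchored on the signal timeline in the same way as the onset in listing 3.

@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix event: <http://purl.org/NET/c4dm/event.owl#> .
@prefix tl:    <http://purl.org/NET/c4dm/timeline.owl#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .

# A custom feature class defined locally as a subclass of the generic event concept.
:TempoChange a owl:Class ;
    rdfs:subClassOf event:Event ;
    rdfs:comment "Marks a point where the estimated tempo changes." .

# An instance of the new feature, placed on the signal timeline as in listing 3.
:tempo_change_1 a :TempoChange ;
    event:time [
        a tl:Instant ;
        tl:timeline :signal_timeline ;
        tl:at "PT42.500S"^^xsd:duration ;
    ] .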
Audio Signal Processing. While the Audio Features ontology can be used to publish both
semantic and signal processing features50 extracted from audio content, it does not provide
for the description of algorithms that were used to produce these results in the first place.
We believe that this kind of provenance information is equally valuable, and, when it comes
to sharing research data, it is indispensable if we are to make meaningful comparisons. Whereas
the ontology presented here is specific to our Vamp API format (Cannam, 2009), it provides
an example of linking algorithms, parameters and results in a traceable manner51 .
Vamp plugins52 are a set of audio analysis plugins using a dedicated API developed in the
OMRAS2 project. The corresponding Vamp Plugin Ontology53 is used to express information
about feature extraction algorithms wrapped as Vamp plugins. The most useful aspect of this
ontology is the association between plugin outputs and Audio Features ontology terms in order
to describe what they return. These may be distinct event types like note onsets, features
describing aspects of the whole recording such as an audio fingerprint, or dense signal data
such as a chromagram.
Besides describing analysis results, it is important to denote specific parameters of audio
analysis algorithms. The Vamp Transform Ontology accommodates this need. Figure 6 shows
its basic, open-ended model.
49 See http://www.w3.org/TR/xmlschema-2/ for details on built-in XML Schema datatypes.
50 The output of DSP algorithms that produce representations of a music signal that have no corresponding interpretation in the music domain, such as Cepstral or Wavelet coefficients.
51 For a concrete application please see the description of SAWA in the section about Automated Audio Analysis on the Web.
52 Vamp plugins: http://vamp-plugins.org/download.html
53 The Vamp Plugin and Transform Ontologies: http://omras2.org/VampOntology
Figure 6. The Vamp Transform Ontology
This ontology, although it is conceptually separate, was published together with the Vamp
Plugin ontology. It contains terms to describe how a plugin may be configured and run. It
can be used to identify parameter values and other details such as audio block and step sizes.
This information is expressed and stored using the same RDF format as the results without
imposing any additional encoding requirements. Therefore, an agent reading these results will
have certainty about how they were generated (Fazekas, Cannam, & Sandler, 2009). This
valuable property supports reproducible experiments, a central concern of MIR and
of all scientific research.
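The sketch below illustrates the shape of such a description: a transform resource referring to a plugin and one of its outputs, together with block and step sizes and a single parameter binding. The property names follow Figure 6 and our reading of the Vamp Transform Ontology; the namespace URI and the plugin, output and parameter identifiers shown here are local placeholders and should be checked against the published ontology.

@prefix vamp: <http://purl.org/ontology/vamp/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:key_transform a vamp:Transform ;
    vamp:plugin :qm_keydetector_plugin ;       # placeholder for a plugin described by the plugin ontology
    vamp:output :qm_keydetector_key_output ;   # which of the plugin's outputs to compute
    vamp:step_size "512"^^xsd:int ;
    vamp:block_size "32768"^^xsd:int ;
    vamp:parameter_binding [
        a vamp:ParameterBinding ;
        vamp:parameter :tuning_frequency_parameter ;
        vamp:value "440"^^xsd:float ;
    ] .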
Music Similarity. Music similarity is an important concept for music recommendation
and collection navigation. It is assumed that if a user expresses interest in a particular piece
of music, that user is also interested in other pieces of music that are similar in some sense.
For example, songs exhibiting acoustical features that are close according to some distance
measure (content-based similarity54 ), or songs composed or performed by artists who have
collaborated in the past, or who were born in the same city (cultural similarity). The use of
the latter type of similarity is well exemplified by the CatfishSmooth web application55 .
Level one of the Music Ontology provides a very basic mechanism for dealing with music
similarity in the form of the mo:similar_to property. For example, we may apply this property
to an instance of mo:MusicArtist to point to another mo:MusicArtist.
:Chopin a mo:MusicArtist .
:Debussy a mo:MusicArtist ;
mo:similar_to :Chopin .
Listing 4 Expressing simple similarity between two composers.
Unfortunately, we have no information about how these artists are similar or who is
claiming they are similar. We could apply a Named Graphs approach (Carroll, Bizer, Hayes,
54 See for example SoundBite or SAWA-recommender in section Recommendation, Retrieval and Visualisation by Similarity.
55 Available at: http://catfishsmooth.net/
& Stickler, 2005) to this similarity statement. By treating the last triple in the statement
above as a Named Graph we can attach additional properties to that statement. However,
this can lead to a clumsy syntax and a data set that is not optimized to work with most triple
store implementations.
Alternatively we can use the Similarity Ontology56 to make similarity statements with
transparency and provenance. In the Similarity Ontology we treat similarity as a class rather
than a property. We define instances of sim:Similarity, a sub-class of the the broader
concept sim:Association, to make similarity statements. The previous example could be
re-written using the Similarity Ontology as shown in listing 5.
:Chopin a mo:MusicArtist .
:Debussy a mo:MusicArtist .
:ComposerSim a sim:Similarity ;
sim:element :Debussy ;
sim:element :Chopin .
Listing 5 Similarity as a reified concept.
Here we see that :ComposerSim is an instance of sim:Similarity. We use the sim:element
property to specify the music artists involved in this similarity statement, but we have not
provided any additional information about them. We can describe in what way these music
artists are similar by introducing an instance of sim:AssociationMethod:
:Chopin a mo:MusicArtist .
:Debussy a mo:MusicArtist .
:CharlesSmith a foaf:Person .
:ComposerInfluence a sim:Similarity ;
sim:subject :Chopin ;
sim:object :Debussy ;
sim:method :InfluenceMethod .
:InfluenceMethod a sim:AssociationMethod ;
foaf:maker :CharlesSmith ;
dc:description "Similarity by composer influence" .
Listing 6 Describing similarity using different levels of expressivity.
We further reify57 the sim:AssociationMethod by specifying who created this method
and providing a brief textual description. Also note that we use sim:subject and sim:object
instead of sim:element to specify that this is a directed similarity statement, in this case pertaining to influence. We can include a variety of sim:AssociationMethods in the same
dataset to allow for multi-faceted similarity: two items can be similar in more than one
56 The Similarity Ontology is sometimes referred to as MuSim.
57 Reification is the process of representing a relation with an object so that you can reason about the relation. This may be achieved for example by treating an RDF statement as a resource in order to make further assertions on its validity, context or provenance.
sense. Furthermore, using the Similarity Ontology enables an intuitive and extensible method
for querying similarity using the SPARQL query language. For example, in addition to
specifying which artist we want to include in our similarity query, we can specify which
sim:AssociationMethods we want to include as well. In this way, we can select only the
similarity statements that are appropriate for our application. Using the same mechanism in
a distributed environment where similarity statements might be published by a heterogeneous
set of agents, we can choose to select only similarity statements created by agents we trust.
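A sketch of such a query is given below: it retrieves pairs of artists related by directed similarity statements, but only those whose association method was created by an agent the application trusts (here the hypothetical :CharlesSmith from listing 6, with an example namespace standing in for wherever these resources are actually published; the sim: prefix is assumed to resolve to the Similarity Ontology namespace).

PREFIX :     <http://example.org/similarity-data#>
PREFIX sim:  <http://purl.org/ontology/similarity/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>

SELECT ?subject ?object ?description
WHERE {
  ?assoc a sim:Similarity ;
         sim:subject ?subject ;
         sim:object ?object ;
         sim:method ?method .
  ?method foaf:maker :CharlesSmith ;
          dc:description ?description .
}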
Additional details about the Similarity Ontology, including information on the disclosure
of similarity derivation methods, can be found in (Jacobson, Raimond, & Sandler, 2009).
Data Sets
Within the scope of OMRAS2 and the Linking Open Data project, we published and interlinked several music-related datasets using the ontologies described in the previous section,
on our DBTune server58. The different published datasets are described in Table 1. These
datasets are available as Linked Data and through SPARQL end-points. When possible, RDF
dumps were also made available.
Dataset                            Millions of triples    Interlinked with
http://dbtune.org/magnatune/       0.322                  DBpedia
http://dbtune.org/jamendo/         1.1                    Geonames, Musicbrainz
http://dbtune.org/bbc/peel/        0.277                  DBpedia
http://dbtune.org/last-fm/         ≈ 600                  Musicbrainz
http://dbtune.org/myspace/         ≈ 12,000               —
http://dbtune.org/musicbrainz/     ≈ 60                   DBpedia, Myspace, Lingvoj
http://dbtune.org/iso/             0.457                  Musicbrainz

Table 1: Linked datasets published within the scope of the OMRAS2 project. When the dataset is
dynamically generated from another data source (e.g. Myspace pages), its size is approximate.
The Jamendo and Magnatune end-points hold data published on the websites of the respective labels. The BBC end-point holds metadata released about the John Peel sessions.
The Last.fm service provides live RDF representation of tracks submitted to Last.fm using
an AudioScrobbler59 client. The MySpace service provides URIs and associated RDF representations of top-friends and available tracks on a given MySpace page. The MusicBrainz
service can be used to access the MusicBrainz database as a Linked Data resource, while the
Isophone dataset holds content-based music similarity features described in the next section
(see Recommendation, Retrieval and Visualisation by Similarity).
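As an example of how these end-points can be used, the query below, intended for the Jamendo end-point, lists artists together with the records they made and, where published, the location they are based near. The query is a sketch: the exact modelling used in the live dataset may differ, although the terms shown (mo:MusicArtist, foaf:made, mo:Record, dc:title, foaf:based_near) are those of the ontologies discussed above.

PREFIX mo:   <http://purl.org/ontology/mo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>

SELECT ?name ?title ?place
WHERE {
  ?artist a mo:MusicArtist ;
          foaf:name ?name ;
          foaf:made ?record .
  ?record a mo:Record ;
          dc:title ?title .
  OPTIONAL { ?artist foaf:based_near ?place . }
}
LIMIT 20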
Additional datasets available as RDF documents include chord transcriptions from the Real Book60, and various music annotations described in (Mauch et al., 2009).
Extending this set of ground truth annotations and making them available via a SPARQL
end-point constitutes future work.
58 See http://dbtune.org/
59 http://www.audioscrobbler.net/ see also: http://www.last.fm/api
60 http://omras2.org/ChordTranscriptions
Applications
There are two types of applications we can build using the previously described technologies and data sets. Applications that directly contribute to the Semantic Web may answer
complex queries by aggregating data on demand, provide inferencing services, or, in the case of
music processing, drive a signal analysis engine. These applications are commonly accessible
via SPARQL end-points. Another kind of application, directed at the end-user, may use
Semantic Web resources and technologies to enhance the user experience. In this section we
provide examples of both types of applications created within the OMRAS2 project.
Musicological Analysis and Music Information Research
Several fields of research may benefit from the enhanced interoperability and usability provided by Semantic Web technologies. Musicological studies using large audio collections and
databases are one of them. While the use of automatic audio analysis and the exchange of research data-sets are increasingly common among musicologists, these data are commonly
represented in arbitrary formats. This makes the interaction between researchers difficult.
The solution to this problem is to represent these data in a structured and open-ended framework. To this end, RDF and the Music Ontology provide good grounds, but in order to
make them easy to use we also need software tools which understand these formats. The
OMRAS2 project produced numerous applications which can be used in music information
science or musicological work. The two most prominent ones are Sonic Visualiser61 and Sonic
Annotator62 .
Sonic Visualiser is an audio analysis application and Vamp plugin host. It can also be
used for viewing features calculated by other programs by loading a set of features associated
with an audio file from an RDF document. Sonic Annotator can be used to extract content-based features from large audio collections. It applies Vamp feature extraction plugins to
audio data, and supports feature extraction specifications in RDF using the Vamp Transform
Ontology. Users of these applications benefit from the uniform representation of features using
the Audio Features Ontology, and the ability to interlink these features with contextual
data available on the Semantic Web.
Automated Audio Analysis on the Web
Extracting semantic descriptors directly from audio recordings can be useful in audio
collection management and content-based music recommendation. Examples of these descriptors are onset times, beats, chord progressions and musical keys. These data become a
valuable resource when interlinked with cultural or editorial metadata expressed within the
same ontological framework. Two applications were developed in OMRAS2 for such tasks.
Henry63 (Raimond, Sutton, & Sandler, 2008) can be used to provide an on-demand music
analysis service wrapping audio analysis algorithms implemented as Vamp plugins. For example, the SPARQL query shown in listing 7 triggers the key detection algorithm described
in (Noland & Sandler, 2007).
Henry recognises built-in predicates such as mo:encodes, vamp:qm-keydetector or
vamp:qm-chromagram. These predicates are evaluated at query time, and the appropriate
61 Available from: http://sonicvisualiser.org/
62 Sonic Annotator is available from http://omras2.org/SonicAnnotator
63 See http://code.google.com/p/km-rdf/wiki/Henry for details on Henry.
SELECT ?sig ?result
WHERE {
<http://dbtune.org/audio/Den-Nostalia.ogg> mo:encodes ?sig .
?sig vamp:qm-keydetector ?result }
Listing 7 A SPARQL query for extracting musical key.
signal processing algorithms are invoked. For instance, mo:encodes will result in decoding
the audio file, while vamp:qm-keydetector will trigger the key detection algorithm. Henry
caches the results of previous computations, and may also be used to link these results with
contextual data.
While Henry can be accessed via a SPARQL endpoint, SAWA64 (Sonic Annotator Web
Application) (Fazekas et al., 2009) focuses on providing a user-friendly65 interface. It employs Semantic Web technologies under the hood to offer services to the end-user and audio
researcher. SAWA's Web-based user interface can be seen in Figure 7. It allows batch feature
extraction from a small collection of uploaded audio files. The interface provides for the selection and configuration of one or more Vamp plugin outputs, and the execution of transforms66 on
previously uploaded files. The results are returned as RDF documents. These can be examined using an RDF browser such as Tabulator, or imported into Sonic Visualiser and viewed in
context of the audio.
Uniform representation and the Linked Data concept are key in creating modular architectures. For instance, SAWA's server-side application uses RDF to communicate with other
components such as Sonic Annotator, its computation engine, and interprets RDF data to
generate its user interface dynamically. The advantages of using this format are manifold.
The interface can be generated for any number of plugins given that their descriptions are
provided according to the Vamp plugin and transform ontologies. Cached results may be returned
from a suitable RDF store instead of repeating computation. Finally, these systems can access
other linked data services and augment the results with different types of metadata.
The main intent behind SAWA is to promote the use of RDF and the Music Ontology
framework in the MIR community. The audio analysis capabilities of both SAWA and Henry
are similar, albeit narrower in scope when compared to the EchoNest web service67 mentioned
previously; however, the results are expressed in more general and extensible RDF without
the need for a proprietary XML-based return format. Further details on the components of
the SAWA system are available in (Fazekas et al., 2009).
Personal Collection Management
Much previous research in the MIR community has been concerned with personal collection management tasks including automatic playlist generation, as well as finding intuitive
new ways of navigating through the ever-increasing music collection on the average personal
64 SAWA is available at http://www.isophonics.net/sawa/
65 The present version can be accessed using a web-based graphical user interface, while a SPARQL end-point for accessing accumulated audio features is under development.
66 A transform is seen here as an algorithm associated with a specific set of parameters.
67 The EchoNest also provides audio analysis services, however its functionality extends beyond content-based feature extraction.
Figure 7. Dynamically generated Web interface for a set of audio analysis plugins and parameter
configuration of a note onset detector plugin.
computer. Early attempts, however, were limited to the use of metadata available about each
song in a collection at a particular place and time (Pauws & Eggen, 2003). Recognising the
inadequacy of editorial metadata in some cases, other researchers focused on the use of
content-based features only (Rauber, Pampalk, & Merkl, 2003). Some common conclusions
can already be drawn from these results: no single source of metadata is adequate in all
listening situations, and locally available metadata tags such as genre (or even the name of
the artist in an mp3 file) are not reliable (Pauws & Eggen, 2003). Several applications were
built in OMRAS2 which demonstrate the use of interlinked music-related data sources and
mitigate the problems mentioned above.
We developed two tools to aggregate Semantic Web data describing arbitrary personal
music collections. GNAT68 finds, for all tracks available in a collection, the corresponding
web identifiers in the Semantic Web publication of the Musicbrainz dataset mentioned earlier.
GNAT uses primarily a metadata-based interlinking algorithm described in (Raimond et al.,
2008), which was also used to interlink the music-related datasets described above. GNARQL
crawls the Web from these identifiers and aggregates structured information about them coming from heterogeneous data sources. GNAT and GNARQL then automatically create a
tailored database, describing different aspects of a personal music collection. GNARQL provides a SPARQL end-point, allowing this aggregation of Semantic Web data to be queried.
For example, queries such as “Create a playlist of performances of works by French composers,
written between 1800 and 1850” or “Sort European hip-hop artists in my collection
by murder rates in their city” can be answered using this end-point. GNARQL also provides a
faceted browsing interface based on /facet (Hildebrand, Ossenbruggen, & Hardman, 2006),
as illustrated in Figure 8.
68 GNAT and GNARQL are available at http://sourceforge.net/projects/motools.
Figure 8. Management of personal music collections using GNAT and GNARQL. Here, we plot our
collection on a map and display a particular artist.
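A sketch of how the first of these queries might be phrased against GNARQL's end-point is shown below. It assumes the aggregation links each track to its maker, links that maker to a DBpedia resource via owl:sameAs, and records a date for the underlying work; these assumptions (in particular dbo:birthPlace and the dc:date pattern) are illustrative, since the properties actually available depend on what GNARQL has gathered for a given collection.

PREFIX mo:   <http://purl.org/ontology/mo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT ?track ?title
WHERE {
  ?track a mo:Track ;
         dc:title ?title ;
         foaf:maker ?composer ;
         dc:date ?date .               # illustrative: date of the underlying work
  ?composer owl:sameAs ?dbpediaComposer .
  ?dbpediaComposer dbo:birthPlace <http://dbpedia.org/resource/France> .
  FILTER (?date >= "1800-01-01"^^xsd:date && ?date <= "1850-12-31"^^xsd:date)
}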
Recommendation, Retrieval and Visualisation by Similarity
The aim of music recommendation is the suggestion of similar artists, tracks or events
given, for instance, a seed artist or track. While collaborative filtering69 is predominant in
this field, there is a growing trend towards incorporating more diverse data in recommender
systems. Collaborative filtering alone exhibits a number of problems such as cold start
(lack of initial ratings), and is inefficient at discovering the long tail of music production (Celma & Cano, 2008).
The inclusion of diverse cultural and contextual data or content-based similarity may be
beneficial when tackling these problems. However, obtaining these data requires expensive
and often unreliable text mining procedures (Baumann & Hummel, 2005) or the adaptation
to proprietary Web APIs. On the other hand, the use of Linked Data can save both the
computation steps (such as part-of-speech tagging) needed to extract meaning from unstructured Web documents, and the writing of the necessary glue code between a particular application
and different Web-based data sources.
69 These systems make recommendations based on behaviour or preferences of listeners within a large user community. (See http://www.last.fm/ for an example.)
Using Linked Data in this fashion is beneficial when generating music recommendations
derived from a number of interlinked data sources (Raimond, 2008; Passant & Raimond,
2008). The set of RDF statements linking the seed resource and the recommended resource
can be used to generate an explanation of the corresponding recommendation (e.g. “this artist
played with that other artist you like, on the 4th of June 1976 in Manchester”).
Structured and linked musical data may also be useful in intuitive visualisation and
faceted browsing of music collections, artist networks, etc. The k-pie visualisation technique
(Jacobson & Sandler, 2009) can be used to browse a large network of artists by similarity
derived from social networks, for example, MySpace. Another visualisation tool, the Classical
Music Universe70, is used to discover influences between classical composers. The tool uses
data from the Semantic Web to build a network of composers. These data are mashed up with
more information from DBpedia and DBtune. Most recently, the CatfishSmooth website71
allows users to follow connections between music artists by leveraging Linked Data. A variety
of data sources are used, including MusicBrainz, Last.fm, DBpedia and others, to find music-related connections (e.g. artists that have collaborated) and connections that are more
tangential to music (e.g. artists that have converted to Islam).
The use of content-based similarity features can be valuable in the development of music
recommendation and retrieval systems. However, the lack of large open music databases is a
common problem for the researcher. For this reason, a data collection tool has been developed in OMRAS2 as part of SoundBite72 , an automatic playlist generator (Levy & Sandler,
2006). This tool is available as a plugin for iTunes and SongBird. SoundBite collects acoustic
similarity features based on MFCC (Mel-Frequency Cepstral Coefficient) distributions. Data
for about 150,000 tracks aggregated over a two-year period have been cleaned73 and published
on the Semantic Web as described in (Tidhar, Fazekas, Kolozali, & Sandler, 2009). We built
a content-based recommender to demonstrate the application of this dataset.
SAWA-recommender74 is a query-by-example music search application made available on
the Web. A query to this system may be formed by a small set of songs uploaded by the user.
It is evaluated either by considering similarity to any of the uploaded songs (and ranking the
results appropriately), or a single common query is formed by jointly calculating the features
of the query songs. The query is matched against the Isophone database holding similarity
features and MusicBrainz identifiers associated with each song. This allows access to editorial
metadata consisting of basic information such as song title, album title and the main artist’s
name associated with each song in the results. Following links based on MusicBrainz identifiers,
the user may be able to access more information, for instance, BBC Music75 artist pages.
Creative Record Production
A computer-based system may be seen as intelligent if it accomplishes feats which require
a substantial amount of intelligence when carried out by humans (Truemper, 2004). In the
context of creative music production, we may think of an audio application which assists the
work of a recording engineer by providing intelligent tools for music processing and content
management. Such a system has to aggregate and process diverse information from audio
70 http://omras2.org/ClassicalMusicUniverse
71 http://catfishsmooth.net/
72 SoundBite is available from: http://www.isophonics.net/
73 Features from poorly identified or low quality audio sources were removed.
74 http://isophonics.net/sawa/rec
75 http://www.bbc.co.uk/music/
analysis and from user interaction. The information needs of audio production tools, however,
are not easily planned in advance; therefore, we need an open-ended data model and software framework.
For these reasons, Semantic Web technologies can pave the way towards building intelligent
semantic audio tools.
The first simple application of this is described in (Fazekas, Raimond, & Sandler, 2008).
During a recording session, a wealth of information can be collected about music production.
Based on this recognition, we designed a metadata editor which enables the user to describe
a recording project using Music Ontology terms. We also provide an ontology for linking this
data to audio files in a multi-track environment76 .
An extension to this system stores and processes all information, including content-based
features and user-entered information, arising in the recording environment. Using the Semantic Desktop paradigm (Decker & Frank, 2004), the idea of an intelligent desktop environment,
this system may be seen as the Semantic Audio Desktop. By definition, the Semantic Desktop is a device in which an individual stores digital information (Sauermann, Bernardi, &
Dengel, 2005) — that is, data which are otherwise stored on a personal computer by means of
conventional techniques such as binary files or spreadsheets. Its most prominent aim is the
advancement of personal information management through the application of Semantic Web
technologies to the desktop environment. If we extend this idea to the audio production environment, and in particular to audio editors used in post-production, we shall be able to collect
high-quality metadata during the production process and open up the creative environment
to social media77 . The metadata — being in the same format as other data on the Semantic
Web — can be fed back into the public domain in a highly useful way; these data may be used
in content-based music recommendation and search services, advanced cataloguing, as well as
music education, collaborative music making and numerous future Web-based applications.
In (Fazekas & Sandler, 2009), we describe a software library which enables the implementation of this system in existing audio editors. Besides contributing music production data to
the Semantic Web, this may be seen as a basis for an intelligent audio editor.
76 The Multitrack Ontology: http://purl.org/ontology/studio/multitrack
77 Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content (Kaplan & Haenlein, 2009).

Conclusions
In this article, we described how various areas of music information research can benefit
from using the Web of data, built on a set of Web standards known as Semantic Web
technologies, Web ontologies and the Linked Data concept. We also noted how our field of
research may enrich the Semantic Web, and reviewed the contributions of OMRAS2 to both areas.
Sharing data, such as automatic annotations of music recordings, and enabling the use of
diverse cultural, editorial and content-based metadata are among the main objectives of the
project. For these reasons, an ontological framework was developed in OMRAS2 for publishing
music-related information. The Music Ontology can be used to describe almost every aspect of
music in detail. While it is primarily aimed at the music information community and at
application developers using the Semantic Web, it can also be applied in most of the archival,
librarian and creative use cases described in (Canazza & Dattolo, 2009), offering a more
straightforward data model and many readily available standard tools for processing RDF data.
It is also important to note that the Music Ontology has become a de facto standard
for representing music-related information on the Semantic Web, as opposed to more general,
but also more complicated multimedia ontologies such as COMM78 (Arndt, Troncy, Staab,
Hardman, & Vacura, 2007) with some overlap in the music domain. It can equally be used by
music researchers to exchange data, or Semantic Web developers to create applications using
disparate musical resources. Therefore, it is a major contribution to both the MIR and the
Semantic Web communities.
A large number of data sets have already been published and interlinked using the Music
Ontology. However, this is ongoing work within the project and will remain an important focus
of our future work. We demonstrated several uses of Linked Data, showing how the research
community may benefit from using these data sources, and how everyone might benefit from
publishing music-related data as Linked Data. This includes the ability to create intuitive
visualisations or mash-ups such as the Classical Music Universe and GNARQL. We also
showed how MIR research can contribute to the data Web by providing automatic music
analysis services such as Henry, and Web applications like SAWA. This demonstrates how
the Resource Description Framework and various extensions to the Music Ontology can be
adapted to different purposes. Finally, we explored how these technologies can serve the needs
of intelligent music processing systems.
Acknowledgements
The authors wish to thank Simon Dixon and the anonymous reviewers for their valuable
comments, which helped us improve this article. We acknowledge the support of the
School of Electronic Engineering and Computer Science, Queen Mary University of London,
and the EPSRC-funded ICT project OMRAS2 (EP/E017614/1).
References
Arndt, R., Troncy, R., Staab, S., Hardman, L., & Vacura, M. (2007, November 11-15). COMM:
Designing a well-founded multimedia ontology for the web. In Proceedings of the International
Semantic Web Conference. Busan, Korea.
Auer, S., Bizer, C., Lehmann, J., Kobilarov, G., Cyganiak, R., & Ives, Z. (2007, November 11-15).
DBpedia: A nucleus for a web of open data. In Proceedings of the International Semantic Web
Conference. Busan, Korea.
Baumann, S., & Hummel, O. (2005). Enhancing music recommendation algorithms using cultural
metadata. Journal of New Music Research, 34 (2), 161-172.
Bello, J., Daudet, L., Abdallah, S., Duxbury, C., Davies, M., & Sandler, M. (2005). A tutorial on
onset detection in music signals. IEEE Transactions on Speech and Audio Processing, 13 (5),
1035-1047.
Berners-Lee, T., Hendler, J., & Lassila, O. (2001, May). The semantic web. Scientific American, 34-43.
Bizer, C., Heath, T., Ayers, D., & Raimond, Y. (2007). Interlinking open data on the web. In
Demonstrations Track, 4th European Semantic Web Conference, Innsbruck, Austria.
Canazza, S., & Dattolo, A. (2009). The past through the future: A hypermedia model for handling
the information stored in the audio documents. Journal of New Music Research, 38 (4), 381-396.
Cannam, C. (2009). The Vamp audio analysis plugin API: A programmer's guide. Available online:
http://vamp-plugins.org/guide.pdf.
Carroll, J. J., Bizer, C., Hayes, P., & Stickler, P. (2005). Named graphs, provenance and trust. In
WWW 2005: Proceedings of the 14th International Conference on World Wide Web (pp. 613-622).
New York, NY, USA.
78 Core Ontology for Multimedia: http://comm.semanticweb.org/
Casey, M., Veltkamp, R., Goto, M., Leman, M., Rhodes, C., & Slaney, M. (2008). Content-based
music information retrieval: Current directions and future challenges. Proceedings of the
IEEE, 96, 668-696.
Celma, O., & Cano, P. (2008). From hits to niches? Or how popular artists can bias music
recommendation and discovery. In 2nd Workshop on Large-Scale Recommender Systems and the
Netflix Prize Competition, Las Vegas, USA.
Collins, N. (2010). Computational analysis of musical influence: A musicological case study using
MIR tools. In Proceedings of the 11th International Society for Music Information Retrieval Conference
(ISMIR 2010), August 9-13, 2010, Utrecht, Netherlands.
Decker, S., & Frank, M. (2004). The social semantic desktop. DERI Technology Report. Available
online: http://www.deri.ie/fileadmin/documents/DERI-TR-2004-05-02.pdf.
Ehmann, A. F., Downie, J. S., & Jones, M. C. (2007). The music information retrieval evaluation
exchange “do-it-yourself” web service. In Proceedings of the International Conference on Music
Information Retrieval, 2007.
Fazekas, G., Cannam, C., & Sandler, M. (2009). Reusable metadata and software components
for automatic audio analysis. In Proceedings of the IEEE/ACM Joint Conference on Digital Libraries
(JCDL'09) Workshop on Integrating Digital Library Content with Computational Tools and
Services, Austin, Texas, USA.
Fazekas, G., Raimond, Y., & Sandler, M. (2008). A framework for producing rich musical metadata in
creative music production. Presented at the 125th Convention of the Audio Engineering Society,
San Francisco, USA, 2008.
Fazekas, G., & Sandler, M. (2009). Novel methods in information management for advanced audio
workflows. In Proceedings of the 12th International Conference on Digital Audio Effects, Como, Italy.
Harte, C., Sandler, M., Abdallah, S., & Gomez, E. (2005). Symbolic representation of musical chords:
A proposed syntax for text annotations. In Proceedings of the International Conference on
Music Information Retrieval.
Hildebrand, M., van Ossenbruggen, J., & Hardman, L. (2006). In The Semantic Web - ISWC 2006
(Vol. 4273, pp. 272-285). Springer Berlin / Heidelberg.
Jacobson, K., Raimond, Y., & Sandler, M. (2009). An ecosystem for transparent music similarity in
an open world. In Proceedings of the 10th International Conference on Music Information Retrieval (ISMIR
2009), Kobe, Japan.
Jacobson, K., & Sandler, M. (2009). Interacting with linked data about music. Web Science, WSRI,
Volume 1, Athens, Greece.
Kaplan, A. M., & Haenlein, M. (2009). Users of the world, unite! The challenges and opportunities of
social media. Business Horizons, 53 (1).
Kobilarov, G., Scott, T., Raimond, Y., Oliver, S., Sizemore, C., Smethurst, M., et al. (2009). Media
meets semantic web - how the BBC uses DBpedia and linked data to make connections. In
Proceedings of the European Semantic Web Conference In-Use track.
Levy, M., & Sandler, M. (2006). Lightweight measures for timbral similarity of musical audio. In Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia, Santa Barbara,
California, USA.
Mauch, M., Cannam, C., Davies, M., Harte, C., Kolozali, S., Tidhar, D., et al. (2009). OMRAS2
metadata project. In 10th International Conference on Music Information Retrieval, Late-Breaking
Session, Kobe, Japan.
McEnnis, D., McKay, C., & Fujinaga, I. (2006). Overview of OMEN. In Proceedings of the International
Conference on Music Information Retrieval, Victoria, BC, 7-12.
McEnnis, D., McKay, C., Fujinaga, I., & Depalle, P. (2005). jAudio: A feature extraction library. In
Proceedings of the International Conference on Music Information Retrieval, London, UK.
McKay, C., Burgoyne, J. A., Thompson, J., & Fujinaga, I. (2009). Using ACE XML 2.0 to store and share
feature, instance and class data for musical classification. In Proceedings of the 10th International
Conference on Music Information Retrieval (ISMIR 2009), Kobe, Japan.
Noland, K., & Sandler, M. (2007). Signal processing parameters for tonality estimation. In Proceedings
of the 122nd AES Convention, Vienna, Austria.
Passant, A., & Raimond, Y. (2008, October). Combining social music and semantic web for music-related recommender systems. In Proceedings of the Social Data on the Web workshop, Karlsruhe, Germany.
Pauws, S., & Eggen, B. (2003). Realization and user evaluation of an automatic playlist generator.
Journal of New Music Research, 32 (2), 179-192.
Raimond, Y. (2008). A distributed music information system. Doctoral thesis, Department of Electronic
Engineering, Queen Mary, University of London.
Raimond, Y., Abdallah, S., Sandler, M., & Giasson, F. (2007). The music ontology. In Proceedings of the
International Conference on Music Information Retrieval (ISMIR 2007), Vienna, Austria.
Raimond, Y., & Sandler, M. (2008). A web of musical information. In Proceedings of the 9th International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, Pennsylvania,
USA, pp. 63-68.
Raimond, Y., Sutton, C., & Sandler, M. (2008). Automatic interlinking of music datasets on the
semantic web. In Proceedings of the Linked Data on the Web workshop, colocated with the
World Wide Web Conference.
Rauber, A., Pampalk, E., & Merkl, D. (2003). The SOM-enhanced jukebox: Organization and visualization of music collections based on perceptual models. Journal of New Music Research, 32 (2),
193-210.
Sauermann, L., Bernardi, A., & Dengel, A. (2005). Overview and outlook on the semantic desktop. In
Proceedings of the 1st Workshop on The Semantic Desktop at the ISWC 2005 Conference.
Tidhar, D., Fazekas, G., Kolozali, S., & Sandler, M. (2009). Publishing music similarity features on the
semantic web. In Proceedings of the 10th International Conference on Music Information Retrieval
(ISMIR 2009), Kobe, Japan.
Truemper, K. (2004). Design of Logic-based Intelligent Systems. Wiley-Interscience, John Wiley and
Sons.
Tzanetakis, G., & Cook, P. (2000). Marsyas: A framework for audio analysis. Organised Sound, 4, 169-175.
Wang, J., Chen, X., Hu, Y., & Feng, T. (2010). Predicting high-level music semantics using social
tags via ontology-based reasoning. In Proceedings of the 11th International Society for Music
Information Retrieval Conference (ISMIR 2010), August 9-13, 2010, Utrecht, Netherlands.