10 Years Faculty Report - Libera Università di Bolzano

Information science and technology,
a universal enabler
Information technology refers to the confluence of information, computation, and communication
technologies. Information Science and Technology (IST) refers to the science and technology of
hardware, software, networking, and data management components required to solve computational
and communication problems. It also refers to algorithms and software for modeling, simulation,
and analysis that are developed to solve science and engineering problems.
The impact of IST on our lives has been so profound that it is common to talk about the «information revolution». IST has become a major driver of the global economy. IST provides a crucial business infrastructure, and in many industries (for example, the financial industry) it can be said to be the essence of the industry. At the same time, IST has also had a revolutionary impact on scientific research. Computing is today a fundamental pillar of the scientific method, supporting both theory and experiments.
Looking ahead, the future poses challenges for IST: managing and analyzing prodigious amounts
of data, using technology to enable global lifelong learning, enabling global affordable health care,
transforming the way we design and build «things», changing the way we organize research, and
deepening our understanding of the environment in which we live. These challenges can be tackled
by integrating multiple science and engineering disciplines, software systems, data, computing
resources, and people so as to gain a better understanding of highly complex multi-scale systems.
IST is also facing tough challenges that arise from its own past successes: we need to continue
translating the ongoing increase in device density (Moore’s Law) into cost-performance gains; we
need to design a global networking and communication infrastructure with more reliability and
security than today’s Internet has; and we need to learn to develop large software systems that
are secure, usable, and reliable.
We have become dependent on IST in both our daily lives and our scientific endeavors; the future progress of humanity is now predicated on future progress in IST.
The Faculty of Computer Science at the Free University of Bozen-Bolzano is engaged in intensive
IST research efforts that are at the forefront of meeting these challenges. The Center for Applied
Software Engineering (CASE) advances our ability to develop large, complex software systems that
are efficient and reliable. The Centre for Database and Information Systems (DIS) advances today’s
data management technologies to support the extraction and exploitation of information from real-world data. The Research Centre for Knowledge and Data (KRDB) advances our ability to handle
large data sets with expressive query languages, including complex data stored on the web, in
digital libraries, and in data warehouses, requiring intelligent modeling and reasoning services.
Moshe Y. Vardi
George Distinguished Service Professor in Computational Engineering, Rice University
Editor-in-Chief, Communications of the ACM
Information technology, a universal tool
Information technology, a globally enabling factor
Italiano. Information technology brings together technologies relating to information, computation, and communication. The expression «information science and technology» (IST) refers to the science and technology of hardware, software, and networking, as well as to the data management components needed to solve computational and communication problems. It also refers to the algorithms and software for modeling, simulation, and analysis developed specifically to solve problems in science and engineering.
IST has had such a profound impact on our lives that today it is normal to speak of an «information revolution». IST has become a decisive driver of the global economy, to which it provides the fundamental infrastructure; indeed, in many sectors (for example, finance) it constitutes the very essence of the business. At the same time, IST has had a revolutionary impact on scientific research. Today computing is a fundamental pillar of the scientific method, supporting both theoretical work and experimentation.
The challenges that IST will face in the future are enormous: managing and analyzing unimaginable quantities of data, using technology to support global lifelong learning and globally sustainable management of health care, transforming the way we design and build «things» and the way we organize research, and deepening our knowledge of the world in which we live. These are challenges we can meet by integrating people, multidisciplinary scientific and engineering fields, software systems, databases, and computational resources, so as to deepen our understanding of highly complex multi-scale systems.
IST will also have to face considerable problems arising from its own past successes: we must continue to translate the constant increase in integration density (Moore's Law) into better cost-performance ratios, we must design a global networking and communication infrastructure that is more secure and reliable than today's Internet, and we must learn to develop large software systems that are at once secure, usable, and reliable.
We are now dependent on IST both in daily life and in scientific research, and the progress of humanity will therefore go hand in hand with progress in information technology.
The Faculty of Computer Science of the Free University of Bozen-Bolzano is fully engaged in IST research, placing itself at the forefront of the challenges just outlined. The Center for Applied Software Engineering (CASE) aims to strengthen our ability to develop large, complex software systems that are at the same time efficient and reliable; the Center for Database and Information Systems (DIS) works to advance today's data management technologies in support of extracting and exploiting information from real-world data; finally, the KRDB Research Centre for Knowledge and Data aims to improve our ability to manage, through expressive query languages, very large data collections, including complex data stored on the web, in digital libraries, and in data warehouses, which require intelligent modeling and reasoning services.
Deutsch. Information technology brings together the technologies of information, processing, and communication. Information science and technology (IST) comprises the sciences and technologies concerned with hardware, software, and networks, as well as the data management components required to solve computational and communication problems. It also includes the algorithms and software for modeling, simulation, and analysis needed to answer scientific and engineering questions.
The impact of IST on our lives has been so profound that today people generally speak of an «information revolution». IST has become one of the most important driving forces of the world economy, for which it creates the decisive business infrastructure, and in many industries (e.g. finance) it can be regarded as their very essence. At the same time, IST has had a revolutionary influence on scientific research. Today computing is a cornerstone of the scientific method, supporting both theory and experimentation.
The future confronts IST with further challenges: managing and analyzing unbelievable amounts of data, using technology to enable lifelong, global learning, making globally sustainable health care possible, transforming the way we design and manufacture «things», organizing research in new ways, and deepening our understanding of the environment in which we live. These challenges can be met by integrating a wide range of scientific and engineering disciplines, software systems, data, computing resources, and people, in order to reach a better understanding of highly complex, multi-scale systems.
IST also faces considerable problems that arise from its own past successes: we must turn the ongoing increase in integration density (Moore's Law) into improved price-performance ratios, we must design a global networking and communication infrastructure that offers more reliability and security than today's Internet, and we must learn to develop large-scale software systems that are at once secure, usable, and reliable.
Since we now depend on IST both in everyday life and in scientific research, the future progress of humanity goes hand in hand with the future progress of IST.
The Faculty of Computer Science of the Free University of Bozen-Bolzano conducts intensive IST research and thus stands at the forefront in taking up the challenges outlined above. The Center for Applied Software Engineering (CASE) aims to expand our ability to develop large-scale and complex, yet efficient and reliable software systems. The Center for Database and Information Systems (DIS) works on advancing today's data management technology to support the extraction and use of information from the real world. The KRDB Research Centre for Knowledge and Data has set itself the goal of improving our ability to manage large amounts of data by means of expressive query languages, including the complex data found on the web, in digital libraries, and in data warehouses, which require intelligent modeling and reasoning services.
Moshe Y. Vardi
George Distinguished Service Professor in Computational Engineering, Rice University
Editor-in-Chief, Communications of the ACM
Dear Reader
The Faculty has been in existence for 10 years—10 years of hard work, difficult discussions, and
challenging issues to solve. But above all, it has been 10 years of outstanding performance and
incredible satisfaction.
Computer science and engineering is the key discipline shaping the future of humankind. Nowadays, there is not a single aspect of society or research that does not involve this discipline to some extent. Our international, trilingual faculty is playing a significant role in this crucial task. The faculty was ranked best among medium-sized universities in the last evaluation by the Italian Ministry of Education. Several professors of the faculty have won prestigious awards and have secured highly competitive funding from private industry and public administrations. The professors of the faculty are also very active locally, where they maintain relationships with companies, local government, and schools.
We have also recruited a high-profile student population, which unquestionably helps us maintain our reputation. Our students are extraordinarily devoted to their studies and show true ingenuity in their endeavors. Moreover, data collected independently by the Alma Laurea initiative shows that our students complete their studies in much less time than average, that they are all employed one year after graduation (and most find a job in less than one month), and that their salaries are higher than average. This points to the winning combination of the trilingual model and the strong bond between students and professors.
The faculty is organized into three research centers, one dealing with databases, one with knowledge representation, and one with software engineering. I am really proud to say that all three research centers have excelled in research and in teaching. You can find more details in this report. Needless to say, these achievements were made possible by the outstanding support of our technical and administrative staff.
Altogether, I am certain that the results we have obtained to date are an indication that we shall continue to make a significant scientific impact and to strengthen our local and international cooperation.
Happy reading!
Giancarlo Succi, PhD, PEng, Dean & Professor
Italiano. Dear readers, over the course of 10 years full of demanding challenges and difficult decisions, our Faculty has achieved exceptional results and brought enormous satisfaction to the entire academic community.
Computer science and computer engineering are influencing the future of humanity in every aspect of society and research. Our international, trilingual Faculty plays a significant role among universities and was rated the best among medium-sized universities; it can boast professors who have won prestigious awards, obtained substantial funding, and actively cultivated relationships with companies, local administrations, and the schools of the Province.
Our students are all of a high calibre, particularly committed and extraordinarily capable. The Alma Laurea statistics show that they complete their studies in much less time than average, find employment within a year of graduating, and earn above-average salaries. These figures are clear proof that the Faculty's trilingual model and the strong bond established between students and professors are an absolutely winning combination.
This report offers an overview of the latest contributions of our three research centers, which focus on databases, knowledge representation, and software engineering. The excellent results achieved by the university demonstrate its continuing ability to make a significant impact in the world of science and in cooperation at local and international level.

Deutsch. Dear readers, in the past 10 years of great challenges and difficult decisions, our Faculty has reached an extraordinary level of performance, of which the entire teaching staff is extremely proud.
Computer science and computer engineering influence the future of humanity in all aspects of society and research. Our international, trilingual Faculty plays an important role in university circles and was rated the best among medium-sized universities. Our professors have received renowned awards and significant funding and have cultivated active relationships with companies, the local administration, and the schools of the Province.
Our high-calibre student body shows particular commitment and possesses extraordinary abilities. The Alma Laurea statistics show that our students complete their studies in considerably less time than average, find a job within a year of graduating, and receive above-average salaries. Evidently, the trilingual model and the strong bond between students and professors are a winning combination.
This report provides an overview of the contributions made by our three research centers, which concentrate on the fields of databases, knowledge representation, and software engineering. The outstanding results our university has achieved demonstrate its continuing ability to maintain scientific relevance as well as local and international cooperation.
Giancarlo Succi, PhD, PEng, Dean
table.of.Contents
faculty facts
The Faculty of Computer Science
Staff at the Faculty of Computer Science

research center case
About CASE
Giancarlo Succi, Pekka Abrahamsson, Gabriella Dodero, Barbara Russo, Alberto Sillitti, Etiel Petrinja, Ilenia Fronza, Cigdem Gencel, Andrea Janes, Juha Rikkilä, Bruno Rossi, Tadas Remencius, Xiaofeng Wang.

research center dis
About DIS
Johann Gamper, Francesco Ricci, Nikolaus Augsten, Mouna Kacimi, Periklis Andritsos, Floriano Zini.

research center krdb
About KRDB
Werner Nutt, Diego Calvanese, Enrico Franconi, Alessandro Artale, Sergio Tessaris, Valeria Fionda, Rosella Gennari, Marco Montali, Alessandro Mosca, Giuseppe Pirrò, Mariano Rodriguez-Muro.

appendix
Publications
Imprint & Contacts
facts
Bolzano–Bozen
Faculty of Computer Science
the faculty of
computer science
la facoltà di scienze e tecnologie informatiche
die fakultät für informatik
A degree in computer science prepares students to become professionals in
information technology in private companies, government and research. In its 10-year history, the Faculty of Computer Science at the Free University of Bozen–
Bolzano has educated hundreds of students to meet the challenges of today’s
job market, with courses that rank among the best in Italy and Europe.
italiano / deutsch
Italiano. A degree in computer science prepares students to become professionals in information and communication technologies in private companies, public bodies, and research institutes. In its history of just over ten years, this Faculty has prepared hundreds of students to face the challenges of today's job market, thanks to courses in which theory is combined with real-world application.
Deutsch. A university degree in computer science prepares students to practice a profession in information technology in private companies, public bodies, and research. Over its slightly more than ten-year history, the Faculty has prepared hundreds of students to meet the challenges of today's job market through courses that combine theory and practice.
Vision. There is a lot that goes into a solid scientific and engineering degree in information technology. Naturally, course work based on theory must be targeted toward real-world application. Three specific factors make the program at the Free University of Bozen-Bolzano stand out: Creativity, Teamwork, and Internationality.
Creativity. Information technology embraces all realms of life and has
shown itself to be a creative discipline. Graduates of Computer Science
are more than just programmers. They are managers and technological
innovators with an important role in leadership.
The Faculty of Computer Science teaches students to build on their inherent capacity to solve problems. The classroom provides fundamental knowledge of computing platforms, languages, modeling, and algorithms. Laboratory practice allows students to apply their skills by creating fully functional applications that solve real-world problems.
Teamwork. To provide solutions, Information Technology relies on teams of experts in computer networks and protocols, software, data storage, and specific application domains. IT solutions require an interdisciplinary approach, uniting highly diverse disciplines from biology to linguistics.
Working in teams creates shared knowledge that is greater than the sum of each member's expertise. Students learn to collaborate and exploit their colleagues' expertise, while at the same time learning to respect the work of others and developing an appreciation for standards and methods.
Internationality. Information technology is being studied and developed
around the world. The use of English in the teaching environment gives
students the capability of functioning in international and multicultural
environments.
The international character of our study programs also comes from courses integrated at the European and global scale. Our academic team and student community interact with members from all over the globe.
Teaching. The teaching body of the Faculty of Computer Science is international, young, dynamic, and active in research. For this reason, the Faculty as a whole represents a center of excellence at national and international levels, as is attested, among other things, by the wealth of research projects financed with national and European funds and by local public and private institutions.
The Faculty is composed of 13 permanent professors who teach courses
and lead research activities in information technology. Another 25 assistant professors and researchers complete the staff, contributing to classroom teaching and laboratory practice, as well as demonstrating the Faculty’s strong commitment to research and development. Each member of
the staff has a dedicated section in the following chapters.
The Faculty has a one-to-six staff-to-student ratio that is second to none in Europe.
Research. The research activities in the Faculty of Computer Science are
organized in research centers:
Center for Applied Software Engineering
Center for Database and Information Systems
KRDB Research Center for Knowledge and Data
The research centers cooperate on international and local research and
software development projects. Students at all levels contribute to these
projects and programs, developing their skills and assisting in advancements in information technology. A chapter is dedicated to each of the
research centers.
Technology Transfer. Our connection to global research initiatives means
that solutions being developed around the world can be applied locally.
We have strong links with the local area and economy through common
research initiatives and connections with local companies to build top
IT solutions.
Professional Outlook. A degree from the Faculty of Computer Science is
the key to success in engineering, management and research in professional, technological and scientific environments. Undergraduate and
master's programs prepare students to take Italy's professional engineering exam at the junior and engineer levels. Recent surveys demonstrate that graduates from our degree programs find work quickly and at better-than-average pay.
An interdisciplinary, integrated approach forms the core framework of the
teaching, research and practical applications of the Faculty of Computer
Science. The degrees we offer evolve naturally from our research activities. In turn, students of all levels have the opportunity to work directly on
software development projects that have real-world impact for business,
the public, and the international research community.
International English Program. The Faculty’s main objective is to prepare
its students well in order to ease their integration into the world of information technologies. English is the primary language of international
communication, but it is also the language of computing. For this reason,
English is the main teaching language.
Studying Computer Science in Bozen-Bolzano gives students the opportunity to live and study in a trilingual environment. Italian and German are
part of every student’s daily life. At the bachelor level, Italian and German
are taught as a part of the study curriculum.
Accredited School and Facilities. The Faculty placed first in the most recent ministerial evaluation of research by the CIVR (Comitato di Indirizzo per la Valutazione della Ricerca), and it has excellent infrastructure (research facilities and library) and teaching spaces (lecture rooms and labs) that support and facilitate studying at the Faculty.
Degrees. The faculty offers three levels of studies: Bachelor of Science,
Master of Science and Doctoral Degree. At each level, there is a wide
range of optional courses that permits students to focus their interests.
Moreover, we use the European Credit Transfer System (ECTS) in all courses, thus facilitating exchanges with other European universities, and we offer several European Master's programs that include periods of study at partner universities in Europe.
study programs
Bachelor in Computer Science & Engineering
3-YEAR PROGRAM
The objective of the Bachelor degree is to educate professionals who can find employment in
information and communication technologies.
At the same time, the program provides a basis for continued studies at the master degree
level. The program's courses teach students to apply computer science methodologies and technologies to solve problems. In addition,
students acquire basic knowledge of mathematics and economics, and learn to communicate about and document their efforts in three
languages.
Master in Computer Science
2-YEAR PROGRAM
The master’s degree has a strong orientation
towards innovation and research, and provides
international study paths. The program has a
number of outstanding features, including project-based routes, five curricula, and three European Masters programs that lead to double and
joint degrees.
PhD in Computer Science
3-YEAR PROGRAM
This program aims at producing top researchers
and future leaders in Computer Science. Graduates can either continue in research or enter
industry as leaders in development programs.
The doctoral program is especially strong in
involving students in research projects of the
three research centers.
unibz.it/inf
Italiano. Vision. A degree from the Faculty of Computer Science is the key to success for careers in engineering, management, and research in professional, technological, and scientific environments. The interdisciplinary, integrated approach adopted by the university is a fundamental element of its teaching methods, research, and practical applications. The Faculty's success derives from three key factors:
Creativity. Information technology (IT) embraces every field of life and is necessarily a highly creative discipline. The Faculty's programs teach students how to build solutions starting from their innate capacity to solve problems.
Teamwork. IT solutions require an interdisciplinary approach capable of combining different subjects and skills. Students learn to collaborate, to draw on the experience of their colleagues, and to come, step by step, to appreciate the standards and methodologies adopted.
Internationality. The use of English in teaching accustoms students to moving confidently in international and multicultural environments. The study programs are integrated on a European and global scale.
Deutsch. Vision. A university degree from the Faculty of Computer Science is the key to success in engineering, management, and research in professional, technological, and scientific settings. An interdisciplinary, integrated approach forms the essential core of teaching, research, and practical applications. The Faculty's success rests on three key factors:
Creativity. Information technology (IT) encompasses all areas of life and has proven to be a creative discipline. The Faculty's programs teach students to develop their problem-solving abilities.
Teamwork. IT solutions require a cross-disciplinary approach that combines different fields and skills. Students learn to work together, to make use of their colleagues' experience, and to value the standards and methods applied.
Internationality. The fact that teaching takes place in English enables students to move effortlessly in an international and multicultural context. The curricula are integrated at the European and global level.
Key points of the study programs
Italiano. Teaching staff. The Faculty of Computer Science boasts an international, young, dynamic teaching body that is active in research. 13 professors and 25 researchers teach and conduct research at international level.
Research. The Faculty's research activities are organized into three centers: the Center for Applied Software Engineering, the Center for Database and Information Systems, and the KRDB Research Centre for Knowledge and Data.
Technology transfer. The Faculty's involvement in global research initiatives channels know-how to local companies and public bodies.
International English. English is the language of computing and of the international community. The bachelor's curriculum also includes the teaching of Italian and German.
Deutsch. Teaching staff. The teaching body of the Faculty of Computer Science is international, young, dynamic, and actively engaged in research. The Faculty's 13 professors and 25 assistants and researchers teach and conduct research at international level.
Research. The Faculty's research activity is distributed across three centers: the Center for Applied Software Engineering, the Center for Database and Information Systems, and the KRDB Research Centre for Knowledge and Data.
Technology transfer. The Faculty's involvement in global research initiatives conveys know-how to local companies and public bodies.
International English. English is the language of computer science and of the international community. Within the bachelor's degree program, however, German and Italian are also part of the curriculum.
Italiano. Degrees
Bachelor in Computer Science and Engineering. Besides serving as the first step toward further university studies, the bachelor's degree aims to train professionals able to carry out tasks related to information and communication technologies.
Master in Computer Science. The master's degree is strongly oriented toward innovation and research and offers international study paths.
PhD in Computer Science. The doctoral program aims to train top-level researchers and to create the computer science leaders of the future. Doctoral graduates can continue an academic career or enter industry as leaders of development programs.
Deutsch. Degrees
Bachelor in Computer Science and Engineering. The goal of this degree is to train professionals who can take on tasks in information and communication technology. It also provides the foundation for further study programs.
Master in Computer Science. The Master in Computer Science is strongly oriented toward innovation and research and offers international curricula.
PhD in Computer Science. The doctoral program aims to train researchers of a high calibre and to create leading figures for the computer science of the future. After completing the doctorate, graduates can continue working in research or move into industry as leaders of development programs.
staff.facultyOfComputerScience
professors
Giancarlo Succi / Dean & Full Professor
Research Center: CASE
[email protected]

Pekka Abrahamsson / Vice-Dean & Full Professor
Research Center: CASE
[email protected]

Alessandro Artale / Assistant Professor
Research Center: KRDB
[email protected]

Diego Calvanese / Associate Professor
Research Center: KRDB
[email protected]

Gabriella Dodero / Full Professor
Research Center: CASE
[email protected]

Enrico Franconi / Associate Professor
Research Center: KRDB
[email protected]

Johann Gamper / Associate Professor
Research Center: DIS
[email protected]

Werner Nutt / Full Professor
Research Center: KRDB
[email protected]

Etiel Petrinja / Assistant Professor
Research Center: CASE
[email protected]

Francesco Ricci / Associate Professor
Research Center: DIS
[email protected]

Barbara Russo / Associate Professor
Research Center: CASE
[email protected]

Alberto Sillitti / Associate Professor
Research Center: CASE
[email protected]

Sergio Tessaris / Assistant Professor
Research Center: KRDB
[email protected]
researchers
Periklis Andritsos
Research Center: DIS
[email protected]

Nikolaus Augsten
Research Center: DIS
[email protected]

Valeria Fionda
Research Center: KRDB
[email protected]

Ilenia Fronza
Research Center: CASE
[email protected]

Cigdem Gencel
Research Center: CASE
[email protected]

Rosella Gennari
Research Center: KRDB
[email protected]

Andrea Janes
Research Center: CASE
[email protected]

Mouna Kacimi
Research Center: DIS
[email protected]

Marco Montali
Research Center: KRDB
[email protected]

Alessandro Mosca
Research Center: KRDB
[email protected]

Giuseppe Pirrò
Research Center: KRDB
[email protected]

Tadas Remencius
Research Center: CASE
[email protected]

Juha Rikkilä
Research Center: CASE
[email protected]

Mariano Rodriguez-Muro
Research Center: KRDB
[email protected]
Bruno Rossi
Research Center: CASE
[email protected]

Vladislav Ryzhikov
Research Center: KRDB
[email protected]

Xiaofeng Wang
Research Center: CASE
[email protected]

Floriano Zini
Research Center: DIS
[email protected]
administrativePool
1 Nadine Mair, Head of the Faculty Administration, +39 0471 016001, [email protected]
2 Claudia Asper, +39 0471 016012, [email protected]
3 Federica M. Cumer, +39 0471 016005, [email protected]
4 Stefania Fiorese, +39 0471 016003, [email protected]
5 Viviana Foscarin, +39 0471 016004, [email protected]
6 Margareth Lercher, +39 0471 016010, [email protected]
7 Carmen Pichler, +39 0471 016011, [email protected]
8 Valentina Rossi, +39 0471 016017, [email protected]
Ines Rosselli, +39 0471 016131, [email protected]
Marianna Gesualdo, +39 0471 016006, [email protected]
Sabine Zanin, +39 0471 016007, [email protected]

technicalPool
9 Cristiano Cumer, Coordinator of the Technical Pool, +39 0471 016015, [email protected]
10 Roberto Cappuccio, +39 0471 016013, [email protected]
11 Konrad Hofer, +39 0471 016014, [email protected]
research.areas
Agile Methodologies and Lean Management
Software Product Lines
Development of Service Oriented Systems
Open Source Development
Distance Learning in Software Engineering
Software Metrics
Software Quality
Software Reuse & Component based Development
Experimental Software Engineering & Software Engineering Knowledge Bases
current projects
QualiPSo: Quality Platform for Open Source Software.
ART DECO: Adaptive InfRasTructures for DECentralized Organizations.
AGILE: Adoption of Agile Methods in the Production of Embedded Systems.
TEKNE: Towards Evolving Knowledge-based interNetworked Enterprise.
MOSS: Measuring Open Source Software.
ABITEA: Measures of Alignment in Business/IT for Azienda Energetica S.p.A.
UNISAD: University and SAD Trasporto Locale S.p.A.
PROM: Professional Metrics for Software.
RiMaComm: Risk Management and Communication at Local and Regional Level.
CASE
Center for Applied Software Engineering
Founded in 2001, CASE has developed a reputation for conducting high quality research in Applied Software Engineering and for promoting
collaboration between industry and the Free
University of Bozen-Bolzano. The center also
offers a unique learning environment for undergraduate, graduate, and doctoral students.
Italiano. Founded in 2001, CASE has earned an excellent reputation for conducting high-quality research in applied software engineering and for promoting collaboration between industry and the University of Bozen-Bolzano. The center also provides an exceptional learning environment for both undergraduate and doctoral students.
Deutsch. CASE was founded in 2001 and has gained an outstanding reputation thanks to its high-quality research in applied software engineering and its promotion of cooperation between industry and the University of Bozen-Bolzano. The center also provides an exceptional learning environment both for students approaching graduation and for doctoral candidates.
CASE has a mission to:
Form partnerships with local, national and international research and development institutions in the area of applied software engineering.
Create a cooperative environment to transfer know-how and advanced technologies to local industry through consulting.
Participate in national, European, and international research projects.
Educate future Software Engineering researchers and professionals.

Italiano. The mission of CASE includes: creating partnerships with local, national, and international research and development institutions in the field of applied software engineering; establishing a climate of cooperation aimed at transferring advanced knowledge and technologies to local industry through dedicated consulting services; participating in national, European, and international research projects; and training future researchers and professionals in the field of software engineering.

Deutsch. The mission of CASE includes: creating partnerships with local, national, and international research and development institutions in the field of applied software engineering; creating a climate of cooperation with the aim of conveying highly advanced knowledge and technologies to local industry through suitable consulting services; participating in national, European, and international research projects; and training future researchers and specialists in the field of software engineering.
giancarloSucci ( dean & professor )
In the past, people thought that software could be manufactured like any other good through a process of analysis, design, and implementation. Nothing could have been more wrong. We
now understand why formal steps and procedures have not resolved the software «crisis».
The problem is not the inability of software engineers to capture requirements or to design.
The problem is that, in most cases, the design
is clear only at the end of the project.
Life would be much easier if very careful upfront
analysis of user needs, widespread adoption of
a complex-but-yet-very-sound formal language,
and very detailed planning could solve the software crisis. And people have spent enormous
efforts trying to find the ideal way to collect requirements, create a stellar formal language,
and find the ultimate planning tool. Yet this is
not the answer to the great software question.
Lack of understanding is not caused by superficial analysts. Unexpected changes are not accidents that can be addressed by good planning.
Wrong or misleading decisions are not the result of poor management. In fact, lack of understanding, unexpected changes, misleading decisions are intrinsic to software production.
A large number of scientists think that a major advance in the discipline will occur only when significant information is collected from actual working environments in order to build models grounded in concrete data, so-called Software Metrics. This branch of software engineering is called Empirical Software Engineering precisely because it aims to advance the state of the art via empirical observations of real environments.
Giancarlo Succi holds a Laurea degree in Electrical Engineering (Genova, 1988), an MSc in
Computer Science (SUNY Buffalo, 1991) and
a PhD in Computer and Electrical Engineering (Genova, 1993). He has been a registered
Professional Engineer in Italy as well as in the
province of Alberta, Canada.
Among these researchers, Giancarlo Succi is
studying what happens when developers write
code. He uses refined mechanisms to model
what developers do while working and to identify probable causes of productivity losses and
defects. This method makes it possible to assess the effectiveness of different working practices and tools, to build scenarios for possible improvement initiatives, and to monitor the application of proposed solutions.
Giancarlo Succi has written more than 300 papers published in international journals (more than 70), books, and conference proceedings, and is editor of six books and author of four. He has been principal investigator for projects amounting to more than € 5 million in cash funding and, overall, he has received more than € 10 million in research support from public and private grant bodies. He chairs and co-chairs many international conferences and workshops, is a member of the editorial boards of international journals, and is a leader of international research networks.
Giancarlo Succi has also contributed to research in other methodologies that try to improve software development practices, including Software Reuse and Product Lines, Agile
and Lean Methods for Software Production,
and Open Source Development. In another area, Giancarlo Succi is experimenting with new
development techniques in the field of Systems
for Mobile Devices, paying particular attention
to the effectiveness of production processes
and of the use of the resulting applications to
enhance the quality of software production.
Italiano. Creating software is still, above all, an art. Although there are excellent software engineers and very good schools and universities that train them, it is hard to systematize the process by which efficient software is produced: this is precisely the theme on which Giancarlo Succi's main research is focused.
The problem is that the product design of software becomes clear only at the end of the project. Everything would be simpler if a careful analysis of user needs, well-defined formal languages, and detailed planning were able to resolve the «software crisis»; unfortunately, however, lack of understanding, unforeseen changes, and misleading decisions are factors intrinsic to software production.
Giancarlo Succi's research belongs to the field of Empirical Software Engineering, which aims to advance the state of the art through empirical observation of the real world. He uses mechanisms that model the operations carried out by programmers during production in order to identify the probable causes of productivity losses and defects and to find appropriate solutions.
Giancarlo Succi is a consultant for several private and public organizations worldwide in the
area of software system architectures, design,
and development; strategy for software organizations; open source systems and training of
software personnel.
Giancarlo Succi is a Fulbright Scholar.
Deutsch. The production of software is still primarily an art. Although there are excellent software engineers and outstanding schools and universities, it is still difficult to systematize the process of producing good software. This is Prof. Succi's main field of research.
The difficulty lies in the fact that the product design of the software only becomes clear when the project is completed. While careful analysis of user needs, properly defined programming languages, and detailed planning could offer a solution to the software crisis, lack of understanding, unexpected changes, and misleading decisions are inherent to software production.
Prof. Succi conducts research in the field of Empirical Software Engineering, which aims to advance the state of the art through the empirical observation of real environments. He uses modeling mechanisms to understand what developers do during their work, to identify possible causes of productivity losses and defects, and to find remedies.
selected publications
Paulson, J.W., Succi, G., & Eberlein, A. (2004). An empirical study of open-source and closed-source software products. IEEE Transactions on Software Engineering, 30(4): 246-256.
Ceschi, M., Sillitti, A., Succi, G., & De Panfilis,
S. (2005). Project Management in Plan-Based
and Agile Companies. IEEE Software 3:21-27.
Coman, I., Sillitti, A., & Succi, G. (2009, May).
A Case-study on Using an Automated In-process Software Engineering Measurement and
Analysis System in an Industrial Environment.
In Proceedings of the 31st International Conference on Software Engineering (ICSE 2009).
Vancouver, Canada.
Sillitti, A., Succi, G., & Vlasenko, J. (2011, May). Toward a better understanding of tool usage. In Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011). Honolulu, HI, USA.
Fitzgerald, B., Kesan, J., Russo, B., Shaikh, M., & Succi, G. (2011). Handbook of Open Source
Software Adoption: A Practical Guideline. MIT
Press.
pekkaAbrahamsson ( vice-dean & professor )
What will the world look like in 2035? It's a fascinating question that doesn't have an easy answer. With today's fast-moving economy, contemporary societal challenges and unforeseeable technological advances, making predictions even a few years ahead is extremely difficult. We have all heard accounts of past technology predictions that seem silly today even though they were seriously proposed by an authority at the time. On the other hand, other expectations regarding technology developments have turned out to be overly optimistic, as is the case with artificial intelligence.
Twenty-some years from today, most likely, the future will be full of software-powered devices interacting with people and with each other. We will continue to be amazed by technology developments as the traditional boundaries between technology, software and humans begin to evaporate. The Internet of tomorrow will have considerably more bandwidth, and cyber-physical systems will become a reality.
This sounds fun, but how does this futuristic thinking help us to succeed today? A large
number of scientists believe that an important
way to help society and drive economic growth
is to understand what companies are doing today, how and why they are doing it. While this
will help by increasing our understanding incrementally, the role of software in the society of
tomorrow will grow in importance by several orders of magnitude in the next decades. But just
looking at what happens today is insufficient for
achieving an innovative leap because today’s
software systems are so remarkably error-prone
and unreliable. Basing future systems on today’s badly flawed ones risks merely multiplying
those problems in unprecedented quantity. It’s
quite simply an unsustainable line of thinking
and a radically different approach is called for.
Pekka Abrahamsson is building the Software Factory of tomorrow. A Software Factory is a physical entity, a place where software is born, but with very little in common with past limitations like legacy systems, historic rationales, habits, routines, beliefs, economic development or vendors and their solutions.
Companies cannot operate in the future because they are bound to the current technological frame, economic constraints and market reactions. In fact, it would be very risky for a company to attempt to redefine everything, and doing so would most likely result in economic failure. So the new Software Factory needs to be inside the university: globally connected to other universities to form a factory network.
Italiano. What will the world look like in 2035? Hard to say: too many expectations about technological developments have turned out to be wrong (think of Artificial Intelligence).
Although some scientists are convinced that the key to development lies in understanding what companies are doing right now, basing future methods on today's deeply imperfect ones means risking an exponential growth of the problems. With this approach, we could easily miss the opportunity to take a significant leap forward.
Pekka Abrahamsson is building the Software Factory of tomorrow, a methodology that bypasses the limitations imposed by the current development scenario. The Software Factory of the Free University of Bozen-Bolzano draws on education, research and entrepreneurship to create cross-functional teams equipped with design and analytical talent, engineering skills and economic knowledge. The Software Factory combines the research and teaching strengths of several universities to create a global development and learning space that is hard to match.

Pekka Abrahamsson is developing the Software Factory at UNIBZ by harnessing education,
research and entrepreneurship in a unique way. The goal is to build cross-functional teams with design and thinking talent, engineering skill, and an understanding of economics.
Education in Software Factory integrates teaching of several universities to provide a global
development space and a learning experience
unattainable in a typical university setting. The
Software Factory also improves research quality. It trains PhD students, performs basic and
applied research in its operational context and
performs tests for evaluating different research
methods. It also provides a context for PhD and
Master’s students to pursue their own thinking and challenge common wisdom through
cross-disciplinary research and open calls for
research proposals.
Pekka Abrahamsson is instilling entrepreneurial thinking in students. Europe needs entrepreneurs to develop business in software. The Software Factory strives to develop business prototypes at the alpha-test stage. If students do not become entrepreneurs themselves, they have the skills to act as intrapreneurs within a large company or the public sector.
Deutsch. What will the world look like in 2035? Hard to say: far too many expectations about the development of technology have proved to be wrong (just think of artificial intelligence).
Although some scientists have come to the conviction that the key to development lies in understanding what companies are doing today, it would be extremely risky to develop future methods on the basis of today's highly imperfect ones, since the existing problems would thereby multiply many times over. If we approach the matter in this way, we easily run the risk of letting the chance to take a significant step forward slip away.
Prof. Abrahamsson is in the process of building the Software Factory of tomorrow, a method that bypasses the restrictions of the current development scenario. The Software Factory of the Free University of Bozen-Bolzano uses education, research and entrepreneurial spirit to form cross-disciplinary teams in which planning and analytical abilities, engineering and knowledge of economics are combined. The Software Factory combines the research and teaching capabilities of various universities in order to create a space for global development and learning that is hard to match.

Pekka Abrahamsson started the Software Factory movement in 2010 at the University of Helsinki. It has now gained momentum, and today
there are Software Factories in Universidad
Politécnica de Madrid, Indra, University of Eastern Finland and University of Oulu. New factories are being set up globally with potential candidates in Asia, Oceania and the Americas.
Before joining the Free University of Bozen-Bolzano, Pekka Abrahamsson held academic positions at the University of Oulu, VTT Technical Research Centre of Finland, the University of Tampere, SINTEF in Norway and, most recently, the University of Helsinki. He has led large European research projects and has published actively in scientific conferences and journals. He has helped organize more than 100 software conferences. He was the recipient of the Nokia Foundation Award in 2007.
selected publications
Laanti, M., Salo, O., & Abrahamsson, P. (2010). Agile methods in Nokia. Information and Software Technology, 53(3): 276-290.
Ikonen, M., & Abrahamsson, P. (2010). Operationalizing the concept of success in software engineering projects. International Journal of Innovation in the Digital Economy.
Abrahamsson, P., & Oza, N. (Eds.). (2010). Lean Enterprise Software and Systems (LESS 2010). LNBIP 65, conference proceedings.
Abrahamsson, P., Oza, N., & Siponen, M. (2010). Agile Software Development Methods: A Comparative Review. In: Dybå, T., & Dingsøyr, T. (Eds.). Agile Software Development: Current Research and Future Directions. Springer.
Abrahamsson, P., Babar, M.A., & Kruchten, P. (2010). Agility and Architecture: Can They Coexist? IEEE Software, 27(2): 16-22.
gabriellaDodero ( professor )
Technology is expected to support all kinds of processes, and learning processes are no exception. Learning processes, being a crucial part of human thinking, are far from standardized, and a number of pedagogical theories describe what phases should be undertaken to make significant improvements. One such theory is socio-constructivism, which highlights
make significant improvements. One such theory is socio-constructivism, which highlights
the role of collaborative activities undertaken by learners as the most significant moment
for constructing new knowledge. In socio-constructivism, the teacher is the facilitator of interactions among peers, rather than the source
of authoritative information.
Learning in a socio-constructivistic fashion can
take place in the traditional classroom, as well
as inside a virtual classroom. While young people experience the former in schools and at universities, the latter situation typically occurs in
adult life when new learning needs arise and
the constraints imposed by working life and
family oblige learners to take online courses.
Technologies support the online learning process mainly by providing learning platforms.
Many such platforms exist and are in wide use.
Researchers are continuing to look for ways to
improve features and increase flexibility to support learning processes.
The research activity of Gabriella Dodero, in the
recent past, was devoted to analyzing the impact of new technologies in learning platforms.
Key issues considered in her research—as suggested by socio-constructivism—are the relationships between the individual learner and
his/her learning community peers.
The first topic under consideration was social translucence within the learning platform. Social translucence is a mechanism
for designing interfaces such that each user is
aware of other users on the platform. That is,
each learner receives visual cues about what
other learners are doing. The behavior of the
learner naturally exploits possible interactions
when peers are «close together» in the virtual
classroom. The learning space provides logical
proximity relationships as a substitute for physical proximity in the real world, showing them in
a virtual 3D space.
In a personal learning environment, the social
aspect of learning is represented by information being shared among peers. Exploring the
personal learning environment of peers, browsing their aggregations, and importing what is
considered useful is the main way of extending one’s own learning environment. Again, the
place where such learning processes take place
is a 3D virtual learning space.
The second topic considered was that of supporting informal and non-formal learning, which takes place by autonomously collecting interesting information from a variety of sources and re-elaborating it in a user-centered environment. Interest in this technology, called the personal learning environment, is growing because it seems to be the most promising way of integrating lifelong and life-wide learning processes.
Italiano. The theory of socio-constructivism regards the collaborative activities of learners as the most significant moment in the creation of new knowledge. Gabriella Dodero applies this theory to the study and design of collaborative tools for e-learning platforms.
One aspect of Gabriella Dodero's research concerns social translucence, a mechanism for designing interfaces in which users can perceive the other users present on a platform, and thanks to which learners naturally exploit the potential for interaction with the «colleagues» who are close to them in the 3D virtual classroom.
Another theme of Gabriella Dodero's work is support for formal and informal learning, which takes shape through the autonomous collection of interesting information from a diversified set of sources. The information gathered is then used to support a «personalized learning environment». In this e-learning space, proximity is determined by the user, who chooses, among temporal relationships, tags, keywords and other classifications, those best suited to aggregating the desired information.
In this research, proximity is the key to exploring the learning space. The definition of proximity is provided by the user, who can choose among temporal relationships, tags, keywords, and other classifications as the means for aggregating information.
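To make the idea of user-defined proximity concrete, the following Python sketch (an illustration only, not code from Dodero's platforms; the resources, tags, and dates are invented) aggregates a handful of learning resources either by a shared tag or by temporal closeness:

    from datetime import date

    # Hypothetical learning resources in a personal learning environment:
    # (title, tags, date collected)
    resources = [
        ("Intro to Moodle",      {"e-learning", "platforms"}, date(2011, 3, 1)),
        ("Socio-constructivism", {"pedagogy"},                date(2011, 3, 3)),
        ("3D virtual worlds",    {"e-learning", "3d"},        date(2011, 5, 20)),
        ("OpenSim tutorial",     {"3d", "platforms"},         date(2011, 5, 22)),
    ]

    def aggregate_by_tag(resources, tag):
        """Tag-based proximity: resources sharing the chosen tag."""
        return [title for title, tags, _ in resources if tag in tags]

    def aggregate_by_time(resources, pivot, days=7):
        """Temporal proximity: resources collected within `days` of a pivot date."""
        return [title for title, _, when in resources
                if abs((when - pivot).days) <= days]

    print(aggregate_by_tag(resources, "3d"))               # ['3D virtual worlds', 'OpenSim tutorial']
    print(aggregate_by_time(resources, date(2011, 3, 2)))  # ['Intro to Moodle', 'Socio-constructivism']

A real personal learning environment would let the learner switch interactively between such aggregation criteria, with the resulting groups displayed as neighbourhoods in the 3D space.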
Deutsch. Die Theorie des Sozialkonstruktivismus betrachtet die Zu­
sammenarbeitstätigkeiten der Lernenden als bedeutendsten Faktor­ bei
der Schaffung neuer Kenntnisse. Prof. Dodero wendet diese­ Theorie auf
das Studium und die Entwicklung kollaborativer E-Tools für E-Learning
Plattformen an.
Ein Aspekt der von Prof. Dodero betriebenen Forschung betrifft die
­social translucence (soziale Transparenz), ein Mechanismus für die Entwicklung von Schnittstellen, bei denen sich die User der Anwesenheit
anderer User innerhalb einer Plattform bewusst sind, dank dem der Lernende also auf natürliche Weise das Interaktionspotential mit anderen
«Kollegen» nutzt, die sich mit ihm in einem virtuellen, dreidimensionalen
Schulungsraum befinden.
Ein weiteres Thema, mit dem sich Prof. Dodero beschäftigt, ist die Unterstützung des formellen und informellen Lernens, die sich in der unabhängigen Sammlung von interessanten Informationen aus einer diversifizierten Reihe von Quellen konkretisiert. Die eingeholten Informationen
werden dann zur Unterstützung einer «personalisierten Lernumgebung»
herangezogen. Im Rahmen des E-Learnings wird die Nähe durch den User
bestimmt, der zwischen mehreren Zeitbeziehungen, Tags, Schlüsselwörtern und weiteren Klassifizierungen die geeignetsten wählt, um die gewünschten Informationen zu aggregieren.
selected publications
Di Cerbo, F., Forcheri, P., Dodero, G., Gianuzzi, V., & Ierardi, M.G. (2009). Hybrid learning experiences with a collaborative open source environment. In: Fong, J., Kwan, R., & Wang, F.L. (Eds.). Hybrid Learning and Education, First International Conference, pp. 45-54. Lecture Notes in Computer Science. Springer.
Di Cerbo, F., Dodero, G., & Papaleo, L. (2010, July). Integrating a Web3D Interface into an E-learning Platform. In: Proceedings of Web3D 2010. Los Angeles, California, USA.
Di Cerbo, F., Dodero, G., & Papaleo, L. (2010). Integrating multidimensional means in e-learning. In: Proceedings of the International Workshop on Multimedia Technologies for Distance Learning. New York, USA.
Di Cerbo, F., Dodero, G., & Yng, T.L.B. (2011, July). Bridging the Gap between PLE and LMS. In: Proceedings of the 11th IEEE International Conference on Advanced Learning Technologies (ICALT 2011). Athens, USA.
Di Cerbo, F., Dodero, G., & Papaleo, L. (2011). Experiencing Personal Learning Environments and Networks using a 3D Space Metaphor. ID&A Interaction Design & Architecture(s), 11-12: 64-76.
barbaraRusso ( associate professor )
Barbara Russo’s research concerns software reliability and measurement, a field of software
engineering that aims at understanding software products in terms of delivered quality.
software defects over time to predict when further errors will happen.
Software programs are complex products that
often contain dysfunctions and faults. According to the American National Institute of Standards and Technology, low software quality
causes losses of tens of billions of dollars in
the Unites States alone. Most software is developed by private companies—like Siemens
and IBM—that have reported defects being between 4 and 30 per thousand lines of code. This
is a significant number of defects for a large,
modern software product.
Over the last 15 years, a significant amount of research in software reliability has focused on the
study of open source software (OSS) [5], software whose source is publicly available [1]. Barbara Russo specializes in mining large amounts
of data over the Internet to build OSS reliability models [2, 4, 5]. Her work includes measuring software artifacts to understand development processes and products, as well as their
use [1, 3]. Her research provides practitioners
and researchers with insights and recommendations on developing high quality software as well
as on adopting it in companies and government.
Software reliability research tries to predict
software quality. To find defects in software,
researchers analyse source code artifacts like
the lines in which software is written or the
logic expressed by this code. On the one side,
researchers try to estimate the number, frequency and type of faults of software defects
in order to categorize and localize them in the
source code [4]. On the other side, they model
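By way of illustration only (this is a textbook reliability growth model fitted to invented weekly defect counts, not one of Russo's published models), the following Python sketch shows the flavour of such time-based modelling:

    import numpy as np
    from scipy.optimize import curve_fit

    # Cumulative defects observed at the end of each week (hypothetical data).
    weeks = np.arange(1, 13)
    cum_defects = np.array([12, 25, 41, 52, 61, 70, 76, 81, 84, 87, 89, 90])

    def goel_okumoto(t, a, b):
        """Mean value function of the Goel-Okumoto model:
        a = expected total number of defects, b = detection rate."""
        return a * (1.0 - np.exp(-b * t))

    (a, b), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[100.0, 0.1])

    # Defects expected to remain undiscovered after week 12,
    # and a rough forecast for the next four weeks.
    remaining = a - goel_okumoto(weeks[-1], a, b)
    forecast = goel_okumoto(np.arange(13, 17), a, b)
    print(f"estimated total defects: {a:.1f}, remaining: {remaining:.1f}")
    print("forecast (weeks 13-16):", np.round(forecast, 1))

With real defect data and more sophisticated models, such fits can support decisions like whether a product is stable enough to release.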
She supports software development globally by taking part in the international community of OSS researchers at its annual meeting, which she is chairing in 2011. She also participates in the software engineering community through the International Software Engineering Research Network, which meets yearly at the Empirical Software Engineering and Measurement conference. In 2010, the conference
Italiano. Le ricerche di Barbara Russo riguardano l’affidabilità e la misurazione del software, un’area del software engineering che ha l’obiettivo di analizzare i prodotti software sotto il profilo della qualità erogata.
I software sono prodotti complessi e spesso contengono numerosi difetti
ed anomalie che hanno causato perdite di decine di miliardi di dollari nei
soli Stati Uniti. Non sono molti i dati resi pubblici, ma i difetti dei software
prodotti dalle maggiori aziende del settore oscillano fra 4 e 30 ogni mille linee di codice sorgente – un numero considerevole di anomalie per un
software moderno e di estesa applicazione.
Le ricerche sull’affidabilità di un software tentano di prevederne la qualità, cercando da una parte di formulare una stima sul numero, la frequenza e il tipo di anomalie nel funzionamento del software stesso e di ideare
metodi utili ad individuarle all’interno del codice sorgente, dall’altra simulando l’evoluzione di tali anomalie nel tempo per prevedere il momento in cui potrebbero causare altri problemi.
Barbara Russo è specializzata in software open source (OSS) ed estrapola
da Internet una grande quantità di dati utili alla costruzione di modelli di
affidabilità per tali tipi di software. Il suo lavoro consiste anche nell’approfondire la conoscenza dei prodotti e dei processi di sviluppo, nonché del
loro utilizzo, attraverso la misurazione dei software. I suoi studi forniscono
a professionisti e ricercatori idee e orientamenti sullo sviluppo di software di alta qualità e sul suo utilizzo nelle aziende e negli organismi pubblici.
was held in Bolzano (Lutteri, Russo, B., & Succi, G. (2011). Report of the 4th International Symposium on Empirical Software Engineering and Measurement ESEM 2010. ACM SIGSOFT Software Engineering Notes, vol. 36, no. 2, 28-34). She is a permanent member of the editorial team of the International Journal of Open Source Software and Processes, and a reviewer for major journals (IEEE Transactions on Software Engineering, the Journal of Systems and Software, and Information and Software Technology).
She teaches courses that help students to predict software development issues, as well as courses on how to measure software artifacts. She is coordinator of the European Master in Software Engineering, which is based on a didactic program shared by the University of Kaiserslautern (Germany), Blekinge Tekniska Hoegskola (Sweden), and the Free University of Bozen-Bolzano. She is also head of the faculty's Master of Science council.
Deutsch. Die Forschungsarbeit von Prof. Russo betrifft die Zuverlässigkeit und die Messung von Software, ein Gebiet der Softwaretechnik, das darauf abzielt, Software-Produkte im Hinblick auf die gelieferte Qualität zu untersuchen.
Softwareprogramme sind komplexe Produkte, die oft zahlreiche Fehler
und Störungen enthalten. In der Tat verursacht schlechte Software-Qualität allein in den USA Verluste in Höhe von Dutzenden Milliarden Dollar. Zu
diesem Thema werden nicht viele Daten veröffentlicht, jedoch liegen die
Softwarefehler der größten Softwarehäuser zwischen 4 und 30 pro tausend Quellcode-Zeilen, was für eine weit verbreitete, moderne Software
eine wahrlich sehr hohe Zahl darstellt.
Die Forschungsarbeit bezüglich der Zuverlässigkeit von Software versucht, die Software-Qualität vorherzusagen. Einerseits versuchen die
Forscher Anzahl, Häufigkeit und Art der Fehler abzuschätzen, die zu Fehlfunktionen der Software führen und gleichzeitig Methoden zu entwickeln,
um sie innerhalb des Quellcodes zu erkennen. Andererseits wird die weitere Entwicklung solcher Störungen im Verlauf der Zeit simuliert, um vorhersehen zu können, wann in Zukunft weitere Fehler auftreten können.
Prof. Russo ist auf Open-Source-Software (OSS) spezialisiert und betreibt
breit angelegtes Data Mining im Internet, um OSS-Zuverlässigkeitsmodelle zu erarbeiten. Ihre Forschungsarbeit besteht auch in der Messung
von Software mit dem Ziel, Entwicklungsprozesse und Produkte sowie
deren Anwendung besser zu verstehen. Ihre Studien bieten Fachleuten
und Forschern Einblicke und Empfehlungen für die Entwicklung von qualitativ hochwertiger Software und für deren Anwendung in Unternehmen
und bei öffentlichen Stellen.
selected publications
1
Fitzgerald, B., Kesan, J., Russo, B., Shaikh, M., & Succi, G. (2011). Adopting Open Source Software: A Practical Guide. The MIT Press, Cambridge, Massachusetts; London, England.
2
Rossi, B., Russo, B., & Succi, G. (2011). Path Dependent Stochastic Models to Detect Planned and Actual Technology Use: a
case study of OpenOffice. Journal of Information and Software
Technology, vol. 53, No. 11, pp.1209-1226.
3
Pedrycz, W., Russo, B., & Succi, G. (2011). A model of Job Satisfaction for Collaborative Development processes. Journal of
Systems and Software, vol. 84, No. 5, pp.739-752.
4
Steff, M. & Russo, B., (2011). Measuring Architectural Change
for Defect Estimation and Localization. In: Proceedings of the
5th International Symposium on Empirical Software Engineering and Measurement. ESEM 2011, September 22-23. Banff,
Alberta, Canada. Forthcoming http://esem.cpsc.ucalgary.ca/
esem2011/esem/program.php.
5
Mulazzani, F., Rossi, B., Russo, B., & Steff, M. (2011). Building Knowledge in Open Source Software Research in Six Years of Conferences. In: Proceedings of the 7th International Conference on Open Source Systems, OSS 2011. October 6-7, Salvador, Brazil. Forthcoming http://ossconf.org/static/2011/program/sessions/.
albertoSillitti ( associate professor )
Alberto Sillitti conducts research in empirical
software engineering. He studies how software
is developed in different contexts (such as mobile and Web) and which practices produce the
best results in terms of software quality and development cost. His overall goal is the definition
of techniques and methodologies to help software companies develop better software.
He focuses on two areas of software development: agile methods and open source. While
these two areas may seem far from each other,
they actually share many basic values and practical aspects. In 2009, together with three colleagues, he published a book on this topic: Agile
Technologies in Open Source Development [1].
The book analyzes similarities and differences,
and reports on a number of empirical investigations on these development techniques.
Regarding agile methods, he studies the benefits of different development techniques, such as pair programming [2], and how programmers organize their work [3, 4]. Moreover, he served as program chair of the International Conference on eXtreme Programming and Agile Processes in Software Engineering (XP 2010 and XP 2011) and regularly serves on its program committee.
Concerning open source software, from 2006 to 2011 he led the development of the QualiPSo Open Maturity Model (OMM) within QualiPSo, an EU-funded project that deals with evaluating the quality of open source development processes [5]. In addition, he was program chair of the International Conference on Open Source Systems (OSS 2007) and regularly serves on its program committee.
Alberto Sillitti investigates software development approaches, taking into consideration the specific environment in which developers work. In particular, he investigates mobile and Web systems. Regarding mobile development, he focuses on platforms such as Android, iOS, and MeeGo, looking at how such platforms can be adapted to different environments. In Web development, he focuses on the composition and testing of Web services.
His activities fall in the area of software engineering, and he is an invited lecturer on the subject in both industrial and academic settings. In summer 2011, he was director of the 3rd International CASE Summer School on Practical Experimentation in Software Engineering, organized in Bolzano.
Italiano. Le ricerche di Alberto Sillitti hanno lo scopo di aiutare le aziende
di software a realizzare prodotti con migliori prestazioni. I suoi studi nel
campo dell’ingegneria empirica del software si incentrano, ad esempio,
sullo sviluppo dei dispositivi mobili e del web per scoprire quali sono le
strategie capaci di garantire i risultati migliori sotto il profilo della qualità
del software e dei costi di sviluppo.
Alberto Sillitti si concentra su due aree di sviluppo software: i metodi agili e l’open source, temi su cui ha recentemente pubblicato, assieme ad
altri colleghi, il testo Agile Technologies in Open Source Development (I
metodi agili nello sviluppo dei software Open Source). Alberto Sillitti studia approcci allo sviluppo software che pongono particolare attenzione al
particolare ambiente di lavoro in cui operano i programmatori.
Per quanto concerne i metodi agili, Alberto Sillitti studia i vantaggi di diverse tecniche di sviluppo – come ad esempio la programmazione in coppia – e l’organizzazione del lavoro da parte dei programmatori. Riguardo
ai software open source, invece, ha condotto la direzione del QualiPSo
Open Maturity Model (OMM), un progetto finanziato dall’UE il cui obiettivo è valutare la qualità dei processi di sviluppo di questo tipo di software.
Deutsch. Die Forschungsarbeit von Prof. Sillitti soll den Softwarehäusern
helfen, leistungsstärkere Software zu realisieren. Seine Studien auf dem
Gebiet der empirischen Softwaretechnik beschäftigen sich beispielsweise­
mit mobilen und Web-Produkten, um herauszufinden, welche Praktik die
besten Ergebnisse im Hinblick auf Softwarequalität und Entwicklungskosten erzielt.
Prof. Sillitti konzentriert sich auf zwei Bereiche: Agile Software-Entwicklung und Open-Source-Software. Zu diesem Thema hat er kürzlich zusammen mit anderen Kollegen ein Buch veröffentlicht: Agile Technologies in
Open Source Development. Prof. Sillitti untersucht die Ansätze zur Software-Entwicklung, indem er die spezifische Umgebung in Betracht zieht,
in der die Entwickler arbeiten.
Was die agile Software-Entwicklung betrifft, untersucht Prof. Sillitti die
Vorteile verschiedener Techniken wie beispielsweise der Paarprogrammierung, sowie die Organisation der Arbeit der Programmierer. Was
die Open-Source-Software betrifft, so leitete er die Entwicklung des
QualiPSo­ Open Maturity Model (OMM), einem von der EU finanzierten
Projekt, dessen Ziel es ist, die Qualität der Entwicklungsprozesse dieser
Art von Software zu beurteilen.
selected publications
1
Russo, B., Scotto, M., Sillitti, A., & Succi, G. (2009). Agile Technologies in Open Source Development. IGI Global, USA, 2009. ISBN 978-1-59904-681-5.
2
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011). Does Pair Programming Increase Developers' Attention? In: Proceedings of the 8th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ACM SIGSOFT/FSE 2011). Szeged, Hungary, 5-9 September 2011.
3
Sillitti, A., Succi, G., & Vlasenko, J. (2011, May 21-28). Toward a better understanding of tool usage. In: Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011). Honolulu, HI, USA.
4
Coman, I., & Sillitti, A. (2009). Automated Segmentation of Development Sessions into Task-related Subsections. International Journal of Computers and Applications, ACTA Press, Vol. 31, No. 3, 2009.
5
Petrinja, E., Nambakam, R., & Sillitti, A. (2009, May 18). Introducing the Open Maturity Model. In: Proceedings of the 2nd
Emerging Trends in FLOSS Research and Development Workshop at ICSE 2009. Vancouver, BC, Canada.
etielPetrinja ( assistant professor )
The software industry is tending towards strong adoption of FOSS, integrating these products into commercial applications. Gartner reports that in 2012 more than 80 percent of commercial products will include some FOSS components. Increased interest in FOSS from major software companies is driving even greater interest on the part of researchers. Research shows that best practices used in commercial software production are being adopted by FOSS developers and vice versa. Work remains to be done in evaluating and certifying the quality of the FOSS development process.
Etiel Petrinja is interested in improving the quality of software products. By paying particular attention to improving the processes and products of free and open source software (FOSS), he is helping to raise the quality of FOSS.
The number of FOSS projects began to grow quickly from the 1990s onward. These products range from small applications for mobile phones to complete operating systems. While FOSS suffered in its early years from a perception of low quality, interest increased as the availability of source code and complete project documentation improved. In fact, research has shown that FOSS is often of higher quality than its closed-source counterparts.
A key research approach is to perform case studies on the hundreds of thousands of FOSS projects available on the Web. Repositories such as SourceForge and OSOR.eu are freely open for analysis and contain complete process documentation for a large number of projects. Based on his results, Etiel Petrinja proposes improvements to development practices. He has identified a set of best practices and assessment criteria that are being applied in various quality assessment models and, in collaboration with colleagues, has helped integrate these criteria into the Open Source Maturity Model (OMM). Several FOSS-focused competence centers have been started as spin-offs of the QualiPSo project, and some of the services offered by these centers use the OMM methodology.
Current research is looking at how to automate
data collection on processes and products.
Italiano. Le ricerche condotte da Etiel Petrinja sono volte al miglioramento della qualità dei prodotti software e si concentrano in particolare sul
software libero ed open source (FOSS), che si è spesso dimostrato, secondo alcuni studi, di miglior qualità rispetto alle controparti closed source.
Al momento, l’industria del software sta impiegando prodotti FOSS, integrandoli ampiamente nelle applicazioni commerciali. A partire dal
2012, l’80% dei prodotti software commerciali comprenderà almeno alcuni componenti FOSS.
Il crescente interesse verso questo prodotto ne sta determinando uno ancora maggiore presso i ricercatori che utilizzano ampi database disponibili nel web. Etiel Petrinja sta studiando il modo di automatizzare ulteriormente la raccolta dati su processi e prodotti.
I developer dei FOSS adottano le best practice utilizzate nella produzione dei software commerciali e viceversa: Etiel Petrinja ha individuato numerosi criteri di valutazione e best practice FOSS che vengono applicati a
vari modelli di valutazione della qualità.
Sulla scia del progetto QualiPSo è stato avviato un buon numero di centri
di competenza specializzati in FOSS.
While several tools already exist, much information still needs to be extracted manually. Etiel Petrinja is working to assess data from a large number of projects in order to provide detailed insight into current software development processes. The work will also help to identify best practices that can be adopted by new software development projects, both closed and open source.
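As a rough sketch of what automated collection of process data can look like (the indicators, file names, and scoring below are assumptions made for this example, not Petrinja's actual assessment tooling), a script might scan a checked-out project for simple practice indicators of the kind a maturity assessment aggregates:

    import os

    # Hypothetical practice indicators: presence of artifacts that document
    # the development process of a FOSS project checked out on disk.
    INDICATORS = {
        "license file":       ["LICENSE", "COPYING"],
        "documentation":      ["README", "README.md", "docs"],
        "automated tests":    ["test", "tests"],
        "contribution guide": ["CONTRIBUTING.md", "HACKING"],
    }

    def assess(project_dir):
        """Return an {indicator: True/False} map for one project checkout."""
        entries = set(os.listdir(project_dir))
        return {name: any(candidate in entries for candidate in candidates)
                for name, candidates in INDICATORS.items()}

    def score(report):
        """A naive maturity score: fraction of indicators satisfied."""
        return sum(report.values()) / len(report)

    if __name__ == "__main__":
        report = assess(".")  # run against the current directory
        print(report, round(score(report), 2))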
Whenever possible, he likes to inspire students with the latest research topics in his field and to include them in experiments and trials of new technologies. His courses focus on reuse, software engineering, open source software engineering, internet technologies, and mobile technologies. Laboratory lectures augment the classes, with students working in small groups.
Etiel Petrinja collaborates with researchers from the State University of Sao Paulo, the South China University of Technology, Rey Juan Carlos University, and the University of Insubria, as well as with Siemens and Engineering Ingegneria Informatica. He has served on program committees for events including the Agility in the Software Process Conference (Belfast, 2008), the 2nd International Conference on Open Source Systems (OSS 2006), the 6th International Conference on Open Source Systems (OSS 2010), the 7th International Conference on Open Source Systems (OSS 2011), and the 4th International Symposium on Empirical Software Engineering and Measurement (ESEM 2010).
Deutsch. Die Forschungsarbeit von Prof. Petrinja zielt auf die Verbesserung der Qualität von Softwareprodukten ab, insbesondere von Free- und
Open-Source-Software (FOSS). Die Forschung hat in der Tat erwiesen,
dass FOSS-Software häufig eine höhere Qualität aufweist, als die vergleichbare Closed-Source-Software.
Die Software-Industrie verwendet heute FOSS-Produkte, die weitgehend
in kommerzielle Anwendungen integriert werden. Ab dem Jahr 2012 werden 80% der kommerziellen Software-Produkte zumindest einige FOSSBestandteile enthalten.
Das wachsende Interesse für dieses Produkt führt zu einem noch größeren Interesse seitens der Forscher, die große, im Web verfügbare Datenbanken nutzen. Prof. Petrinja beschäftigt sich damit, wie die Datenerfassung über Prozesse und Produkte noch weiter automatisiert werden kann.
Die Entwickler von FOSS nutzen die bewährten Praktiken der kommerziellen
Software-Produktion und umgekehrt: Prof. Petrinja hat eine breite Reihe solcher bewährter Praktiken für FOSS und Bewertungskriterien festgelegt, die
im Rahmen verschiedener Qualitätsbewertungsmodelle Anwendung finden.
Verschiedene auf FOSS spezialisierte Kompetenz-Zentren wurden als Nebenergebnis des Projekts QualiPSo eingerichtet.
researchers.case
ileniaFronza
Improving the efficiency of software development and the quality of software products are the guiding principles of Ilenia Fronza's research. Her current research focuses on agile methodologies, data mining and computational intelligence in software engineering, failure prediction, non-invasive measurement, and software process measurement and improvement.
In closer analysis, she studies how to use data collected from AISEMA systems to provide a dynamic assessment of software products and processes. Automated in-process software engineering measurement and analysis (AISEMA) systems perform continuous, accurate, and non-invasive collection of data about software processes and products. Statistical analysis of this data can be used to find practices that reduce costs, facilitate the development process, and improve product quality.
Other questions Ilenia Fronza is addressing include whether Pair Programming (two programmers working side by side) has an impact on programmer attention, and whether the practice helps novices to integrate into a programming team. She would like to use AISEMA data to find out whether the activities of a team change when novices join it, and whether experts change their way of working when mentoring novices. Yet another line of work asks whether the data found in log files can be used to predict software failures.
Ilenia Fronza received her Master's degree in Mathematics from the University of Trento, Italy, in 2006 and her Second Level Master in Systems Biology from Microsoft Research-University of Trento, Italy, in 2007. She is a member of the Program Committee of the International Workshop on Emerging Trends in Software Metrics (WETSoM 2011).
Italiano. Potenziare l’efficienza nello sviluppo del software e migliorare
la qualità del prodotto sono le direttrici seguite da Ilenia Fronza nella sua
ricerca. I suoi attuali studi si focalizzano sulle pratiche agili, l’estrazione
dati e l’intelligenza computazionale nell’ingegneria del software, la failure
prediction, le misurazioni non invasive, la misurazione e il potenziamento
del processo software. In particolare, Ilenia Fronza sta studiando le modalità con cui i sistemi automatizzati di software engineering per la misurazione e l’analisi in process (AISEMA) eseguono la raccolta continua, accurata e non invasiva di dati riguardanti i processi e i prodotti software.
Deutsch. Ausbau der Effizienz bei der Software-Entwicklung und Verbesserung der Software-Produktqualität sind die Leitlinien, an denen Dr. Fronza ihre Forschungsarbeit ausrichtet. Ihre derzeitigen Studien konzentrieren sich auf «schlanke» Methoden, Data-Mining und Computational
Intelligence (CI) in der Softwaretechnik, Fehlervorhersage, nicht invasive
Messungen, sowie Messung und Verbesserung der Software-Prozesse. Vor
allem untersucht Dr. Fronza die Art und Weise, wie automatisierte Systeme
zur in-process Messung und Analyse der Softwaretechnik (AISEMA) eine
kontinuierliche, genaue und nicht invasive Erfassung von Daten über
Software-Prozesse und -Produkte gestatten.
researchers.case
cigdemGencel
Cigdem Gencel conducts research in software size and effort estimation, software project management, software measurement, software value management, software process improvement, and global software engineering.
From 2005 to 2008, she worked as a part-time instructor and a postdoctoral researcher at the Information Systems Department of the Middle East Technical University (Turkey), where she received her PhD. She was involved in a number of research and development projects on the specification and acquisition of large-scale, software-intensive systems, as well as on conceptual model development. She also worked as a part-time consultant for software organizations on software measurement, estimation, and process improvement. She has been a member of the Common Software Metrics International Consortium (COSMIC) International Advisory Committee and Metrics Practices Committee.
From 2008 to 2011, she was an assistant professor and senior researcher at the School of Computing of the Blekinge Institute of Technology (Sweden).
She is the author of more than 35 international research papers and serves as a reviewer for a number of scientific journals and as a committee member for conferences.
Italiano. Cigdem Gencel conduce ricerche sulla dimensione dei software
e sulle stime dei costi, sul project management, la misurazione e il value
management del software, sul potenziamento del processo software e
sul software engineering globale. Cigdem Gencel ha partecipato ad alcuni progetti di ricerca e di sviluppo riguardanti le specifiche dei sistemi
software su larga scala e lo sviluppo del modello concettuale. È autrice di
più di 35 articoli scientifici, presta servizio come revisore per alcune riviste scientifiche e partecipa come membro di comitato a varie conferenze.
Deutsch. Dr. Gencel leitet Forschungen bezüglich der Größe von SoftwareProgrammen, Aufwandschätzung, Software-Projektmanagement, Messung und Wertmanagement von Software, Verbesserung des SoftwareProzesses und globaler Softwaretechnik. Dr. Gencel hat an mehreren Entwicklungs- und Forschungsprojekten in Bezug auf die Spezifikationen für
großangelegte Softwaresysteme und die Entwicklung des konzeptionellen Modells teilgenommen. Sie ist Autorin von über 35 internationalen
Forschungsveröffentlichungen, arbeitet als Revisorin für verschiedene
wissenschaftliche Zeitschriften und ist Ausschussmitglied bei verschiedenen Konferenzen.
andreaJanes
juhaRikkilä
Andrea Janes’s research looks at how to improve the efficiency of the software development process through non-invasive measurement and lean
software development. In software engineering, measurement is a proven technique for getting better results. Because software is invisible, developers need a way to measure to understand if they are on the right
path. Put simply, to optimize production, you need to know how much effort you are spending (inputs), the activities involved (process), the result
(output).
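A minimal Python sketch, with invented events and activity labels rather than data from a real measurement tool, illustrates how such non-invasively collected events can be turned into an effort profile:

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical event stream as a non-invasive probe might record it:
    # (timestamp, developer, activity). Real tools collect this automatically
    # from IDEs, build servers, and issue trackers.
    events = [
        ("2011-06-01 09:00", "anna", "coding"),
        ("2011-06-01 09:40", "anna", "testing"),
        ("2011-06-01 10:10", "anna", "coding"),
        ("2011-06-01 11:00", "anna", "meeting"),
        ("2011-06-01 11:30", "anna", "end-of-session"),
    ]

    def effort_per_activity(events):
        """Attribute the time between consecutive events to the earlier activity."""
        minutes = defaultdict(float)
        for (t1, _, activity), (t2, _, _) in zip(events, events[1:]):
            start = datetime.strptime(t1, "%Y-%m-%d %H:%M")
            end = datetime.strptime(t2, "%Y-%m-%d %H:%M")
            minutes[activity] += (end - start).total_seconds() / 60
        return dict(minutes)

    print(effort_per_activity(events))
    # {'coding': 90.0, 'testing': 30.0, 'meeting': 30.0}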
The key idea behind automatic, non-invasive measurement techniques is to help SME software development companies to reduce production costs by measuring what is happening during software development and to help them adopt proven techniques such as lean manufacturing.
As a researcher, Andrea Janes has one foot in the world of engineering and the other in social science. He develops technical solutions for identifying problems and investigates how users react to his ideas. Automatic, non-invasive measurement tools and techniques let him collect data about software development work at relatively low cost. The data is fed back to the team to show productivity, possible problems, distribution of effort, and product quality. The social science part considers how developers accept non-invasive measurement, how useful the provided measurements are, and how measurements can be used to overcome or avoid problems.
Andrea Janes teaches courses on Requirements Engineering, Software Architectures, and Open Tools for IT Management. He received his Master of Science in Business Informatics in 2002 from the Technical University of Vienna, Austria.
Italiano. La ricerca di Andrea Janes si concentra sui metodi per migliorare
l’efficienza del processo di sviluppo del software tramite misurazioni non
invasive e una produzione snella. L’idea alla base delle tecniche di misura
automatiche e non invasive è quella di aiutare le aziende di sviluppo software di piccole e medie dimensioni a ridurre i costi di produzione attraverso il monitoraggio di ciò che avviene durante lo sviluppo del software e ad
adottare tecniche collaudate, come ad esempio la produzione snella. Andrea Janes inoltre si occupa di studiare come i developer possano utilizzare tali misurazioni per risolvere ed evitare tipici problemi dello sviluppo
del software.
Juha Rikkilä focuses on how to apply complexity theory to software development, in particular in the context of the Software Factory project, high-performing software teams, the merging of agile and lean approaches to software development, and the management of software development in a highly volatile marketplace. Common to all of these is research on the principle of emergence, that is, the appearance of new content and new ways of working in an unexpected yet beneficial manner. Adding to the challenge of this research are the ways in which complexity theory calls some traditional research methods into question.
Juha Rikkilä is the primary researcher for the quality Software Factory project. The aim of this project is to foster new ways of learning computer science and new ways of doing international research in a network of Software Factories, and to increase the entrepreneurial spirit of these teams. Previously he worked on the SCABO and Cloud Software projects at the University of Helsinki, and he has a long career in industry, including companies like NSN, Nokia, and Unisys.
Italiano. Juha Rikkilä si concentra sui metodi di applicazione della teoria
della complessità all’ingegneria del software. Attualmente lavora al progetto Software Factory ed è il principale ricercatore che si occupa dell’aspetto qualità di tale progetto. Lo scopo del progetto Software Factory è
quello di promuovere nuovi metodi di apprendimento in campo informatico e nuovi criteri di ricerca internazionale nell’ambito di una rete di aziende produttrici di software, potenziando nel contempo lo spirito imprenditoriale di tali aziende.
Deutsch. Dr. Rikkilä beschäftigt sich mit der Anwendung der Komplexitätstheorie auf die Softwaretechnik. Derzeit arbeitet er an dem Projekt
Software Factory und ist der primäre Forscher des Projekts Quality Software Factory. Zielsetzung dieses Projekts ist es, neue Lernmethoden in
der Computerwissenschaft und neue Kriterien für die internationale Forschung im Rahmen eines Netzes von Softwarehäusern zu fördern und
gleichzeitig den Unternehmergeist dieser Firmen anzuspornen.
Deutsch. Die Forschungsarbeit von Dr. Janes beschäftigt sich mit der Steigerung der Effizienz der Software-Entwicklungsprozesse anhand nicht
invasiver Messungen und schlanker Produktion. Den automatischen,
nicht invasiven Messtechniken liegt der Gedanke zugrunde, kleinen und
mittelständischen Softwarehäusern zu helfen, ihre Produktionskosten
durch Messung der Software-Entwicklungsabläufe zu senken und bewährte Techniken wie beispielsweise die Lean Production anzuwenden.
Dr. Janes beschäftigt sich darüber hinaus damit, wie Software-Entwickler
solche Messungen nutzen können, um Probleme zu überwinden bzw. zu
vermeiden.
researchers.case
brunoRossi
Bruno Rossi conducts research on the evolution of development processes in Open Source Software (OSS). His goal is to find practices and paradigms in OSS development that can be used in industry.
Analysis of OSS development has attracted a lot of attention in recent years, given the enormous data repositories available for research. Research efforts have led to several new software development paradigms. While these cope well with real-world development environments, ongoing research efforts are working to validate the results.
Bruno Rossi’s research strategy is based on experimentation that uses industry and online software repositories. He analyzes low-level events generated by users and relates them to higher level constructs, such as ongoing
processes. The analysis is typically longitudinal and conducted by means of
software evolution techniques, like time-series data analysis.
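As a small illustration (the project history below is invented, not data from one of his studies), two such longitudinal signals, the length of each release cycle and a moving average of commit activity, can be computed as follows:

    from datetime import date

    # Hypothetical mining results for one OSS project.
    releases = [date(2010, 1, 10), date(2010, 4, 2), date(2010, 6, 28), date(2011, 1, 15)]
    commits_per_month = [40, 55, 38, 61, 70, 52, 48, 66, 73, 59, 80, 77]  # Jan-Dec 2010

    # Release cycle lengths in days.
    cycles = [(b - a).days for a, b in zip(releases, releases[1:])]

    # Simple three-month moving average of commit activity.
    window = 3
    moving_avg = [round(sum(commits_per_month[i:i + window]) / window, 1)
                  for i in range(len(commits_per_month) - window + 1)]

    print("release cycles (days):", cycles)
    print("3-month moving average of commits:", moving_avg)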
Research questions vary according to the approach under investigation, for example: «do short release cycles determine the success of a software project?» Outcomes so far include methodologies for data collection and analysis, and investigations of several aspects of OSS development, such as development iterations, download patterns and releases, and also usage and adoption.
He keeps close contact with the research community built around the International Conference on Open Source Systems (OSS) and collaborated on the Sixth Framework project COSPA. He has been on the review boards of the International Conference on Open Source Systems (OSS), Empirical Software Engineering and Measurement (ESEM), the International Joint Conference on Neural Networks (IJCNN), the Hawaii International Conference on System Sciences (HICSS), Nature and Biologically Inspired Computing (NaBIC), and the European Conference on Information Systems (ECIS).
Italiano. Le ricerche di Bruno Rossi si concentrano sull’evoluzione dei processi di sviluppo del software Open Source (OSS), con l’obiettivo di individuare in tali processi prassi e caratteristiche che lo rendano idoneo all’utilizzo in campo industriale. La strategia di ricerca di Bruno Rossi è basata
sulla sperimentazione e sfrutta le banche dati dell’industria e quelle online. I suoi studi hanno portato all’elaborazione di metodologie di analisi e
raccolta dati e ad indagini su come migliorare vari aspetti dello sviluppo
degli OSS.
Deutsch. Die Forschungsarbeit von Dr. Rossi betrifft die Weiterentwicklung der Prozesse für die Entwicklung von Open Source-Software (OSS)
und hat das Ziel, innerhalb dieser Prozesse Praktiken und Paradigmen
ausfindig zu machen, die auf industrieller Ebene genutzt werden können.
Seine Forschungsstrategie basiert auf der Erprobung unter Nutzung industrieller und online verfügbarer Software-Bibliotheken. Seine Forschungsarbeit hat zur Entwicklung von Methoden zur Erfassung und Analyse von Daten geführt und untersucht Wege und Mittel zur Verbesserung
verschiedener Aspekte der OSS-Entwicklung.
tadasRemencius
xiaofengWang
Tadas Remencius conducts research aimed at understanding how the sharing of knowledge and experience in software development teams can be used to improve the development process. His work involves conducting case studies inside software companies. He has developed a prototype software system that assists in experience management and sharing.
Xiaofeng Wang conducts research on software development processes, methods, and practices. She places particular focus on agile and lean software development approaches and their effective adoption, use, and evolution. She also studies the application of complex adaptive systems concepts and models in the social domain, in particular to understand agile and lean software development processes.
Italiano. Le ricerche condotte da Tadas Remencius hanno lo scopo di comprendere come la condivisione delle conoscenze e delle esperienze in
seno ai team che si occupano dello sviluppo software possa essere utile
per migliorare il ciclo di vita del software stesso. Il lavoro di Tadas Remencius comprende anche case studies, che svolge operando all’interno di
aziende produttrici di software. Ha creato un prototipo di software che
agevola l’attività di gestione e di condivisione.
Deutsch. Die Forschungsarbeit von Tadas Remencius zielt darauf ab, zu
verstehen, wie der Austausch von Wissen und Erfahrung innerhalb eines
Teams von Software-Entwicklern zur Verbesserung des Entwicklungsprozesses genutzt werden kann. Die Arbeit von Tadas Remencius umfasst
auch die Durchführung von Fallstudien bei Softwareherstellern. Er hat
den Prototyp eines Softwaresystems geschaffen, das das Erfahrungs- und Austauschmanagement unterstützt.
Xiaofeng Wang is working on the quality Software Factory project. The aim of this project is to foster new ways of learning computer science and new ways of doing international research in a network of Software Factories, and to increase the entrepreneurial spirit of the parties involved.
She obtained a doctoral degree in Information Systems from the University of Bath (UK) in early 2008. From 2008 to 2011 she was a postdoctoral researcher at Lero, the Irish software engineering research centre, where she worked on both national and European projects on agile software development.
Italiano. Le ricerche condotte da Xiaofeng Wang si rivolgono ai processi,
alle metodologie e alle prassi adottate nello sviluppo del software, ponendo particolare attenzione agli approcci costituiti dai metodi agili e dalla produzione snella e concentrandosi sull’efficace applicazione, l’uso e
l’evoluzione di tali approcci. Attualmente Xiaofeng Wang si sta dedicando
allo sviluppo di tecniche che possano favorire nuovi sistemi di apprendimento in campo informatico, occupandosi inoltre di come rafforzare la
collaborazione nella ricerca internazionale tramite la creazione di un network che colleghi le varie software factories.
Deutsch. Die Forschungsarbeit von Dr. Wang betrifft die Prozesse, Methoden und Praktiken zur Entwicklung von Software, wobei sie der agilen
und der schlanken Software-Entwicklung sowie deren wirksamer Anwendung, Verwendung und Weiterentwicklung besonderes Augenmerk
schenkt. Derzeit widmet sich Dr. Wang der Entwicklung von Techniken zur
Förderung neuer Wege zum Erlernen der Computerwissenschaft und arbeitet an den Möglichkeiten zur Verbesserung der internationalen Forschungskooperation durch Schaffung eines Netzwerks von Software
Factories.
Intelligent Information Systems. The aim of this area is to research methodologies and techniques to better support complex information retrieval and decision-making tasks, such as travel planning and product purchase through eCommerce Web sites. The area is especially focused on Advisory Systems that can exploit a wide variety of knowledge and data sources. This work spun off a high-tech company that develops and markets recommendation technologies for the travel and tourism market.
Core Database Technologies. This fundamental research area is developing the next generation of large, high-performance database and decision support systems. The goal is to develop new solutions for approximately matching hierarchical data such as XML, for aggregating multi-dimensional data, and for retrieving information from the Web. The main focus is on designing and implementing scalable solutions for advanced data management in these fields.
Temporal and Spatial Database Systems. This research area looks at extending standard relational database technology with support for temporal and spatial data management, which current systems do not support well. The goal is to generalize existing data models and query languages, and to develop efficient evaluation algorithms.
research.areas
current projects
COPER – Contextualizing Personal Recommendations
MOBAS – Analytical Services for Medical Data Warehouse
CAMURA – Context-Aware Music Retrieval and Adaptation
SIPAI – Proactive Information Access Systems
MEDAN – Medical Data Warehousing and Analysis
AQuiST – Advanced Query Processing in Spatio-Temporal Network Databases
FITTS – Itinerary Planning and Analysis for Tourist Applications
DIS
Center for Database and Information Systems
The Center for Database and Information Systems (DIS) is working to improve methods for
managing, extracting and exploiting information from real-world data by conducting basic
research, and by developing advanced technologies for databases and information systems.
Italiano. Il Centro di ricerca sui Database ed i
Sistemi Informativi (DIS) si dedica al potenziamento dei metodi per estrapolare e sfruttare le
informazioni provenienti da dati reali svolgendo
ricerca di base, sviluppando e valutando sistemi informativi e tecnologie di database.
Deutsch. Das Zentrum für Datenbanken und Informationssysteme (DIS) arbeitet an der Entwicklung von Lösungen zur Extraktion von
Informatio­nen aus und der Analyse von großen­
Datenmengen. Dazu betreibt das Zentrum
Grundlagenforschung in Kombination mit der
Entwicklung von Prototypen in den Berei­
chen von Informationssystemen und Daten­banktechnologien.
The mission of the DIS center is to develop solutions for, among others, intelligent transportation, eHealth, eGovernment, and eTourism: mission-critical applications with large amounts of data. These applications are driving the continuous development of database and information system technologies. Furthermore, the growth of the Web and Web 2.0 means that new information services are needed to handle the vast amount of new data available.
La mission del Centro DIS è quella di trovare
nuove soluzioni per l’Intelligent Transportation,
l’eHealth, l’eGovernment e l’eTourism - tutte applicazioni mission critical che comportano grandi quantità di dati e stanno producendo continui
sviluppi nel campo della tecnologia dei database e dei sistemi informativi. Inoltre, la loro crescita nel Web e nel Web 2.0 dimostra che vi è un
bisogno sempre maggiore di nuovi servizi informatici per riuscire a gestire la grande quantità
di nuovi dati disponibili.
Die Mission des Zentrums DIS ist die Entwicklung von innovativen Lösungen in unterschiedlichen Anwendungsbereichen, wie zum Beispiel intelligente Verkehrssysteme, E-Health,
E-Government und E-Tourism – alles Anwendungsbereiche mit kritischen und großen Datenmengen, welche die Entwicklung von innovativen Technologien in Datenbanken und Informationssystemen ständig vorantreiben. Insbesondere das Web und Web 2.0 zeigen deutlich,
dass es neuer Technologien bedarf, um derart
große Datenmengen sinnvoll verwalten und
nutzen zu können.
johannGamper ( associate professor )
Fundamental computing concepts are best understood when theoretical studies are complemented by the development of prototype systems and by evaluations based on real-world data. Moreover, ideas like usability and applicability get better treatment in theoretical research when the research is grounded in real-world problems.
Johann Gamper is concentrating his research efforts on making temporal and spatial data ubiquitous, attacking the problem that today's database management systems do not handle such data well. The key is to develop efficient query processing for temporal and spatial database systems. He works in close collaboration with key members of the temporal and spatial database community to advance the frontiers of standard relational database technology, extending it to support complex aggregation queries on temporal and spatial data. Another area of research is approximate query processing of tree-structured data such as XML.
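A minimal sketch of the semantics of instant temporal aggregation, on invented interval-stamped tuples, may help to see what such queries compute; the actual research concerns evaluating them efficiently inside the database engine:

    from collections import Counter

    # (employee, start, end) with half-open validity intervals [start, end);
    # the data and the integer time granularity are hypothetical.
    assignments = [("ann", 1, 5), ("bob", 3, 9), ("eva", 4, 7)]

    events = Counter()
    for _, start, end in assignments:
        events[start] += 1   # one more tuple becomes valid
        events[end] -= 1     # one tuple stops being valid

    count, result = 0, []
    for t in sorted(events):
        count += events[t]
        result.append((t, count))   # number of valid tuples from time t onward

    print(result)   # [(1, 1), (3, 2), (4, 3), (5, 2), (7, 1), (9, 0)]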
Like many professors, Johann Gamper publishes in prestigious international journals and conferences while also deploying his research results locally. His research activities are embedded in collaborations with local institutions and companies, with overall funding of more than 2.1 million over the past 10 years. Important collaborations include the project eBZ (an 8-year collaboration with the Municipality of Bozen-Bolzano), the project MEDAN (a 3-year collaboration with the Hospital of Meran-Merano), and the project FITTS (a collaboration with the IT companies Opera 21 and ACS).
·· In eBZ, one objective was to develop solutions for the efficient computation of isochrones (http://www.isochrones.inf.unibz.it), which have been integrated into the GIS (Geographical Information System) platform of the company CreaForm. Isochrones can be used to perform reachability analyses and location queries, which is important in urban planning to assess how well a city is covered by public services such as hospitals or schools (a minimal sketch of the idea follows this list).
·· Results of his temporal aggregation research were integrated into the news aggregation platform of the Thoora company (Toronto, Canada). The Thoora platform provides a new way to discover, monitor, and share the best of the Web from social media, like the blogosphere and Twittersphere, as well as from traditional web platforms.
·· In the FITTS project, he is investigating itinerary planning tools for tourist applications, in order to personalize itineraries that match a user's needs under constraints like opening hours, budget limits, public transport availability, and similar issues.
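The sketch announced in the first item above illustrates the isochrone idea on an invented toy network: a Dijkstra search truncated at a time budget returns every node reachable from a source within that budget, whereas the real systems operate on large multimodal urban networks:

    import heapq

    # Toy walking network: node -> list of (neighbour, travel time in minutes).
    graph = {
        "hospital": [("a", 3), ("b", 5)],
        "a": [("hospital", 3), ("c", 4)],
        "b": [("hospital", 5), ("c", 2), ("d", 7)],
        "c": [("a", 4), ("b", 2), ("d", 3)],
        "d": [("b", 7), ("c", 3)],
    }

    def isochrone(source, budget):
        """Return all nodes reachable from `source` within `budget` minutes
        (a plain Dijkstra search truncated at the time budget)."""
        dist = {source: 0}
        queue = [(0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > dist.get(node, float("inf")):
                continue
            for neighbour, cost in graph[node]:
                nd = d + cost
                if nd <= budget and nd < dist.get(neighbour, float("inf")):
                    dist[neighbour] = nd
                    heapq.heappush(queue, (nd, neighbour))
        return dist

    print(isochrone("hospital", budget=6))  # nodes within a 6-minute walk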
Italiano. Johann Gamper si occupa principalmente della manutenzione e
interrogazione di dati temporali e spaziali in sistemi di basi di dati relazionali. Più specificamente, si occupa di semplificare e migliorare le tecniche
di interrogazione. I suoi lavori si concentrano tra l’altro sull’aggregazione
di dati temporali, il riconoscimento di pattern specifici ed il calcolo di isocrone in reti multimodali di trasporto.
Johann Gamper ha vari progetti con aziende e istituzioni locali, quali
eBZ (una collaborazione della durata di 8 anni con il Comune di Bolzano),
MEDAN­ (una collaborazione di 3 anni con l’Ospedale di Merano) e FITTS
(in collaborazione con le aziende Opera 21 e ACS).
Being a good programmer is a key skill for computer science students. But Johann Gamper's courses are predicated on the notion that a computer science degree needs to promote an understanding of the principles and foundations of computing. In this way graduates can understand not only the immediate features of cutting-edge technology, but also the long-term impact of computing techniques. At the graduate level, students should also learn how to critically examine research problems, which he tries to achieve by offering students guided projects that fit into his ongoing research.
Deutsch. Prof. Gamper beschäftigt sich vorwiegend mit der Verwaltung
und Verarbeitung von zeit- und raumbezogenen Daten in relationalen Datenbanksystemen. Dabei geht es in erster Linie darum, Anfragen an zeitliche und räumliche Datenbanksysteme zu vereinfachen und effizienter zu
gestalten. Seine Arbeiten konzentrieren sich u.a. auf die Aggregation von
zeitlichen Daten, das Erkennen von spezifischen Suchmustern sowie die
Berechnung von Isochronen in multimodalen Transportnetzen.
Prof. Gamper hat eine Reihe von Projekten mit lokalen Firmen und Institutionen, wie eBZ (eine 8-jährige Zusammenarbeit mit der Gemeinde Bozen), MEDAN (eine 3-jährige Zusammenarbeit mit dem Krankenhaus Meran) und FITTS (in Zusammenarbeit mit den IT-Firmen Opera 21 und ACS).
selected publications
J. Gordevicius, J. Gamper, and M.H. Böhlen.
Parsimonious temporal aggregation. The VLDB
Journal, 2011.
I. Timko, M.H. Böhlen, and J. Gamper. Sequenced spatiotemporal aggregation for coarse
query granularities. The VLDB Journal, 20(5):
721-741, 2011.
M. Akinde, M. H. Böhlen, D. Chatziantoniou,
and J. Gamper. Theta-constrained multi-dimensional aggregation. Information Systems,
36(2): 341-358, 2010.
M.H. Böhlen, J. Gamper, and C.S. Jensen. Multi-dimensional aggregation for temporal data. In Proc. of the 10th International Conference on Extending Database Technology (EDBT-06), pages 257-275, Munich, Germany, March 2006.
N. Augsten, M. Böhlen, and J. Gamper. The
pq-Gram Distance between Ordered Labeled
Trees. In ACM Transactions on Database Systems (TODS), 35(1):1-36, 2010.
francescoRicci ( associate professor )
The World Wide Web gives us more choice, and presumably more freedom, autonomy, and self-determination, than ever before. We face an explosion of choices: which telephone service to buy, which trip to make, what work to do, how to find a partner. But too many options can also decrease well-being by overwhelming us with huge amounts of information.
Francesco Ricci's overall research goal is to design tools and techniques that help users deal with such decision-making problems. The leading approach is «personalization and contextualization», which means filtering the available information to provide each user only with the items that are most relevant for making a good decision or staying informed on a topic. In the last 10 years, research has produced tools that in many ways define our daily interaction with the Web: search engines and recommender systems.
Francesco Ricci focuses on recommender systems: tools that provide personalized suggestions for items that may interest a user, such as a music CD that the user is not aware of but might like to listen to. Each user gets different suggestions because recommender systems predict what that user likes. Users leave traces when they interact with the Web: pages visited, feedback and ratings, items purchased. Recommender systems predict whether an item is interesting for a specific user based on the history of actions of that user and of similar users with respect to the same or similar items. Interest in recommender systems has increased to the point that every major e-commerce web site offers them (Amazon, YouTube, and Netflix, for example).
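To make the prediction step concrete, here is a deliberately small user-based collaborative filtering sketch in Python; the ratings are invented and the method is a textbook baseline rather than the specific algorithms developed in his group:

    import numpy as np

    # Toy user-item rating matrix (rows = users, columns = items, 0 = not rated).
    R = np.array([
        [5, 4, 0, 1, 0],
        [4, 5, 1, 0, 0],
        [1, 0, 5, 4, 5],
        [0, 1, 4, 5, 4],
    ], dtype=float)

    def cosine(u, v):
        """Cosine similarity computed on co-rated items only."""
        mask = (u > 0) & (v > 0)
        if not mask.any():
            return 0.0
        u, v = u[mask], v[mask]
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def predict(user, item, k=2):
        """Predict a rating as the similarity-weighted average of the
        ratings given to `item` by the k most similar users."""
        sims = np.array([cosine(R[user], R[other]) if other != user else -1.0
                         for other in range(R.shape[0])])
        neighbours = [u for u in np.argsort(-sims)[:k] if R[u, item] > 0]
        if not neighbours:
            return 0.0
        w = sims[neighbours]
        return float(w @ R[neighbours, item] / w.sum())

    print(predict(user=0, item=2))  # estimate how much user 0 would like item 2

The same idea, neighbours weighted by similarity, underlies many deployed recommenders, although production systems add context, implicit feedback, and far more scalable models.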
Recommender systems require interdisciplinary
research where, beyond computer science techniques, it is important to understand the application domain and its economic drivers as well as
how data can be leveraged. Francesco Ricci recently edited a handbook [1] that summarizes the
state of the art in the field and future challenges.
Francesco Ricci is currently focusing on providing recommendations that are adapted to the user's specific context. Context includes physical aspects (time, location), social aspects, modal aspects (emotional states), and the interaction medium (mobile phone, PC). He contributed to ReRex [2], an iPhone mobile recommender system that suggests places of interest in Bolzano based on the user's context. He worked on an Android application called InCarMusic [3], a personalized music player for cars supported by Deutsche Telekom. Another Android application provides music appropriate to the user's location [4]. He also helped develop technology for automatically adapting how a web site presents information in order to maximize its conversion rate [5].
Italiano. L’obiettivo primario della ricerca di Francesco Ricci è quello di
progettare strumenti e tecniche atti ad aiutare gli utenti ad affrontare
problemi di tipo decisionale. Il suo lavoro di ricerca si concentra sui recommender system, applicazioni che permettono ai loro fruitori di ottenere suggerimenti personalizzati sulla base dei loro interessi. Un buon
esempio di questo tipo di sistemi è quello in grado di suggerire CD musicali che l’utente non conosce ma che potrebbe piacergli ascoltare.
I recommender system necessitano di ricerche interdisciplinari nel contesto delle quali è importante comprendere il dominio applicativo e i relativi motori economici.
Attualmente Francesco Ricci sta studiando – in collaborazione con partner industriali e istituzioni accademiche – il modo di fornire recommendations adeguati al contesto in cui si muove l’utente.
These research outcomes were generated in collaboration with industrial partners (Telefonica R&D, Deutsche Telekom, Sinfonet, Ectrl Solutions) and with academic colleagues at universities including DePaul University Chicago, the Friedrich-Alexander University Erlangen-Nuernberg, New York University, the University of Minnesota, the Alpen-Adria University of Klagenfurt, and University College Cork. Francesco Ricci has chaired a number of important conferences, such as the International Conference on Information and Communication Technologies in Tourism, Innsbruck, Austria, 26-28 January 2011.
Francesco Ricci strives to develop his research
activities together with the students under his
supervision. He is currently supervising three
PhD students and several graduate and undergraduate students. His courses cover Java Programming, Internet Technologies, Mobile Services, Information Search and Retrieval. Students learn fundamental ideas, algorithms and
techniques while applying them to concrete
problems. He promotes active learning in his
courses, complementing them with exercises
and projects. It’s an interactive environment,
with students getting feedback as they solve
problems and summarize findings. The goal is
to train students to become productive and independent, while giving them entrepreneurial
spirit.
Deutsch. Hauptziel der Forschungsarbeit von Prof. Ricci ist die Entwicklung von Mitteln und Techniken, die dem Benutzer bei der Entscheidungsfindung helfen. Seine Arbeit konzentriert sich auf die Empfehlungssysteme (Recommender-Systeme), d.h. Dienste, die individuelle Vorschläge
zu Themen liefern, die den Benutzer interessieren. Ein gutes Beispiel ist
eine Musik-CD, die der Benutzer nicht kennt, die er möglicherweise gerne anhören würde.
Empfehlungssysteme erfordern eine bereichsübergreifende Forschungstätigkeit, da es ebenso wichtig ist, die Anwendungsdomäne und deren
wirtschaftliche Hintergründe zu verstehen, als auch wie die Daten beeinflusst werden können.
Prof. Ricci arbeitet derzeit daran, Empfehlungen bereit zu stellen, die dem
spezifischen Kontext des Benutzers angepasst sind. Er arbeitet mit Partnern aus der Industrie und akademischen Institutionen zusammen.
selected publications
1
Ricci, F.; Rokach, L.; Shapira, B.; Kantor,
P.B. (Eds.), Recommender Systems Handbook. 1st Edition., Springer, 2011.
2
L. Baltrunas, B. Ludwig, S. Peer, and F.
Ricci. Context relevance assessment
and exploitation in mobile recommender
systems. Personal and Ubiquitous Computing. 2011
3
L. Baltrunas, M. Kaminskas, B. Ludwig,
O.Moling, F. Ricci, A. Aydin, K.-H. Lueke,
and R. Schwaiger. InCarMusic: ContextAware Music Recommendations in a Car. 12th International Conference on Electronic
Commerce and Web Technologies - EC-Web
201. Toulouse, France. August 29 - September 2, 2011, 89-100.
4
M. Kaminskas and F. Ricci. Location-Adapted Music Recommendation Using Tags. 19th
International Conference on User Modeling,
Adaptation and Personalization. Girona,
Spain, 11-15 July, 2011, 183-194.
5
T. Mahmood, F. Ricci, and A. Venturini.
Improving Recommendation Effectiveness
by Adapting the Dialogue Strategy in Online
Travel Planning. International Journal of
Information Technology and Tourism, 11
(4):285-302, 2010.
researchers.dis
nikolausAugsten
mounaKacimi
The field of database research concerns storing and querying data.
Whereas traditional database systems are designed to answer exact queries about business data (such as the sales data of a specific department),
Nikolaus Augsten focuses on queries that are not well covered by traditional database systems. In particular, he is interested in «similarity queries». While they do include exact answers, similarity queries also include answers that only partially match a query. Search results of Web
search engines are a good example of this.
Mouna Kacimi is focused on two main research areas: improving the predictive accuracy of learned models and enhancing the search capabilities
of information retrieval systems. By mining Web links it is possible to exploit correlations to improve the ability of machine learned models to
make accurate predictions. Her research is also looking at ways to improve information retrieval systems by developing new ranking models
and query processing strategies for entity search. She leverages similarity search methodologies to handle the retrieval of information from multimedia content. Her ultimate goal is to develop scalable solutions that address the difficulties of searching for massively distributed information.
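As a toy illustration of what a ranking model for entity search might look like (an invented example, not Kacimi's actual ranking models), the sketch below scores candidate entities by how many query terms their description matches, combined with a crude link-based popularity prior.

# Hypothetical sketch: rank entities for a keyword query by combining
# term overlap with a link-count prior. All data here is invented.
entities = {
    "FreeUniversityBozen": {"text": "university bolzano computer science faculty",
                            "inlinks": 40},
    "BolzanoHospital":     {"text": "hospital bolzano antibiotics therapy",
                            "inlinks": 15},
}

def score(query, entity, alpha=0.8):
    terms = set(query.lower().split())
    words = set(entity["text"].split())
    overlap = len(terms & words) / max(len(terms), 1)
    popularity = entity["inlinks"] / 100.0          # crude link-based prior
    return alpha * overlap + (1 - alpha) * popularity

query = "computer science bolzano"
ranking = sorted(entities, key=lambda name: score(query, entities[name]),
                 reverse=True)
print(ranking)   # ['FreeUniversityBozen', 'BolzanoHospital']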
Most research in this area looks at external computer programs that read
the data from the database, process the query, and write the result back
to the database. Nikolaus Augsten’s goal is to build similarity operators
into the core of the database systems. Doing so offers three big advantages: ease of use, efficiency, and low overhead. Regarding ease of use,
the similarity operator can be combined with other database operators
into a single query. To gain efficiency, the similarity operator can be optimized together with the other operators using traditional database techniques. Finally, overhead is reduced by reusing existing database functionality such as managing access privileges to the data, guaranteeing consistency in a multi-user environment, or recovering from errors.
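The following small Python sketch illustrates the idea of a similarity predicate evaluated together with an ordinary selection in a single query; the table, the edit-distance threshold, and the in-memory evaluation are invented for illustration and say nothing about the internals of a real database engine.

# Illustrative sketch: a similarity predicate combined with an ordinary
# selection, evaluated as a single "query" over an in-memory table.
def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    m, n = len(a), len(b)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1,                               # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))      # substitution
    return d[m][n]

products = [
    {"name": "Nikon D90",  "category": "camera"},
    {"name": "Nikom D-90", "category": "camera"},   # misspelled duplicate
    {"name": "Nokia N90",  "category": "phone"},
]

# Rough analogue of: SELECT * FROM products
#                    WHERE category = 'camera' AND name SIMILAR-TO 'Nikon D90'
query, k = "Nikon D90", 2
answers = [p for p in products
           if p["category"] == "camera" and edit_distance(p["name"], query) <= k]
print(answers)   # both camera rows match within edit distance 2

Evaluating the similarity predicate next to the exact selection in one query is exactly the combination that a built-in similarity operator would let the optimizer reorder and speed up.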
Integrating similarity queries into database systems poses several challenges. The concept of «similarity» depends on the specific field of application, while the operations provided by the database system need to be generic; this mismatch must be bridged. Next, while the concept of «equality» is intuitive for many data types, similarity is hard to define: put simply, it is hard to guarantee that the query results are intuitive to the user. Lastly, it is often hard to find efficient algorithms for similarity queries, because standard techniques that work for exact queries often cannot be applied.
Nikolaus Augsten is co-inventor of a provisional US-patent entitled «Systems and Methods for Efficient Top-k Approximate Subtree Matching»
(USProv 2010043) filed on March 3, 2011. He received the Best Paper
Award at the IEEE International Conference on Data Engineering (ICDE
2010) and the work was presented at ICDE in Los Angeles, USA.
Italiano. Traditional databases were developed to answer precise questions about business data, for example about the sales of a particular department. Nikolaus Augsten's research instead concentrates on queries that traditional databases cannot answer exactly. He is particularly interested in similarity queries, whose answers may only partially match the query. Nikolaus Augsten's goal is to build similarity operators into the core of the database system, thereby gaining three notable advantages: ease of use, efficiency, and low overhead.
Deutsch. Conventional database systems are designed to answer precise queries over business data, such as the sales figures of a specific department. Dr. Augsten's research, by contrast, concentrates on queries that conventional database systems do not fully cover. In particular, he works on similarity search, whose answers also include results that only partially match the search criterion. Dr. Augsten's goal is to integrate similarity operators into the core of database systems, which offers three considerable advantages: ease of use, efficiency, and low overhead.
Mouna Kacimi is the principal researcher on the RARE project (Reducing Antimicrobial Resistance in Bolzano). The aim of the project is to improve the effectiveness of antibiotics by investigating past patient therapies. The project is carried out in close collaboration with the antimicrobial management program at Bolzano Hospital.
She obtained a doctoral degree in computer science from the University
of Bourgogne (France) in 2007. From 2008 to 2010 she was a post-doc at
Max-Planck Institute for Informatics in Saarbruecken, where she worked
on SAPIR, a European project that was developing a highly scalable
search engine for distributed audio-visual search.
Italiano. Mouna Kacimi's research focuses on two key topics: improving the predictive accuracy of learned models and improving the performance of information retrieval systems. She has developed technologies that increase the predictive accuracy of machine-learned models by mining links on the web. She is also studying how to refine information retrieval systems by developing ranking models and query processing strategies aimed at entity search. Her main goal is to develop scalable solutions that help overcome the difficulties of searching for highly distributed information.
Deutsch. Dr. Kacimi's research concentrates on two key topics: improving the predictive accuracy of learned models and extending the search capabilities of information retrieval systems. By mining web links she is able to improve the ability of machine-learned models to deliver accurate predictions. Her research also concerns ways to improve information retrieval systems by developing ranking models and query processing strategies for entity-centric search. Her main goal is the development of scalable solutions that help address the difficulties of searching for highly distributed information.
periklisAndritsos
florianoZini
Periklis Andritsos is interested in solving challenging problems centered
around the analysis, extraction and summarization of very large repositories. From clustering to schema discovery, and from redundancy quantification to predictive analytics, he explores how machines can help get the
most from data repositories.
Floriano Zini’s main research focus is on the design, development, and
experimentation in the field of mobile and personalized advisory systems
for health care. With this aim, he uses techniques from machine learning,
human computer interaction, user modeling, and recommender systems
to further research in the field. He is currently applying his research to a
project with the hospital of Meran (South Tyrol, Italy). The goal of the
project is to explore the use of mobile devices, both in the hospital and at
home, for the provision of three services: the completion of questionnaires that gather comprehensive information on patient conditions; the delivery to patients of contextualized, personalized information and tips concerning their medical conditions; and guidance to patients in carrying out typical or occasional clinical activities at the hospital.
His current research focuses on the extraction of dictionaries from product records to facilitate querying. More specifically, he is using unsupervised techniques to quantify the information content of product stores
and identify values that characterize specific properties such as the manufacturers and models of products. Moreover, he is working on the incorporation of content analysis techniques into systems that perform spatiotemporal analysis of real-time traffic data. The ultimate goal of this work
is not only to come up with efficient techniques that analyze and query
such data, but also to correlate the information they convey with existing
or derived knowledge about the environment they describe.
Periklis Andritsos holds MSc and PhD degrees in Computer Science from
the University of Toronto. Before joining the Free University of Bozen-Bolzano he was an assistant professor at the University of Trento, a director
of research at Thoora.com and a senior research associate at the Ontario
Cancer Institute and the University of Toronto.
Italiano. Periklis Andritsos's research interests lie in solving problems related to the analysis, extraction, and summarization of very large data repositories. His current research focuses on extracting dictionaries from product records in order to facilitate querying. More specifically, he is using unsupervised techniques to quantify the information content of digital product stores, identifying the values that characterize specific properties of the products themselves, such as manufacturers and models.
Deutsch. Dr. Andritsos is interested in solving problems concerning the analysis, extraction, and summarization of very large databases. His current research concentrates on extracting dictionaries from stored product descriptions in order to make querying easier. More precisely, he uses unsupervised techniques to quantify the information content of digital product stores and to recognize values that characterize specific properties of the stored products, such as manufacturers and models.
Floriano Zini holds a PhD in Computer Science from the University of Genova (Italy) and a degree in Computer Science from the University of Torino (Italy). He has been principal researcher or co-author of more than 40 scientific papers and technical reports on grid computing, agent-oriented software engineering, and machine learning. Floriano Zini has work experience in both companies and research centers. Before joining the University of Bozen-Bolzano he worked for Expert System, a leading company in semantic text mining, and for ITC-irst (now Fondazione Bruno Kessler), one of the main Italian research centers for information technology.
Italiano. Floriano Zini designs and develops mobile, personalized advisory and support systems for health care, and advances his research in this field using techniques from machine learning, human-computer interaction, user modeling, and recommender systems. He is currently collaborating with the hospital of Meran on using mobile devices to improve services for patients. Floriano Zini is co-author of more than 40 scientific papers and technical reports on grid computing, agent-oriented software engineering, and machine learning.
Deutsch. Dr. Zini designs and develops mobile, personalized advisory systems for health care. To advance research in this field he uses techniques from machine learning, human-computer interaction, user modeling, and recommender systems. He is currently working with the hospital of Meran to deploy mobile devices for improving patient care. He is the principal researcher or co-author of more than 40 scientific publications and technical reports on grid computing, agent-oriented software engineering, and machine learning.
Intelligent Information Access and Query Processing
There is great interest in the development of an integrated logic-based view of Knowledge Representation and Database technologies. KRDB technologies offer promising solutions to problems concerning:
research.areas
Computational Logic
Logic-based Computational Linguistics
Information Integration
Distributed and Web Information Systems
Conceptual Data Modeling and Ontology Design
E-services
Peer-to-Peer Systems
current projects
NET2 Network for Enabling Networked Knowledge
ONTORULE ONTOlogies meet business RULEs
ACSI Artifact-Centric Service Interoperation
TERENCE Adaptive Learning System for Reasoning about Stories with Poor Comprehenders and their Educators
Italy-South Africa Technologies for Conceptual Modelling and Intelligent Query Formulation
ICOM Tool for Intelligent Conceptual Modelling
Quest System Tool for Ontology-based Data Access
WONDER system Web ONtology mediateD Extraction of Relational data
LODAM Logics for Ontology-based Data Management
MaSCoD Managing Services Combined with Data
CRESCO Context-based Reasoning about Events of Stories with & for poor Comprehenders
KBASe Knowledge-based Access to Services
Bio-informatics
KRDB
KRDB Research Center
Founded in 2002, the KRDB Research Centre
aims to be an international center of excellence
in basic and applied research on KRDB technologies. It works on projects that bring innovative
ideas and technologies to local, national and international companies and public agencies.
Computer systems deal with structured objects
with well defined properties that represent concepts and notions of the real world. Research
in Knowledge Representation (KR) and in Databases (DB) studies languages and develops
systems for describing, querying, and manipulating structured objects. Both areas differ in
the specific way this is carried out, although
the rise of object-centered formalisms in the
last decade has significantly influenced their
convergence.
It is not surprising that KR and DB experts perceive structured objects differently. Database
systems require algorithms for the efficient
management of large amounts of data with a
relatively simple structure. Knowledge representation languages, on the other hand, emphasize expressiveness and complex inference
mechanisms, but knowledge bases are usually
relatively small. The goal of the KRDB Research
Centre for Knowledge and Data is to bridge the
gap between these two areas, and to develop
techniques and tools for the efficient management of large amounts of data with a complex
structure and complex interrelationships.
Italiano. The KRDB Research Centre, founded in 2002, aims to be an international centre of excellence in basic and applied research on KRDB technologies. The Centre works on projects aimed at transferring innovative ideas and technologies to local, national, and international companies, as well as to public agencies.
Research in Knowledge Representation (KR) and Database (DB) design studies and develops languages for describing structured objects. Although the two research fields differ in their specific methodologies, they share the goal of representing a portion of the real world in a structured way; moreover, over the last decade the rise of object-centred formalisms has considerably influenced the convergence of their languages.
Database systems need algorithms for the efficient evaluation of queries over databases that are large but contain relatively simple objects. Knowledge representation languages, for their part, emphasize expressiveness and complex inference mechanisms, but knowledge bases are usually rather small. The goal pursued by the KRDB Research Centre for Knowledge and Data is to bridge the gap that separates these two research fields.
Deutsch. The KRDB Research Centre, founded in 2002, has set itself the goal of becoming an international centre of excellence for basic and applied research on KRDB technologies. The Centre works on projects intended to provide innovative ideas and technologies to local, national, and international companies and public agencies.
Research in Knowledge Representation (KR) and in the construction of Databases (DB) produces languages for describing structured objects. Although they differ in the details of how properties are defined, both research fields pursue the goal of representing a part of the world in a structured way; in addition, the emergence of object-oriented formalisms over the last ten years has considerably influenced the convergence of their languages.
Database systems need algorithms for the efficient processing of queries over databases that are large but contain relatively simple objects. Knowledge representation languages, in turn, emphasize expressiveness and complex inference mechanisms, yet knowledge bases are generally relatively small. The goal of the Research Centre for Knowledge and Data (KRDB) is to bridge the gap between these two research fields.
wernerNutt ( professor )
A wealth of data is available today to companies, governments, and individuals to help them
make decisions such as what products to advertise, how many employees to hire, or what to
buy. However, this data can be difficult to use
because it comes from different sources and is
organized in different ways. Also, it is often uncertain and incomplete.
Werner Nutt develops techniques that enable
people and computer programs to take advantage of imperfect collections of information.
The main idea is that users should not be burdened with the diversity of origin, organization,
certainty and completeness of data. Instead,
users should be presented with coherent information where uncertainty and incompleteness
are represented by intuitive concepts.
One approach is to let users think they are looking at information from a single database even
if they are actually querying multiple sources. Werner Nutt applies these techniques to
monitoring large distributed computer systems
where many programs generate data about the
status of the individual components. In this way
users can find relevant data without bothering
about where to look for it [1].
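A minimal sketch of this "single virtual database" idea (with invented sources and schema, not the actual grid-monitoring system of [1]): a mediator fans the same query out to several sources and merges the answers, so the user never sees where the data came from.

# Hypothetical mediator: one query, several sources, merged answer.
# Source names, schemas, and data are invented for illustration.
source_a = [{"component": "storage-01", "status": "up"},
            {"component": "cpu-node-7", "status": "down"}]
source_b = [{"component": "net-switch-3", "status": "up"}]

SOURCES = [source_a, source_b]

def query_all(predicate):
    """Evaluate the same selection over every source and merge the results,
    hiding the distribution of the data from the caller."""
    results = []
    for source in SOURCES:
        results.extend(row for row in source if predicate(row))
    return results

# "Which components are down?" The user neither knows nor cares that the
# answer is assembled from two different monitoring feeds.
print(query_all(lambda row: row["status"] == "down"))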
In an ongoing collaboration with the Province of
Bolzano, he is working with his students to create
tools that combine statistical and geographical
data stored in different information systems. Business intelligence software is then used to analyze
statistical data in relation to geographic criteria.
To capture uncertainty of information, he has
worked with researchers from Paris to develop
models where probability is attached to data.
This makes it possible to represent how likely it
is that a person still lives at a certain address if
the data is a year old. The big challenges for this
research were how to run statistical queries on
the data and how to take account of probabilities when updating the data [2,3].
In a collaboration with the IT department of the province, he devised techniques to manage data completeness for public-school students. Data about pupils was stored in a distributed database, with each school responsible for maintaining the records of its own pupils. Given human nature, data was often missing, and it was not clear which parts. The research challenge was to devise algorithms that could identify which data is truly relevant and check whether it is sufficient for answering a query.
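The flavour of such a completeness check can be conveyed with a small invented sketch (it does not reproduce the algorithms of [4]): completeness statements record which schools have submitted complete pupil data, and a query is guaranteed to be complete only if every school it ranges over is covered.

# Sketch: query completeness over a partially complete distributed database.
# Completeness statements and data are invented for illustration.
complete_schools = {"School A", "School B"}      # declared complete
pupils = [
    {"school": "School A", "name": "Maria"},
    {"school": "School B", "name": "Jonas"},
    {"school": "School C", "name": "Luca"},      # School C may miss pupils
]

def query_is_complete(schools_in_query):
    """A query restricted to certain schools returns a complete answer
    exactly when all of those schools are declared complete."""
    return set(schools_in_query) <= complete_schools

print(query_is_complete({"School A"}))               # True: answer is complete
print(query_is_complete({"School A", "School C"}))   # False: may miss pupils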
Werner Nutt finds that most of his research ideas and results come from joint work with his PhD students and are inspired by the discussions and the flow of ideas among his colleagues in KRDB.
He offers courses on databases and distributed systems with the aim of teaching students to build a functioning system. Using concepts learned in lectures, students then apply techniques in the lab, working in teams to create a database or distributed application. Finally, they present their results to their fellow students. The idea he likes to convey is that while good theory is fundamental for practical work, working solutions are what prove a theory correct.
Knowing how to address problems is fundamental to the work of a scientist and is the basis of fundamental research questions. To stay current with real-world data management problems, Werner Nutt maintains relationships with researchers, organisations, and companies.
Italiano. Werner Nutt develops methodologies that allow people and computer programs to make use of «imperfect collections» of information. The key idea is that users should not have to worry about the different origin, organization, certainty, and completeness of the data they are consulting; instead, they should be given coherent information in which uncertainty and incompleteness are represented through intuitive concepts.
One approach is to let users believe they are consulting information from a single database, even though their query has actually been sent to several sources. Werner Nutt applies this methodology to the monitoring of very large distributed computer systems, in which various programs generate data about the status of the individual components. In this way, users can find the data they are interested in without worrying about where to look for it.
In collaboration with the Province of Bolzano, Werner Nutt is working on tools that combine statistical and geographical data stored in different information systems, so that statistical data can then be analyzed according to geographic criteria using suitable business intelligence software.
Deutsch. Prof. Nutt develops techniques that allow people and computer programs to make use of «imperfect» collections of information. The basic idea is that users should not have to worry about the different origin, organization, certainty, and completeness of the data; instead, they should have access to coherent information in which uncertainty and incompleteness are represented through intuitive concepts.
One approach is to let the user believe that the data comes from a single database, while in reality several sources are being queried. Prof. Nutt applies this method to the monitoring of very large distributed computer systems, in which many programs generate data about the status of the individual components. In this way, users can find the data they need without having to care about where it is searched for.
In cooperation with the provincial administration of Bolzano, Prof. Nutt is working on the development of tools that can combine statistical and geographical data stored in different information systems, so that suitable business intelligence (BI) software can analyze statistical data according to geographic criteria.
selected publications
1 A. W. Cooke, A. J. G. Gray, W. Nutt. Stream Integration Techniques for Grid Monitoring. J. Data Semantics 2: 136-175 (2005)
2 S. Abiteboul, T.-H. H. Chan, E. Kharlamov, W. Nutt, P. Senellart. Aggregate queries for discrete and continuous probabilistic XML. International Conference on Database Theory 2010: 50-61
3 S. Abiteboul, T.-H. H. Chan, E. Kharlamov, W. Nutt, P. Senellart. Capturing Continuous Data and Answering Aggregate Queries in Probabilistic XML. ACM Transactions on Database Systems 36(4) (2011)
4 S. Razniewski, W. Nutt. Completeness of Queries over Incomplete Databases. Proc. VLDB Endowment 4(11), 749-760 (2011)
diegoCalvanese ( associate­Professor )
Diego Calvanese conducts research on principled approaches to modeling, reasoning over,
querying, and updating data, knowledge, and
information in a wide sense. He looks not just at
highly structured data and information found in
traditional database systems, but also at data
on the web and other knowledge sources.
Broadly speaking, his research spans two distinct areas: One is databases and data modeling
where the focus is on the efficient representation and querying of large amounts of data. The
other is knowledge representation in artificial
intelligence. Here, the objective is to study
mechanisms for representing complex knowledge, and to develop techniques and technologies to efficiently reason about this knowledge.
More specifically, Diego Calvanese carries out
research on logic-based formalisms for representing data and knowledge, semantic and conceptual data modeling, query processing in the
presence of views, management of semi-structured and graph-structured data, data integration and data warehousing, management of data on the web, and service modeling, composition, and verification. A key aspect of his research is to understand and to correctly deal
with the trade-off between a highly expressive
formalism that describes a given domain more
fully, and high complexity that imposes
a heavier load in terms of computing power and
data storage.
His research puts a strong emphasis on theoretical foundations. Nevertheless, his work is driven by practical problems, and his research leads to
the development of software tools and systems
useful in the real world. His foundational research on reasoning in expressive logics for
knowledge representation led to the development of standards by the World-Wide-Web Consortium (W3C) for semantic Web languages.
Currently, his research group is developing prototype systems for intelligent data access
based on theoretical work on lightweight
knowledge representation languages.
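To give a flavour of ontology-based data access, here is a deliberately simplified, hypothetical sketch (not the group's actual prototypes): a query over an ontology class is first expanded using the subclass axioms and then translated into SQL over the underlying tables via mappings; all class names, tables, and mappings are invented.

# Hypothetical OBDA sketch: expand an ontology-level query with subclass
# axioms, then translate it to SQL via mappings. Everything here is invented.
subclass_of = {            # lightweight ontology: Manager and Clerk are Employees
    "Manager": "Employee",
    "Clerk": "Employee",
}
mappings = {               # ontology class -> SQL query over the actual tables
    "Employee": "SELECT id FROM employees",
    "Manager":  "SELECT id FROM employees WHERE role = 'manager'",
    "Clerk":    "SELECT id FROM employees WHERE role = 'clerk'",
}

def rewrite(query_class):
    """Return the class itself plus all classes the ontology declares
    to be its subclasses (one level suffices for this toy hierarchy)."""
    return [query_class] + [c for c, sup in subclass_of.items() if sup == query_class]

def to_sql(query_class):
    """Union the mapped SQL queries of every class in the rewriting."""
    return "\nUNION\n".join(mappings[c] for c in rewrite(query_class))

print(to_sql("Employee"))   # asking for Employees also retrieves managers and clerks

The point of the sketch is only the division of labour: the ontology contributes the rewriting, the mappings contribute the SQL, and the data stays in the relational engine.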
His research achievements are reflected in the high citation indices (h-index 53, g-index 94) that his publications have achieved.
He has extensive teaching experience at all levels (BSc, MSc, PhD) including basic subjects
like the foundations of programming, databases, formal languages, and theory of computation, as well as more advanced topics connected to his research like knowledge representation, ontologies, data integration, view-based
query processing, advanced automata theory,
and computational complexity.
Diego Calvanese has an extensive research network, including ongoing and past research collaborations with groups in Europe and North America, among them the University of Rome, the Milan Polytechnic, IBM Watson Research, Rice University in Houston, the Technical University of Vienna, the University of Oxford, and Stanford University. He has coordinated and been involved in EU Framework Programmes V, VI, and VII, as well as in national and local research projects. He serves on the program committees of international conferences on databases and artificial intelligence, and on the editorial board of the Journal of Artificial Intelligence Research. In addition to chairing numerous international workshops, he was program co-chair of the 2nd International Conference on Web Reasoning and Rule Systems (RR2008), and organized the 8th EDBT Summer School on Database Technologies for Novel Applications (EDBTSS2007).
Italiano. Diego Calvanese's research is devoted to developing principled approaches to modeling, reasoning over, querying, and updating data, knowledge, and information. He focuses not only on highly structured data and on information found in traditional database systems, but also on data available on the web and in other knowledge sources.
Broadly speaking, his work involves two distinct areas. One comprises databases and data modeling, where he concentrates on the efficient representation and querying of large amounts of data, while the other is knowledge representation in the context of artificial intelligence. The objective is to study mechanisms for representing complex knowledge and to develop techniques and technologies that allow efficient reasoning over it.
Diego Calvanese's research puts considerable emphasis on theoretical foundations; nevertheless, his work is oriented towards practical problems, and it results in the creation of software tools and systems that are useful in the real world. His studies on reasoning in logics for knowledge representation have led to the development of Semantic Web languages for the World Wide Web.
Deutsch. Prof. Calvanese's research concerns principled approaches to modeling, reasoning over, querying, and updating data, knowledge, and information. He concentrates not only on highly structured data and information from conventional database systems, but also on data from the web and other knowledge sources.
Fundamentally, his research covers two separate areas. One concerns databases and data modeling and focuses on the efficient representation and querying of large amounts of data. The other concerns knowledge representation within artificial intelligence; here the objectives are to study mechanisms for representing complex knowledge and to develop techniques and technologies for reasoning efficiently over this knowledge.
His research places particular emphasis on theoretical foundations. Nevertheless, his work is driven by practical problems, and his research leads to the development of software tools and systems that are useful in the real world. His foundational research on reasoning in expressive logics for knowledge representation led to the development of Semantic Web languages for the World Wide Web.
selected publications
F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. F.
Patel-Schneider, editors. The Description Logic Handbook:
Theory, Implementation, and Applications. Cambridge University
Press, 2003. ISBN 9780511060632.
D. Calvanese, G. De Giacomo, and M. Y. Vardi. Decidable containment of recursive queries. Theoretical Computer Science,
336(1):33–56, 2005.
D. Berardi, D. Calvanese, and G. De Giacomo. Reasoning on UML
class diagrams. Artificial Intelligence, 168(1-2):70-118, 2005.
D. Calvanese, G. De Giacomo, and M. Lenzerini. Conjunctive query containment and answering under description logics constraints. ACM Transactions on Computational Logic, 9(3):22.1-22.31, 2008.
A. Artale, D. Calvanese, R. Kontchakov, and M. Zakharyaschev. The DL-Lite family and relations. Journal of Artificial Intelligence Research, 36:1-69, 2009.
enricoFranconi ( associate professor )
In recent years, knowledge and database applications have progressively converged towards integrated technologies which try to overcome the limits of each discipline alone. Research in Knowledge Representation originally concentrated on formalisms that are typically tuned to deal with relatively small sources of knowledge, but provide intelligent deduction services and a highly expressive language for structuring information. In contrast, Information Systems and Database research mainly deals with efficient storage and retrieval using powerful query languages, and handles the sharing and displaying of large numbers of multimedia documents. However, these data representations were relatively simple and flat, and reasoning over the structure and the content of the documents played only a minor role.
This distinction between the requirements in
Knowledge Representation (KR) and Databases
(DB) is vanishing rapidly. On the one hand, to be
useful in realistic applications, a modern knowledge-based system must be able to handle
large data sets and provide expressive query
languages. This suggests that techniques developed in the DB area could be useful for KR
systems. On the other hand, the information
stored on the Web, in digital libraries, and in data warehouses is now very complex and has
deep semantic structures. This requires more
intelligent modelling languages and methodologies, as well as reasoning services on those
complex representations to support design,
management, flexible access, and integration.
Therefore, a strong demand for an integrated formal view of Knowledge Representation and Database technologies is emerging.
Enrico Franconi is member of the Advisory Committee of the World Wide Web Consortium (W3C),
the organisation driving the standards for the
web. He chairs and co-chairs many international
conferences and workshops, is a member of the
editorial boards of international journals, and is
a leader of international research networks.
To this aim, the KRDB Research Centre for
Knowledge and Data at the Faculty of Computer
Science of the Free University of Bozen-Bolzano
was founded in 2002 by Enrico Franconi. It aims
to be an international center of excellence in basic and applied research on KRDB technologies.
It strives to introduce to enterprises innovative ideas and technologies based on research developed at the center. The KRDB Research Centre currently includes 33 members: 5 professors, 7 researchers, 14 PhD students, and 7 research assistants. The prestigious Scientific Advisory Board of the KRDB Research Centre is composed of Ron Brachman (Vice-president of Yahoo Research), Thomas Eiter (Professor at the Technische Universität Wien), Alon Halevy (Head of the structured-data management research group at Google), Rick Hull (Research Manager at IBM Research), and John Mylopoulos (Professor at the Università di Trento and the University of Toronto). The website is http://www.inf.unibz.it/krdb/.
Enrico Franconi is currently principal investigator of the European People Marie Curie action
on a Network for Enabling Networked Knowledge (NET2), and of the European large-scale
integrating project (IP) on ONTOlogies meet
business RULEs (ONTORULE). In the recent past,
he was principal investigator of the European
network of excellence «Realizing the Semantic
Web» (KnowledgeWeb), of the European network of excellence «Interoperability Research
for Networked Enterprises Applications and
Software» (InterOp), of the European information society technology (IST) technological development and demonstration (RTD) project SEmantic Webs and AgentS in Integrated Economies (SeWAsIE), of the British Engineering and
Physical Sciences Research Council (EPSRC)
project Knowledge Representation meets Databases (KRDB), of the Euro­pean ESPRIT Long
Term Research project Foundations of Data
Warehouse Quality (DWQ). In his career so far,
Enrico Franconi has been in charge of more than
€ 1.75 million in research funding.
Italiano. Enrico Franconi, founder of the KRDB Research Centre, works on solutions for companies based on the innovative ideas and technologies developed by the Centre, focusing in particular on research aimed at integrating Knowledge Representation and Database technologies.
The distinction between the requirements of Knowledge Representation (KR) and those of Databases (DB) is becoming increasingly blurred. To be useful in practice, a modern knowledge-based system must be able to handle large data sets and provide an expressive query language, which implies that techniques developed in the DB area may also prove useful for KR systems. On the other hand, the information currently stored on the web is very complex and has highly articulated semantic structures. This makes it necessary to create more intelligent modelling languages and methodologies and to provide suitable reasoning services, so as to adequately support design, management, flexible access, and integration.
Enrico Franconi is a member of the Advisory Committee of the World Wide Web Consortium (W3C), the organisation that manages the standards of the web. A protagonist of international research, he chairs international conferences and sits on the editorial boards of various scientific journals.
Deutsch. Prof. Franconi, founder of the KRDB Research Centre, works on solutions for enterprises based on the innovative ideas and technologies developed at the Centre, with particular emphasis on research into the integration of Knowledge Representation and Database technologies.
The distinction between the requirements of Knowledge Representation (KR) and those of Databases (DB) is vanishing rapidly. To be useful in realistic applications, a modern knowledge-based system must be able to handle large amounts of data and provide expressive query languages. It follows that techniques developed in the database area can also serve KR systems. On the other hand, the information stored on the web today is very complex and exhibits deeply nested semantic structures. For this reason, more intelligent modelling languages and methods are required, as well as reasoning services over these complex representations, to support design, management, flexible access, and integration.
Prof. Franconi is a member of the Advisory Committee of the World Wide Web Consortium (W3C), the organisation that sets web standards. He is a prominent member of international research networks, chairs international conferences, and serves on the editorial boards of several journals.
selected publications
Fillottrani, P. R., Franconi, E., and Tessaris, S.: The ICOM 3.0 intelligent conceptual modelling tool and methodology. Semantic Web journal, 2011, to appear.
E. Franconi and D. Toman. Fixpoints in
temporal description logics. In IJCAI 2011,
Proceedings of the 22nd International Joint
Conference on Artificial Intelligence, pages
875–880, 2011.
B. ten Cate, E. Franconi, and I. Seylan. Beth
definability in expressive description logics. In
IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 1099–1106, 2011.
alessandroArtale ( assistant professor )
Alessandro Artale conducts research on formal
theories and efficient principled systems for expressing and reasoning with structured knowledge. He emphasizes their use in conceptual
modeling tasks in domains with high semantic
complexity. His current research is looking at
knowledge representation and reasoning using
formal logic languages, especially description
logics and temporal logics.
Description logics are a family of logics for knowledge representation and reasoning which are widely used in application areas such as conceptual modelling, information and data integration, ontology-based data access, and the Semantic Web. Description logics are the formalisms underlying the Web Ontology Language (OWL); OWL 2 will soon be a standard of the World Wide Web Consortium.
One of Alessandro Artale's research topics concerns the study of how to extend description logics
to include ontological categories like parts,
time, and spatial entities. In particular, he examines the computational properties of description logic extensions with the aim of developing a formal language to represent and reason with spatio-temporally dependent information. In this respect, he investigated various
temporal extensions of the classical DL setting,
applying them to the design of temporal databases. He has also investigated the notion of
«parts and wholes» to integrate them in description logics and conceptual models.
Another aspect of his research looks at providing access to large amounts of data using description logics integrated in high level, conceptual interfaces. The research is relevant for data
integration as well as for ontology-based data
access. In this setting, a fundamental inference
service answers complex database-like queries,
that is, queries expressed in a significant fragment of structured query language (SQL) by
taking into account the constraints of the ontology and data stored in an external database.
Italiano. Alessandro Artale's research concentrates on formal theories and efficient systems for expressing and reasoning over structured knowledge, with particular emphasis on their use in conceptual modeling in domains of high semantic complexity. His current studies address knowledge representation and reasoning through the use of formal logical languages, in particular description logics and temporal logics.
Description logics are widely used in applications such as conceptual modeling, data and information integration, Ontology-Based Data Access, and the Semantic Web, which will soon become a standard feature of the Internet. Alessandro Artale is also working on extending description logics to ontological categories such as parts, time, and spatial entities, in order to develop a formal language for representing spatio-temporally dependent information.
Alessandro Artale has participated as principal investigator in several projects, including DataExpiration for the Province of Bolzano.
To make the approach practical, the query processing needs to be as efficient as traditional
database query processing and it should leverage the relational technology already used for
storing the data. With this goal in mind, Alessandro Artale is studying the computational and expressive properties of the DL-Lite family of logics.
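A tiny, textbook-style worked example (not taken from Artale's papers) shows the kind of inference such a query-answering service must perform: under the toy DL-Lite ontology below, a query asking for employees must also return managers, which can be obtained by rewriting the query before evaluating it over the data.

\[
\textit{TBox: } \mathit{Manager} \sqsubseteq \mathit{Employee}, \qquad \mathit{Employee} \sqsubseteq \exists\,\mathit{worksFor}
\]
\[
\textit{ABox: } \mathit{Manager}(\mathrm{ann}), \qquad \mathit{Employee}(\mathrm{bob})
\]
\[
q(x) \leftarrow \mathit{Employee}(x) \quad\rightsquigarrow\quad q(x) \leftarrow \mathit{Employee}(x) \lor \mathit{Manager}(x), \qquad \mathit{cert}(q) = \{\mathrm{ann}, \mathrm{bob}\}
\]

Because the rewritten query mentions only predicates that are stored in the data, it can be handed to a standard relational engine, which is exactly why efficiency comparable to traditional query processing is within reach.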
Alessandro Artale has worked on a number of
projects as principal investigator, including DataExpiration for the Province of Bolzano, Formalization of Temporal Databases, and Formalizing the Design of Temporal Databases Using
Description Logics for the University of Bolzano.
The research concentrates on the design phase
of temporal databases, with the aim of establishing a logic-based formalization for capturing temporal extended conceptual models (like
Entity-Relationships and UML models).
He worked on European Framework Program VI
projects TONES (Thinking ONtologiES) and
STREP FET, as well as on the European IST Network of Excellence on Realizing the Semantic
Web. He was a member of the European IST Network of Excellence on Interoperability Research
for Networked Enterprises Applications and
Software. His recent teaching activity includes
undergraduate courses such as Principles of
Compilers, Databases, and Formal Methods. He
supervises at the doctoral, masters, and undergraduate levels.
Deutsch. Prof. Artale's research concentrates on formal theories and efficient principled systems for expressing and reasoning with structured knowledge, and he promotes their use in the conceptual modeling of domains with high semantic complexity. His current studies concern knowledge representation and reasoning using logical and formal languages, in particular description logics and temporal logics.
Description logics are widely used in application areas such as conceptual modeling, information and data integration, ontology-based data access, and the Semantic Web, which will shortly become a standard feature of the web. Prof. Artale is also working on extending description logics with ontological categories such as parts, time, and spatial entities, with the aim of developing a formal language for representing space- and time-dependent information.
Prof. Artale has taken part as a principal investigator in numerous projects, including DataExpiration for the Province of Bolzano.
sergioTessaris ( assistant professor )
Sergio Tessaris is working to bridge the gap between data and information by building systems and developing methodologies that exploit mathematical logic and knowledge representation. The key idea is to enrich data with semantics to give it meaning. The «meaning» of data can then be leveraged to organise and facilitate access to the data.
Sergio Tessaris' approach to semantics is to look at «small data», that is, data which can be precisely described and classified. The approach differs from the techniques used by Internet search engines, which rely on statistical approaches based on massive data processing. Sergio Tessaris works on solutions for intranets (data within organisations) rather than the Internet.
In today’s information society, the way we access data impacts our lives in important ways,
in terms of quality of life and business opportunities. Sources of data range from publicly
owned companies (like water, sewer, and garbage utilities) and government agencies to private companies that offer data as part of their
services or as core business.
There is a big difference between data and information. As an example, browsing a spare
parts catalog would be meaningless without
reference to the interpretation of codes describing the items.
His research addresses the problem of so-called «deep Web data» by enriching it with semantic information. He uses well-established know-how in database design as a starting point to build a semantic wrapper around data. The wrapper enables high-level integration and flexible presentation of the underlying data. For example, a natural-language interface exploits an automated reasoning technique to improve queries by focusing on the user's information needs.
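The spare-parts example mentioned earlier can be made concrete with a small invented sketch of a semantic wrapper: the opaque codes stored in the catalog are mapped to domain terms, so that a question can be asked at the level of meaning rather than of codes.

# Hypothetical semantic wrapper: opaque catalog codes are given meaning
# through a small vocabulary, and queries are asked at the level of meaning.
catalog = [                      # raw records as stored, with meaningless codes
    {"code": "BRK-017", "price": 42.0},
    {"code": "FLT-203", "price": 9.5},
]
vocabulary = {                   # the "semantics" layer of the wrapper
    "BRK-017": {"kind": "brake pad", "vehicle": "bicycle"},
    "FLT-203": {"kind": "oil filter", "vehicle": "car"},
}

def ask(kind):
    """Answer 'which parts of this kind do we stock?' in domain terms,
    hiding the raw codes behind the semantic layer."""
    return [dict(item, **vocabulary[item["code"]])
            for item in catalog if vocabulary[item["code"]]["kind"] == kind]

print(ask("brake pad"))   # the user asks about brake pads, not about BRK-017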
In contrast to the more common ad-hoc database solutions, Sergio Tessaris sees a big opportunity in building databases grounded in
well tested technologies. In the end, they provide more flexible, robust solutions through
the use of formal tools like mathematical logic
and ontologies. Getting this type of solution adopted in the mainstream requires professionals
capable of understanding the advantages of a
principled approach.
Working in this direction, Sergio Tessaris teaches courses in the faculty's International Master programme. The European Master in Computational Logic (EMCL) is offered in collaboration with other European universities and research organisations. The programme prepares students for a career in IT in the information society. The curriculum requires students to study in several countries, training them in core subjects of logic applied to computer science and IT. Moreover, students have the chance to work in close contact with some of the best research teams in Europe and Australia.
This research is a step towards a turn-key solution that will make it simpler to use legacy and
structured data within organisations.
Italiano. In today's information society, the way we access data influences not only our quality of life but also our economic opportunities. Sergio Tessaris's goal is to bridge the gap between «data» and «information» by building systems and developing methodologies with the help of mathematical logic and knowledge representation. The key idea is to give data meaning by enriching it with semantic elements.
Sergio Tessaris focuses his attention on «small data», that is, data that can be described and classified exactly. This approach differs from the one adopted by Internet search engines, which explore enormous amounts of data according to statistical criteria.
Sergio Tessaris's research addresses the problem of deep Web data, which he enriches with semantic information. It is a step towards a «turn-key» solution that makes it easier to use company or structured data within organisations. Sergio Tessaris believes that building databases grounded in well-established technologies is preferable to the more common ad-hoc database solutions.
Deutsch. In today's information society, the way data is accessed affects not only our quality of life but also business opportunities. Prof. Tessaris works on bridging the gap between data and information by building systems and developing methods that exploit mathematical logic and knowledge representation. The key idea is to enrich data with semantic elements that give it meaning.
Prof. Tessaris works on «small data», that is, data that can be precisely described and classified. This approach differs from the techniques used by Internet search engines, which process massive amounts of data using statistical approaches.
His research addresses the problem of data from the so-called «deep web», which he enriches with semantic information. It is a step towards a «turn-key» solution intended to simplify the use of legacy and structured data within companies. Prof. Tessaris considers it right to build databases on proven technologies, in contrast to the more widespread use of ad-hoc database solutions.
researchers.krdb
valeriaFionda
Valeria Fionda’s research focuses on two main research topics: bioinformatics and linked open data. In bioinformatics she is working on the analysis and comparison of biological networks for knowledge discovery. Biological networks store information about interactions among biological entities regulating cell activity in the form of graphs (sets of nodes
representing biological entities, such as proteins or genes, connected by
edges modeling their relations).
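To make the graph view of a biological network concrete, here is a minimal invented sketch: two interaction networks are represented as edge sets over proteins and compared by the interactions they share (real network-comparison techniques are of course far more sophisticated).

# Toy sketch: biological networks as graphs (nodes = proteins, edges =
# interactions), compared by shared interactions. The networks are invented.
network_a = {("P53", "MDM2"), ("P53", "BRCA1"), ("BRCA1", "RAD51")}
network_b = {("P53", "MDM2"), ("BRCA1", "RAD51"), ("MDM2", "UBE3A")}

def normalize(edges):
    """Treat interactions as undirected: store each edge in sorted order."""
    return {tuple(sorted(e)) for e in edges}

shared = normalize(network_a) & normalize(network_b)
similarity = len(shared) / len(normalize(network_a) | normalize(network_b))

print(shared)               # interactions present in both networks
print(round(similarity, 2)) # Jaccard similarity of the two edge sets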
To properly look up the large amount of interaction data available in the plethora of biological data banks and mine useful information, the design and development of automatic tools have become crucial. In particular, she worked to develop a framework, called BioTRON, for the management and execution of biological workflows (Fionda, V., & Pirrò, G. (2011). BioTRON: A Biological Workflow Management System). Moreover, she worked to outline the challenges, the benefits, and the results obtained and obtainable in this research area (Fionda, V., & Palopoli, L. (2011). Biological Network Querying Techniques: Analysis and Comparison; and Fionda, V. (2011). Biological Network Analysis and Comparison: Mining new Biological Knowledge).
In the field of Linked Open Data (LOD), part of a wider concept known as the «Web of Data», she recently worked on the theoretical formalization of the problem of querying the LOD network to retrieve «interesting» information, and on developing a tool for the controlled navigation and exploration of the Web of Data that retrieves interesting data sources (poster: Semantically-driven recursive navigation and retrieval of data sources in the Web of Data, at the 10th International Semantic Web Conference, ISWC 2011). «Interesting» information and data sources are those that can be selected out of the vast amount of information stored in the Web of Data according to user preferences, such as the time the user is willing to wait for the results, the size of the retrieved data the user is prepared to receive, trusted information sources, and the path of information to be followed.
Italiano. Valeria Fionda's research concentrates on bioinformatics and on linked open data. In bioinformatics, she works on the analysis and comparison of biological networks for knowledge discovery. Within a project called BioTRON, she studies the tools best suited to managing the large amounts of data contained in biological data banks. Valeria Fionda also works on the theoretical formalization of the querying problem in the context of Linked Open Data.
Deutsch. Dr. Fionda's research concentrates on bioinformatics and on the linking of openly available data on the web. In bioinformatics she works on the analysis and comparison of biological networks for knowledge discovery. Within the BioTRON project she works on tools for managing large amounts of data in biological databases. She also works on the theoretical formalization of the querying problem in the area of Linked Open Data.
rosellaGennari
marcoMontali
Rosella Gennari conducts research in knowledge representation. In particular, she has been working on knowledge representation since 2005
for Technology-Enhanced Learning (TEL) in collaboration with human
computer interaction experts and cognitive psychologists. Research on
TEL investigates how information and communication technologies can
be designed in order to support pedagogical activities. Currently, Rosella
Gennari is principal investigator for the following projects: DARE (financed
by the Province of Bozen-Bolzano), LODE (financed by the CARITRO bank), and TERENCE (financed by the EC). In addition, she is the scientific and technological coordinator for TERENCE, and the coordinator for DARE. In particular,
TERENCE aims at designing an adaptive learning system for children with
specific inference-making problems on texts.
Marco Montali focuses on theoretical, methodological and experimental
aspects of logic-based languages and automated reasoning techniques.
His research involves creating specifications as well as monitoring and verifying knowledge-intensive business processes, clinical guidelines, and service-oriented and multi-agent systems. He is currently investigating the
formal specification and verification of artifact- and data-centric business
processes. This encompasses control-flow and the manipulation of data.
Rosella Gennari co-authored a textbook on mathematical logic and the
logic of common sense, and around 40 peer-reviewed international papers. She obtained her PhD degree in Computer Science at the ILLC, University of Amsterdam, in 2002, after a «laurea» in Mathematics at the University of Pavia and a master's degree in logic at the ILLC. In 2002 Rosella
Gennari was appointed as a post-doc ERCIM fellow at CWI, Amsterdam,
and then as post-doc fellow at the Automated Reasoning Systems division of FBK-irst, Trento.
Italiano. Rosella Gennari's research concerns Knowledge Representation. In particular, since 2005 she has been working on Technology-Enhanced Learning (TEL), which deals with technologies that support learning. Research in TEL studies how information and communication technologies can be designed to support pedagogical activities. Rosella Gennari is co-author of a textbook on mathematical logic and the logic of common sense, and of around 40 peer-reviewed international scientific papers.
Deutsch. Dr. Gennari conducts research in the field of knowledge representation. In particular, since 2005 she has been working on Technology-Enhanced Learning (TEL), which investigates how information and communication technologies can be designed to support pedagogical activities. Dr. Gennari is also co-author of a textbook on mathematical logic and logical reasoning, as well as of about 40 peer-reviewed international publications.
He authored a Springer book and more than 50 papers on: (declarative)
modeling, verification and monitoring of business processes, service choreographies and clinical guidelines; compliance checking and process
mining; open multi-agent systems and commitments; service interoperability, composition and discovery; computational logic and logic programming; temporal reasoning and event calculus.
Marco Montali obtained a BEng and an MEng in Computer Science Engineering at the University of Bologna, both cum laude. He then pursued his PhD in Electronics, Computer Science and Telecommunications Engineering at the same university.
His PhD thesis received the «Marco Cadoli» Distinguished Dissertation Award, given by the Italian Association for Logic Programming to the most outstanding Italian PhD theses on computational logic defended between 2007 and 2009.
Italiano. Marco Montali studia gli aspetti teorici, metodologici e sperimentali dei linguaggi logici e delle tecniche di ragionamento automatico.
La sua ricerca si estende alla creazione di specifiche e al monitoraggio e
verifica di processi economici ad alto tasso di conoscenza, direttive cliniche, sistemi orientati ai servizi e sistemi multi-agente. Attualmente, Marco Montali si occupa della specifica e verifica formali dei processi economici incentrati sui manufatti e sui dati.
Deutsch. Dr. Montali untersucht die theoretischen, methodologischen
und experimentellen Aspekte der logikbasierten Sprachen und der automatischen Schlussfolgerungstechniken. Seine Forschungsarbeit umfasst die Erstellung von Spezifikationen und die Überwachung und Prüfung wissensintensiver Geschäftsprozesse, klinischer Leitlinien, serviceorientierter und Multiagenten-Systeme. Derzeit befasst sich Dr. Montali
mit der formalen Spezifikation und Prüfung der produkt- und datenzentrierten Geschäftsprozesse.
researchers.krdb
alessandroMosca
Alessandro Mosca conducts research in Knowledge Representation and
Reasoning, Conceptual Modeling, Formal Ontology, and Cultural Evolution and Evolutionary Studies.
He is currently working on several research projects, including SIMULPAST, which aims to develop an innovative and interdisciplinary methodological framework to model and simulate ancient societies and their relationship with environmental transformations. He is also the university's coordinator for the ONTORULE project, a large-scale Integrated Project partially funded by the European Union's Seventh Framework Programme, whose main aim is to integrate the knowledge and technology necessary for acquiring ontologies and rules from the most appropriate sources, such as natural language documents. ONTORULE also considers how to manage and maintain the ontologies, and how to integrate them into IT applications.
His teaching currently covers Non-classical and Modal Logics. He holds a doctoral degree from the Department of Computer Science, Systems, and Communication of the University of Milano-Bicocca.
Italiano. Alessandro Mosca svolge le sue ricerche in vari campi: dalla rappresentazione della conoscenza alla modellazione concettuale, dall’ontologia formale all’evoluzione culturale, agli studi evoluzionistici. Attualmente sta lavorando al SIMULPAST, una struttura sulla quale modellare
e simulare le società dell’antichità e i loro rapporti con i cambiamenti
ambientali. Alessandro Mosca è il coordinatore di facoltà per il progetto
ONTORULE, il cui obiettivo è l’acquisizione di regole e ontologie da documenti redatti in linguaggio naturale.
Deutsch. Dr. Mosca betreibt Forschungsarbeit im Bereich der Wissensdarstellung und Schlussfolgerung, der konzeptuellen Modellierung, der
Formalen Ontologie, der Kulturentwicklung und entwicklungswissenschaftlicher Studien. Derzeit arbeitet er an SIMULPAST, einem Rahmenprogramm zur Modellierung und Simulation von altertümlichen Gesellschaften und deren Beziehung zu den Veränderungen der Umwelt. Dr.
Mosca ist der Fakultäts-Koordinator für das Projekt ONTORULE, dessen
Zielsetzung die Gewinnung von Ontologien und Regeln aus natürlichen
Sprachdokumenten ist.
giuseppePirrò
marianoRodriguez-Muro
The Semantic Web is an extension of the existing World Wide Web. It uses ontologies (models of concepts and the relationships between them) to annotate information with meaning that machines can process. This is achieved by using standard languages of increasing expressive power, from the Resource Description Framework (RDF) to the Web Ontology Language (OWL).
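By way of illustration (an invented example, not drawn from the faculty's own systems), a few RDF statements can be created and queried in Python with the widely used rdflib library:

# A handful of RDF triples about an invented resource, queried with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Three machine-processable statements (triples).
g.add((EX.Bolzano, RDF.type, EX.City))
g.add((EX.Bolzano, EX.locatedIn, EX.Italy))
g.add((EX.Bolzano, EX.population, Literal(107000)))

# A SPARQL query over the small graph.
query = """
    SELECT ?city ?pop WHERE {
        ?city a ex:City ;
              ex:population ?pop .
    }
"""
for city, pop in g.query(query, initNs={"ex": EX}):
    print(city, pop)

OWL adds to this the vocabulary needed to state richer constraints, such as class disjointness or cardinality restrictions, over which automated reasoners can draw conclusions.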
Mariano Rodriguez-Muro focuses his research on the theory and practice of Ontology-Based Data Access (OBDA). This field develops techniques that facilitate data management, data analysis and data consumption by means of automated deduction. His research contributes to technologies that reduce the cost of data integration and data management and enable more dynamic and adaptable IT infrastructures. He has participated in joint projects with La Sapienza University of Rome, Monte dei Paschi di Siena (MPS) and Accenture, as well as in the European FET project Thinking Ontologies (TONES). More recently, he has been working with Stanford University’s Center for Biomedical Informatics on applying OBDA technology to biomedical problems. Mariano Rodriguez-Muro obtained a PhD in Computer Science from the Free University of Bozen-Bolzano in 2009 and holds a Master of Science degree in Computer Systems Engineering from the Universidad de las Américas, Puebla, Mexico.
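The core intuition behind OBDA can be conveyed with a toy Python sketch (our own illustration with invented table, concept and mapping names; it does not reproduce the behaviour of any specific OBDA system): a query posed in terms of ontology concepts is first expanded through the concept hierarchy and then unfolded, via mappings, into SQL over the underlying relational sources.

# Toy illustration of OBDA-style query unfolding (invented schema and mappings).

SUBCLASS_OF = {                # subclass -> superclass in the ontology
    "Professor": "Employee",
    "Technician": "Employee",
}

MAPPINGS = {                   # each concept is populated by an SQL query
    "Professor":  "SELECT id, name FROM academic_staff",
    "Technician": "SELECT id, name FROM technical_staff",
}

def subconcepts(concept):
    """Reflexive-transitive set of subconcepts of `concept`."""
    result, changed = {concept}, True
    while changed:
        changed = False
        for sub, sup in SUBCLASS_OF.items():
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result

def unfold(concept):
    """Rewrite a request for all instances of `concept` into a UNION of
    the SQL queries attached to it and to all of its subconcepts."""
    parts = [MAPPINGS[c] for c in sorted(subconcepts(concept)) if c in MAPPINGS]
    return "\nUNION\n".join(parts)

# Asking for all Employees is answered by querying both source tables.
print(unfold("Employee"))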
Centered on the Semantic Web, Giuseppe Pirrò’s research activities can be divided into three main strands. The first concerns models of distributed applications that exploit semantics. He concentrates on how distributed entities, each relying on its own ontology, can interact in a meaningful way. This means finding alignments that harmonize the different ontologies. He recently began studying the problem of alignment trust, which involves evaluating and, if necessary, revising alignments. One project considered the case of resource discovery in a network of peers connected by alignments.
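In its simplest form, an alignment is just a set of correspondences between entities of two ontologies, each with a confidence score. The deliberately naive Python sketch below (invented class names; real matchers also use structural, logical and background knowledge, which is precisely what makes trust and revision necessary) pairs up the classes of two ontologies by lexical similarity.

# Naive ontology alignment by lexical similarity (invented class names).
from difflib import SequenceMatcher

ONTOLOGY_A = ["Author", "Publication", "Conference"]
ONTOLOGY_B = ["Writer", "Paper", "ConferenceEvent"]

def align(classes_a, classes_b, threshold=0.6):
    """Return correspondences (class_a, class_b, confidence) whose
    name similarity reaches the threshold."""
    alignment = []
    for a in classes_a:
        for b in classes_b:
            confidence = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if confidence >= threshold:
                alignment.append((a, b, round(confidence, 2)))
    return alignment

print(align(ONTOLOGY_A, ONTOLOGY_B))   # finds Conference/ConferenceEvent only

Note that purely lexical matching misses the Author/Writer correspondence, which is exactly the kind of gap that richer matching techniques, together with the evaluation and revision of the resulting alignments, are meant to address.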
Another research strand concerns languages for navigating and querying «Linked Data», the structured, meaningfully interlinked data published in the Semantic Web. Because interlinking presents data as a giant global graph, exploiting it requires query and navigation techniques that differ considerably from those of the classical relational database model. He and others recently defined a language that combines navigation and querying for Linked Data, and he is working on distributed, completely decentralized query models and algorithms that guarantee sound and complete answers.
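The following self-contained Python sketch conveys the navigate-then-query idea on invented data (real Linked Data would be reached by dereferencing HTTP URIs, and the navigation language developed in this research is considerably more expressive): starting from a seed resource, it follows a chosen link predicate across descriptions and collects the values of a target property along the way.

# Sketch of combined navigation and querying over interlinked data
# (all URIs and data are invented; no network access is performed).

WEB_OF_DATA = {   # each resource URI maps to its outgoing (predicate, value) edges
    "ex:BobAtDBLP": [("owl:sameAs", "ex:BobAtUniv"), ("ex:name", "Bob")],
    "ex:BobAtUniv": [("owl:sameAs", "ex:BobAtBlog"), ("ex:affiliation", "unibz")],
    "ex:BobAtBlog": [("ex:homepage", "http://example.org/bob")],
}

def navigate_and_collect(start, follow, collect):
    """Follow `follow` links transitively from `start` and return every
    value of the `collect` predicate found on the visited resources."""
    visited, frontier, results = set(), [start], []
    while frontier:
        uri = frontier.pop()
        if uri in visited:
            continue
        visited.add(uri)
        for predicate, value in WEB_OF_DATA.get(uri, []):
            if predicate == follow and value not in visited:
                frontier.append(value)           # navigation step
            elif predicate == collect:
                results.append((uri, value))     # query step
    return results

# The homepage of the same person, reached by hopping over owl:sameAs links.
print(navigate_and_collect("ex:BobAtDBLP", "owl:sameAs", "ex:homepage"))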
The third research area concerns the definition of semantic similarity, a topic that straddles the Semantic Web and computational linguistics and is relevant to many fields, such as Natural Language Processing and peer-to-peer systems. One result was a new theoretical model of similarity that combines the feature-based model with the information-theoretic model. Future research could investigate how this new model of similarity fits into the context of Linked Data.
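To make the ingredients of such models concrete, the sketch below computes a classical information-content similarity over a tiny invented taxonomy. This is the standard Lin-style formulation, shown only for illustration; it is not the specific feature-and-information-theoretic model defined in this research.

# Information-content similarity over a tiny, invented taxonomy (Lin-style).
import math

PARENT = {                      # child -> parent edges of the taxonomy
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal", "animal": "thing",
}
FREQ = {                        # invented corpus counts per concept (cumulative)
    "dog": 40, "cat": 35, "bird": 15, "mammal": 80, "animal": 100, "thing": 120,
}
TOTAL = FREQ["thing"]

def ancestors(concept):
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def ic(concept):
    """Information content: rarer concepts carry more information."""
    return -math.log(FREQ[concept] / TOTAL)

def similarity(a, b):
    """Lin similarity: information shared via the most specific common
    ancestor, relative to the information carried by a and b."""
    common = [c for c in ancestors(a) if c in set(ancestors(b))]
    lcs = common[0]             # most specific common ancestor
    return 2 * ic(lcs) / (ic(a) + ic(b))

print(similarity("dog", "cat"))    # higher: they share the class "mammal"
print(similarity("dog", "bird"))   # lower: they share only "animal"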
Italiano. La ricerca di Mariano Rodriguez-Muro si focalizza sulle tecniche
OBDA (Ontology Based Data Access – accesso ai dati secondo criteri ontologici), che facilitano la gestione, l’analisi e l’uso dei dati grazie al metodo della «deduzione automatica». I suoi studi contribuiscono alla definizione di tecnologie che si traducono nella riduzione dei costi determinati
dall’integrazione e dalla gestione dei dati, oltre che nella creazione di infrastrutture IT più dinamiche e adattabili.
Deutsch. Die Forschungsarbeit von Dr. Rodriguez-Muro konzentriert sich
auf die OBDA-Techniken (Ontology Based Data Access – ontologiebasierter Datenzugriff), die die Verwaltung, die Analyse und die Anwendung von
Daten durch Einsatz der «automatischen Schlussfolgerung» erleichtert.
Seine Studien tragen zur Entwicklung von Technologien bei, die zur Reduzierung der Datenintegrations- und Datenmanagement-Kosten sowie zur
Schaffung dynamischerer und anpassbarerer IT-Infrastrukturen dienen.
Italiano. Le ricerche di Giuseppe Pirrò sul Web semantico si incentrano
su tre filoni principali. In primis, Giuseppe Pirrò studia come far sì che le
entità distribuite – ciascuna delle quali si basa su una propria ontologia
– interagiscano in maniera costruttiva. Ciò implica la costruzione di allineamenti in grado di armonizzare le diverse ontologie. Giuseppe Pirrò sta
inoltre lavorando alla creazione di un linguaggio che combini navigazione
e interrogazione per i Linked Data, nonché alla definizione del concetto di
«analogia semantica».
Deutsch. Die Forschungsarbeit von Dr. Pirrò über das semantische Web
erstreckt sich auf drei Hauptthemen. Er untersucht, wie verteilte Entitäten, von denen jede auf ihrer eigenen Ontologie basiert, auf sinnvolle
Weise interagieren können. Das bedeutet, dass Anpassungen gefunden
werden müssen, um die verschiedenen Ontologien miteinander zu harmonisieren. Dr. Pirrò arbeitet darüber hinaus an der Ausarbeitung einer
Sprache, die Navigation und Abfrage verlinkter Daten kombiniert, sowie
an der Definition des Begriffs «semantische Ähnlichkeit».
appendix
publications of 2010 & 2011
case
authored books
Fitzgerald, B., Kesan, J., Russo, B., Shaikh, M., and Succi G.
(2011). Handbook of Open Source Software Adoption: A Practical Guideline, MIT Press.
edited books
Sillitti, A., Hazzan, O., Bache, E., & Albaladejo X. (Eds.).
(2011). Agile Processes in Software Engineering and Extreme
Programming. Springer, USA.
book chapters
Efe, P., Demirors, O., & Gencel, C. (2010). Mapping Concepts
of Functional Size Measurement Methods. In: Dumke, R., &
Abran, A. (Eds.). COSMIC Function Points: Theory and Advanced Practices. CRC Press.
Wang, X., Gobbo, F., & Lane, M. (2010). Turning Time from Enemy into an Ally using the Pomodoro Technique. In: Šmite, D., Moe, N.B., & Ågerfalk, P. (Eds.). Agility across Time and Space. Springer-Verlag.
Abrahamsson, P., Oza, N., & Siponen, M. (2010). Agile Software Development Methods: A Comparative Review. In: Tore,
D. and Torgeir, D. (Eds.). Agile Software Development: Current
Research and Future Directions. Springer.
Dodero, G., Di Cerbo, F., Reggio, G., Ricca, F., & Scanniello, G. (2011, June). Precise vs. Ultra-light Activity Diagrams – An Experimental Assessment in the Context of Business Process Modelling. In: Proceedings of the 12th International Conference on Product Focused Software Development and Process Improvement (PROFES 2011). Torre Canne, Italy.
Dodero, G., Di Cerbo, F., Reggio, G., Ricca, F., & Scanniello, G. (2011, June). Assessing the effectiveness of «Precise» Activity Diagrams in the Context of Business Modelling. In: Proceedings of the 19th Italian Symposium on Advanced Database Systems (SEBD 2011). Maratea, Italy.
Sillitti, A., Martin, A., Wang, X., & Whitworth, E. (Eds.).
(2010). Agile Processes in Software Engineering and Extreme
Programming. Springer, USA.
peer reviewed conference publications
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011, July).
Failure Prediction based on Log Files Using the Cox Proportional Hazard Model. 23rd International Conference on Software Engineering and Knowledge Engineering (SEKE 2011).
Miami, USA.
Succi, G., Morisio, M., & Nagappan, N. (Eds.). (2010).
­Proceedings of the International Symposium on Empirical
Software Engineering and Measurement. ACM, New York.
Abrahamsson, P., Fronza, I., Moser, R., Pedrycz, W., &
Vlasenko, J. (2011, September). Predicting Development Effort from User Stories. In: Proceedings of the 5th International
Symposium on Empirical Software Engineering and Measurement (ESEM 2011). Banff, Alberta, Canada.
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011, June).
Toward a non invasive control of applications. A biomedical
approach to failure prediction. In: Proceedings of the 13 th
International Conference on Enterprise Information Systems
(ICEIS 2011). Beijing, China.
Abrahamsson, P., Fronza, I., & Vlasenko, J. (2011, July).
Failure Prediction Using the Cox Proportional Hazard Model.
6 th International Conference on Software and Data Technologies (ICSOFT 2011). Seville, Spain.
Fronza, I. and Vlasenko, J. (2011, June). Profiling the effort
of novices in software development teams. An analysis using data collected non invasively. In: Proceedings of the 13 th
International Conference on Enterprise Information Systems
(ICEIS 2011), Beijing, China.
Abrahamsson, P., & Oza, N. (Eds.). (2010). Lean Enterprise Software and Systems – LESS 2010. LNBIP 65.
journal publications
Abrahamsson, P., Babar, M.A., & Kruchten, P. (2010).
Agility and Architecture: Can They Coexist? IEEE Software,
27(2):16-22
Ceravolo, P., Damiani, E., Fugazza, C., Cappiello, C., Mulazzani, F., Russo, B., & Succi, G. (2010, to appear). Monitoring Business Processes in the Networked Enterprise. IEEE Transactions on Industrial Electronics.
Conboy, K., Coyle, S., Wang, X., & Pikkarainen, M. (2011).
People over Process: Key People Challenges in Agile Development. IEEE Software, 28(4): 48-57.
Di Cerbo, F., Dodero, G., & Forcheri, P. (2010). Ubiquitous Learning Perspectives in a Learning Management System. ID&A Interaction Design & Architecture(s), 9-10: 37–48.
Di Cerbo, F., Dodero, G., & Papaleo, L., Experiencing Personal Learning Environments and Networks using a 3D Space
Metaphor. ID&A Interaction Design & Architecture(s), 1112(11): 64-76.
Ikonen, M., & Abrahamsson, P. (2011). Operationalizing the
concept of success in software engineering projects. International Journal of Innovation in the Digital Economy.
Laanti, M., Salo, O., & Abrahamsson, P.(2011). Agile methods in Nokia. Information and Software Technology, 53(3):
276-290.
Lutteri, E., Russo, B., & Succi, G. (2011). Report of the 4th
international symposium on empirical software engineering
and measurement ESEM 2010. ACM SIGSOFT Software Engineering Notes, 36(2): 28-34.
O’hEocha, C., Conboy, K., & Wang, X. (2010). Using Focus
Groups in Studies of ISD Team Behaviour. Electronic Journal of
Business Research Methods, 8(2): 119-131.
Petrinja, E. & Succi, G. (2010). Trustworthiness of the FLOSS
development process. Computer Systems Science and Engineering, 25 (4): 297-304.
Pedrycz, W., Russo, B., & Succi, G. (2011). A model of job satisfaction for collaborative development processes. Journal of Systems and Software, 84: 739–752.
Rossi, B., Russo, B., & Succi, G. (2011). Path Dependent Stochastic Models to Detect Planned and Actual Technology Use:
a case study of OpenOffice. Information and Software Technology, 53: 1209–1226.
Suomalainen, T., Salo, O., Abrahamsson, P., & Similä, J. (2011). Software product roadmapping in a volatile business environment. Journal of Systems and Software, 84(6): 958–975.
Zivkovic, A., Gencel, C., & Abran, A. (2011). Guest editorial:
Advances in functional size measurement and effort estimation – Extended best papers. Information & Software Technology Journal, 53: 873.
Astromskis, S., & Janes, A. (2011, April). Towards a GQM
model for IS development process selection. In: Proceedings of the 16 th annual conference of MSc and PhD students.
Kaunas, Lithuania.
Buglione, L., Ferrucci, F., Gencel, C., Gravino, C., & Sarro,
F. (2010, November). Which COSMIC Base Functional Components are Significant in Estimating Web Application Development? – A Case Study. In: Proceedings of IWSM/DASMA
Metrikon/Mensura. Shaker Verlag. Stuttgart, Germany.
Cawley, O., Richardson, I., & Wang, X. (2011, May). Medical Device Software Development – A Perspective from a Lean
Manufacturing Plant. In: Proceedings of the 11th International
SPICE Conference Software Process Improvement and Capability Determination (SPICE 2011). Dublin, Ireland.
Cawley, O., Wang, X., & Richardson, I. (2010, October).
Lean/Agile Software Development Methodologies in Regulated Environments - State of the Art. In: Proceedings of the
International Conference on Lean Enterprise Software and
Systems (LESS 2010). Helsinki, Finland.
Coman, I., Sillitti, A., Succi, G. (2011, July). Ensuring Continuous Data Accuracy in AISEMA Systems. In: Proceedings of the
23rd International Conference on Software Engineering and
Knowledge Engineering (SEKE 2011). Miami Beach, FL, USA.
Corral, L., Sillitti, A., & Succi, G. (2011, June). Managing TETRA
Channel Communications in Android. In: Proceedings of the
International Conference on Enterprise and Information Systems (ICEIS 2011). Beijing, China.
Corral, L., Sillitti, A., Succi, G., Garibbo, A., & Ramella, P.
(2011, October). Evolution of Mobile Software Development
from Platform-Specific to Web-Based Multiplatform Paradigm. In: Proceedings of SPLASH – Onward!. Oregon, USA.
Corral, L., Sillitti, A., & Succi, G. (2011, October). Preparing Mobile Software Development Processes to Meet Mission-Critical Requirements. In: Proceedings of the 3rd International Conference on Mobile Computing, Applications, and Services (MOBICASE 2011), Seattle, USA.
Di Cerbo F., Dodero G., & Papaleo L. (2010, July). Integrating
a Web3D Interface into an E-learning Platform. In: Proceedings of the Web3D 2010. Los Angeles, California, USA.
Di Cerbo, F., Dodero, G., & Yng, T. L. B. (2011, July). Bridging the Gap between PLE and LMS. In: Proceedings of the 11th
IEEE International Conference on Advanced Learning Technologies (ICALT 2011). Athens, USA.
Di Cerbo F., Dodero G., & Forcheri P. (2010, July). DULP
Perspectives in a Learning Management System. In: Proceedings of the International Conference on Advances in Learning
Technologies (ICALT 2010).
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011, May).
Understanding how Novices are Integrated in a Team Analysing their Tool Usage. In: Proceedings of the International
Conference on Software and Systems Process (ICSSP 2011),
Honolulu, HI.
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011, September). Does Pair Programming Increase Developers’ Attention? In: Proceedings of the 8th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ACM SIGSOFT/FSE 2011). Szeged, Hungary.
Fronza, I., Sillitti, A., Succi, G., & Vlasenko, J. (2011, May).
Analysing the usage of tools in pair programming sessions.
In: Proceedings of the 12th International Conference on eXtreme Programming and Agile Processes in Software Engineering (XP 2011). Madrid, Spain.
Fronza, I., Phaphoom, N., & Vlasenko, J. (2011, August). Toward an Overall Understanding of Novices’ Work. In: Proceedings of the 5th IFIP TC2 Central and Eastern European Conference on Software Engineering Techniques (CEE-SET 2011).
Debrecen, Hungary.
Ikonen, M., Pirinen, E., Fagerholm, F., Kettunen, P., &
Abrahamsson, P. (2011). On the Impact of Kanban on Software Project Work: an empirical case study investigation, IEEE
Computer Society.
Ikonen, M., & Abrahamsson, P. (2010, June). Anticipating the
success of a business critical software project: A comparative
case study of waterfall and agile approaches. In: Proceedings
of the 1st International Conference on Software Business (ICSOB 2010). Jyväskylä, Finland.
Ikonen, M., Kettunen, P., Oza, N., & Abrahamsson, P. (2010,
September). Exploring the Sources of Waste in Kanban Software Development Projects. In: Proceedings of the 36 th EUROMICRO Conference on Software Engineering and Advanced
Applications (SEAA 36). Lille, France.
Jalali, S., Gencel, C., & Šmite, D. (2010, September). Trust Dynamics in Global Software Engineering. In: Proceedings of the 4th International Symposium on Empirical Software Engineering and Measurement (ESEM 2010). Bolzano, Italy.
Karhatsu, H., Ikonen, M., Kettunen, P., Fagerholm, F., & Abrahamsson, P. (2010, October). Building Blocks for Self-Organizing Software Development Teams: A Framework Model and Empirical Pilot Study. In: Proceedings of the 2nd International Conference on Software Technology and Engineering (ICSTE 2010). San Juan, Puerto Rico, USA.
Lutteri, E. & Russo, B. (2011, June). Characterization of Consultant Activities in ERP Projects – A Case Study. In: Proceedings of the International Conference on Enterprise and Information Systems (ICEIS 2011). Beijing, China.
dis
Marciuska, S., Sarcia, A., Sillitti, A., & Succi, G. (2011, September). Applying Domain Analysis Methods in Agile Development. ACM SIGSOFT Symposium on the Foundations of Software Engineering 2011 (FSE 2011). Szeged, Hungary.
Solis, C. & Wang, X. (2011, August). A Study of the Characteristics of Behaviour Driven Development. In: Proceedings of
the 14 th EuroMicro Conference on Software Engineering and
Advanced Applications (EuroMicro 2011). Oulu, Finland.
Melideo, M., Ruffatti, G., Oltolina, S., Dalle Carbonare,
D., Sillitti, A., Petrinja, E., Succi, G., Morasca, S., Taibi, D.,
Tosi, D., Lavazza, L., Canfora, G., & Zimeo, E. (2010, June).
Il centro di competenza italiano per l’Open Source e la nuova
rete globale di centri di competenza FLOSS. In: Proceedings of
the IV Conferenza Italiana sul Software Libero (ConfSL 2010).
Cagliari, Italy.
Steff, M., & Russo, B. (2011, September). Measuring Architectural Change for Defect Estimation and Localization. In:
Proceedings of the 5th International Symposium on Empirical
Software Engineering and Measurement (ESEM 2011). Banff,
Alberta, Canada.
Adomavicius, G., Baltrunas, L., Hussein, T., Ricci, F., & Tuzhilin, A. (2011). Proceedings of the 3 rd workshop on contextaware recommender systems (CARS 2011). In conjunction with
RecSys 2011, ACM.
Stern, S., & Gencel, C. (2010, November). Embedded Software Memory Size Estimation Using COSMIC Function Points
– A Case Study. In: Proceedings of IWSM/DASMA Metrikon/
Mensura. Shaker Verlag. Stuttgart, Germany.
Law, R., Fuchs, M., & Ricci, F. (Eds.). (2011, January 26-28).
Information and Communication Technologies in Tourism
2011. Proceedings of the International Conference in Innsbruck, Austria.
Touseef T., & Gencel, C. (2010, October). A Structured Goal
Based Measurement Framework Enabling Traceability and Prioritization. In: Proceedings of the 6th International Conference
on Emerging Technologies (ICET 2010). Islamabad, Pakistan.
Ricci, F., Rokach, L., Shapira, B., & Kantor, P.B. (Eds.). (2011). Recommender Systems Handbook. Springer Verlag. New York, USA.
Mulazzani, F., Rossi, B., Russo, B., & Steff, M. (2011, October). Open Source Software Research in Six Years of Conferences. In: Proceedings of the 7th International Conference on
Open Source Systems (OSS 2011). Salvador, Brazil.
O’hEocha, C., Conboy, K., & Wang, X. (2010, June). So You Think You’re Agile? In: Proceedings of the 11th International Conference in XP and Agile Processes in Systems Development (XP 2010). Trondheim, Norway.
O’hEocha, C., Conboy, K., & Wang, X. (2010, June). Using Focus Groups in Studies of ISD Team Behaviour. In: Proceedings of the 9th European Conference on Research Methodology for Business and Management Studies. Madrid, Spain.
Phaphoom, N., Sillitti, A., & Succi, G. (2011, May). Pair Programming and Software Defects – An Industrial Case Study.
In: Proceedings of the 12 th International Conference on Agile
Software Development (XP 2011). Madrid, Spain.
Petrinja, E., Sillitti, A., & Succi, G. (2010, September). Valutare la qualità del software Open Source. In: Proceedings of
the XLVIII Congresso Annuale AICA. L’Aquila, Italy.
Petrinja, E., Sillitti, A., & Succi, G. (2010, June). Comparing
Open-BRR, QSOS, and OMM Assessment Models. In: Proceedings of the 6th International Conference on Open Source Systems (OSS 2010). South Bend, Indiana, USA.
Petrinja, E., Sillitti, A., & Succi, G. (2011, October). Adoption of OSS products by the software industry: A Survey. In:
Proceedings of the 7th International Conference on Open
Source Systems (OSS 2011). Salvador, Brazil.
Radu, G., Cretulescu, D., Morariu, I., Lucian, N., Vintan,
I., & Coman, D. (2010, April). An Adaptive Meta-classifier for
Text Documents. In: Proceedings of the 16 th International
Multi-Conference on Complexity, Informatics and Cybernetics
(IMCIC 2010). Orlando, USA.
Reggio, G., Ricca, F., Di Cerbo, F., Dodero, G., & Scanniello,
G. (2011, October). A Precise Style for Business Process Modelling: Results from Two Controlled Experiments. In: Proceedings of the International Conference on Model Driven Engineering Languages and Systems (MODELS 2011). Wellington,
New Zealand.
Romano, L. (2011, August). Tool and Method for Evaluating
Students Working on an E-Learning Platform. In: Proceedings
of the 17th International Conference on Distributed Multimedia Systems (DMS 2011). Florence, Italy.
Rossi, B., Russo, B., & Succi, G. (2010, June). Download
Patterns and Releases in Open Source Software Projects: a
Perfect Symbiosis?. In: Proceedings of the 6th International
Conference on Open Source Systems (OSS 2010). South Bend,
Indiana, USA.
Rossi, B., Russo, B., & Succi, G. (2010, June). Modelling Failures Occurrences of Open Source Software with Reliability
Growth. In: Proceedings of the 6 th International Conference on
Open Source Systems (OSS 2010). South Bend, Indiana, USA.
Hissam, S.A., Russo, B., de Mendonça Neto, M.G., & Kon, F. (2011, October). Open Source Systems: Grounding Research. In: Proceedings of the 7th IFIP WG 2.13 International Conference (OSS 2011). Salvador, Brazil.
Sillitti, A., & El Ioini, N. (2011, July). Open Web Services
Testing. In: Proceedings of the 7th IEEE World Congress on
Services (SERVICES). Washington DC, USA.
Sillitti, A., Succi, G., & Vlasenko, J. (2011, May). Toward a better understanding of tool usage. In: Proceedings of the 33rd International Conference on Software Engineering (ICSE 2011). Honolulu, HI, USA.
edited books and journals
Zanker, M., Ricci, F., Jannach, D., & Terveen, L.G. (Eds.). (2010). Special issue on Measuring the Impact of Personalization and Recommendation on User Behaviour. International Journal of Human-Computer Studies, 68(8).
Wang, X., & Conboy, K. (2011, December). Comparing Apples with Oranges? The Perspectives of a Lean Online Community on the Differences between Agile and Lean. In: Proceedings of the 32nd International Conference on Information Systems (ICIS 2011). Shanghai, China.
journal publications
Wang, X. (2011, August). The Combination of Agile and Lean
in Software Development: An Experience Report Analysis. In:
Proceedings of the Agile2011 Conference, Salt Lake City, Utah.
Adomavicius, G., Mobasher, B., Ricci, F., & Tuzhilin, A.
(2011). Context-Aware Recommender Systems. AI Magazine.
32(3): 67-80.
Wang, X., Conboy, K., & Lane, M. (2011, June). From agile to
lean: the perspectives of the two agile online communities of
interest. In: Proceedings of the 19 th European Conference on
Information Systems – ICT and Sustainable Service Development (ECIS 2011). Helsinki, Finland.
Akinde, M., Böhlen, M.H., Chatziantoniou, D., & Gamper J.
(2010). Theta-constrained multi-dimensional aggregation.
Information Systems, 36(2): 341-358.
workshop publications
Abrahamsson, P., Fronza, I., & Vlasenko, J. (2011, May).
Analyzing Tool Usage to Understand to What Extent Experts
Change their Activities when Mentoring. In: Proceedings of
the 2nd International Workshop on Emerging Trends in Software Metrics (WeTSOM 2011). Honolulu, HI.
Jermakovics, A., Sillitti, A., & Succi, G. (2011, May). Mining and Visualizing Developer Networks from Version Control
Systems. In: Proceedings of the 4 th International Workshop
on Cooperative and Human Aspects of Software Engineering
(CHASE 2011). Honolulu, HI, USA.
Oza, V., Kettunen, P., Abrahamsson, P., & Münch, J. (2011,
November). An empirical study on high-performing software
teams. In: Proceedings of the 1st Software Technology Exchange Workshop. Stockholm, Sweden.
Rossi, B.,Russo, B., & Succi, G. (2010, March). The Mass Interest in eGovernment. Transforming Government Workshop.
London, UK.
Abrahamsson, P., Kettunen, P., & Fagerholm, F. (2010,
June). The Set-Up of a Valuable Software Engineering Research Infrastructure of the 2010s. In: Proceedings of the 1st
International Workshop on Valuable Software Products (Vasop 2010), Limerick, Ireland.
other
Abrahamsson, P. (2010). Unique infrastructure investment:
Introducing the software factory concept. Software Factory
Magazine, 1(1):2-3, ISSN: 1798-8845.
Abrahamsson, P. (2010). Striving for multidisciplinary research. Software Factory Magazine, 1(1):4, ISSN: 1798-8845.
Abrahamsson, P. & Alahuhta, P. (2010). In the Factory pipeline: Mobilizing China, Software Factory Magazine, 1(1):13,
ISSN: 1798-8845.
Abrahamsson, P. & Oza, N. (2010). Software Factory people
bridge agility and innovation together. Software Factory
Magazine, 1(1):17, ISSN: 1798-8845.
Augsten, N., Barbosa, D., Böhlen, M.H. & Palpanas, T.
(2011). Efficient top-k approximate subtree matching in small
memory. IEEE Trans. Knowl. Data Eng., 23(8): 1123-1137.
Augsten, N., Böhlen, M.H. & Gamper, J. (2011). The pq-gram
distance between ordered labeled trees. ACM Transactions on
Database Systems, 35(1):1-36 .
Augsten, N., Böhlen, M.H., Dyreson, C., & Gamper, J. (Appears in 2011). Windowed pq-grams for approximate joins of
data-centric XML. The VLDB Journal.
Baltrunas, L., Ludwig, L., Peer, S., Ricci, F. (2011). Context
relevance assessment and exploitation in mobile recommender systems. Personal and Ubiquitous Computing.
Gordevicius, J., Gamper, J. & Böhlen, M.H. (Appears in
2011). Parsimonious temporal aggregation. The VLDB Journal.
Lorenzi, F., Baldo, G., Costa, R., Abel, M., Bazzan, A., &
Ricci, F. (2010). A Trust Model for Multiagent Recommendations. Journal of Emerging Technologies in Web Intelligence.
2(4): 310-318.
Lorenzi, F., Bazzan, A.L.C., Abel, M., & Ricci, F. (2011). Improving recommendations through an assumption-based
multiagent approach: An application in the tourism domain.
Expert Systems with Applications. 38: 14703–14714.
Mahmood, T., Ricci, F., & Venturini, A. (2010). Improving
Recommendation Effectiveness by Adapting the Dialogue
Strategy in Online Travel Planning. International Journal of
Information Technology and Tourism. 11 (4): 285-302. BEST
PAPER of vol.11 award.
Ricci, F. (2011). Mobile Recommender Systems. International
Journal of Information Technology and Tourism. 12(3): 205-231.
Timko, I., Böhlen, M.H., & Gamper, J. (2011). Sequenced spatiotemporal aggregation for coarse query granularities. The
VLDB Journal, 20(5): 721-741.
Trabelsi, W., Wilson, N., Bridge, D., & Ricci, F. (2011). Preference Dominance Reasoning for Conversational Recommender Systems: A Comparison Between a Comparative Preferences and a Sum of Weights Approach. International Journal on
Artificial Intelligence Tools. 20(4): 591-616.
Zanker, M., Ricci, F., Jannach, D., & Terveen, L.G. (2010).
Measuring the impact of personalization and recommendation on user behaviour. Int. J. Hum.-Comput. Stud. 68(8):
469-471.
book chapters
Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to Recommender Systems Handbook. In: Ricci, F., Rokach, L., Shapira,
B., Kantor, P.B. (Eds.). Recommender Systems Handbook. 1-35.
Baltrunas, L., & Ricci, F. (2010). Context-Dependent Recommendations with Items Splitting. Proceedings of the First
Italian Information Retrieval Workshop. IIR 2010. Padua, Italy,
January 27-28, 2010. 71-75.
peer reviewed conference publications
Augsten, N., Barbosa, D., Böhlen, M.H., & Palpanas, T. (2010,
March 1-6). TASM: Top-k approximate subtree matching. In: Proceedings of the 26th International Conference on Data Engineering (ICDE-10). Long Beach, California. 353-364.
Baltrunas, L., Kaminskas, M., Ludwig, B., Moling, O., Ricci,
F., Aydin, A., Lueke, K-H., & Schwaiger, R. (2011). InCarMusic: Context-Aware Music Recommendations in a Car. Proceedings of the 12 th International Conference on Electronic
Commerce and Web Technologies – EC-Web 2011. Toulouse,
France. August 29 - September 2, 2011: 89-100.
Baltrunas, L., Ludwig, B., & Ricci, F. (2011). Matrix factorization techniques for context aware recommendation. Proceedings of the 2011 ACM Conference on Recommender Systems.
Chicago, IL, USA. October 23-27, 2011. ACM 2011. 301-304.
Guzzi, F., Ricci, F., & Burke, R.D. (2011). Interactive multi-party critiquing for group recommendation. Proceedings of the
2011 ACM Conference on Recommender Systems. Chicago, IL,
USA, October 23-27, 2011. ACM 2011. 265-268.
Kacimi, M., & Gamper, J. (2011). Diversifying search results of
controversial queries. In: Proceedings of the 20 th ACM Conference on Information and Knowledge Management (CIKM-11).
Glasgow, UK. October 24-28, 2011, 93-98.
Kaminskas, M., & Ricci, F. (2011). Location-Adapted Music
Recommendation Using Tags. Proceedings of the 19th International Conference on User Modeling, Adaptation and Personalization. Girona, Spain. 11 – 15 July. 2011: 183-194.
Karatzoglou, A., Amatriain, X., Baltrunas, L., & Oliver,
N. (2010). Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering.
Proceedings of the Fourth ACM Conference on Recommender
Systems. Barcelona, Spain. September 26 – 30, 2010. ACM,
New York, NY. 79-86.
Kasperovics, R., Böhlen, M.H., & Gamper, J. (2010). On the
efficient construction of multislices from recurrences. In: Proceedings of the 22nd International Conference on Scientific
and Statistical Database Management (SSDBM-10). Heidelberg, Germany. June 30 – July 2, 2010, 42-59.
Baltrunas, L., Ludwig, B., & Ricci, F. (2011). Context relevance assessment for recommender systems. Intelligent User Interfaces. Palo Alto, CA. 13-16 February 2011: 287-290.
Kazimianec, M., & Augsten, N. (2010). Exact and Efficient
Proximity Graph Computation. In: Proceedings of the 14 th
East European Conference on Advances in Databases and Information Systems (ADBIS-10). Novi Sad, Serbia. September
20–24, 2010, 289-304.
Baltrunas, L., Ludwig, B., Peer, S., & Ricci, F. (2011). Context-Aware Places of Interest Recommendations for Mobile Users. Proceedings of the 14th International Conference on Human-Computer Interaction. Hilton Orlando Bonnet Creek, Orlando, Florida, USA. 9-14 July 2011: 531-540.
Kazimianec, M., & Augsten, N. (2011). PG-join: proximity
graph based string similarity joins. In: Proceedings of the
23rd International Conference on Scientific and Statistical
Database Management (SSDBM-11). Portland, OR, USA. July
20 – 22, 2011, 274-292.
Baltrunas, L., Makcinskas, T., & Ricci, F. (2010). Group Recommendations with Rank Aggregation and Collaborative Filtering. Proceedings of the Fourth ACM Conference on Recommender Systems. Barcelona, Spain, September 26 - 30, 2010.
ACM, New York, NY. 119-126.
Kazimianec, M. & Augsten, N. (2011). PG-skip: proximity
graph based clustering of long strings. In: Proceedings of
the 16th International Conference on Database Systems for
Advanced Applications (DASFAA-11). Hong Kong, China. April
22-25, 2011, 31–46.
Braunhofer, M., Kaminskas, M., & Ricci, F. (2011). Recommending music for places of interest in a mobile travel guide.
Proceedings of the 2011 ACM Conference on Recommender Systems. Chicago, IL, USA. October 23-27, 2011. ACM 2011. 253-256.
Lamber, P., Ludwig, B., Ricci, F., Zini, F., & Mitterer, M.
(2011). Message-Based Patient Guidance in Day-Hospital.
Proceedings of the 12 th IEEE International Conference on
Mobile Data Management. 6–9 June, 2011. Luleå, Sweden:
162-167.
Cadonna, B., Gamper, J. & Böhlen, M.H. (2011). Sequenced
event set pattern matching. In: Proceedings of the 14th International Conference on Extending Database Technology
(EDBT-11). Uppsala, Sweden. March 22-24, 2011, 33-44.
Elahi, M. (2010). Context-aware intelligent recommender
system. Proceedings of the 2010 International Conference on
Intelligent User Interfaces. February 7-10, 2010, Hong Kong,
China. ACM 2010. 407-408.
Elahi, M. (2011). Adaptive Active Learning in Recommender
Systems. Proceedings of the 19 th International Conference
on User Modeling, Adaptation and Personalization. Girona,
Spain, 11-15 July, 2011: 414-417.
Elahi, M., Repsys, V., & Ricci, F. (2011). Rating Elicitation Strategies for Collaborative Filtering. Proceedings of the 12th International Conference on Electronic Commerce and Web Technologies – EC-Web 2011. Toulouse, France. August 29–September 2, 2011: 160-171.
Gamper, J., Böhlen, M.H., Cometti, W., & Innerebner, M.
(2011). Defining isochrones in multimodal spatial networks.
In Proceedings of the 20 th ACM Conference on Information
and Knowledge Management (CIKM-11). Glasgow, UK. October 24–28, 2011, 2381-2384.
Gordevicius, J., Estrada, F.J., Lee, H.C., Andritsos, P., & Gamper, J. (2010). Ranking of evolving stories through meta-aggregation. In: Proceedings of the 19th ACM Conference on Information and Knowledge Management (CIKM-10). Toronto, Canada. October 26–30, 2010. 1909-1912.
Gufler, B., Augsten, N., Reiser, A. & Kemper, A. (2011).
Handling data skew in MapReduce. In: Proceedings of the 1 st
International Conference on Cloud Computing and Services
Science (CLOSER-11). Noordwijkerhout, Netherlands. 7–9
May, 2011, 574-583
Lorenzi, F., Ricci, F., Abel, M., & Bazzan, A.L.C. (2010).
Assumption-Based Reasoning for Multiagent Case-Based
Recommender Systems. Proceedings of the Twenty-Third
International Florida Artificial Intelligence Research Society
Conference. May 19–21, 2010. Daytona Beach, Florida. AAAI
Press 2010.
Marciuska, S., & Gamper, J. (2010). Determining objects
within isochrones in spatial network databases. In Proceedings of the 14th East European Conference on Advances in
Databases and Information Systems (ADBIS-10). Novi Sad,
Serbia. September 20–24, 2010, 392-405.
Schneider, S., Ricci, F., Venturini, A., & Not, E. (2010). Usability Guidelines for WAP-based Travel Planning Tools. Information and Communication Technologies in Tourism 2010. Springer. Wien New York. 125-136.
workshop publications
Baltrunas, L., Kaminskas, M., Ricci, F., Rokach, L., Shapira,
B., & Luke, K-H. (2010). Best Usage Context Prediction for Music Tracks. Proceedings of the 2nd Workshop on Context Aware
Recommender Systems. September 26, 2010. Barcelona, Spain.
Baltrunas, L., Ludwig, B., Peer, S., & Ricci, F. (2011, July 11).
Context-Aware Places of Interest Recommendations and
Explanations. Proceedings of the 1st Workshop on Decision
Making and Recommendation Acceptance Issues in Recommender Systems (DEMRA 2011). In conjunction with UMAP
2011. Girona, Spain.
Elahi, M., Ricci, F., & Repsys, V. (2011). System-Wide Effectiveness of Active Learning in Collaborative Filtering. Proceedings of the International Workshop on Social Web Mining.
Co-located with IJCAI. 18 July 2011. Barcelona, Spain.
Felfernig, A., Schubert, M., Mandl, M., Ricci, F., & Maalej, W. (2010). Recommendation and Decision Technologies For Requirements Engineering. Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering, RSSE 2010. May 4, 2010. Co-located with ICSE 2010. Cape Town International Convention Centre (CTICC). Cape Town, South Africa.
Zini, F., & Ricci, F. (2011, July 11). Guiding Patients in the
Hospital. Proceedings of the 2nd International Workshop on
User Modeling and Adaptation for Daily Routines (UMADR). In
conjunction with UMAP 2011. Girona, Spain.
krdb
journal publications
Abiteboul, S., Chan, T.–H. H., Kharlamov, E., Nutt, W., &
Senellart, P. (2011). Capturing Continuous Data and Answering Aggregate Queries in Probabilistic XML. ACM Transactions
on Database Systems 36(4).
Alberti, M., Cattafi M., Chesani, F., Gavanelli, M, Lamma,
E., Mello, P., Montali, M., & Torroni, P. (to appear). A Computational Logic Application Framework for Service Discovery and Contracting. International Journal of Web Services
Research (JWSR).
Artale, A., Calvanese, D., Queralt, A., & Teniente, E. (to appear). OCL-Lite: Finite reasoning on UML/OCL conceptual schemas. Data & Knowledge Engineering, 2011. Elsevier Science.
Calvanese, D., De Giacomo, G., Lenzerini, M., & Rosati, R. (2010). View-based Query Answering in Description Logics: Semantics and Algorithms. J. of Computer and System Sciences.
Calvanese, D., De Giacomo, G., Lembo, D., Lenzerini, M.,
Poggi, A., Rodriguez-Muro, M., Rosati, R., Ruzzi, M., &
Savo, D. F. (2011). The MASTRO system for ontology-based
data access. Semantic Web Journal, 2(1):43–53.
Calvanese, D., De Giacomo, G., Lenzerini, M., & Rosati, R.
(2011). View-based query answering in description logics: Semantics and complexity. J. of Computer and System Sciences.
Taneva, B., Kacimi, M., & Weikum, G. (2010). Gathering and ranking photos of named entities with high precision, high recall, and diversity. In: Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM-10). New York, NY, USA. February 4–6, 2010, 431-440.
Chesani, F., Mello, P., Montali, M., & Torroni, P. (in press).
A Logic-Based, Reactive Calculus of Events. Special Issue of
Fundamenta Informaticae, 104 1–27. IOS Press, Amsterdam.
Taneva, B., Kacimi, M., & Weikum, G. (2011). Finding images of difficult entities in the long tail. In: Proceedings of the 20th ACM Conference on Information and Knowledge Management (CIKM-11). Glasgow, UK. October 24–28, 2011. 189-194.
Chesani, F., Mello, P., Montali, M., Storari, S., & Torroni,
P. (2010). On the Integration of Declarative Choreographies
and Commitment-based Agent Societies into the SCIFF Logic
Programming Framework. Multiagent and Grid Systems, Special Issue on Agents, Web Services and Ontologies: Integrated
Methodologies, 6(2) 165–190. IOS Press, Amsterdam.
Trabelsi, W., Wilson, N., Bridge, D., & Ricci, F. (2010). Comparing Approaches to Preference Dominance for Conversational Recommenders. Proceedings of the 22nd International Conference on Tools with Artificial Intelligence. Arras, France. October 27–29, 2010: 113-120. BEST PAPER AWARD.
Fillottrani, P. R., Franconi, E., & Tessaris, S. (2011). The
ICOM 3.0 Intelligent Conceptual Modelling tool and methodology. Semantic Web Journal.
Fionda, V. (2011). Biological Network Analysis and Comparison: Mining new Biological Knowledge. Central European
Journal of Computer Science. Vol. 1, n. 2, pp. 32-40.
Fionda, V., Palopoli, L. (2011). Biological Network Querying
Techniques: Analysis and Comparison. Journal of Computational Biology. Vol. 18, n. 4, pp. 595-625.
Arfé, B., Di Mascio, T., & Gennari, R. (2010). Representations
of Contemporaneous Events of a Story for Novice Readers. In:
Post-proc. of the MBR 2009 conference. Springer.
Keet, C. M. (2011). The Granular Perspective as Semantically Enriched Granulation Hierarchy. Int. Journal of Granular Computing, Rough Sets and Intelligent Systems, 2(1): 51–70.
Keet, C. M. (2010). Dependencies Between Ontology Design Parameters. Int. Journal of Metadata, Semantics and Ontologies, 5(4): 265–284.
Montali, M., Pesic, M., van der Aalst, W. M. P., Chesani, F.,
Mello, P., & Storari, S. (2010). Declarative Specification and
Verification of Service Choreographies. ACM Transactions on
the Web, Vol. 4(1). ACM, New York.
Montali, M., Torroni, P., Alberti, M., Chesani, F., Lamma,
E., & Mello, P. (2010). Abductive Logic Programming as an
Effective Technology for the Static Verification of Declarative
Business Processes. Special Issue of Fundamenta Informaticae, 102 (3-4) 325–361. IOS Press, Amsterdam.
Montali, M., Torroni, P., Zannone, N., Mello, P., & Bryl, V.
(2010). Engineering and Verifying Agent-Oriented Requirements Augmented by Business Constraints with B-Tropos.
Journal of Autonomous Agents and Multi-Agent Systems, pp.
1–31. Springer Verlag. Berlin Heidelberg.
Pirrò, G., Mastroianni, C., & Talia, D. (2010). A Framework
for Distributed Knowledge Management: Design and Implementation. Future Generation Computer Systems, vol. 26, n.
1, pp. 38-49.
Pirrò, G., & D. Talia (2010). UFOme: An Ontology Mapping
System with Strategy Prediction Capabilities. Data & Knowledge Engineering, vol. 69, n. 5, pp. 444-471.
Queralt, A., Teniente, E., Artale, A., & Calvanese, D. (2011).
OCL-Lite: Finite reasoning on UML/OCL conceptual schemas.
Data and Knowledge Engineering.
Thorne, C., & Calvanese, D. (2011). Tractability and intractability of controlled languages for data access. Studia Logica.
book chapters
Calvanese, D., De Giacomo, G., Lenzerini, M., & Rosati, R.
(2011). Actions and programs over description logic knowledge bases: A functional approach. In: Lakemeyer, G., & McIlraith, S. A. (eds.). Knowing, Reasoning, and Acting: Essays in
Honour of Hector Levesque. College Publications.
Colombo, G., & Mosca, A. (2011). Formazione e computazione: ambienti artificiali di apprendimento. In: Pieri, M.,
Diamantini, D. (Eds.). Ubiquitous learning, Guerini e Associati
Editore. Milano.
Di Mascio, T. & Gennari, R. (2010). Integrating Usability Engineering for Designing the Web Experience: Methodologies
and Principles. In: A Usability Guide to Intelligent Web Tools
for the Literacy of Deaf People. IGI Global.
Keet, C. M. (2010). A Top-level Categorization of Types of
Granularity. In: Novel Developments in Granular Computing,
pp. 81–117. IGI Global.
Montali, M. (2010). Specification and Verification of Declarative Open Interaction Models: a Logic-Based Approach. Vol. 56 of Lecture Notes in Business Information Processing. Springer Verlag. Berlin Heidelberg.
peer reviewed conference publications
Abiteboul, S., Chan, T.-H. H., Kharlamov, E., Nutt, W. &
Senellart, P. (2010). Aggregate Queries for Discrete and Continuous Probabilistic XML. In: Proc. 13 th International Conference on Database Theory.
Artale, A., Kontchakov, R., Ryzhikov, V., & Zakharyaschev,
M. (2010). Past and Future of DL-Lite. In: Proc. of the 24 th AAAI
Conf. on Artificial Intelligence (AAAI-10).
Artale, A., Kontchakov, R., Ryzhikov, V., & Zakharyaschev, M. (2011, October). Tailoring temporal description logics for reasoning over temporal conceptual models. In: Proc. of the 8th Int. Symposium on Frontiers of Combining Systems (FroCoS’11), Saarbrücken, Germany. Lecture Notes in CS, Springer.
Artale, A., Kontchakov, R., Ryzhikov, V. & Zakharyaschev,
M. (2010). Temporal Conceptual Modelling with DL-Lite. In:
Proc. of the 23rd Int. Workshop on Description Logics (DL-10).
Atencia, M., Euzenat, J., Pirrò, G., & Rousset, M. C. (to appear). Alignment-based Trust for Resource Finding in Semantic P2P Networks. Proceedings of the 10th International Semantic Web Conference (ISWC), Bonn, Germany, LNCS, Springer Verlag.
Ten Cate, B., Franconi, E., & Seylan, I. (2011). Beth definability in expressive description logics. In: IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial
Intelligence, pages 1099–1106.
Calvanese, D., Ortiz, M., & Simkus, M. (2011). Containment
of regular path queries under description logic constraints.
In Proc. of the 22nd Int. Joint Conf. on Artificial Intelligence
(IJCAI 2011).
Carlini, M., Di Mascio, T., Gennari, R. (2010). Reading and
Playing: a Multimedia Tutoring Tool for Children with Text
Comprehension Problems. In: Proc. of HCI 2010, Iadis, Rome.
Chesani, F., Mello, P., Montali, M., & Torroni, P. (2010).
Monitoring Time-Aware Social Commitments with Reactive
Event Calculus. In: Trappl, R. (ed.). Proceedings of the 20 th
European Meeting on Cybernetics and Systems Research,
7th International Symposium «From Agent Theory to Agent
Implementation» (AT2AI-7). Vienna (Austria). April 6-7, 2010,
pp. 447-452. Austrian Society for Cybernetics Studies, Vienna.
Best Paper Award.
Di Mascio, T., Gennari, R., & Vittorini, P. (2010). The Design
of An Intelligent Adaptive Learning System for Poor Comprehenders. In: Proc. of MCES 2010, 6 pages, AAAI 2010 Fall Symposium, USA.
Di Mascio, T., Gennari, R., & Vittorini, P. (2011). The Design
of the TERENCE Adaptive Learning System. Full paper in: Proc.
of ED-MEDIA 2011, Lisboa.
Fionda, V., Gutierrez, C., & Pirrò, G. Semantically-driven
recursive navigation and retrieval of data sources in the Web
of Data. Proceedings of the 10 th International Semantic Web
Conference (ISWC) Posters and Demos, Bonn, Germany, LNCS,
Springer Verlag.
Fionda, V., & Pirrò, G. (2011). BioTRON: A Biological Workflow Management System. Proceedings of the 26 th Symposium On Applied Computing (SAC 2011), ACM Press, pp. 77-82.
Bagheri-Hariri, B., Calvanese, D., De Giacomo, G., De Masellis, R., & Felli, P. (2011). Foundations of relational artifacts verification. In: Proc. of 9 th Int. Conference on Business
Process Management (BPM 2011), volume 6896 of Lecture
Notes in Computer Science, pages 379–395. Springer.
Franconi, E., & Toman, D. (2011). Fixpoints in temporal
description logics. In: IJCAI 2011, Proceedings of the 22nd
International Joint Conference on Artificial Intelligence, pages
875–880.
Bernardi, R., & Kirschner, M. (2010). From artificial questions to real user interaction logs: Real challenges for Interactive Question Answering systems. In: Proc. of Workshop on
Web Logs and Question Answering (WLQA’10).
Keet, C. M. (2011). The Use of Foundational Ontologies in Ontology Development: an Empirical Assessment. In: Proc. of the 8th Extended Semantic Web Conference (ESWC 2011), pp. 321–335, Springer.
Bernardi, R., Kirschner, M., & Ratkovic, Z. (2010). Context
Fusion: The Role of Discourse Structure and Centering Theory.
In: Proceedings of LREC 2010.
Keet, C.M. (2010). Ontology Engineering with Rough Concepts and Instances. In: Proc. of the 17th Int. Conference on Knowledge Engineering and Knowledge Management (EKAW 2010), pp. 507–517, Springer.
Botoeva, E., Artale, A., & Calvanese, D. (2010). Query Rewriting in DL-Lite. In: Proc. of the 23rd Int. Workshop on Description Logic (DL 2010).
Botoeva, Elena, Calvanese, D., & Rodriguez-Muro, M.
(2010). Expressive approximations in DL-Lite ontologies. In:
Proc. of the 14 th Int. Conf. on Artificial Intelligence: Methodology, Systems, Applications (AIMSA 2010).
Bragaglia, S., Chesani, F., Mello, P., Montali, M., Sottara
D., & Fry, E. (2011, to appear). Event Condition Expectation
(ECE-) Rules for Monitoring Observable Systems. 5th International RuleML Symposium on Rules (RuleML2011@BRF),
LNCS. Springer Verlag. Berlin.
Calvanese, D., Carbotta, D., & Ortiz, M. (2011). A practical automata-based technique for reasoning in expressive description logics. In: Proc. of the 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI 2011).
Calvanese, D., De Giacomo, G., & Vardi, M.Y. (2010). Node Selection Query Languages for Trees. In: Proc. of the 24th AAAI Conf. on Artificial Intelligence (AAAI 2010).
Calvanese, D., De Giacomo, G., Lenzerini, M., & Vardi, M. Y. (2011). Simplifying schema mappings. In: Proc. of the 14th Int. Conf. on Database Theory (ICDT 2011), pages 114–125.
Calvanese, D., Keet, C.M., Nutt, W., & Stefanoni, G. (2010). Web-based Graphical Querying of Databases through an Ontology: the WONDER System. In: Proc. of the 25th ACM Symposium on Applied Computing (SAC 2010), Semantic Web and Applications Track, pp. 1388–1395. ACM.
Calvanese, D., Kharlamov, E., Nutt, W., & Zheleznyakov, D. (2010). Updating ABoxes in DL-Lite. In: Proc. of the 4th Alberto Mendelzon Int. Workshop on Foundations of Data Management (AMW 2010).
Kirschner, M., & Bernardi, R. (2010). Towards an Empirically Motivated Typology of Follow-Up Questions: The Role of Dialogue Context. In: Proc. of SIGdial’10.
Maggi, F. M., Montali, M., Westergaard, M., & Van Der
Aalst, W.M.P. (2011, to appear). Monitoring Business Constraints with Linear Temporal Logic: An Approach Based on
Colored Automata. 9 th International Conference on Business
Process Management (BPM 2011), LNCS. Springer Verlag,
Berlin.
Maggi, Fabrizio M., Westergaard, M., Montali, M., & Van
Der Aalst, W.M.P. (2011, to appear). Runtime Verification
of LTL-Based Declarative Process Models. In Proceedings of
the 2nd International Conference on Runtime Verification (RV
2011), LNCS. Springer Verlag, Berlin.
Pirrò, G., Euzenat, J. (2010, November). A Feature and Information Theoretic Framework for Semantic Similarity and
Relatedness. Proceedings of the 9 th International Semantic
Web Conference (ISWC), Shanghai, China, LNCS, vol. 6496,
pp. 615-630, Springer Verlag.
Pirrò, G., Euzenat, J. (2010, October). A Semantic Similarity
Framework Exploiting Multiple Parts-of-Speech. Proceedings
of 9 th International Conference on Ontologies, DataBases,
and Applications of Semantics for Large Scale Information
Systems (ODBASE), Heraklion, Greece, LNCS, vol. 6427, pp.
1118–1125, Springer Verlag.
Pirrò, G., Trunfio, P., Talia, D., Missier, P., & Goble, C.
(2010, May). ERGOT: A Semantic-based System for Service
Discovery in Distributed Infrastructures. Proc. of the 10 th
IEEE/ACM International Symposium on Cluster, Cloud and Grid
Computing (CCGrid 2010). Melbourne, Australia, pp. 263–272,
IEEE Computer Society Press.
Razniewski, S., & Nutt, W. (2011). Completeness of Queries
over Incomplete Databases. Proc. VLDB Endowment 4(11), 749–760.
Rodriguez-Muro, M., & Calvanese, D. (2011). Semantic
index: Scalable query answering without forward chaining or
exponential rewritings. In: Proc. of the 10 th Int. Semantic Web
Conf. (ISWC 2011).
Savo, D. F., Lembo, D., Lenzerini, M., Poggi, A., Rodriguez-Muro, M., Romagnoli, V., Ruzzi, M., & Stella, G. (2010, June). Experimenting Ontology-based Data Access with MASTRO (Extended Abstract). In: Proc. of the 18th Italian Symposium on Advanced Database Systems (SEBD’10). Rimini, Italy.
Stefanoni, G., Keet, C.M., Nutt, W., Rodriguez-Muro, M., &
Calvanese, D. (2010, to appear). Web-based graphical querying
of databases through an ontology: the Wonder system. In: Proc.
of the 25th ACM Symposium on Applied Computing (SAC 2010).
Thorne, C., & Calvanese, D. (2010). The Data Complexity of the Syllogistic Fragments of English. In: Proc. of the 2009 Amsterdam Colloquium (AC 2009). Springer.
Di Mascio, T., Gennari, R., & Vittorini, P. (2011). Involving Learners and Domain Experts in the Analysis of the Context of Use for the TERENCE Games. Full paper in the DEG 2011 workshop at IS-EUD 2011. Bari.
Rodriguez-Muro, M., & Calvanese, D. (2011). Dependencies
to optimize ontology based data access. In: Proc. of the 24 th
Int. Workshop on Description Logics (DL 2011), volume 745 of
CEUR Electronic Workshop Proceedings, http://ceur-ws.org/.
Franconi, E., Guagliardo, P., & Trevisan, M. (2010). An intelligent query interface based on ontology navigation. In: Proc.
of the Workshop on Visual Interfaces to the Social and Semantic Web (VISSW 2010). Hong Kong.
Rodriguez-Muro, M., & Stella, G. (2010). MASTRO at Work:
Experiences on Ontology-based Data Access. In: Proc. of the
2010 Description Logic Workshop (DL2010).
Franconi, E., Meyer, T., & Varzinczak, I. (2010). Semantic Diff as the Basis for Knowledge Base Versioning. In: Proc. of the 13th International Workshop on Non-Monotonic Reasoning (NMR-2010). Toronto, Canada.
Gennari, R., Roubickova, A., & Roveri, M. (2011). A Critical Overview and Open Questions for Temporal Planning with Uncertainty. Short position paper in the VVPS workshop at ICAPS 2011. Freiburg.
workshop publications
Gennari, R. (2011, to be published). Book review of Mathematical Logic, by Wei Li Fu. Submitted in August 31, 2011.
Accepted for publication in Theory and Practice of Logic Programming, Cambridge.
Arenas, M., Botoeva, E., & Calvanese, D. (2011). Knowledge
base exchange. In: Proc. of the 24 th Int. Workshop on Description Logics (DL 2011), volume 745 of CEUR Electronic Workshop Proceedings, http://ceur-ws.org/.
Keet, C.M. (2010). On the Feasibility of Description Logic Knowledge Bases with Rough Concepts and Vague Instances. In: Proc. of the 23rd Int. Workshop on Description Logics (DL 2010), pages 314–324.
Artale, A., Calvanese, D., & Ibanez-Garcia, A. (2010). Checking full
satisfiability of conceptual models. In: Proc. of the 23rd Int. Workshop on Description Logics (DL 2010), volume 573 of CEUR Electronic Workshop Proceedings, http://ceur-ws.org/, pages 55–66.
Keet, C. M., & Artale, A. (2010). A Basic Characterization of Relation Migration. In: Proc. of the 6th Int. Workshop on Fact-Oriented Modeling (ORM 2010), OTM Workshops, pp. 484–493. Springer.
Bagheri-Hariri, B., Calvanese, D. , De Giacomo, G., & De
Masellis, R. (2011). Verification of conjunctive-query based
semantic artifacts. In: Proc. of the 24 th Int. Workshop on
Description Logics (DL 2011), volume 745 of CEUR Electronic
Workshop Proceedings, http://ceur-ws.org/.
Kharlamov, E., Nutt, W., & Senellart, P. (2010). Updating
probabilistic XML. In: Proc. of the 2010 EDBT/ICDT Workshops, Session: Updates in XML. ACM.
Botoeva, E., Artale, A., & Calvanese, D. (2010). Query rewriting in DL-Lite^{HN}_{horn}. In: Proc. of the 23rd Int. Workshop on Description Logics (DL 2010), volume 573 of CEUR Electronic Workshop Proceedings, http://ceur-ws.org/, pages 267–278.
Lubyte, L., & Tessaris, S. (2010). Supporting the Development of Data Wrapping Ontologies (Extended Abstract). In:
Workshop Notes of the Int. Workshop on Description Logics
(DL-10). Waterloo, Canada.
Calvanese, D., Kharlamov, E., Nutt, W., & Zheleznyakov, D.
(2010). Updating ABoxes in DL-Lite. In: Proc. of the 4th Alberto
Mendelzon Int. Workshop on Foundations of Data Management (AMW 2010), volume 619 of CEUR Electronic Workshop
Proceedings, http://ceur-ws.org/, pages 3.1–3.12.
Calvanese, D., Ortiz, M., Simkus, M., & Stefanoni, G. (2011).
The complexity of conjunctive query abduction in DL-Lite.
In: Proc. of the 24th Int. Workshop on Description Logics (DL
2011), volume 745 of CEUR Electronic Workshop Proceedings,
http://ceur-ws.org/.
Rodriguez-Muro, M., & Calvanese, D. (2011). Dependencies:
Making ontology based data access work in practice. In: Proc.
of the 5th Alberto Mendelzon Int. Workshop on Foundations of
Data Management (AMW 2011), volume 749 of CEUR Electronic
Workshop Proceedings, http://ceur-ws.org/.
Savo, D. F., Lembo, D., Lenzerini, M., Poggi, A., Rodriguez-Muro, M., Romagnoli, V., Ruzzi, M., & Stella, G. (2010, May). MASTRO at Work: Experiences on Ontology-based Data Access. In: Proc. of the 2010 Description Logic Workshop (DL 2010). Waterloo, Canada.
Thorne, C., & Calvanese, D. (2010). Controlled English Ontology-Based Data Access. In: Proc. of the 2009 Workshop on
Controlled Natural Language (CNL 2009). Springer.
Torroni, P., Chesani, F., Mello, P., & Montali, M. (2010).
Social Commitments in Time: Satisfied or Compensated.
In: M. Baldoni, J. Bentahar, M. Birna van Riemsdijk, & J. Lloyd
(eds.), Post-proceedings of the 7th International Workshop on
Declarative Agent Languages and Technologies (DALT 2009).
Budapest (Hungary). May 11, 2009, Selected Revised and Invited Papers. Vol. 5948 of LNAI, pp. 228–243. Springer Verlag,
Berlin Heidelberg.
Zheleznyakov, D., Calvanese, D., Kharlamov, E., & Nutt,
W. (2010). Updating TBoxes in DL-Lite. In: Proc. of the 23rd
Int. Workshop on Description Logics (DL 2010), volume 573 of
CEUR Electronic Workshop Proceedings, http://ceur-ws.org/,
pages 102–113.
Faculty of Computer Science
contact
Dominikanerplatz 3 - piazza Domenicani, 3
39100 Bozen-Bolzano
Italy
T +39 0471 016000
F +39 0471 016009
[email protected]
On the Web
www.unibz.it/en/inf/
On LinkedIn:
www.linkedin.com/company/
faculty-of-computer-science-free-university-of-bozen-bolzano
On Facebook:
www.facebook.com/inf.unibz.it
imprint
Coordination:
Francesco Ricci
Editorial Staff: Brian Martin
Federica Cumer
Ilenia Fronza
Margareth Lercher
Francesco Ricci
Art Direction & Photography:
Florian Reiche
Photos provided by the Faculty of Computer Science
and only retouched by Florian Reiche:
Etiel Petrinija, pp. 13, 28; Nadine Mair, p. 13;
Roberto Cappuccio, p. 13.
Print:
Esperia srl
unibz.it/inf