CAS in Modernen Methoden der Informatik
Course Catalog
Institut für Informatik, Universität Zürich

Table of Contents
A Course Structure of the CAS in Modernen Methoden der Informatik
  Preliminary Remark
  Description
  Target Audience
  Course Duration
  Admission to the Program
  Course Program
  Course Procedure and Assessment
  Degree
B Courses
  1 Module Big Data
    1.1 Data Mining
    1.2 Cluster Computing, MapReduce, and the Future
    1.3 Heterogeneous Data Processing on the Web of Data / Information Visualization
    1.4 Virtualization and Clouds
    1.5 Computational Perception
    1.6 Multilingual Text Analysis and Information Extraction
    1.7 Social Computing
    1.8 Network-based Business Intelligence
    1.9 Coaching Day
  2 Module Engineering Solutions
    2.1 Requirements Engineering 2.0
    2.2 Software Quality Analysis
    2.3 Software Quality Assurance with Continuous Integration
    2.4 Development 2.0: Human-Centered Software Engineering / Open Innovation for Software Products & Processes
    2.5 Human Computer Interaction
    2.6 Engineering Electronic Markets
    2.7 Digital Innovation and Social Networking for Companies
    2.8 Sustainability and Green IT
    2.9 Coaching Day
C Instructors
A Course Structure of the CAS in Modernen Methoden der Informatik
Preliminary Remark
Subject to change. The course regulations are an integral part of this course catalog.

Description
Computer science is subject to rapid change, and knowledge has a short half-life. Constant innovations increasingly shape everyday professional life, in particular the way humans and computers interact. Because the role of computer science in society and the economy is becoming more important across all domains, and because the growing influence of the social sciences on information processing is becoming apparent, the course focuses on Big Data on the one hand and on modern software design on the other. This ensures that participants get to know the latest developments in modern computer science, can assess their implications, and can respond appropriately to the changes ahead.

Target Audience
Computer science professionals who enjoy life-long learning and who want to gain a knowledge advantage for their future professional practice through current methods in computer science and relevant research findings.

Course Duration
February to June 2016
Course days: Friday and Saturday

Admission to the Program
A university degree at the master's level or an equivalent qualification, plus professional experience in the IT field.
In the case of a bachelor's degree or other questions of equivalence, the course directorate decides «sur dossier» and conclusively. For applicants who are to be admitted exceptionally on the basis of comparable qualifications, it may make admission contingent on a successful admission interview.
There is no entitlement to admission.
Individual modules or parts thereof may be opened to a wider audience inside and outside the university. Attending individual modules does not lead to a degree.
Course Program
The course offering is divided into two modules, "Big Data" and "Engineering Solutions". To complete the CAS successfully, a total of 15.0 ECTS credits is required. The module "Big Data" yields 8 ECTS credits; the module "Engineering Solutions" yields 7 ECTS credits. One ECTS credit corresponds to a workload of approximately 30 hours, consisting of attendance during the course days as well as preparation and follow-up work. The modules consist of the offered course days and the two Coaching Days.
Participants who do not pursue the full CAS choose their course days freely from the program.
Module / Course day (Teaching language*)

Module 1: Big Data
– Data Mining (German)
– Cluster Computing, MapReduce, and the Future (German)
– Heterogeneous Data Processing on the Web of Data / Information Visualization (German)
– Virtualization and Clouds (English)
– Computational Perception (English)
– Multilingual Text Analysis and Information Extraction (German)
– Social Computing (English)
– Network-based Business Intelligence (English)
– Coaching Day

Module 2: Engineering Solutions
– Requirements Engineering 2.0 (German)
– Software Quality Analysis (German)
– Software Quality Assurance with Continuous Integration (German)
– Development 2.0: Human-Centered Software Engineering / Open Innovation for Software Products & Processes (German)
– Human Computer Interaction (English)
– Engineering Electronic Markets (English)
– Digital Innovation and Social Networking for Companies (German)
– Sustainability and Green IT (German)
– Coaching Day
* The teaching language is the language spoken during the course day. The course materials are generally written in English.
Course Procedure and Assessment
One assessment must be completed per module. It consists of a written test covering all topics of the module and a written paper on a chosen topic. During the module, the CAS participants choose a topic in consultation with one of the instructors. The paper should be up to 20 pages long and is to be written in German or English.
At the end of each module, the Coaching Day takes place. On this day, the CAS participants prepare for the written paper with the help of guidelines and examples. They can discuss questions and refine or deepen their chosen topic.
The written paper is due 30 days after the Coaching Day of the corresponding module. It is graded by the instructor on a scale of 1 to 6; half grades are permitted, and grades below 4 are insufficient.
An insufficient assessment may be repeated once at the next possible date, at the latest three months after notification of the failure. Otherwise, it counts as definitively failed.

Degree
Certificate of Advanced Studies UZH in Modernen Methoden der Informatik
(CAS UZH in Modernen Methoden der Informatik)
The certificate is awarded once 15.0 ECTS credits have been earned and the two written papers (assessments) have each been graded ≥ 4.0.
Students who are not awarded the certificate receive a record of the completed work.
B Courses

1 Module Big Data
1.1 Data Mining
Description
Computers increasingly need to operate in environments that do not lend themselves to being modeled in the black-and-white certainties associated with binary systems. Indeed, the world is mostly an uncertain place, and Computer Science has largely ignored this fact almost since its inception. Aggravating the issue, we collect ever more data about our endeavors that could be exploited if we had consistent means to deal with the uncertainties usually associated with data of varying quality.
This course will show how these limitations of the binary system can be overcome by the coherent use of methods from probability and information theory. Based on these foundations, it will introduce sound methods for reasoning about data that includes uncertainties. In addition, it will introduce the major inference techniques (i.e., predictive analysis methods, often called data mining or business intelligence algorithms) that are used to improve computer systems' performance in almost all sectors of their application. The course will exemplify its approaches using an industrial-strength open-source data mining tool that the participants will use in a number of exercises.
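Bayes' rule, listed in the course content below, can be illustrated with a small worked example. The following sketch is illustrative only (the diagnostic scenario and all of its numbers are made up, not course material): it computes the probability of a condition given a positive test from a prior, a sensitivity, and a false-positive rate.

    // A minimal sketch of Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
    // The diagnostic scenario and its numbers are hypothetical.
    public class BayesRuleSketch {
        public static void main(String[] args) {
            double pDisease = 0.01;          // prior P(H): 1% base rate
            double pPosGivenDisease = 0.90;  // likelihood P(E|H): sensitivity
            double pPosGivenHealthy = 0.08;  // P(E|not H): false-positive rate

            // Total probability of a positive test: P(E).
            double pPos = pPosGivenDisease * pDisease
                        + pPosGivenHealthy * (1 - pDisease);

            // Posterior P(H|E) via Bayes' rule.
            double posterior = pPosGivenDisease * pDisease / pPos;
            System.out.printf("P(disease | positive test) = %.3f%n", posterior);
        }
    }

Despite the 90% sensitivity, the posterior comes out near 0.10 because the 1% prior dominates; this is exactly the kind of reasoning under uncertainty that the course formalizes.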
Course content
– From Binary to Probabilities
o Shortcomings of binary computing
o Uncertainty and probability
o Probabilistic Reasoning
o Bayes’ Rule
– Data Mining
o The Main Data Mining Tasks: Prediction, association rules, and clustering
o Exemplary Methods for each of the tasks
Practical tools
– Weka
– RapidMiner
Instructor
– Prof. Abraham Bernstein, Ph.D.
Language
German
1.2 Cluster Computing, MapReduce, and the Future
Description
Driven by the need for low-latency, large-scale data processing a numerous
Internet companies and open source projects have embraced the principles of
heterogonous cluster computing to process their data. These approaches veer
away from using large centralized systems for processing and data storage but
rely on large, low-cost commodity clusters to store and process their data.
Through the systematic use of powerful abstractions these approaches achieve
almost seamless parallelization, distribution, continuous availability, fail-over
safety, and instantaneous deployment of new features.
This course will start by introducing the complexities of distributed computing
to establish the problems that need to be addressed by possible solutions. Based
on these foundational problems it will introduce the powerful abstractions
developed by the above-mentioned community and explain how they address
the problems mentioned. These will expemlify how companies such as Google,
facebook, Yahoo!, and others deal with todays data deluge. Next, the course
will provide a revue of tools (chiefly among them Hadoop, HBase, Akka, and
Signal/Collect) available freely to harness the powerful abstractions mentioned.
The course will contain plenty of practical example algorithms as well as the
possibility to develop first simple programs in class.
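To give a feel for the MapReduce abstraction, here is a minimal, self-contained word-count sketch in plain Java. It imitates the model's three phases (map, shuffle by key, reduce) in a single process; a real Hadoop job distributes exactly these phases across a cluster. The input documents are made up.

    import java.util.*;

    // A single-process sketch of the MapReduce word-count pattern:
    // map emits (word, 1) pairs, shuffle groups them by key, reduce sums them.
    public class WordCountSketch {
        public static void main(String[] args) {
            List<String> documents = Arrays.asList("to be or not to be", "to do is to be");

            // Map phase: emit one (word, 1) pair per token.
            List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
            for (String doc : documents)
                for (String word : doc.split("\\s+"))
                    pairs.add(new AbstractMap.SimpleEntry<>(word, 1));

            // Shuffle phase: group all emitted values by key.
            Map<String, List<Integer>> grouped = new TreeMap<>();
            for (Map.Entry<String, Integer> p : pairs)
                grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());

            // Reduce phase: sum the values for each key.
            for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
                int sum = e.getValue().stream().mapToInt(Integer::intValue).sum();
                System.out.println(e.getKey() + "\t" + sum);
            }
        }
    }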
Course content
– Distributed computing problems & large-scale data processing;
– Distributed computing in heterogeneous clusters
– Bulk Synchronous Processing abstractions for cluster computing: MapReduce
and BigTable
– Practical tools for harnessing heterogeneous clusters: Hadoop and HBase
– Beyond Bulk Synchronous Processing: Signal/Collect and Akka
Practical tools
– Hadoop
– HBase
– Akka
– Signal/Collect
Instructor
– Prof. Abraham Bernstein, Ph.D.
Language
German
1.3 Heterogeneous Data Processing on the Web of Data / Information Visualization
Heterogeneous Data Processing on the Web of Data
Description
The World Wide Web has changed the way we all work. Its principles of decentralized control – everybody can add information and everybody can link to anything – have led to an enormous explosion of information available at our fingertips today. Data, in contrast, is today mostly held imprisoned in data silos and is difficult to process in a decentralized fashion. The advent of the Web of Data (sometimes also called the Semantic Web) has changed the equation. Based on the same principles – simple addition of new data points and linking of data to other data – the Web of Data has been growing at an astonishing speed. Indeed, the publication of government data in the US (data.gov) and the UK (data.gov.uk) has stirred a number of worldwide initiatives to follow suit (in Switzerland, the city of Zurich is expected to open a site in the first half of this year). The currently proposed approaches allow the schema-agnostic, robust processing of data from heterogeneous sources and have a number of interesting implications for both intra- and inter-organizational data processing. This course will introduce the most relevant techniques for large-scale, schema-agnostic, heterogeneous data processing. Besides introducing the techniques, it will also engage in a number of smaller practical exercises that exemplify the power of the proposed approaches.
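As a flavor of what such processing looks like in practice, the following sketch builds a tiny RDF graph and runs a SPARQL query over it with Apache Jena, one of the course's practical tools. It is a minimal illustration assuming Jena 3.x package names (older releases used com.hp.hpl.jena.*); the resources and the FOAF property are example data, not course material.

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.*;

    // A minimal RDF + SPARQL sketch with Apache Jena (3.x package names):
    // build a two-triple graph, then run a schema-agnostic pattern match.
    public class WebOfDataSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            Property knows = model.createProperty("http://xmlns.com/foaf/0.1/knows");

            Resource alice = model.createResource("http://example.org/alice");
            alice.addProperty(knows, model.createResource("http://example.org/bob"));
            alice.addProperty(knows, model.createResource("http://example.org/carol"));

            String sparql = "SELECT ?who WHERE { <http://example.org/alice> "
                    + "<http://xmlns.com/foaf/0.1/knows> ?who }";
            QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), model);
            try {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    System.out.println(results.nextSolution().get("who"));
                }
            } finally {
                qe.close();
            }
        }
    }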
Course content
– Heterogeneous data;
– Semi-structured data;
– Data on the web: principles and approaches;
– Standards: RDF, RDFa, SPARQL, OWL, schema.org, microformats;
– The Linked Open Data Cloud;
– Processing on the web of data;
– Distributed data integration
Practical tools
– Jena
– CKAN
Instructor
– Prof. Abraham Bernstein, Ph.D.
Language
German
Information Visualization
Description
Increasing amounts of data, often in abstract or plain numerical form, are gathered and stored in many applications, business software, and data processing systems. Besides systematic analysis of these datasets, e.g., with statistical and data mining approaches, visual data exploration and presentation can significantly support the intuitive understanding of the relations that are implicitly present in complex data collections. Patterns, clusters, trends, and relative proportions within data collections are often only easily apprehensible in visual form, which exploits the immense power of the human visual system to detect spatial relations and configurations.
Unlike in most 3D graphics and scientific visualization domains, abstract
(business) data often lacks a direct mapping to a visual representation. In this
course we will look at the established basic principles and best practices in data
visualization. The contents will cover the representation of quantities, use of
color and visual perception, as well as many algorithms and methods for spatial
mapping and visual display of data. Many principles discussed in this course
are applicable to both (business) presentations as well as interactive data
visualization and analysis systems.
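As a tiny, self-contained illustration of mapping data values to a visual channel (here: color), the sketch below normalizes made-up values to [0,1] and interpolates a blue-to-red ramp. It is an illustrative toy, not course material.

    // A minimal value-to-color mapping sketch: normalize each data value
    // to [0,1], then interpolate linearly between blue (low) and red (high).
    public class ColorRampSketch {
        public static void main(String[] args) {
            double[] data = {12.0, 47.5, 3.2, 88.9, 60.1};   // made-up values

            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
            for (double v : data) { min = Math.min(min, v); max = Math.max(max, v); }

            for (double v : data) {
                double t = (v - min) / (max - min);        // normalized position
                int r = (int) Math.round(255 * t);         // red grows with value
                int b = (int) Math.round(255 * (1 - t));   // blue shrinks with it
                System.out.printf("%6.1f -> rgb(%3d, 0, %3d)%n", v, r, b);
            }
        }
    }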
Course content
– Principles of information visualization
– Representation of quantitative values
– Color theory and perception
– Spatial mapping
– Methods and techniques
– Data visualization systems
– Visual analytics
Practical tools
– Various examples from research and practice
Instructors
– Prof. Dr. Renato Pajarola
Language
German
1.4 Virtualization and Clouds
Description
Significant advances in the domain of Information and Communications
Technology (ICT) have been seen in the last thirty years. As such the usage of
processing cycles may be considered to be a new utility. Computing services are
highly essential to support daily demands in society, economics, and
production sectors. One of the newer paradigms, following the cluster
computing, high-performance computing, and grid computing waves is termed
cloud computing, which utilizes the powerful concept of virtualization to
provide such computing demands as suitable as possible. Thus, based on the
definition of cloud computing and respective architectures a suitable resource
allocation overview via virtualization concepts will be discussed.
While a number of alternatives on virtualization will be distinguished, such as
full virtualization, OS virtualization, hardware virtualization, storage
virtualization, network virtualization, virtual machines, and hypervisors, the
general functions needed for successful operations do include (a) real-time
monitoring of virtualized workloads, (b) resource allocation, (c) configuration
change tracking, and (d) reporting feedback for validating SLAs and assessing
ROI for IT spending. As such a system-independent (as far as possible) view
and introduction will be presented, of course, by relying on existing theory and
available examples, such as Xen, VMWare Server, VMWare ESX Server, Virtual
Iron, Dell VM, or Windows Virtual Server. Finally, the operational aspects of
virtualization in selected cases will be taken up for further detailed study.
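Resource allocation, function (b) above, can be made concrete with a toy placement policy. The sketch below is an illustrative first-fit heuristic over made-up host capacities and VM memory demands; it is not any hypervisor's actual scheduler, which would also weigh CPU, I/O, affinity, and SLAs.

    // A toy first-fit placement heuristic: each VM (memory demand in GB)
    // goes to the first host that still has enough free capacity.
    public class FirstFitPlacement {
        public static void main(String[] args) {
            double[] hostCapacity = {64, 64, 32};      // free RAM per host, GB
            double[] vmDemand = {24, 40, 16, 32, 8};   // RAM demand per VM, GB

            for (int vm = 0; vm < vmDemand.length; vm++) {
                int placed = -1;
                for (int host = 0; host < hostCapacity.length; host++) {
                    if (hostCapacity[host] >= vmDemand[vm]) {
                        hostCapacity[host] -= vmDemand[vm];  // reserve the memory
                        placed = host;
                        break;
                    }
                }
                System.out.println("VM" + vm + " (" + vmDemand[vm] + " GB) -> "
                    + (placed >= 0 ? "host " + placed : "rejected: no capacity"));
            }
        }
    }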
Course content
– Definitions of cloud computing and virtualization
– Introduction into various virtualization approaches
– Insights into relevant functionality
– Discussions of selected operational aspects of virtualization
Practical tools
– To be decided, likely Xen and VMware Server
Instructors
– Prof. Dr. Burkhard Stiller
Language
English
1.5 Computational Perception
Description
Computational perception (or computer vision) is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, image datasets in order to produce numerical or symbolic information. Its goal is the extraction of semantic and/or geometrical information from images. Computer vision is used in a plethora of heterogeneous applications, such as photography, augmented reality, object recognition, security and surveillance, industrial inspection, 3D measurement, medical imaging, and so on. Considering the smartphone market alone, there are already several thousand apps using computer vision algorithms, and their number is doubling every 6 months. On the Internet, the availability of large image collections, such as Google Images or Flickr.com, makes it necessary to develop tools to organize all this information so that it can be managed by humans in practical ways.
In this course, we will introduce methods for processing digital images and extracting salient information. Topics will include camera image formation, filtering, edge detection, feature extraction, object recognition, image retrieval, and structure from motion. Practical example applications will be provided throughout the course.
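Edge detection, one of the topics listed below, boils down to a small convolution. The following sketch is illustrative only: it applies the standard horizontal and vertical Sobel kernels to a hard-coded 5x5 grayscale patch and prints the gradient magnitude at each interior pixel, which peaks along the patch's vertical brightness edge.

    // A minimal Sobel edge-detection sketch on a hard-coded grayscale patch.
    public class SobelSketch {
        public static void main(String[] args) {
            int[][] img = {                 // bright right half: a vertical edge
                {10, 10, 10, 200, 200},
                {10, 10, 10, 200, 200},
                {10, 10, 10, 200, 200},
                {10, 10, 10, 200, 200},
                {10, 10, 10, 200, 200},
            };
            int[][] gxK = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}}; // horizontal gradient
            int[][] gyK = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}}; // vertical gradient

            for (int y = 1; y < img.length - 1; y++) {
                for (int x = 1; x < img[0].length - 1; x++) {
                    int gx = 0, gy = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++) {
                            gx += gxK[dy + 1][dx + 1] * img[y + dy][x + dx];
                            gy += gyK[dy + 1][dx + 1] * img[y + dy][x + dx];
                        }
                    System.out.printf("%6.0f ", Math.sqrt(gx * gx + gy * gy));
                }
                System.out.println();
            }
        }
    }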
Course content
– Camera image formation
– Filtering
– Edge detection
– Feature extraction
– Object recognition
– Image retrieval
– Structure from motion
Practical tools
– TBD
Instructor
– Prof. Dr. Davide Scaramuzza
Language
English
1.6 Multilingual Text Analysis and Information Extraction
Description
Natural language is the main medium for human communication and for storing information. The Internet has resulted in vast amounts of natural language texts in electronic form, and the processing and interpretation of these texts for information extraction is of increasing importance.
Texts occur in a multitude of languages, which requires automatic language identification, language-specific processing, machine translation of queries or texts, and the fusion of information gathered from sources in various languages. If the same text is available in multiple languages, this provides interesting clues for interpreting words and serves as a central resource for building machine translation systems.
In this course we will introduce methods for processing texts in multiple languages, including word class tagging and grammatical analysis. We will present the challenges in analyzing complex words and phrases and in identifying different types of names (persons, geographical entities, organizations). We show how information extracted from texts in various languages can be merged via multilingual ontologies. We introduce the technology of modern machine translation systems and demonstrate how we have employed them profitably in practical applications.
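Automatic language identification, the first step mentioned above, can be approximated with a crude stopword heuristic. The sketch below is a naive illustration (tiny hand-picked stopword lists, made-up input), not the course's actual tooling; production systems typically use character n-gram statistics trained on large corpora.

    import java.util.*;

    // A naive language-identification sketch: count hits against tiny
    // English and German stopword lists and pick the language with more.
    public class LanguageIdSketch {
        public static void main(String[] args) {
            Set<String> english = new HashSet<>(Arrays.asList("the", "and", "is", "of", "to"));
            Set<String> german = new HashSet<>(Arrays.asList("der", "die", "das", "und", "ist"));

            String text = "die Informatik ist einem schnellen Wandel unterworfen";

            int en = 0, de = 0;
            for (String token : text.toLowerCase().split("\\s+")) {
                if (english.contains(token)) en++;
                if (german.contains(token)) de++;
            }
            System.out.println(en > de ? "English" : de > en ? "German" : "unknown");
        }
    }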
Course content
– Multilingual Text Analysis
– Name Recognition and Classification
– Machine Translation
– Information Extraction
– Multilingual Ontologies
Practical tools
– Word Analysis System Gertwol
– Dependency Parser ParZu
– Google Translate
Instructor
– Prof. Dr. Martin Volk
Language
German
1.7 Social Computing
Description
This course provides an introduction to social computing, a topic at the
intersection of computer science and economics. At its core, this course is based
on the realization that people are motivate in many different ways (not just by
money), to work for projects like Wikipedia or Linux for example. The Internet
has enabled many new forms of production that open up new ways to think
about how tasks are best being solved in online domains.
We begin by studying peer production, which describes decentralized
collaborations among individuals that result in successful large-scale projects,
without the use of monetary incentives. Next, we introduce human
computation, which solves problems that currently cannot be solved by
computers alone (e.g., translation, image tagging) by combining small units of
human work with an automated algorithm to organize the work. Third, we
introduce crowdsourcing, i.e., the act of outsourcing tasks to an undefined,
large group of people or community (a crowd) through an open call (e.g., via
MTurk).
In the second half of the course, we study systems that rely on the “wisdom of
the crowds” idea. We show how to optimally design contests (e.g., between
programmers on TopCoder) such that the best solution to a problem is
determined via a competition. Next, we study how to optimally design badges
that reward contributors on social platforms, such that the number and quality
of the contributions is maximized. Finally, we discuss prediction markets,
which are markets specifically designed to predict the outcome of certain events
(e.g., the launch of a new product). Throughout the course we will leave time
for discussions on how these new social computing concepts could be applied
in the participants’ work domains.
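Prediction markets, the last topic above, are commonly run with an automated market maker such as Hanson's logarithmic market scoring rule (LMSR); whether this course covers LMSR specifically is an assumption here. For a two-outcome market with outstanding shares q1 and q2 and liquidity parameter b, the price of outcome 1 is exp(q1/b) / (exp(q1/b) + exp(q2/b)). A minimal sketch with illustrative numbers:

    // A two-outcome prediction-market sketch using Hanson's logarithmic
    // market scoring rule (LMSR): p1 = exp(q1/b) / (exp(q1/b) + exp(q2/b)),
    // where q1, q2 are outstanding shares and b controls liquidity.
    public class LmsrSketch {
        static double price(double q1, double q2, double b) {
            double e1 = Math.exp(q1 / b), e2 = Math.exp(q2 / b);
            return e1 / (e1 + e2);
        }

        public static void main(String[] args) {
            double b = 100.0;            // liquidity parameter (illustrative)
            double q1 = 0, q2 = 0;       // no shares sold yet: price is 0.5
            System.out.printf("initial p1 = %.3f%n", price(q1, q2, b));

            q1 += 50;                    // traders buy 50 shares of outcome 1
            System.out.printf("after buying: p1 = %.3f%n", price(q1, q2, b));
        }
    }

Buying shares of an outcome pushes its price toward 1, so the price can be read as the market's current probability estimate for that outcome.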
Course content
1. Peer Production
2. Human Computation Systems
3. Crowdsourcing Markets
4. Contest Design
5. Badge Design
6. Prediction Markets
Practical tools
– MTurk
– Turkit
– Inkling prediction markets
Instructor
– Prof. Abraham Bernstein, Ph.D.
– Prof. Sven Seuken, Ph.D.
Language
English
1.8 Network-based Business Intelligence
Description
In the current business environment, individuals, organizations, and systems increasingly interact and collaborate in the form of networks through computer-based techniques and technologies. This trend has generated a huge amount of network (relational) data in various business domains such as finance and marketing. How to effectively model, analyze, and utilize such network data through computing technologies to support business decision making has become a major challenge for today's business intelligence (BI) practices in large organizations. This course will help you get a solid start at understanding what (business) networks are, what they can do, and how you can develop and employ network-based BI techniques and applications in organizations. Moreover, this course will introduce the main research directions and work in Network Science and Business Intelligence.
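A first step in the social network analysis methods listed below is computing simple centrality measures. The sketch below uses a made-up collaboration graph to compute degree centrality, i.e., the number of direct ties per node; real analyses add measures such as betweenness and closeness.

    import java.util.*;

    // A minimal degree-centrality sketch over a made-up collaboration
    // network: a node's degree centrality is its number of direct ties.
    public class DegreeCentralitySketch {
        public static void main(String[] args) {
            String[][] edges = {
                {"Alice", "Bob"}, {"Alice", "Carol"}, {"Alice", "Dave"},
                {"Bob", "Carol"}, {"Dave", "Erin"},
            };

            Map<String, Integer> degree = new TreeMap<>();
            for (String[] edge : edges)
                for (String node : edge)
                    degree.merge(node, 1, Integer::sum);  // one tie per endpoint

            degree.forEach((node, d) -> System.out.println(node + ": " + d));
        }
    }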
Course content
– The basic concepts of networks (organizational networks and social networks)
– Social network analysis methods
– Network-based BI techniques
– Networks in Finance and Marketing
Practical tools
– tba
Instructors
– Prof. Daning Hu, Ph.D.
Language
English
1.9 Coaching Day
à Vorbereitung auf das Verfassen der schriftlichen Arbeit
2 Module Engineering Solutions
2.1 Requirements Engineering 2.0
Description
Requirements Engineering (RE) is a key activity in any software development endeavor. If requirements are wrong, neglected, or misunderstood, the resulting software system will not satisfy its stakeholders' expectations and needs, which means that the system will eventually fail, regardless of how well it is designed, coded, and tested.
Modern Requirements Engineering is no longer about writing voluminous requirements documents. It is about creating a shared vision for a system-to-be and then specifying this vision at a level of detail that ensures a successful implementation.
The course will start by introducing some general principles and facts about Requirements Engineering. Based on these foundations, we will then present practices, techniques, and practical advice on how to elicit, document, and validate requirements in various contexts and application domains.
Course content
– Successfully capturing, negotiating, and communicating user needs
– Understanding requirements in context
– Employing social networks, spontaneous feedback and computer supported
cooperative work for capturing and validating requirements
– Creating requirements for innovative systems
– Embedding Requirements Engineering both in traditional and in agile project
settings
Practical tools
– Selected RE tools will be demonstrated
Instructor
– Prof. Martin Glinz, Dr. rer. nat.
Language
German
2.2 Software Quality Analysis
Description
In the development of today's software systems, in particular in agile development processes, developers need to continuously adapt and evolve the system to reflect frequently changing stakeholder needs. This need for continuous adaptation requires high quality of the evolving software throughout the whole development process, and it requires developers to assess software quality frequently. The quality of a software system comprises many aspects of the system and its development process and can be measured in various ways. For example, one can measure internal code structure through coupling and cohesion analysis, or one can analyze the development process by looking at the evolution of changes. Given the size and complexity of many of today's software systems, it is also important to provide adequate abstraction and visualization for analyzing a software system. Therefore, we will also introduce software visualization and exploration as means to better understand and evolve large software systems.
In this course, we will address software quality analysis techniques, software evolution tools and techniques, as well as software visualization mechanisms and tools that can address the needs of stakeholders such as software developers, software architects, or quality assurance managers.
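Coupling metrics of the kind mentioned above are easy to illustrate. The sketch below uses a made-up class dependency map to compute each class's efferent coupling (fan-out: how many classes it depends on) and afferent coupling (fan-in: how many classes depend on it); it is an illustrative toy, not one of the course's tools.

    import java.util.*;

    // A minimal coupling-metric sketch over a made-up dependency map:
    // fan-out = dependencies of a class, fan-in = dependents of a class.
    public class CouplingSketch {
        public static void main(String[] args) {
            Map<String, List<String>> dependsOn = new TreeMap<>();
            dependsOn.put("OrderService", Arrays.asList("Database", "Mailer"));
            dependsOn.put("UserService", Arrays.asList("Database"));
            dependsOn.put("Database", Collections.emptyList());
            dependsOn.put("Mailer", Collections.emptyList());

            Map<String, Integer> fanIn = new TreeMap<>();
            dependsOn.keySet().forEach(c -> fanIn.put(c, 0));
            dependsOn.values().forEach(deps ->
                deps.forEach(d -> fanIn.merge(d, 1, Integer::sum)));

            for (String c : dependsOn.keySet())
                System.out.println(c + ": fan-out=" + dependsOn.get(c).size()
                    + ", fan-in=" + fanIn.get(c));
        }
    }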
Course content
– Software evolution analysis (history analysis, hotspot identification, design
erosion)
– System of systems analysis (addressing software ecosystems)
– Mining software archives (correlations between changes, people, teams, and distribution/outsourcing)
– Bug prediction models and quality
– Visualization of quality aspects
– Human effects on quality (empirical findings)
Practical tools
– Software evolution tools such as the Evolizer tool suite
– Change analysis tools such as ChangeDistiller
– Software Analysis as a Service platform SOFAS
– Software exploration tools such as CocoViz or Cod
– Bug pattern identification tools such as Findbugs
Instructors
– Prof. Dr. Harald Gall
Language
German
2.3 Software Quality Assurance with Continuous Integration
Description
Customers' high demand for up-to-date, high-quality software poses a challenging task for the software industry. We need to be able to deliver a new release at any moment, and thus always have a running system at hand. To cope with these requirements, we have to enrich our software development process with the necessary measures and tools. We need a thorough test base for quality assurance and very short feedback cycles for developers during their daily work. Software testing at different levels of granularity, combined with continuous integration and multi-stage builds, provides the means to ensure high quality and release readiness of software products at the same time. Moreover, we can leverage the available tools to configure continuous integration for different software development processes.
In this course, we address software quality assurance by means of software testing at different levels of granularity (unit, integration, system, end-to-end, and load) in conjunction with multi-stage builds in a continuous integration environment. We show how continuous integration can be used for releasing software in different development processes and how we can move towards continuous delivery. In addition, we present the challenges of continuous integration with respect to infrastructure and acceptance among different stakeholders.
The course uses state-of-the-art open-source tools and drives the content with examples.
The target audience of the course includes people in technical positions, e.g., software engineering professionals, software architects/technical project leaders, and quality assurance engineers.
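Unit tests form the lowest level of the granularity ladder above. The sketch below is a minimal JUnit 4 example of the kind a continuous integration server such as Jenkins would run on every commit; the class under test is a made-up illustration, not course material.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // A minimal JUnit 4 sketch: a fast unit test of the kind a CI server
    // runs on every commit. PriceCalculator is a made-up class under test.
    public class PriceCalculatorTest {

        // Made-up class under test: applies a percentage discount to a price.
        static class PriceCalculator {
            double discounted(double price, double percent) {
                return price * (1 - percent / 100.0);
            }
        }

        @Test
        public void tenPercentOffOneHundredIsNinety() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.discounted(100.0, 10.0), 1e-9);
        }
    }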
Course content
– Software testing on different levels of granularity (unit, integration, system,
end-to-end, and load)
– Multi-stage builds with continuous integration
– Deployment/delivery with continuous integration
– Quality analysis with continuous integration
– Continuous integration people and infrastructure challenges
Practical tools
– Continuous integration platform Jenkins
– Common development, build, and test infrastructure: Java EE, JUnit, Maven
– Integration testing with Arquillian
– Web testing framework Selenium/WebDriver
– Load testing platform JMeter
– Quality analysis platform Sonar
Instructors
– Dr. Beat Fluri
Language
German
2.4 Development 2.0: Human-Centered Software Engineering /
Open Innovation for Software Products & Processes
Development 2.0: Human-Centered Software Engineering
Description
Software is built by humans. Current development environments are centered around models of the artifacts used in development rather than models of the people who perform the work, making it difficult and sometimes infeasible for developers to satisfy their information needs. For instance, a central model in a development environment is the abstract syntax tree, which provides an abstraction of source code to give feedback about the syntax of the code being written, to support navigation in terms of code structure, and to enable code transformations. By providing feedback about artifacts, these models benefit the developers using the environment. However, they fall short by supporting only a small fraction of developers' information needs and tasks when they are performing work on the system.
This course will introduce relevant current approaches and technologies to support more human-centric software development and to satisfy the information needs of software engineers. Besides introducing the techniques, it will also engage in a number of smaller practical and interactive exercises.
Course content
– Information needs and fragments
– Information filtering and tailoring (to stakeholder needs and tasks)
– New interfaces and devices for the software development process (e.g.,
interactive touch displays, tabletops);
– Collaborative software development and awareness
– Recommender systems
– Social networks in software development
Practical tools
(tentative list)
– Collaboration tools such as Mylyn
– New IDEs such as Code Bubbles and Debugger Canvas
– Development support tools such as Crystal
– NUI tools such as SmellTagger
Instructors
– Prof. Thomas Fritz, PhD
Language
German
Open Innovation for Software Products & Processes
Description
Open Innovation (OI) in software engineering is often seen in the context of open source software projects. But both open and closed source projects offer a great deal of expertise, experience, and knowledge that can enable technological leadership for future software innovations. One of the basic questions for OI concerns the level of quality of a software product that should be improved, enhanced, refurbished, or even freshly developed. This question is essential, as the product functions as a key enabler for an idea to become an innovation: is the point in time right, is the quality level appropriate, do the feature sets define a good foundation for the next steps towards customers?
We will center the course around the definition of OSS Watch: "Processes and tools that encourage innovation by boosting internal, and harnessing external, creativity, and by bringing the innovation results to market through both internal and external channels." (OSS Watch, 2011)
Course content
– Open Innovation Processes (Intrapreneurship);
– Open development vs. closed development;
– OI platforms and models for enabling innovation incubation;
– OI and software quality aspects;
– OI for software design and development
Practical tools
– OI platforms (socio-technical platforms)
– Software analysis platforms
– Expertise finder tools
– Social media for OI
Instructor
– Prof. Dr. Harald Gall
Language
German
2.5 Human Computer Interaction
Description
Advances in technology have a profound impact on human activities, shaping
the way that we work, communicate, and use information. At the same time,
technology must address and respond to human needs and capabilities in order
to be successful. As evidenced by the way that the modern web browser
suddenly made the internet accessible to the general population after years of
niche usage, or by the way the Apple iPad and similar recent devices have
made the decades-old tablet computing paradigm commonplace, advanced
technology needs to fit with human practices to unlock its real value and
achieve broad adoption. The right interface or design can be critical for
transforming technology from obscure science to supportive, useful, and
effective tools. At the same time, identifying or creating these necessary
innovations is a challenge.
The field of Human-Computer Interaction focuses on the co-evolution of
technology and human practices. It focuses on understanding human needs and
activities, and designing technologies that are informed by this understanding.
It offers methods and approaches for addressing human questions, translating
findings into technology design, and evaluating technologies for both usability
and usefulness. This course will provide participants with a valuable and
versatile toolbox of methods and skills essential for user research and design. In particular, it will focus on the iterative design process as a practical and cost-efficient way of developing solutions, as well as on simple but effective low-resource techniques for collecting user data and evaluating technology solutions.
Course content
– The HCI iterative design process
– Usability criteria and principles and their application to design
– Practical techniques for gathering and analyzing user data
– Graduated prototyping techniques
– Low-resource methods for evaluating solutions with end users
Practical tools
– Balsamiq
Instructors
– Prof. Elaine M. Huang, Ph.D.
Language
English
2.6 Engineering Electronic Markets
Description
Due to the Internet, electronic markets are becoming the most important means of trading goods and services in today's society. Millions of items change hands via eBay every day; billions of online ads are priced automatically on Google every day; and many companies use automated markets to source goods and services worth billions of dollars every year. However, as the dot-com bubble and the recent financial crisis have shown, markets can also be broken and, in the worst case, may even collapse. This illustrates that special care must be taken when setting up a new electronic market, or when trying to fix a broken one.
In this course, we discuss how to design (or "engineer") electronic markets under economic and computational considerations. The first half of the course covers a variety of "electronic auctions." We first discuss how to design auctions that can be used to sell individual items to customers, or to automatically price advertisements on the Internet. Then we move on to more complex, so-called "combinatorial auctions," and show how they can be used for sourcing a large company's supplies, for optimizing server architectures, or for selling computational resources like bandwidth.
The second half of the course takes a broader view of electronic markets, covering a number of common challenges that arise when starting a new online business and ways to solve them. First, we adopt the perspective of a business manager who wants to start or continue to grow an online business in a domain with network effects. Then we discuss how to identify good and bad users in a market domain, how to identify good and bad products, and how to recommend products or services to users based on their subjective taste. Finally, we wrap up the course with a case study on how to monetize a social network. The case study is designed to illustrate many of the concepts covered in this course.
Throughout the course, we use many real-world examples to illustrate the new concepts, and we devote enough time to extended discussions of the pros and cons of various approaches.
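A classic building block from auction theory, relevant to the first half of the course, is the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy. The sketch below is a minimal illustration with made-up bidders and bids, not material from the course.

    import java.util.*;

    // A sealed-bid second-price (Vickrey) auction sketch: the highest
    // bidder wins but pays the second-highest bid.
    public class VickreyAuctionSketch {
        public static void main(String[] args) {
            Map<String, Double> bids = new LinkedHashMap<>();
            bids.put("Alice", 120.0);
            bids.put("Bob", 95.0);
            bids.put("Carol", 140.0);

            String winner = null;
            double highest = Double.NEGATIVE_INFINITY;
            double secondHighest = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, Double> bid : bids.entrySet()) {
                if (bid.getValue() > highest) {
                    secondHighest = highest;      // old winner becomes runner-up
                    highest = bid.getValue();
                    winner = bid.getKey();
                } else if (bid.getValue() > secondHighest) {
                    secondHighest = bid.getValue();
                }
            }
            System.out.println(winner + " wins and pays " + secondHighest);
        }
    }

Here Carol wins with 140 but pays only Alice's bid of 120; since the payment never depends on the winner's own bid, no bidder gains by misreporting their value.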
Course content
1. Introduction to Game Theory and Auctions
2. Internet Auctions (eBay, Google, etc.)
3. Combinatorial Auctions
4. Electronic Markets with Network Effects
5. Reputation and Recommender Systems (Tripadvisor, Amazon, etc.)
6. Case Study: Monetizing a Social Network (e.g., Facebook, Twitter)
Practical tools
– No software or computer necessary
– New concepts will be practiced via case studies
Instructors
– Prof. Dr. Sven Seuken
Language
English
2.7 Digital Innovation and Social Networking for Companies
Description
Over the past years, the increasing use of information and communication technologies has brought about a significant transformation of organizations. Alongside other technologies, social software, such as wikis, weblogs, and social network sites, has become a focal area for many organizations.
The aim of the course is to give an overview of the potential of enterprise social software (ESS) and to provide you with methods to plan, implement, and evaluate the success of ESS. We will discuss the underlying principles (e.g., transparency, malleability) and use cases (e.g., expert finding, knowledge exchange) of ESS as well as best practices and motives of companies using ESS, including firsthand accounts. You will learn how ESS contributes to changes in leadership and organizational culture in companies and discuss the challenges on the way to a networked organization.
Course content
– tba
Practical tools
– tba
Instructors
– Dr. Alexander Richter
Language
German
2.8 Sustainability and Green IT
Description
Green IT is the study and practice of environmentally sustainable computing
throughout the whole life cycle of IT systems. IT is responsible for roughly 10 %
of our electricity demand and for 2-3 % of total energy consumption. Although
it is the hardware that consumes energy and creates demand for scarce metals
and other important resources, software products have decisive impact on how
much hardware is installed, how it is utilized, how it can adapt to changing
conditions (including the fluctuating availability of renewable energy) and
when a device becomes obsolete.
There have been many attempts to define the sustainability of IT systems from a
software perspective and to derive recommendations for developers from
conceptual frameworks, and to develop tools to measure indicators of
sustainability, e.g., of Web applications.
The course will provide an overview of the existing approaches to Green IT
with a focus on software and the causal chains leading from software
architecture to physical effects, such as power consumption of hardware,
network traffic and its energy intensity, storage and energy intensity, and other
sustainability-related issues.
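The causal chain from software to physical effects can be made tangible with back-of-the-envelope arithmetic. The sketch below converts a server's average power draw into annual energy use and CO2 emissions; all figures are illustrative assumptions, not measured values from the course.

    // A back-of-the-envelope sketch of the software-to-energy causal chain:
    // average power draw -> annual energy -> CO2 emissions. All figures
    // are illustrative assumptions, not measured values.
    public class EnergyEstimateSketch {
        public static void main(String[] args) {
            double avgPowerWatts = 350.0;      // assumed average server draw
            double pue = 1.6;                  // assumed datacenter overhead factor
            double gramsCo2PerKwh = 400.0;     // assumed grid emission factor

            double hoursPerYear = 24 * 365;
            double kwhPerYear = avgPowerWatts * pue * hoursPerYear / 1000.0;
            double kgCo2PerYear = kwhPerYear * gramsCo2PerKwh / 1000.0;

            System.out.printf("Energy: %.0f kWh/year, CO2: %.0f kg/year%n",
                    kwhPerYear, kgCo2PerYear);
        }
    }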
Course content
– General overview and conceptual framework of Green IT
– Green IT in datacenters (state of the art)
– Green IT and communications (energy intensity of the Internet and different
access networks)
– Green software engineering (software architecture and energy, introduction to
the GREENSOFT model)
– Energy-aware systems
– Standardization (current state of Green IT standardization activities by ITU,
ISO and Greenhouse Gas Protocol)
Practical tools
– none
Instructors
– Prof. Dr. Lorenz Hilty
Language
German
2.9 Coaching Day
à Vorbereitung auf das Verfassen der schriftlichen Arbeit
C Instructors
Abraham Bernstein
Full Professor of Informatics, Institut für Informatik, Universität Zürich

Specialties: Abraham Bernstein's research interests include the Semantic Web, data mining, heterogeneous data integration, distributed systems, the processing of graph data and data streams, and the interplay between social and technical elements of computing.
Activities: Member of the editorial boards of several international journals. Member of various steering bodies and scientific associations. Leader of several international research projects. Board member of the Swiss Informatics Society (SI) and of ICTswitzerland.
Website: www.ifi.uzh.ch/ddis/bernstein.html

Beat Fluri
Senior Technical Project Leader, AdNovum Informatik AG
Dr. Beat Fluri studied computer science at ETH Zürich (degree in 2004) and completed his doctorate under Prof. Harald Gall at the Universität Zürich in the area of software evolution analysis (2008).
After his dissertation, he worked for a year as a senior research associate at the Universität Zürich. From 2009 to 2011, together with friends, he founded and developed the sports platform spood.me, well known among table tennis players. From October 2011 to May 2013 he was a software architect and technical project leader in the development of web and enterprise Java applications at Comerge AG. From 2009 to 2011 he also taught the master's lecture "Software Testing" as an external lecturer at the Universität Zürich. Since June 2013, Beat Fluri has been a Senior Technical Project Leader at AdNovum Informatik AG.
His focus is on the server-side development of web applications with JEE 6. As the technical leader of such demanding, agile software projects, he is responsible for ensuring that customers receive products of high quality. To this end, he relies on test-driven development, continuous integration, continuous deployment, clean code, and test automation in his projects.
Thomas Fritz
Assistant Professor of Software Quality, Institut für Informatik, Universität Zürich

Thomas Fritz is an assistant professor at the Institut für Informatik of the Universität Zürich. He received his PhD from the University of British Columbia, Canada, in 2011 and his Diplom from the Ludwig-Maximilians-Universität München, Germany, in 2005. He has gained experience with a variety of companies and research groups, such as IBM in Ottawa and Zürich and the OBASCO group at the École des Mines de Nantes in France.
Activities: In his research, he is interested in helping stakeholders in the software development process deal better with the information and the systems they work on. Today's approaches concentrate strongly on the artifacts in the software development process rather than on the people who create these artifacts, which often makes it hard to satisfy stakeholders' information needs. Thomas Fritz investigates how models specific to the stakeholders can be built in order to extend today's approaches and address stakeholders' information needs.
Website: seal.ifi.uzh.ch/fritz
Harald Gall
Full Professor of Software Engineering, Institut für Informatik, Universität Zürich

Specialties: Harald Gall researches and teaches in the area of software evolution analysis and the mining of software archives. For more than ten years, his research group has been developing new models, techniques, and tools for examining software archives in order to better support the software development process. The areas of work include software quality analysis, software visualization, software architecture, collaborative software development, and service-centric software systems.
Activities: Member of the editorial boards of several international journals and member of various steering bodies and scientific associations.
Website: seal.ifi.uzh.ch/gall
Martin Glinz
Dr. rer. nat., Full Professor of Informatics, Institut für Informatik, Universität Zürich

Specialties: Requirements engineering, software engineering, software quality, modeling of systems.
Activities: Director of the Institut für Informatik of the Universität Zürich. Member of the editorial boards of several international journals and of the steering committees of international conferences. General chair and program chair of first-rate international conferences. Member of the International Requirements Engineering Board. Member of the governing bodies of various scientific associations.
Website: www.ifi.uzh.ch/~glinz
Daning Hu
Assistant Professor of Information Systems, Institut für Informatik, Universität Zürich

Specialties: Daning Hu's research interests include business intelligence, network analysis, social media, Web and Enterprise 2.0, data mining, and financial intelligence.
Activities: Member of the Association for Information Systems and INFORMS.
Website: www.ifi.uzh.ch/bi/people/hu.html
Elaine M. Huang
Associate Professor of Human-Computer Interaction, Institut für Informatik, Universität Zürich

Specialties: Elaine M. Huang conducts research on Human-Computer Interaction (HCI), Computer-Supported Cooperative Work (CSCW), and Ubiquitous Computing. Her current interests include systems to support environmentally sustainable behavior, smart home technologies influenced by domestic routines, pervasive and multi-display environments, and tangible interfaces to support group work.
Activities: Member of organizing and program committees for several top-tier HCI and Ubiquitous Computing conferences. Forum editor for ACM Interactions magazine. Involved in activities geared toward the promotion and engagement of women in Computer Science. Actively involved in events aimed at raising the profile of HCI within Switzerland.
Website: http://www.ifi.uzh.ch/zpac/people/huang.html

Lorenz Hilty
Associate Professor of Informatics and Sustainability, Institut für Informatik, Universität Zürich
Lorenz Hilty is a professor at the Institut für Informatik of the Universität Zürich, head of the research group Informatics and Sustainability at the Swiss Federal Laboratories for Materials Science and Technology (Empa), and affiliate professor at the KTH Royal Institute of Technology in Stockholm. He completed his habilitation at the Universität Hamburg, Germany, in 1997 and has many years of research experience at the intersection of computer science and environmental research, among others at the Institute for Economy and the Environment (IWÖ) of the Universität St. Gallen, in the Environmental Information Systems department of the Research Institute for Applied Knowledge Processing (FAW) in Ulm, and as head of the Technology and Society department of Empa.
Activities: In his research, Lorenz Hilty works on using the possibilities of computer science for the analysis and solution of environmental problems, for saving energy and materials, and for promoting sustainable development overall. In recent years, the energy and material consumption of IT itself has become a relevant topic, so that a growing share of his projects belong to the area of "Green IT". Here, the focus is on methodological questions of measuring material and energy consumption over the entire life cycle of IT products, and on the energy demand of data traffic on the Internet and its access networks.
Website: http://www.ifi.uzh.ch/isr/people/hilty.html
Renato Pajarola
Full Professor of Informatics, Institut für Informatik, Universität Zürich

Specialties: Renato Pajarola's research interests primarily cover the areas of interactive 3D computer graphics (e.g., 3D data visualization, virtual reality, 3D games) and scientific visualization (e.g., geo-visualization, biomedical imaging). The focus is above all on the efficient processing and display of large multidimensional data by means of fast algorithms, data structures, and distributed parallel processes.
Activities: Member of the editorial boards of international research journals and of several scientific committees and associations. Leader of several international and national research projects.
Website: vmml.ifi.uzh.ch

Alexander Richter
Senior Research Associate, Research Group Innovation & Social Networking, Institut für Informatik, Universität Zürich
Dr. Alexander Richter has been a senior research associate at the Institut für Informatik of the Universität Zürich since 1 October 2013.
In his dissertation, from 2007 to 2009, he dealt with the challenges of using social networking services in a corporate context. He then spent several years as head of the Social Business area in the Cooperation Systems research group at the Universität der Bundeswehr München. In this role he supported companies such as Allianz, Deutsche Bahn, Bayer, Bosch, Capgemini, EADS, Schott, and Siemens in selecting, introducing, and measuring the success of social software.
Activities: In his research, he investigates how social software can support collaboration and knowledge management in a company. At the same time, he wants to help make the transition to a networked organization, currently under way in many companies, tangible, and to show ways of dealing with it.
Website: http://www.ifi.uzh.ch/imrg/people/richter.html
Davide Scaramuzza
Assistant Professor of Robotics, Department of Informatics, University of Zurich

Davide Scaramuzza is Professor of Robotics at the University of Zurich. He is the founder and director of the Robotics and Perception Group (http://rpg.ifi.uzh.ch). He received his PhD (2008) in Robotics and Computer Vision at ETH Zurich. He was a postdoc at both ETH Zurich and the University of Pennsylvania, where he worked on the autonomous navigation of micro aerial vehicles. From 2009 to 2012, he led the European project "SFLY", which focused on the autonomous navigation of micro helicopters in GPS-denied environments using vision as the main sensor modality. For his research, he was awarded the Robotdalen Scientific Award (2009) and the European Young Researcher Award (2012), sponsored by the IEEE and the European Commission. He is coauthor of the 2nd edition of the book "Introduction to Autonomous Mobile Robots" (MIT Press). He is also the author of the first open-source Omnidirectional Camera Calibration Toolbox for MATLAB, which, besides thousands of downloads worldwide, is currently used at NASA, Philips, Bosch, and Daimler. Since 2009, he has been a consultant for Dacuda, an ETH Zurich startup and inventor of the world's first scanner mouse, currently sold by LG; the mouse uses robotic SLAM technology to scan documents in real time. Finally, he is the author of numerous papers in top-ranked robotics and computer vision journals.
Activities: His research interests are field and service robotics, intelligent vehicles, and computer vision. Specifically, he investigates the use of cameras as the main sensors for robot navigation, mapping, exploration, reasoning, and interpretation. His interests encompass both ground and flying vehicles.
Website: http://rpg.ifi.uzh.ch
Gerhard Schwabe
Full Professor of Information Management, Institut für Informatik, Universität Zürich

Gerhard Schwabe has been a full professor at the Institut für Informatik of the Universität Zürich since 2002. He received his doctorate (1995) and his habilitation (1999) from the Universität Hohenheim, Stuttgart, and held a professorship in information management at the Universität Koblenz-Landau from 1998 to 2001. He has carried out projects with companies from the financial industry (including Postfinance, UBS, Raiffeisen, Credit Suisse), the software industry (including Avaloq, Netcetera), the travel industry (STA), and the public sector.
Activities: In his research, he is interested in systems and concepts that support collaboration among people. This cooperation ranges from two people in an advisory situation, through small and large groups, to communities and social networks. In all cases, the aim is not only to create suitable new tools but also concepts for how these are used in an organizational context. Since 2012 he has also been working with the HSG (University of St. Gallen) to make organizations more innovative through design thinking. He also conducts research on IT management, e.g., the design of outsourcing.
Website: http://www.ifi.uzh.ch/imrg.html
Sven Seuken
Assistant Professor of Computation and Economics, Institut für Informatik, Universität Zürich

Career: Sven Seuken received his PhD in computer science from Harvard University in 2011, after completing his master's in computer science at the University of Massachusetts, Amherst in 2006. Since September 2011 he has led the Computation and Economics Research Group at the Institut für Informatik of the Universität Zürich as an assistant professor (with tenure track). As a doctoral student, he received several awards, including a Microsoft Research PhD Fellowship and a Fulbright Scholarship. His research is currently funded by a Google Faculty Research Award, the Hasler Foundation, and the Swiss National Science Foundation (SNF).
Specialties: Sven Seuken's research interests lie at the intersection of economics, game theory, and computer science, with a focus on the design and analysis of electronic markets and other socio-economic systems. This includes electronic auctions, market-based peer-to-peer systems, reputation mechanisms, recommender mechanisms, and effective user interfaces for electronic markets.
Activities: Member of the program committees of the leading international conferences in electronic commerce, market design, and artificial intelligence. Reviewer for international journals in electronic commerce and socio-economic system design. Author of a patent on the design of a decentralized electronic market for resource optimization.
Website: www.ifi.uzh.ch/ce/people/seuken.html
Burkhard Stiller
Full Professor of Informatics, Communication Systems Group CSG, Institut für Informatik, Universität Zürich

Career: After studies in computer science (1985-90), Ph.D. in computer science at Universität Karlsruhe, Germany (1991-94), EC Research Fellowship at the University of Cambridge, Computer Laboratory, U.K. (1994/95), and positions at the Computer Engineering and Networks Laboratory TIK of ETH Zürich (1995-99), Assistant Professor for Communication Systems at ETH (1999-2004), and additionally Full Professor at the University of Federal Armed Forces Munich (UniBwM), Information Systems Laboratory IIS (2002-04); since September 2004, Full Professor for Communications at IFI, University of Zürich.
Research: Main research interests include charging and accounting for IP-based networks, economics of IP services, Grid services, auctions for services, Quality-of-Service aspects, peer-to-peer systems, cloud computing, network management, and supporting functions for wireless access networks including biometrics; he has published well over 100 research and survey papers on these areas in leading journals, conferences, and workshops.
Activities: Prof. Stiller has participated in or managed national research projects of Switzerland, Germany, and the U.K. as well as EU IST/ICT projects, such as SESERV, SmoothIT, Akogrimo, Daidalos, EC-GIN, MMAPPS, Moby Dick, CATI, M3I, CoopSC, DaSaHIT, ANAISOFT, DaCaPo++, and F-CSS. He is currently an editorial board member of 8 journals, e.g., IEEE Transactions on Network and Service Management, Chair of the IEEE Computer Society Technical Committee on Computer Communications (TCCC), and a member of various international technical program committees.
Members of CSG participating in the tutorial on virtualization and clouds: Guilherme S. Machado (cloud computing, virtualization, contracts, peer-to-peer), Andrei Vancea (cooperative caching, peer-to-peer, cloud computing, virtualization).
Website: http://www.csg.uzh.ch
Martin Volk
Associate Professor of Computational Linguistics, Universität Zürich

Specialties: Construction and optimization of machine translation systems, exploitation of multilingual text collections, language technology in practical use.
Activities: Leader of several national and international research projects. Organizer of the ZuCL network for language technology professionals in the greater Zürich area. Member of the steering bodies of the degree programs Multilingual Text Analysis at UZH and Library and Information Sciences at the ZB.
Website: www.cl.uzh.ch/volk