Rising Stars in EECS 2015
“MIT is delighted to host such an esteemed group
of women in computer science and electrical
engineering. The Rising Stars program gives you
a terrific opportunity to present your research,
network with your peers, and learn about ways
to pursue a career in academia. It can serve as a
professional launching pad, and I am thrilled you
are here to take part!”
— Cynthia Barnhart
Chancellor
Ford Professor of Engineering
Massachusetts Institute of Technology
“Welcome to MIT! The Rising Stars Workshop has
again brought together some of the most talented women in computer science and electrical
engineering globally. You will help lead research,
education, and the professional community in
these fields, and others, in the years to come. We
hope this program will provide guidance and
inspiration as you launch your careers, and help
foster a strong collegial network that will persist
long into the future.”
— Ian A. Waitz
Dean of Engineering
Jerome C. Hunsaker Professor of Aeronautics and Astronautics
Massachusetts Institute of Technology
From the 2015 Rising Stars Workshop Chairs
Welcome to the 2015 Rising Stars in EECS Workshop at MIT. We launched
Rising Stars in 2012 to identify and mentor outstanding young women
electrical engineers and computer scientists interested in exploring careers in academia. We are pleased that the program has grown substantially since its beginning. This year’s workshop will bring together 62 of
the world’s brightest women PhD students, postdocs, and engineers/scientists working in industry, for two days of scientific interactions and career-oriented discussions aimed at navigating the early stages of careers
in academia.
This year’s program focuses on the academic job search process and how
to succeed as a junior faculty member. Our program includes invited presentations on the academic search process, giving an effective job talk, and developing and refining one’s research and teaching statements. There will also be panels focused on the early years of an academic
career, covering topics such as forming and ramping up a research group,
leadership, work-life balance, fundraising, and the promotions process.
The workshop this year will also feature 24 oral presentations and 38 poster presentations by participants, covering a wide range of specialties representative of the breadth of EECS research. The presentations span the
spectrum from materials, devices and circuits, to signal processing, communications, computer science theory, artificial intelligence and systems.
Many attendees from previous workshops have gone on to secure faculty
positions at top universities, or research positions in leading industry labs.
To support this trajectory, we are pleased to highlight and feature workshop participants by circulating this brochure to the leadership of EECS departments
at top universities and to selected research directors in industry.
We hope, in addition, that Rising Stars will give participants the opportunity to network with peers and present their research, opening the door for
ongoing collaboration and professional support for years to come.
We are very grateful to the supervisors who supported the participation
of the rising stars. We would also like to thank MIT’s School of Engineering,
the Office of the Dean for Graduate Education, and the EECS-affiliated research labs—CSAIL, LIDS, MTL, and RLE—for their support.
We look forward to meeting and interacting with you all.
Anantha Chandrakasan, Workshop Chair
Vannevar Bush Professor of Electrical Engineering and Computer Science
Department Head, MIT Electrical Engineering and Computer Science
Regina Barzilay, Workshop Technical Co-Chair
Professor of Electrical Engineering and Computer Science, MIT
Dina Katabi, Workshop Technical Co-Chair
Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and
Computer Science, MIT
Asu Ozdaglar, Workshop Technical Co-Chair
Professor of Electrical Engineering and Computer Science, MIT
Director, Laboratory for Information and Decision Systems
“The Rising Stars in EECS Workshop provides what
today’s graduates need: opportunities to take
the lead, to present innovative work, to deliver
professional communications, and to address
global, scientific, and ethical issues. Above all,
the conference connects women graduates with
a critical network of mentors, colleagues, and
faculty who will support their academic and
professional success.”
— Christine Ortiz
Dean for Graduate Education
Morris Cohen Professor of Materials Science and Engineering
Massachusetts Institute of Technology
2015 EECS Rising Stars
Henny Admoni Yale University
Ilge Akkaya University of California at Berkeley
Sara Alspaugh University of California at Berkeley
Elnaz Banan Sadeghian Georgia Institute of Technology
Katherine Bouman Massachusetts Institute of Technology
Carrie Cai Massachusetts Institute of Technology
Precious Cantú École Polytechnique Fédérale de Lausanne
Peggy Chi University of California at Berkeley
Hannah Clevenson Massachusetts Institute of Technology
SeyedehAida (Aida) Ebrahimi Purdue University
Motahareh Eslamimehdiabadi University of Illinois at Urbana-Champaign
Virginia Estellers University of California at Los Angeles
Fei Fang University of Southern California
Liyue Fan University of Southern California
Giulia Fanti University of California at Berkeley
Lu Feng University of Pennsylvania
Kathleen Fraser University of Toronto
Marzyeh Ghassemi Massachusetts Institute of Technology
Elena Leah Glassman Massachusetts Institute of Technology
Basak Guler Pennsylvania State University
Divya Gupta University of California at Los Angeles
Judy Hoffman University of California at Berkeley
Hui-Lin Hsu University of Toronto
Carlee Joe-Wong Princeton University
Gauri Joshi Massachusetts Institute of Technology
Ankita Arvind Kejriwal Stanford University
Hana Khamfroush Pennsylvania State University
Hyeji Kim Stanford University
Jung-Eun Kim University of Illinois at Urbana-Champaign
Varada Kolhatkar Privacy Analytics Inc.
Parisa Kordjamshidi University of Illinois at Urbana-Champaign
Ramya Korlakai Vinayak California Institute of Technology
Karla Kvaternik Princeton University
Min Kyung Lee Carnegie Mellon University
Kun (Linda) Li University of California at Berkeley
Hongjin Liang University of Science and Technology of China
Xi Ling Massachusetts Institute of Technology
Fei Liu Carnegie Mellon University
Yu-Hsin Liu University of California at San Diego
Kristen Lurie Stanford University
Jelena Marasevic Columbia University
Ghita Mezzour International University of Rabat
Jamie Morgenstern University of Pennsylvania
Vaishnavi Nattar Ranganathan University of Washington
Xiang Ni University of Illinois at Urbana-Champaign
Dessislava Nikolova Columbia University
Farnaz Niroui Massachusetts Institute of Technology
Idoia Ochoa Stanford University
Eleanor O’Rourke University of Washington
Amanda Prorok University of Pennsylvania
Elina Robeva University of California at Berkeley
Deblina Sarkar Massachusetts Institute of Technology
Melanie Schmidt Carnegie Mellon University
Claudia Schulz Imperial College London
Mahsa Shoaran California Institute of Technology
Eva Song Princeton University
Veronika Strnadova-Neeley University of California at Santa Barbara
Huan Sun University of California at Santa Barbara
Ewa Syta Yale University
Rabia Yazicigil Columbia University
Qi (Rose) Yu University of Southern California
Zhou Yu Carnegie Mellon University
Henny Admoni
PhD Candidate
Yale University
Nonverbal Communication in Human-Robot Interaction
Robotics has already improved
lives by taking over dull, dirty,
and dangerous jobs, freeing
people for safer, more skillful
pursuits. For instance, autonomous mechanical arms weld cars in
factories, and autonomous vacuum cleaners keep floors clean in
millions of homes. However, most currently deployed robotic devices operate primarily without human interaction, and are typically incapable of understanding natural human communication.
My research focuses on enabling human-robot communication in
order to develop social robots that interact with people in natural, effective ways. Application areas include social robots that help
elderly users with tasks like preparing meals or getting dressed;
manufacturing robots that act as intelligent third hands, improving
efficiency and safety for workers; and robot tutors that provide students with personalized lessons to augment their classroom time.
Nonverbal communication, such as gesture and eye gaze, is an integral part of typical human communication. Nonverbal communication happens bidirectionally in an interaction, so social robots must
be able to both recognize and generate nonverbal behaviors. These
behaviors are extremely dependent on context, with different
types of behaviors accomplishing different communicative goals
like directing attention or managing conversational turn-taking. To
be effective in the real world, nonverbal behaviors must occur in
real time in dynamic, unstructured interactions. My research focuses on developing bidirectional, context-aware, real-time nonverbal
behaviors for personally assistive robots. Developing effective nonverbal communication for robots engages a number of disciplines
including autonomous control, machine learning, computer vision,
design, and cognitive psychology. My approach to this research is
three-fold. First, I conduct well-controlled human-robot interaction
studies to understand people’s perceptions of robots. Second, I
build computational models of nonverbal behavior using data from
human-human interactions. Third, I develop robot-agnostic behavior controllers for collaborative human-robot interactions based on
my models of human behavior, and test these behavior controllers
in real-world human-robot interactions.
Bio
Henny Admoni is a PhD candidate at the Social Robotics Laboratory
in the Department of Computer Science at Yale University, where
she works with Professor Brian Scassellati. This winter, Henny will
begin as a Postdoctoral Fellow at the Robotics Institute at Carnegie Mellon University, working with Siddhartha Srinivasa. Henny
creates and studies intelligent, autonomous robots that improve
people’s lives by providing assistance in social environments like
homes and offices. Her dissertation research investigates how robots can recognize and produce nonverbal behaviors, such as eye
gaze and pointing, to make human-robot interactions more natural
and effective for people. Her interdisciplinary work spans the fields
of artificial intelligence, robotics, and cognitive psychology. Henny
holds an MS in Computer Science from Yale University, and a joint BA/MA degree in Computer Science from Wesleyan University.
Henny’s scholarship has been recognized with awards such as the
NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.
Ilge Akkaya
PhD Candidate
University of California at Berkeley

Compositional Actor-Oriented Learning and Optimization for Swarm Applications
The rapid growth of networked smart sensors offers unprecedented volumes of continually streaming data, which renders many traditional control and optimization
techniques ineffective for designing large-scale applications. The
overarching goal of my graduate studies has been enabling seamless composition of distributed dynamic swarm applications. In this
regard, I work on developing actor-oriented frameworks for deterministic and compositional heterogeneous system design.
A primary goal of my graduate work is to mitigate the heterogeneity within Internet-of-Things applications by presenting an actor-oriented framework, which enables developing compositional
learning and optimization applications that operate on streaming
data. Ptolemy Learning, Inference, and Optimization Toolkit (PILOT)
achieves this by presenting a library of reusable interfaces to machine learning, control and optimization tasks for distributed systems. A key goal of PILOT is to enable system engineers who are
not experts in statistics and machine learning to use the toolkit in
order to develop applications that rely on on-line estimation and inference. In this context, we provide domain-specific specializations
of general learning and control techniques, including parameter
estimation and decoding on Bayesian networks, model-predictive
control, and state estimation. Recent and ongoing applications of
the framework include cooperative robot control, real-time audio
event detection, and constrained reactive machine improvisation.
A second branch of my research aims at maintaining separation-of-concerns in model-based design. In industrial cyber-physical systems, composition of sensors, middleware, computation and
communication fabrics yields a highly complex and heterogeneous
design flow. Separation-of-concerns becomes a crucial quality in
model-based design of such systems. We introduce the aspect-oriented modeling (AOM) paradigm, which addresses this challenge
by bridging actor-oriented modeling with aspect-oriented abstractions. AOM specifically enables learning and optimization tasks to
become aspects within a complex design flow, while greatly improving scalability and modularity of heterogeneous applications.
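
To make the actor abstraction concrete, here is a minimal, hypothetical sketch in plain Python (not PILOT’s actual API, which builds on the Ptolemy II framework) of an actor that performs on-line estimation over streaming tokens:

    from queue import Queue

    class Actor:
        # An actor communicates only through token queues; composing actors
        # means wiring one actor's outbox to another's inbox.
        def __init__(self):
            self.inbox, self.outbox = Queue(), Queue()

        def fire(self):  # consume one token, produce one token
            raise NotImplementedError

    class RunningMean(Actor):
        # A tiny on-line estimator: updates a running mean per input token.
        def __init__(self):
            super().__init__()
            self.n, self.mean = 0, 0.0

        def fire(self):
            x = self.inbox.get()
            self.n += 1
            self.mean += (x - self.mean) / self.n
            self.outbox.put(self.mean)

    est = RunningMean()
    for sample in [2.0, 4.0, 6.0]:
        est.inbox.put(sample)
        est.fire()
        print(est.outbox.get())  # 2.0, then 3.0, then 4.0

The point of the pattern is that the estimator’s internals hide behind a uniform streaming interface, so learning, control, and optimization actors can be composed deterministically.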
Bio
Ilge Akkaya is a PhD candidate in the Electrical Engineering and
Computer Science department at UC Berkeley, working with Prof.
Edward A. Lee. She received the BS degree in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey in 2010.
During her graduate studies, she explored systems engineering
for distributed cyber-physical systems, with a focus on distributed
smart grid applications and cooperative mobile robotic control. Her
thesis work centers around actor-oriented machine learning interfaces for distributed swarm applications.
Sara Alspaugh
PhD Candidate
University of California at Berkeley
Characterizing Data Exploration Behavior to Identify Opportunities for Automation
Exploratory analysis is undertaken to familiarize oneself with a
dataset. Despite being a necessary part of any analysis, it remains a
nebulous art defined by an attitude and a collection of techniques,
rather than a systematic methodology. It typically involves manually making hundreds to thousands of individual function calls or
small interactions with a GUI in order to obtain different views of
the data. It is not always clear which views will be effective for a
given dataset or question, how to be systematic about which views
to examine, or how to map a high-level question into a series of
low-level actions to answer it. This results in unnecessary repetition,
disrupted mental flow, ad hoc and hard-to-repeat workflows, and
inconsistent exploratory coverage. Identifying useful, repeatable
exploration workflows, opportunities for automation of tedious
tasks, and intelligent interfaces better suited for expressing exploratory questions, all require a better understanding of data exploration behavior. We seek this through three means:
We analyze interaction records logged from data analysis tools to
identify behavioral patterns and assess the utility of log data for
building intelligent assistance and recommendation algorithms
that learn from user behavior. Preliminary results reveal that while
logs can say which functions are used in which contexts, more comprehensive instrumentation and collection is likely needed to train
intelligent exploration assistants.
We interview experts about their data exploration habits and frustrations to identify good exploratory workflows and ascertain important features not provided by existing tools. Preliminary results
reveal opportunities to make data exploration more thorough and
efficient.
We design and evaluate a prototype for obtaining quick data overviews to assess new interface elements designed to better match
data exploration needs. Preliminary results suggest that small
simple automation in existing tools would decrease user effort, increase exploratory coverage, and help users identify erroneous assumptions more readily.
Bio
Sara Alspaugh is a computer scientist and PhD candidate at UC Berkeley. In her research, she mines user interaction records logged
from data analysis tools to better characterize data exploration behavior, identify challenges and opportunities for automation, and
improve system and interface design. She also conducts qualitative
research through interview studies with expert analysts and usability evaluations of data exploration tools; and has prototyped new
interfaces to help users get an overview of their data. More broadly,
her research interests include data science, data mining, visualization, and user interaction with data analysis tools. She is a member
of the AMPLab and is advised by Randy Katz and Marti Hearst. She
received her MS in Computer Science from UC Berkeley in 2012 and
her BA in Computer Science from the University of Virginia in 2009.
She is the recipient of an NSF Graduate Fellowship, a Tableau Fellowship, and a Department Chair scholarship.
Elnaz Banan Sadeghian
PhD Candidate
Georgia Institute of Technology

Detector for Two-Dimensional Magnetic Recording
Data-industry companies such as Google, Facebook, and Yahoo, along with many other organizations, rely heavily on data storage facilities to store their valuable data. Hard disk drives, due to their reliability and low cost, form a main part of these facilities.
The disk drive industry is currently pursuing a huge increase in recorded data density, up to 10 terabits per square inch of the medium, through two-dimensional magnetic recording (TDMR). I work toward the realization of this technology; specifically, I design a detector that can recover the data from extremely dense hard drives.
This is a challenge, in part because this novel technology shrinks
the widths of the data tracks to such an extent that an attempt to
read data from one track will inevitably lead to interference from
neighboring tracks, and in part because of the challenging nature
of the magnetic medium itself. The combination of interference
between different tracks and along adjacent bits on each track is a
key challenge for TDMR and motivates the development of two-dimensional signal processing strategies of manageable complexity
to mitigate this two-dimensional interference. To address this issue,
we have designed a novel detection strategy for the TDMR recording
channel with multiple read heads. Our method suppresses the intertrack interference and thereby reduces the detection problem
to a traditional one-dimensional problem, so that we may leverage
existing one-dimensional iterative detection strategies. Simulation
results show that our proposed detector is able to reliably recover
five tracks from an array of five read heads at an acceptable signal-to-noise ratio. Further, we are working on a detector that also synchronizes the reader and writer clock speeds so that the data can be extracted more accurately. The results of this research can help greatly increase hard disk capacities through TDMR.
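
As a hedged illustration of the problem structure (a standard two-dimensional intersymbol-interference model, not necessarily the exact channel model used in this work), the readback sample at track $m$ and bit position $n$ can be written as

    y[m,n] = \sum_{k}\sum_{l} h[k,l]\, x[m-k,\, n-l] + w[m,n]

where $x$ holds the recorded bits, $h$ is the two-dimensional read-head response spanning adjacent tracks and bits, and $w$ is noise. The detector must undo both the along-track mixing (over $l$) and the intertrack mixing (over $k$); suppressing the latter first is what reduces the problem to a familiar one-dimensional detection task.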
Bio
Elnaz Banan Sadeghian received the BS degree in electrical engineering from Shahid Beheshti University, Tehran, Iran, in 2005, and
the M.S. degree in Biomedical Engineering from Amirkabir University of Technology, Tehran, Iran, in 2008. She is currently pursuing
her PhD degree in electrical engineering at the Georgia Institute of
Technology, Atlanta, Georgia, USA. Her current research interests
are in the area of signal processing and communication theory,
including synchronization, equalization, and coding as applied to
magnetic recording channels.
Katherine Bouman
PhD Candidate
Massachusetts Institute of Technology

Visual Vibrometry: Estimating Material Properties from Small Motions in Video
The estimation of material properties is important for scene understanding, with many applications
in vision, robotics, and structural engineering. We have connected
fundamentals of vibration mechanics with computer vision techniques in order to infer material properties from small, often imperceptible motion in video. Objects tend to vibrate in a set of preferred modes. The shapes and frequencies of these modes depend
on the structure and material properties of an object. Focusing on
the case where geometry is known or fixed, we have shown how
information about an object’s modes of vibration can be extracted
from video and used to make inferences about that object’s material properties. We demonstrate our approach by estimating material
properties for a variety of rods and fabrics by passively observing
their motion in high-speed and regular framerate video.
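
For intuition, consider a textbook relation (not necessarily the exact model used in this work): Euler-Bernoulli beam theory predicts that a rod of thickness $t$ and length $L$ has transverse vibration mode frequencies

    f_n \propto \frac{t}{L^2}\sqrt{\frac{E}{\rho}}

so, with the geometry known or fixed, the mode frequencies recovered from video constrain the material’s elastic modulus $E$ relative to its density $\rho$.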
Bio
Katherine Bouman received a BSE in Electrical Engineering from the University of Michigan, Ann Arbor, MI in 2011 and an
S.M. degree in Electrical Engineering and Computer Science from
the Massachusetts Institute of Technology (MIT), Cambridge, MA in
2013. She is currently a PhD candidate in the Computer Vision group
at MIT, working under the supervision of Prof. William Freeman.
Katherine is the recipient of the NSF Graduate Fellowship, the Irwin
Mark Jacobs and Joan Klein Jacobs Presidential Fellowship, and is a
Goldwater Scholar. Her research interests include computer vision,
computational photography, and inverse imaging algorithms.
Carrie Cai
PhD Candidate
Massachusetts Institute of Technology

Wait-Learning: Leveraging Wait Time for Education
The busyness of daily life makes
it hard to find time for informal
learning. Yet, learning typically
requires significant time and effort, with repeated exposures to educational content on a recurring basis. My work introduces the concept of wait-learning: leveraging wait time for education. Despite
the struggle to find time for learning, there are numerous times in a
day that are wasted due to brief moments of waiting, such as waiting for the elevator, waiting for wifi to connect, or waiting for an
instant message reply. Combining wait time with productive work
opens up a new class of software systems that overcomes the problem of limited time while addressing the frustration often associated with waiting.
My goal is to understand how to detect and manage these waiting
moments, and to discover essential design principles for wait-learning systems. I have designed and built several systems that enable
wait-learning: WaitChatter delivers second-language vocabulary
exercises while users wait for instant message replies, and FlashSuite integrates learning across diverse kinds of waiting, including
elevators, wifi, and email loading. Through developing and evaluating these systems, we identify waiting moments to use for learning,
and ways to encourage learning unobtrusively while maximizing
engagement. A study of WaitChatter with 20 participants found
that wait-learning can be an effective and engaging way to learn.
During two weeks of casual instant messaging, participants learned
and retained an average of 57 Spanish and French words, or about
four new words per day.
Bio
Carrie is a PhD student in Computer Science at MIT CSAIL. Her dissertation project focuses on wait-learning: leveraging wait time
for education. Broadly, she is interested in developing systems
that help humans learn and improve productivity in environments
with limited time. Her research brings together disciplines in human-computer interaction, education, attention management, and
productivity. Carrie holds a B.A. in Human Biology and M.A. in Education from Stanford University.
Precious Cantú
Fulbright Postdoctoral Fellow
École Polytechnique Fédérale de Lausanne

Patterning via Optical Saturable Transitions
For the past 40 years, optical
lithography has been the patterning workhorse for the semiconductor industry. However, as integrated circuits have become
more and more complex, and as device geometries shrink, more
innovative methods are required to meet these needs. In the far field, the smallest feature that can be generated with light is limited to approximately half the wavelength. This so-called far-field diffraction limit, or Abbe limit (after Prof. Ernst Abbe, who first recognized it), effectively prevents the use of long-wavelength photons (>300 nm) from patterning nanostructures (<100 nm). Even
with a 193nm laser source and extremely complicated processing,
patterns below ~20nm are incredibly challenging to create. Sources
with even shorter wavelengths can potentially be used. However,
these tend to be much more expensive and of much lower brightness,
which in turn limits their patterning speed. Multi-photon reactions
have been proposed to overcome the diffraction limit. However,
these require very large intensities for modest gain in resolution.
Moreover, the large intensities make it difficult to parallelize, thus
limiting the patterning speed. This dissertation develops and experimentally verifies a novel nanopatterning technique using wavelength-selective small molecules that undergo single-photon reactions, enabling rapid top-down nanopatterning over large areas at low light intensities and thereby circumventing the far-field diffraction barrier. This approach, which I refer to as Patterning via Optical Saturable Transitions (POST), has
the potential for massive parallelism, enabling the creation of nanostructures and devices at a speed far surpassing what is currently
possible with conventional optical lithographic techniques. The
fundamental understanding of this technique goes beyond optical
lithography in the semiconductor industry and is applicable to any
area that requires the rapid patterning of large-area two or three-dimensional complex geometries.
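
For concreteness, the Abbe limit referenced above puts the smallest resolvable feature at roughly

    d_{\min} = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{\lambda}{2} \quad (\text{for numerical aperture } \mathrm{NA} \approx 1)

so even a 193 nm source cannot directly pattern features much below about 97 nm in a single exposure, which is why sub-20 nm patterns demand the extremely complicated processing noted above, and why POST instead sidesteps the limit with saturable single-photon transitions.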
Bio
Dr. Precious Cantú is a Postdoctoral Researcher in the Materials Science and Engineering Department at École Polytechnique Fédérale
de Lausanne (EPFL), where she works with Professor Francesco Stellacci in the Supramolecular Nanomaterials and Interfaces Laboratory. She recently received her PhD in Electrical Engineering from the
University of Utah, advised by Prof. Rajesh Menon. Her research area
of interest is Optics and Nanofabrication, with a specific focus on
extending the spatial resolution of optics to the nanoscale. Her PhD
dissertation focused on developing a novel nanopatterning technique using wavelength-selective small molecules.
She is the recipient of the National Science Foundation Graduate
Research Fellowship (NSF GRFP), University of Utah Nanotechnology Training Fellowship, Global Entrepreneurship Monitor Consortium (GEM) Fellowship, More Graduate Education at Mountain
States Alliance (MGE/MSA) Fellowship, and The Fulbright U.S. Scholars Fellowship.
Peggy Chi
PhD Candidate
University of California at Berkeley

Designing Video-Based Interactive Instructions
When aiming to accomplish unfamiliar, complicated tasks, people often search for online help to follow instructions shared by experts or hobbyists. Although the
availability of content sharing sites such as YouTube and Blogger
has led to an explosion in user-generated tutorials, it remains a
challenge for tutorial creators to offer concise and effective content
for learners to put into action. From using software applications,
performing physical tasks such as machine repair and cooking, to
giving a lecture, each domain involves specific “how-to” knowledge
with a certain degree of complexity. Authors therefore need to carefully design what and when to introduce an important concept in
addition to accurately performing the tasks.
My research introduces video editing, recording, and playback tools
optimized for producing and consuming instructional demonstrations. We focus on videos as they are commonly used to capture
a demonstration with rich visual and auditory details. Using video and audio analysis techniques, our goal is to dramatically increase the quality of amateur-produced instructions, which in turn improves learning for viewers who interactively navigate them. We show a
series of proposed systems that create effective tutorials to support
this vision, including MixT that automatically generates mixed-media software instructions, DemoCut that automatically applies video editing effects to a recording of a physical demonstration, and
DemoWiz that provides an increased awareness of upcoming actions through glanceable visualizations.
Bio
Pei-yu (Peggy) Chi designs intelligent systems that enhance and
improve everyday experiences. She is currently a fifth-year PhD student in Computer Science at UC Berkeley, working with Prof. Bjoern
Hartmann on computer-generated interactive tutorials. She received the Google PhD Fellowship in Human Computer Interaction
(2014-2016) and the Berkeley Fellowship for Graduate Study (2011-2013). Peggy earned her MS in Media Arts and Sciences in 2010
from the MIT Media Lab, where she was awarded as a lab fellow and
worked with Henry Lieberman at the Software Agents Group. She
also holds an MS in Computer Science (2008) from National Taiwan
University, where she worked with Hao-hua Chu at the UbiComp
Lab.
Peggy’s research in Human-Computer Interaction focuses on novel
authoring tools for content creation. Her recent work published at
top HCI conferences includes: tutorial generation for software applications and physical tasks, designing and scripting cross-device
interactions, and interactive storytelling for sharing personal media.
Hannah Clevenson
PhD Candidate
Massachusetts Institute of Technology
Sensing and Timekeeping using a Light-Trapping Diamond Waveguide
Solid-state quantum sensors are
attracting wide interest because
of their sensitivity at room temperature. In particular, the spin
properties of individual nitrogen–vacancy (NV) color centers in
diamond make them outstanding nanoscale sensors of magnetic
fields, electric fields, and temperature under ambient conditions.
Recent work on NV ensemble-based magnetometers, inertial sensors, and clocks has employed unentangled color centers to realize
significant improvements in sensitivity. However, to achieve this
potential sensitivity enhancement in practice, new techniques are
required to excite efficiently and to collect the optical signal from
large NV ensembles. Here, we introduce a light-trapping diamond
waveguide geometry with an excitation efficiency and signal collection that enables in excess of 5% conversion efficiency of pump
photons into optically detected magnetic resonance (ODMR) fluorescence—an improvement over previous single-pass geometries
of more than three orders of magnitude. This marked enhancement
of the ODMR signal enables precision broadband measurements of
magnetic field and temperature in the low-frequency range, otherwise inaccessible by dynamical decoupling techniques. We also
use this device architecture to explore other precision sensing and
timekeeping applications.
Bio
Hannah earned her BE (cum laude) in electrical engineering from
Cooper Union in 2011. She was a NASA MUST scholar and spent
four summers working in the nanotechnology division at NASA
Ames Research Center on the Microcolumn Scanning Electron Microscope (MSEMS) project and led a microgravity flight experiment.
She finished her master’s degree at Columbia University in 2013. She
is a NASA Space Technology Research Fellow and spent a summer
as a visiting technologist in the Quantum Sciences and Technology
group at JPL. She is currently a PhD candidate at MIT, splitting her
time between Dirk Englund’s lab on campus and Danielle Braje’s lab
in group 89 at MIT Lincoln Laboratory. Her current research focuses
on precision sensing and timekeeping based on large ensembles of
NV centers in diamond.
SeyedehAida (Aida) Ebrahimi
PhD Candidate
Purdue University

Droplet-Based Impedance Spectroscopy for Highly Sensitive Biosensing within Minutes
Rapid detection of biomolecules in small volumes of highly diluted
solutions is of essential interest in various applications, such as food
safety, homeland security, fast drug screening, and addressing the
global issue of antibiotic resistance. Toward this goal, we developed
a label-free, electrical approach which is based on (i) evaporation-induced beating of the diffusion limit for reducing the sensor response
time and (ii) continuous monitoring of non-Faradic impedance of
an evaporating droplet containing the analytes. Small droplets are
deposited and pinned on a multifunctional, specially designed superhydrophobic sensor which results in highly-controlled evaporation rate, essential for highly-precise data acquisition. Our method
is based on the change of the droplet’s impedance due to ionic
modulation caused by evaporation. The time-multiplexing feature
of the developed platform results in a remarkably reduced data
variation, which is necessary for a reliable biosensing assay. Furthermore, we examined the applicability of the developed technique
as a fast, label-free platform for: improving the detection limit of
classical methods by five orders of magnitude (detection of attomolar concentration of biomolecules), selective identification of
DNA hybridization (down to nM concentration, without any probe
immobilization), and bacterial viability (detection is achieved within minutes, as opposed to hours in conventional methods). More
specifically, the proposed viability assay relies on a basis fundamentally different from that of most bacterial viability assays, which rely on cell multiplication. Instead, our method is based on modulation of the
osmotic pressure to trigger cells to modify their surroundings. The
developed paradigm eliminates the need for bulky reference electrodes (which impose integration challenges), requires only a few
microliter sample volume, and is cost-effective and integrable with microfabrication processes. It therefore has the potential for integration in portable, array-formatted, point-of-care applications.
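
As a hedged sketch of the sensing principle (simple solute conservation, not the authors’ exact model): because ions do not leave with the evaporating water, the ionic concentration in the pinned droplet grows inversely with its shrinking volume,

    c(t) \approx c_0 \frac{V_0}{V(t)}

where $c_0$ and $V_0$ are the initial ion concentration and droplet volume. The droplet’s conductance therefore rises along a predictable trajectory as it evaporates, and analytes that alter the ionic content (for example, viable bacteria responding to the osmotic trigger) shift that impedance trajectory in a measurable way.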
Bio
Aida Ebrahimi received her BSc and MSc degrees both in Electrical
and Computer Engineering from University of Tehran, Iran. Her Master’s project was on fabrication and characterization of highly sensitive capacitive sensors and actuators based on Branched Carbon
Nanotubes (BCNTs). In 2012, she joined the CEED group, under the supervision of Prof. M. A. Alam at Purdue University, West Lafayette, IN,
USA. She is currently pursuing a PhD degree in ECE. The title of her
dissertation is ‘Droplet-based non-Faradaic Impedance Sensing for
Combating Antibiotic Resistance’.
During her academic life, Aida has developed the required skills
to approach scientific problems. She has been involved in various,
yet connected, projects whose outcome has been published in 15
peer-reviewed journal articles and more than 10 conference proceedings. She enjoys diversity in scientific thinking and intertwining various disciplines to advance the state of the art of a specific
problem, especially in health-related applications. Aida is a recipient of the Meissner Fellowship Award (Purdue University, 2011) and the Bilsland Dissertation Fellowship Award (Purdue University, 2015).
Motahareh Eslamimehdiabadi
PhD Candidate
University of Illinois at Urbana-Champaign
Reasoning about Invisible Algorithms in News Feeds
Our daily digital life is full of algorithmically selected content
such as social media feeds, recommendations and personalized
search results. These algorithms have great power to shape users’
experiences, yet users are often unaware of their presence. Whether
it is useful to give users insight into these algorithms’ existence or
functionality and how such insight might affect their experience are
open questions. To address them, we conducted a user study with
40 Facebook users to examine their perceptions of the Facebook
News Feed curation algorithm. Surprisingly, more than half of the
participants (62.5%) were not aware of the News Feed curation algorithm’s existence at all. Initial reactions for these previously unaware participants were surprise and anger. We developed a system, FeedVis, to reveal the difference between the algorithmically
curated and an unadulterated News Feed to users, and used it to
study how users perceive this difference. Participants were most
upset when close friends and family were not shown in their feeds.
We also found participants often attributed missing stories to their
friends’ decisions to exclude them rather than to the Facebook News Feed algorithm. By the end of the study, however, participants were
mostly satisfied with the content on their feeds. Following up with
participants two to six months after the study, we found that for
most, satisfaction levels remained similar before and after becoming aware of the algorithm’s presence; however, algorithmic awareness led to more active engagement with Facebook and bolstered
overall feelings of control on the site.
Bio
Motahhare Eslami is a 4th-year PhD candidate in the Computer Science department at the University of Illinois at Urbana-Champaign. Her research interests are in social computing, human-computer interaction, and data mining. She is interested in performing research to analyze and understand people’s behavior in online social
networks. Her recent work has focused on the effects of feed personalization in social media and how the awareness of filtering algorithm’s existence affects users’ perception and behavior. Her work
has been published at prestigious conferences and has also appeared internationally in the press, including the Washington Post, TIME, MIT Technology Review, New Scientist, the BBC, CBC Radio, O Globo (a prominent Brazilian newspaper), Fortune, and numerous blogs. Motahhare was nominated for a Google PhD Fellowship (2015) by the University of Illinois as one of the two students from the
entire College of Engineering. Her research has received honorable
mention award at Facebook Midwest Regional Hackathon 2013 and
the best paper award at CHI 2015.
Virginia Estellers
Postdoctoral Fellow
University of California at Los Angeles

Robust Models and Efficient Algorithms for Imaging
I work on mathematical modeling
and computational techniques
for imaging. I am interested in the
theoretical and physical aspects of the acquisition of images, their
mathematical representations, and the development of efficient algorithms to extract information from them. To this end, I focus
on three lines of research.
Better Models in Image Processing: My dissertation focused on
variational models for inverse problems in imaging, that is, the design of minimization problems that reconstruct or analyze an image from incomplete and corrupted measurements. To overcome
the ill-posed nature of these problems, prior knowledge about the
solution — its geometry, shape, or smoothness — is incorporated
into a mathematical model that both matches the measurements
and is physically meaningful.
Efficient Algorithms: In the same way that simplifying an algebraic
expression speeds its computation and reduces numerical errors,
developing an efficient algorithm reduces the computational cost
and errors of the numerical minimization. For this reason, my work
focuses also on developing algorithms tailored to each problem to
overcome the limitations of non-differentiable functionals, high-order derivatives, and non-convex problems.
Stepping out of the Image Plane: Computer vision analyzes 3D scenes from 2D images or videos and therefore requires stepping out of the image plane to develop models that account for the
3D nature of the scene, modeling their geometry and topology to
account for the occlusions and shadows observable in videos and
images.
My research, in a nutshell, brings together models and algorithms on solid mathematical grounds to design techniques that extract only the information that is meaningful for the problem at hand.
It incorporates the knowledge available on the solution into the
mathematical model of the problem, chooses a discretization suited to the object being imaged, and designs optimization strategies
that scale well and are easy to parallelize.
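
As a hedged, generic instance of the variational models described above (not a specific model from the dissertation), an image $u$ is recovered from corrupted measurements $f$ by minimizing a data-fidelity term plus a regularizer encoding the prior:

    \hat{u} = \arg\min_u \; \tfrac{1}{2}\|Au - f\|_2^2 + \lambda\,\mathrm{TV}(u)

where $A$ models the acquisition (for example blur or subsampling), $\mathrm{TV}(u)$ is the total variation favoring piecewise-smooth solutions, and $\lambda$ balances fidelity against the prior. The non-differentiability of the TV term is exactly the kind of obstacle that the tailored algorithms above are designed to handle.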
Bio
Dr. Estellers received her PhD in image processing from École Polytechnique Fédérale de Lausanne in 2013, and joined the UCLA Vision Lab as a postdoctoral fellow with an SNSF fellowship. Prior to that, she completed Bachelor and Master studies at the Polytechnic University of Catalonia in both Mathematics and Electrical
Engineering.
Fei Fang
PhD Candidate
University of Southern California

Towards Addressing Spatio-Temporal Aspects in Security Games
My research aims to provide
game-theoretic solutions for fundamental challenges of security
resource optimization in the real world, in domains ranging from infrastructure protection to sustainable development. Whereas the first generation of “security games” research provided algorithms for optimizing security resources in mostly static settings, my thesis advances the state of the art to a new generation of security games,
handling massive games with complex spatio-temporal settings
and leading to real-world applications that have fundamentally
altered current practices of security resource allocation. My work
provides the first algorithms and models for advancing three key
aspects of spatio-temporal challenges in security games. First, focusing on games where actions are taken over continuous time (for
example games with moving targets such as ferries and refugee
supply lines), I provide an efficient linear-programming-based solution while accurately modeling the attacker’s continuous strategy.
This work has been deployed by the US Coast Guard for protecting the Staten Island Ferry in New York City over the past few years, fundamentally altering previously used tactics. Second, for games where
actions are taken over continuous space (for example games with
forest land as target), I provide an algorithm computing the optimal
distribution of patrol effort. Third, my work addresses challenges
with one key dimension of complexity — the temporal change of
strategy. Motivated by the repeated interaction of players in domains such as preventing poaching and illegal fishing, I introduce
a novel game model that accounts for temporal behavior change
of opponents and provide algorithms to plan effective sequential
defender strategies. Furthermore, I incorporate complex terrain
information and design the PAWS application to combat illegal
poaching, which generates patrol plans with detailed patrol routes
for local patrollers. PAWS has been deployed in a protected area in
Southeast Asia, with plans for worldwide deployment.
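
For background, a standard security-game formulation (not necessarily the exact models used in this work) captures the core trade-off: if target $t$ is covered with probability $c_t$, the defender’s and attacker’s expected utilities when $t$ is attacked are

    U_d(t, c_t) = c_t R_d(t) + (1 - c_t) P_d(t), \qquad U_a(t, c_t) = c_t P_a(t) + (1 - c_t) R_a(t)

where $R$ and $P$ denote reward and penalty values. The defender chooses the coverage vector, subject to resource constraints, anticipating the attacker’s best response; the continuous-time, continuous-space, and repeated-game settings above each generalize how the coverage $c_t$ is defined and scheduled.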
Liyue Fan
Postdoctoral Research Associate
University of Southern California

Preserving Individual Privacy in Big Data Analytics
We live in the age of big data. With
an increasing number of people,
devices, and sensors connected
with digital networks, individual
data can now be collected and analyzed at scale by data mining applications for social good as well as for commercial interests. However, the data generated by individual users contain unique behavioral patterns and sensitive information, and therefore must be transformed prior to release for analysis. The AOL search log release in 2006 is an example of a privacy catastrophe, where the searches of
an innocent citizen were quickly re-identified by a newspaper journalist. In this talk, I present a novel framework to release continuous
aggregation of private data for an important class of real-time data
mining tasks, such as disease outbreak detection and web mining,
to name a few. The key innovation is that the proposed framework
captures the underlying dynamics of the continual aggregate statistics with time series state-space models, and simultaneously adopts
filtering techniques to correct the observed, noisy data. It can be
shown that the new framework provides a rigorous, provable privacy guarantee to individual data contributors without compromising
the output analysis results. I will also talk about my current research,
including the extension of the framework to spatial crowd-sourcing
and privacy-preserving machine learning in a distributed research
network.
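
A minimal sketch of the release-then-correct idea in Python (hypothetical parameters, and a simple random-walk state model rather than the framework’s actual time-series models):

    import numpy as np

    rng = np.random.default_rng(0)

    def laplace_perturb(counts, sensitivity=1.0, epsilon=0.1):
        # Laplace mechanism: noise scale sensitivity/epsilon per released count.
        return counts + rng.laplace(0.0, sensitivity / epsilon, size=counts.shape)

    def kalman_smooth(noisy, q, r):
        # 1-D Kalman filter: random-walk state observed through additive noise;
        # r approximates the Laplace noise variance, 2 * (sensitivity/epsilon)**2.
        est = np.empty_like(noisy)
        x, p = noisy[0], r
        est[0] = x
        for t in range(1, len(noisy)):
            p += q                    # predict: uncertainty grows by process noise
            k = p / (p + r)           # Kalman gain
            x += k * (noisy[t] - x)   # correct using the noisy release
            p *= 1 - k
            est[t] = x
        return est

    true = 100 + np.cumsum(rng.normal(0, 1, 200))  # hypothetical aggregate series
    noisy = laplace_perturb(true)
    smoothed = kalman_smooth(noisy, q=1.0, r=2 * (1.0 / 0.1) ** 2)
    print(np.mean((noisy - true) ** 2), np.mean((smoothed - true) ** 2))

The privacy guarantee comes entirely from the perturbation step; the filtering step is post-processing, so it improves utility without weakening the guarantee.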
Bio
Liyue Fan is a postdoctoral research associate at the Integrated Media Systems Center at USC. She holds a PhD in Computer Science
and Informatics from Emory University and a BSc in Mathematics
from Zhejiang University in China. Her PhD dissertation research
centers around the development of data publication algorithms
which provide rigorous guarantee for individual privacy without
compromising output utility. Since joining USC, she has also worked on spatial crowd-sourcing, transportation, and healthcare informatics.
Bio
Fei Fang is a PhD candidate in the Department of Computer Science at the University of Southern California. She is working with Professor Milind Tambe in the Teamcore Research group. She received her bachelor’s degree from the Department of Electronic Engineering, Tsinghua University, in July 2011. Her research lies in the field of artificial
intelligence and multi-agent systems, focusing on computational
game theory with applications to security and sustainability domains. Her work on “Protecting Moving Targets with Mobile Resources” has been deployed by the US Coast Guard for protecting
the Staten Island Ferry in New York City since April 2013. This work
has led to her receiving the Meritorious Team Commendation from the Commandant of the US Coast Guard and a Flag Letter of Appreciation from a Vice Admiral, and she was named a poster competition finalist in
the First Conference on Validating Models of Adversary Behaviors
(2013). Her work on “When Security Games Go Green: Designing
Defender Strategies to Prevent Poaching and Illegal Fishing” won
the Outstanding Paper Award in IJCAI-15 Computational Sustainability Track. She is the chair of the AAAI Spring Symposium 2015
on Applied Computational Game Theory and the recipient of the WiSE Merit Fellowship (2014).
Giulia Fanti
PhD Candidate
University of California at Berkeley

Spy vs. Spy: Anonymous Messaging
Anonymous microblogging platforms, such as Secret, Yik Yak, and
Whisper, have emerged as important tools for sharing one’s thoughts without fear of judgment
by friends, the public, or authority figures. These platforms provide
anonymity by allowing users to share content (e.g., short messages) with their peers without revealing authorship information to
users. However, recent advances in rumor source detection show
that existing messaging protocols, including those used in the aforementioned anonymous microblogging applications, leak authorship information when the adversary has global access to metadata. For
example, if an adversary can see which users of a messaging service
received a particular message, or the timestamps at which a subset
of users received a given message, the adversary can infer the message author’s identity with high probability. We introduce a novel
anonymous messaging protocol, which we call adaptive diffusion,
that is designed to resist such adversaries. We show that adaptive
diffusion spreads messages quickly while achieving provably-optimal anonymity guarantees when the underlying messaging network is an infinite regular tree. Simulations on real social network
data show that adaptive diffusion effectively hides the location of
the source even when the graph is finite, irregular and has cycles.
Bio
Giulia Fanti is a 6th year PhD student at the University of California-Berkeley, studying privacy-preserving algorithms under Professor Kannan Ramchandran. She received her M.S. in EECS from the
University of California-Berkeley in 2012 and her B.S. in Electrical and
Computer Engineering from Olin College of Engineering in 2010.
She is a recipient of the National Science Foundation Graduate Research Fellowship, as well as a Best Paper Award at ACM Sigmetrics
2015 for her work on anonymous rumor spreading, in collaboration
with Peter Kairouz, Professor Sewoong Oh and Professor Pramod
Viswanath of the University of Illinois at Urbana-Champaign.
Lu Feng
Postdoctoral Fellow
University of Pennsylvania

Assuring the Safety and Security of Cyber-Physical Systems
Cyber-Physical Systems (CPS)—
also called the Safety-Critical Internet of Things—are smart systems that include co-engineered interacting networks of physical
and computational components. These highly interconnected and
integrated systems provide new functionalities to improve quality of
life and enable technological advances in critical areas, such as smart
healthcare, transportation, manufacturing, and energy. The increasing complexity and scale of CPS, with high-level expectations of autonomous operation, predictability and robustness, in the presence
of environmental uncertainty and resource limitations, pose significant challenges for assuring the safety and security of CPS.
My research is focused on assuring the safety, security and dependability of CPS, through formal methods and data-driven approaches,
with particular emphasis on probabilistic modeling and quantitative verification. My doctoral thesis work improves the scalability of
probabilistic model checking—a powerful formal verification method that focuses on analyzing quantitative properties of stochastic
systems—by developing, for the first time, fully automated compositional verification techniques for probabilistic systems.
My current postdoctoral research includes two themes. One theme
is medical CPS, which are life-critical, context-aware, networked systems of medical devices. For example, I have worked on assuring
the interoperability of on-demand plug & play medical devices, and
model-based development of high-confidence medical devices.
Another theme of my current work is human-in-the-loop CPS. I collaborate with clinicians and develop a data-driven modeling framework for studying the behavior of diabetic patients who depend
on insulin pumps. The research outcome could potentially assist in
developing safer, more effective, and even personalized treatment
devices. In another project, with my collaborators at the Air Force
Research Lab, I develop approaches for synthesizing provably correct human-in-the-loop control protocols for unmanned aerial vehicles (UAVs). My other ongoing projects include human factors in CPS
security assurance, and operator behavior signatures for the haptic
authentication of surgical robots.
Bio
Lu Feng is a postdoctoral fellow at the PRECISE Center and Department of Computer & Information Science at the University of Pennsylvania, advised by Professor Insup Lee. She received her DPhil
(PhD) in Computer Science from the University of Oxford in 2014,
under the supervision of Professor Marta Kwiatkowska. She also
holds a B.Eng. in Information Engineering from the Beijing University
of Posts and Telecommunications and a M.Phil. in Computer Speech,
Text and Internet Technology from the University of Cambridge. Lu is
a recipient of the prestigious James S. McDonnell Foundation Postdoctoral Fellowship, which only selects 10 fellows internationally
and trans-disciplinary each year. She has also received various other
awards, including the ACM SIGMOBILE N2 Women Young Researcher
Fellowship, UK Engineering and Physical Sciences Research Council
Graduate Scholarship, and Cambridge Trust Scholarship.
Kathleen Fraser
PhD Candidate
University of Toronto

Text and Speech Processing for the Detection of Dementia
It has been shown that language
can be a sensitive barometer of
cognitive health. However, current approaches to screening and
diagnosis for dementia do not typically include a detailed analysis
of spontaneous speech because the manual annotation of language samples is far too time-consuming. Using methods from
natural language processing and machine learning, we have been
able to extract relevant linguistic and acoustic features from short
speech samples and their transcripts to predict whether the speaker has Alzheimer’s Disease with 92% accuracy. We have also investigated a type of dementia called primary progressive aphasia (PPA),
in which language ability is the primary impairment. In addition to
determining whether participants had PPA or not, we were able to
distinguish between semantic-variant PPA and agrammatic-variant
PPA by incorporating features to detect signs of empty speech and
syntactic simplification. Another component of my current work
involves improving automatic speech recognition for cognitive assessment. By developing computational tools to collect, analyze,
and interpret language data from cognitively impaired speakers, I
hope to provide the groundwork for numerous potential applications, including remote screening, support for diagnosis, assistive
technologies for community living, and the quantitative evaluation
of therapeutic interventions.
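
To illustrate the pipeline shape (a toy sketch with invented transcripts and generic lexical features, not the authors’ engineered linguistic and acoustic feature set):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Toy picture-description transcripts: 1 = impaired sample, 0 = control.
    transcripts = [
        "the boy is um taking the uh the thing from the the jar",
        "she is washing dishes while the sink overflows onto the floor",
        "it is a um a thing you know the the stuff over there",
        "the girl asks her brother to hand her a cookie from the jar",
    ]
    labels = [1, 0, 1, 0]

    # Word and bigram weights stand in for engineered lexical features.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    print(cross_val_score(model, transcripts, labels, cv=2))

In the actual work, classifiers are trained on linguistic and acoustic features extracted from the speech samples and their transcripts, rather than on raw n-gram counts.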
Marzyeh Ghassemi
PhD Candidate
Massachusetts Institute of Technology

Estimating the Response and Effect of Clinical Interventions
Much prior work in clinical modeling has focused on building
discriminative models to detect specific, easily coded outcomes
with little clinical utility (e.g., hospital mortality) under specific ICU
settings, or understanding the predictive value of various types of
clinical information without taking interventions into account.
In this work, we focus on understanding the impact of interventions
on the underlying physiological reserve of patients in different clinical settings. Reserve can be thought of as the latent variability in
patient response to treatment after accounting for their observed
state. Understanding reserve is therefore important to performing
successful interventions, and can be used in many clinical settings.
I attempt to understand reserve in response to intervention in
two settings: 1) the response of intensive care unit (ICU) patients
to common clinical interventions like vasopressor and ventilation
administration in the ICU, and 2) the response of voice patients to
behavioral and surgical treatments in an ambulatory outpatient setting. In both settings, we use large sets of clinical data to investigate
whether specific interventions are meaningful to patients in an empirically sound way.
Bio
Katie Fraser is a PhD candidate at the University of Toronto in the
Computational Linguistics group, where her main research interests
are text processing, automatic speech recognition, and machine
learning. She is particularly interested in how these techniques can
be used to assess potential cognitive impairment. She received a
Master of Computer Science degree from Dalhousie University in
Halifax, Nova Scotia, where she developed techniques for reducing
noise and blur in microscope images. Before that, she researched
the structure and dynamics of glass-forming liquids as part of her
Bachelor of Science in Physics at St. Francis Xavier University in Antigonish, Nova Scotia.

Bio
Marzyeh Ghassemi is a PhD student in the Clinical Decision Making
Group (MEDG) in MIT’s Computer Science and Artificial Intelligence
Lab (CSAIL) supervised by Prof. Peter Szolovits. Her research uses
machine learning techniques and statistical modeling to predict
and stratify relevant human risks.
Marzyeh is interested in creating probabilistic latent variable models to estimate the underlying physiological state of patients during
critical illnesses. She is also interested in understanding the development and progression of conditions like hearing loss and vocal
hyperfunction using a combination of sensor data, clinical observations, and other physiological measurements.
While at MIT, Marzyeh has served on MIT’s Women’s Advisory Group
Presidential Committee, as Connection Chair to the Women in
Machine Learning Workshop, on MIT’s Corporation Joint Advisory
Committee on Institute-wide Affairs, and on MIT’s Committee on
Foreign Scholarships. Prior to MIT, Marzyeh received two B.S. degrees in computer science and electrical engineering with a minor in applied mathematics from New Mexico State University as
a Goldwater Scholar, and an MSc degree in biomedical engineering
from Oxford University as a Marshall Scholar. She also worked at Intel Corporation in the Rotation Engineering Program, and then as a
Market Development Manager for the Emerging Markets Platform
Group.
Elena Leah Glassman
PhD Candidate
Massachusetts Institute of Technology

Systems for Teaching Programming and Hardware Design at Scale
In a massive open online course
(MOOC), a single programming
exercise may yield thousands of student solutions that vary in many
ways, some superficial and some fundamental. Understanding
large-scale variation in programs is a hard but important problem.
For teachers, this variation can be a source of pedagogically valuable examples and expose corner cases not yet covered by autograding. For students, the variation in a large class means that other
students may have struggled along a similar solution path, hit the
same bugs, and can offer hints based on that earned expertise.
I have developed three systems to explore solution variation in
large-scale programming and computer architecture classes. (1)
OverCode visualizes thousands of programming solutions using static and dynamic analysis to cluster similar solutions. It lets
teachers quickly develop a high-level view of student understanding and misconceptions and provide feedback that is relevant to
many student solutions. (2) Foobaz clusters variables in student
programs by their names and behavior so that teachers can give
feedback on variable naming. Rather than requiring the teacher to
comment on thousands of students individually, Foobaz generates
personalized quizzes that help students evaluate their own names
by comparing them with good and bad names from other students.
(3) ClassOverflow collects and organizes solution hints indexed by
the autograder test that failed or a performance characteristic like
size or speed. It helps students reflect on their debugging or optimization process, generates hints that can help other students with
the same problem, and could potentially bootstrap an intelligent
tutor tailored to the problem. All three systems have been evaluated using data or live deployments in on-campus or edX courses
with thousands of students.
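To make the clustering idea concrete, here is a minimal, hypothetical sketch of behavior-based grouping in the spirit of OverCode; the real system also applies static analysis and common-variable renaming, and all names and test inputs below are illustrative assumptions:

```python
from collections import defaultdict

# Group student solutions by the outputs they produce on shared test
# inputs -- a crude stand-in for OverCode's static + dynamic analysis.
solutions = {
    "student_1": lambda n: sum(range(1, n + 1)),
    "student_2": lambda n: n * (n + 1) // 2,
    "student_3": lambda n: n * n,  # a buggy submission
}
test_inputs = [0, 1, 5, 10]

clusters = defaultdict(list)
for name, solution in solutions.items():
    fingerprint = tuple(solution(x) for x in test_inputs)  # dynamic behavior
    clusters[fingerprint].append(name)

for fingerprint, names in clusters.items():
    print(names, "->", fingerprint)
# student_1 and student_2 land in one cluster; student_3 is isolated.
```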
Bio
Elena Glassman is an EECS PhD candidate at the MIT Computer Science and Artificial Intelligence Lab, where she specializes in human-computer interaction. For her dissertation, Elena has created tools that
help teach programming and hardware design to thousands of students at once. She uses theories from the learning sciences, as well
as the pain points of students and teachers, to guide the creation
of new systems for teaching and learning online and at scale. Elena earned both her MIT EECS BS and MEng degrees in ‘08 and ‘10,
respectively, with a Ph.D. expected in ‘16. She has been a visiting
researcher at Stanford and an intern at Google and Microsoft Research. She earned the NSF and NDSEG fellowships and MIT’s Amar
Bose Teaching Fellowship. She also leads the MIT chapter of MEET,
which helps teach gifted Palestinians and Israelis computer science
and teamwork in Jerusalem.
Basak Guler
PhD Candidate
Pennsylvania State University
Interaction, Communication, and Computation in Information and Social Networks
Modern networks are designed
to facilitate the interaction of
humans with computers. These networks consist of actors with
possibly different characteristics, goals, and interests. My research
takes a mathematical approach to modeling semantic and social
networks. I study the fundamental limits of the information transferred in real-world networks, and develop algorithms to make
network applications human-centric. Unlike conventional communication networks, this necessitates taking into account the semantic relationships between words, phrases, or clauses, as well as the
personal background, characteristics, and knowledge bases of the
interacting parties. These differences can in turn lead to various interpretations of the received information in a communication system.
Modern network systems should be able to operate under such ambiguous environments, and adapt to the interpretation differences
of the communicating parties. My goal is to incorporate these individual characteristics for designing effective network models that
can leverage and adapt to the semantic and social features of the
interacting parties. To do this, my research takes an interdisciplinary approach, rooted in information theory and optimization, and incorporates social networks and mathematical logic. As such, we
consider a diverse set of problems ranging from lossless and lossy
source coding to reliable communication with social structures. We
identify the optimal strategies to represent a remotely observed
phenomenon when the communicating parties have individual
and common backgrounds, as well as optimal interaction protocols
for exchanging messages with semantic relationships.
Bio
Basak Guler received her BSc degree in electrical and electronics
engineering from Middle East Technical University (METU), Ankara,
Turkey in 2009 and her M.Sc. degree in electrical engineering from
Wireless Communications and Networking Laboratory, Pennsylvania State University, University Park, PA, in 2012. She is currently pursuing the PhD degree and is a graduate research assistant with the
Department of Electrical Engineering, Pennsylvania State University, University Park, PA. Her research interests include information
theory, social networks, semantic communications, source coding,
data compression, interactive communication, and heterogeneous
wireless networks.
Divya Gupta
PhD Candidate
University of California at Los Angeles
Hosting Services on an Untrusted Cloud
Outsourcing computation from
a weak client to a more powerful
server has received a lot of attention in recent years. This is partly
due to the increasing interest in cloud computing, where the goal is
to outsource all the computations to a (possibly untrusted) “cloud”.
Though this is quickly becoming the predominant mode of day-to-day computation, it brings with it many security challenges, and a large number of papers have addressed them. In our
work, we expand the realm of outsourcing computation to more
challenging security and privacy settings.
We consider a scenario where a service provider has created a software service and desires to outsource the execution of this service
to an untrusted cloud. The software service contains secrets that
the provider would like to keep hidden from the cloud. For example, the software might contain a secret database, and the service
could allow users to make queries to different slices of this database
depending on the user’s identity.
This setting presents significant challenges not present in previous
works on outsourcing or secure computation because secrets in the
software itself must be protected against an adversary that has full
control over the cloud that is executing this software. Furthermore,
we seek to protect knowledge of the software to the maximum extent possible even if the cloud can collude with several corrupted
users of this service.
In this work, we provide the first formalizations of security for this
setting, yielding our definition of a secure cloud service scheme. We
also provide constructions of secure cloud service schemes using
cryptographic tools.
Bio
Divya Gupta is a doctoral candidate in the Department of Computer
Science at University of California at Los Angeles, where she started
in the Fall of 2011 under the supervision of Prof. Amit Sahai. Her
research interests include cryptography, security, and theoretical
computer science. Before coming to UCLA, she graduated with a
B.Tech. and an M.Tech. from IIT Delhi.
Judy Hoffman
Postdoctoral Research Associate
University of California at Berkeley
Adapting Deep Visual Models for Visual Recognition in the Wild
Understanding visual scenes is
a crucial piece in many artificial
intelligence applications ranging from autonomous vehicles and
household robotic navigation to automatic image captioning for
the blind. Reliably extracting high-level semantic information from
the visual world in real-time is key to solving these critical tasks
safely and correctly. Existing approaches based on specialized
recognition models are prohibitively expensive or intractable due
to limitations in dataset collection and annotation. By facilitating
learned information sharing between recognition models, these applications can be solved: multiple tasks can regularize one another,
redundant information can be reused, and the learning of novel
tasks is both faster and easier.
My work focuses on transferring learned information quickly and
reliably between visual data sources and across visual tasks, all
with limited human supervision. I aim to both formally understand
and empirically quantify the degree to which visual models can be
adapted and provide algorithms to facilitate information transfer.
Most visual recognition systems learn concepts directly from a large
collection of manually annotated images/videos. A model which
detects pedestrians requires a human to manually go through thousands or millions of images and indicate all instances of pedestrians.
However, this model is susceptible to biases in the labeled data and
often fails to generalize to new scenarios — a detector trained in
Palo Alto may have degraded performance in Rome, or a detector
trained in sunny weather may fail in the snow. Rather than require
human supervision for each new task or scenario, my work draws on
deep learning, transformation learning, and convex-concave optimization to produce novel optimization frameworks which transfer
information from the large curated databases to real world scenarios. This results in strong recognition models for novel tasks and
paves the way towards scalable visual understanding.
Bio
Judy Hoffman is a PhD candidate in UC Berkeley's Computer Vision
Group. She received her B.Sc. in Electrical Engineering and Computer Science from UC Berkeley in 2010. Her research lies at the intersection of computer vision, transfer learning, and machine learning:
she is interested in minimizing the amount of human supervision
needed to learn new visual recognition models. Judy was awarded
the NSF Graduate Research Fellowship in 2010 and the Rosalie M. Stern Fellowship in 2010. She was co-president of Women in Computer Science and Engineering at UC Berkeley (2012-2013), the outreach and diversity officer for the Computer Science Graduate Association (2013-2014), and organized the first Women in Computer Vision workshop, held at CVPR 2015.
Hui-Lin Hsu
Research Assistant
University of Toronto
Reduction in the Photoluminescence Quenching for Erbium-Doped Amorphous Carbon Photonic Materials by Deuteration and Fluorination
The integration of photonic materials into CMOS processing involves the use of new materials. A simple one-step metal-organic
radio frequency plasma enhanced chemical vapor deposition system (RF-PEMOCVD) was deployed to grow erbium-doped amorphous carbon thin films (a-C:(Er)) on Si substrates at low temperatures (<200°C). A partially fluorinated metal-organic compound,
tris(6,6,7,7,8,8,8-heptafluoro-2,2-dimethyl-3,5-octanedionate)
Erbium(+III), abbreviated Er(fod)3, was incorporated in situ into an a-C-based host. It was found that the prominent room-temperature photoluminescence (PL) signal at 1.54 µm observed from the
a-C:H:F(Er) film is attributed to several factors including a high Er
concentration, the large optical bandgap of the a-C:H host, and the
decrease in the C-H quenching by partial C-F substitution of the metal-organic ligand. In addition, a six-fold enhancement of Er PL was demonstrated by deuteration of the a-C host. Also, the effect of RF
power and substrate temperature on the PL of a-C:D:F(Er) films was
investigated and analyzed in terms of the film structure. The PL signal increases with increasing RF power, which is the result of an increase
in [O]/[Er] ratio and the respective erbium-oxygen coordination
number. Moreover, PL intensity decreases with increasing substrate
temperature, which is attributed to an increased desorption rate or
a lower sticking coefficient of the fluorinated fragments during film
growth and hence [Er] decreases. In addition, it is observed that Er
concentration quenching begins at ~2.2 at% and continues to increase until 5.5 at% in the studied a-C:D:F(Er) matrix. This technique
provides the capability of doping Er in a vertically uniform profile.
Bio
Hui-Lin Hsu is a PhD graduate in Electrical Engineering (Photonics)
from the University of Toronto, with M.S. and B.S. degrees in Materials Science and Engineering from National Tsing Hua University, Taiwan. Her research interests are in thin film and nano-material processing, materials characterization, and microelectronic and
photonic device fabrication. Hui-Lin has completed four different
research projects (Organic Thin Film Transistors (OTFTs), Flexible
Carbon Nanotubes Electrodes for Neuronal Recording, Si Nanowire
for Optical Waveguide Interconnection Application, and Rare Earth
doped Amorphous Carbon Based Thin Films for Light Guiding/Amplifying Applications). Hui-Lin has also first-authored 3 patents (1 in the USA, 2 in Taiwan) and 5 SCI journal articles, co-authored 9 SCI journal articles, and given 15 international conference presentations. She
did internships at Taiwan Semiconductor Manufacturing Company
(TSMC) and Industrial Technology Research Institute (ITRI). She is
also a recipient of a 2008 scholarship for studying abroad from the Taiwanese government, and an invited participant in the 2007 Taiwan
Semiconductor Young Talent Camp held by Applied Materials and
the 2015 ASML PhD master class.
Carlee Joe-Wong
PhD Candidate
Princeton University
Smart(er) Data Pricing
Over the past decade, many more
people have begun to use the Internet regularly, and the proliferation of mobile apps allows them
to use the Internet for more and
more tasks. As a result, data traffic is growing nearly exponentially.
Yet network capacity is not expanding fast enough to handle this
growth in traffic, creating a problem of network congestion. My research argues that the very diversity in usage that is driving growth
in data traffic points to a viable solution for this fundamental capacity problem.
Smart data pricing reduces network congestion by looking at the
users who drive demand for data. In particular, we ask what incentives will alter user demand so as to reduce congestion, and perhaps more importantly, what incentives should we offer users in
practice? For instance, simply raising data prices or throttling data
throughput rates will likely drive down demand, but also lead to
vast user dissatisfaction. More sophisticated pricing schemes may
not work in practice, as they require users to understand the prices offered and algorithms to predict user responses. We demonstrate the feasibility and benefits of a smart data pricing approach
through end-to-end investigations of what prices to charge users,
when to charge which prices, and how to price supplementary network technologies.
Creating viable pricing solutions requires not only mathematical
models of users’ reactions to the prices offered, but also knowledge
of systems-building and human-computer interaction. My work
develops a feedback loop between optimizing the prices, offering
them to users, and measuring users’ reactions to the prices so as
to re-calibrate the prices over time. My current research expands
on this pricing work by studying users’ incentives to contribute towards crowd-sourced data. Without properly designed incentive
mechanisms, users might “free-ride” on others’ measurements or
collect redundant measurements at a high cost to themselves.
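The measure-and-recalibrate feedback loop can be sketched in a few lines; the hidden linear demand curve, step size, and capacity below are illustrative assumptions, not models from this work:

```python
import numpy as np

# Toy pricing feedback loop: raise the price when observed demand exceeds
# network capacity, lower it otherwise, re-measuring demand each round.
rng = np.random.default_rng(0)
capacity = 6.0
price, step = 1.0, 0.05

for t in range(200):
    # Hidden user response (unknown to the operator): d = 10 - 2 * price.
    demand = max(0.0, 10.0 - 2.0 * price + rng.normal(0, 0.2))
    price = max(0.0, price + step * (demand - capacity))  # re-calibrate

print("price %.2f -> demand %.2f (capacity %.1f)" % (price, demand, capacity))
```

Under these assumptions the loop settles near the price that keeps demand at capacity (here, 2.0), illustrating how measured reactions feed back into the offered prices.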
Bio
Carlee Joe-Wong is a PhD candidate and Jacobus Fellow at Princeton University’s Program in Applied and Computational Mathematics. Her research interests include network economics, distributed
systems, and optimal control. She received her A.B. in mathematics
in 2011 and her M.A. in applied mathematics in 2013, both from
Princeton University. In 2013, she was the Director of Advanced Research at DataMi, a startup she co-founded in 2012 that commercializes new ways of charging for mobile data. DataMi was named a
“startup to watch” by Forbes in 2014. Carlee received the INFORMS
ISS Design Science Award in 2014 for her research on smart data
pricing, and the Best Paper Award at IEEE INFOCOM 2012 for her
work on the fairness of multi-resource allocations. In 2011, she received the National Defense Science and Engineering Graduate Fellowship (NDSEG).
Gauri Joshi
PhD Candidate
Massachusetts Institute of Technology
Using Redundancy to Reduce Delay in Cloud Systems
It is estimated that by 2018, more
than thirty percent of all digital
content will be stored and processed on the cloud. The term ‘cloud’
refers to a shared pool of a large number of connected servers, used
to host services such as Dropbox, Amazon EC2, and Netflix. The sharing of resources provides scalability and flexibility to cloud systems,
but it also causes randomness in the response time of individual
servers, which can result in large and unpredictable delays experienced by users. My research develops techniques to use redundancy to reduce delay, while using the available resources efficiently.
In cloud storage and computing systems, a task (e.g., searching
for a term on Google, or accessing a file from Dropbox) experiences
random queuing and service delays at the machine it is assigned
to. To reduce the overall latency, we can launch replicas of the task
on multiple machines and wait for the earliest copy to finish, albeit
at the expense of extra computing and network resources. We develop a fundamental understanding of how the randomness in the response time of a server affects latency and the cost of computing resources. This helps us find cost-efficient strategies for launching and
canceling redundant tasks to minimize latency.
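The core replication trade-off can be illustrated with a toy simulation; the exponential service times and cancel-on-first-finish cost model below are assumptions for illustration, not the models analyzed in this work:

```python
import random

# Launching r replicas and waiting for the earliest to finish reduces
# latency (the expected minimum of r random service times) but uses
# extra machine time until the remaining replicas are canceled.
random.seed(0)

def avg_latency_and_cost(r, trials=100_000, mean_service=1.0):
    total_latency = 0.0
    for _ in range(trials):
        # Wait for the earliest of r replicas; the rest are then canceled.
        total_latency += min(random.expovariate(1 / mean_service)
                             for _ in range(r))
    avg_latency = total_latency / trials
    return avg_latency, r * avg_latency  # machine time spent per task

for r in (1, 2, 4):
    print("r=%d  latency=%.3f  cost~%.3f" % (r, *avg_latency_and_cost(r)))
```

With exponential service times the latency drops roughly as 1/r while the total machine time stays flat, one reason replication-with-cancellation can be nearly free; for other service distributions the trade-off changes, which is exactly what cost-efficient replication strategies must account for.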
Achieving low latency is even more challenging in streaming services such as Netflix and YouTube because they require fast, in-order playback of packets. Another focus of my research is to develop
erasure codes to transmit redundant combinations of packets, and
minimize the number of interruptions in playback.
Bio
Gauri Joshi is a PhD candidate at MIT, advised by Prof. Gregory
Wornell. She works on applying probability and coding theory to
improve today’s cloud infrastructure. She received an S.M. in EECS
from MIT in 2012, for which she received the William Martin memorial award for the best thesis in Computer Science at MIT. Before coming to MIT in 2010, she completed a B.Tech and an M.Tech in Electrical Engineering at the Indian Institute of Technology (IIT) Bombay, where she was awarded the Institute Gold Medal for the highest GPA across all majors. Gauri has received several other awards and honors, including the Schlumberger Faculty for the Future fellowship (2012-15) and the Claude E. Shannon Research Assistantship (2015-16). She has had summer internships at Bell Labs (2012) and Google (2013, 14).
Ankita Arvind Kejriwal
PhD Candidate
Stanford University
Scalable Low-Latency Indexes for a Key-Value Store
Many large-scale key-value storage systems sacrifice features like
secondary indexing and/or consistency in favor of scalability or performance. This limits the ease
and efficiency of application development on these systems.
My work shows how a large-scale key-value storage system can be
extended to provide secondary indexes in a fashion that is highly
scalable and offers ultra-low-latency access. The architecture, called
SLIK, enables multiple keys for each object, and allows indexes to
be partitioned and distributed independently of their objects. SLIK
represents index B+ trees using objects in the underlying key-value
store. It uses an ordered write approach for object updates, which
allows temporary inconsistencies between indexes and their objects but masks those inconsistencies from applications. When implemented using RAMCloud as the underlying key-value store, SLIK
performs indexed reads in 11 μs and writes in 30 μs; it supports indexes spanning thousands of nodes, and provides linear scalability
for throughput. SLIK is also an order of magnitude faster than other
state-of-the-art systems.
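A toy sketch of the ordered-write idea, as I read it from the abstract: index entries and objects are written in an order that allows transient inconsistencies, and readers mask them by re-checking candidates against the object table. The exact write order, B+ tree layout, and all names below are assumptions for illustration, not the real SLIK design:

```python
objects = {}   # object table: primary key -> record (with a secondary key)
index = {}     # secondary index: secondary key -> set of primary keys

def write(pk, record):
    # 1) Insert the new index entry first ...
    index.setdefault(record["name"], set()).add(pk)
    # 2) ... then write the object itself.
    old = objects.get(pk)
    objects[pk] = record
    # 3) Finally, garbage-collect the old entry if the key changed.
    if old and old["name"] != record["name"]:
        index[old["name"]].discard(pk)

def lookup(name):
    # Re-check each candidate against the object table, silently
    # skipping stale entries, so applications never see inconsistency.
    return [objects[pk] for pk in index.get(name, set())
            if pk in objects and objects[pk]["name"] == name]

write(1, {"name": "ada"})
write(1, {"name": "grace"})
print(lookup("ada"), lookup("grace"))  # [] [{'name': 'grace'}]
```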
Bio
Ankita Kejriwal is a PhD candidate in the Computer Science department at Stanford University working with Prof. John Ousterhout. She
enjoys working on problems in distributed systems. She is building
RAMCloud, a low-latency datacenter storage system, along with the
rest of her lab. Her recent project, called SLIK, extends a key-value
store to enable scalable, low-latency indexes. She interned at MSR Silicon Valley in 2013 with Marcos Aguilera and designed an algorithm for
low-latency distributed transactions. Prior to graduate school, she
completed her Bachelor's in Computer Science at Birla Institute of
Technology and Science - Pilani, Goa Campus.
Hana Khamfroush
Postdoctoral Scholar
Pennsylvania State University
On Propagation of Phenomena in Interdependent Networks
Operational networks of different
types are often interdependent
and interconnected. Many of today’s infrastructures are organized in the form of interdependent
networks. For example, the smart grid is controlled via the Internet, and the Internet is powered by the smart grid. A failure in one
may lead to service degradation and possibly failure in the other.
This failure process can cascade multiple times between the two interdependent networks and therefore result in catastrophic widespread failures. Previous works modeling the interdependency between two networks are generally based on strong assumptions and specific applications, and thus fail to capture important aspects of real networks. Furthermore, most previous works only address the asymptotic behavior of the networks. To fill this gap, we focus on the temporal evolution of phenomena
propagation in interdependent networks. The goal is to identify the
importance of the nodes in terms of their influence on the propagation phenomenon, and to design more efficient interdependent
networks.
We propose a general theoretical model for such propagation,
which captures several possible models of interaction among affected nodes. Our model is general in the sense that there is no
assumption on the network topology, propagation model, or the
capability of the network nodes (heterogeneity of the networks).
The theoretical model allows us to evaluate small-scale networks.
On the other hand, we implement a simulator, which allows for
the evaluation of larger scale networks for different types of random graphs, different models of coupling between networks, and
different initial spreaders. Based on our analysis, we propose a new
centrality metric designed for the interdependent networks that
is shown to be more precise in identifying the importance of the
nodes compared to the traditional centrality metrics. Our next step
would be analyzing the phenomena propagation in time-varying
interdependent networks.
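A tiny simulation conveys the flavor of such cascades; the topology, dependency mapping, and spread rule below are illustrative assumptions, whereas the model in this work is general and makes no such assumptions:

```python
import random

# Toy cascade between two interdependent networks: node b in network B
# depends on node b in network A, so a failed A-node kills its dependent
# B-node; failures also spread to a random neighbor inside network A.
random.seed(1)
neighbors_A = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
depends_on = {0: 0, 1: 1, 2: 2, 3: 3}     # B-node -> supporting A-node

failed_A, failed_B = {0}, set()           # initial spreader in network A
for step in range(4):
    failed_B |= {b for b, a in depends_on.items() if a in failed_A}
    failed_A |= {random.choice(neighbors_A[a]) for a in list(failed_A)}
    print("step", step, "failed A:", sorted(failed_A),
          "failed B:", sorted(failed_B))
```

Tracking which initial spreaders cause the largest cascades over time is one simple way to motivate the propagation-aware centrality metric described above.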
Hyeji Kim
PhD Candidate
Stanford University
Superposition Coding is Almost Always Optimal for the Poisson Broadcast Channel
The two fundamental building blocks of wireless networks are the multiple access channel (multiple transmitters and one receiver) and the broadcast channel (one transmitter and multiple receivers). While the capacity region of the multiple access channel is known, the capacity region of the broadcast channel has been an open problem for 40 years.
A continuous-time Poisson channel is a canonical model for optical
communications that is widely used to transmit telephone signals,
internet communication, and cable television signals. The 2-receiver continuous-time Poisson broadcast channel is a broadcast channel in which the channel to each receiver is a continuous-time Poisson channel.
We show that superposition coding is optimal for this channel for
almost all channel parameter values. Interestingly, the channel in
some subset of these parameter values does not belong to any of
the existing classes of broadcast channels for which superposition
coding is known to be optimal. For the rest of the channel parameter values, we show that there is a gap between the best known inner bound and the best known outer bound – Marton’s inner bound
and the UV outer bound.
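For readers unfamiliar with superposition coding: in its standard form (written here as general background, not as this paper's Poisson-specific result), one receiver decodes both messages while the other decodes only an auxiliary "cloud center":

```latex
% Superposition-coding inner bound for a two-receiver broadcast channel,
% where U is the auxiliary cloud-center variable decoded by both receivers:
\begin{align*}
  R_1 &< I(X; Y_1 \mid U), \\
  R_2 &< I(U; Y_2), \\
  R_1 + R_2 &< I(X; Y_1),
\end{align*}
% for some pmf $p(u)\,p(x \mid u)$, with channel input $X$ and
% outputs $Y_1, Y_2$ at the two receivers.
```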
Bio
Hyeji Kim is a PhD candidate in the Department of Electrical Engineering at Stanford University advised by Prof. Abbas El Gamal. She
received the B.S. degree with honors in Electrical Engineering from
the Korea Advanced Institute of Science and Technology (KAIST) in
2011 and the M.S. degree in Electrical Engineering from Stanford
University in 2013. Her research interests include information theory,
communication systems, and statistical learning. She is a recipient
of the Stanford Graduate Fellowship.
Bio
Hana Khamfroush is a postdoctoral scholar in the Electrical Engineering and Computer Science department of Penn State University, working with Prof. Thomas La Porta. She received her PhD with
highest distinction from the University of Porto in Portugal, in collaboration with Aalborg University in Denmark, in November 2014.
Her PhD research focused on network coding for cooperation in
dynamic wireless networks. Currently at PSU, she is working on
interdependent networks, network recovery and network tomography. Her research interests include complex networks, computer
networks, wireless communications, and mathematical models. She received a four-year scholarship from the Ministry of Science of Portugal for her PhD, and was awarded many grants and fellowships
from the European Union. Recently, she received the best poster award at the DTRA basic research technical review meeting.
Jung-Eun Kim
PhD Candidate
University of Illinois at Urbana-Champaign
A New Real-Time Scheduling Paradigm for Safety-Critical Multicore Systems
Over the past decade, multicore processors have become increasingly common for their potential efficiency, making new single-core processors relatively scarce. This has created a pressing need to transition to multicore processors. However, existing safety-critical software that has been certified on single-core processors is not allowed to be fielded on a multicore system as is. The issue stems from serious inter-core interference on shared resources in current multicore processors, which creates non-deterministic timing
behavior. Meeting the timing constraints is the crucial requirement
of safety-critical real-time systems as timing violations could have
disastrous effects, from loss of human life to damages to machines
and/or the environment. This is why the Federal Aviation Administration (FAA) does not currently allow the use of more than one core in a multicore chip. Academia has paid relatively little attention to non-determinism due to uncoordinated I/O communication, compared to other shared resources such as cache or memory, although industry considers it one of the most troublesome challenges. Hence we focus on I/O synchronization while assuming unknown Worst-Case Execution Times (WCETs) that can be impacted by other interference sources. Traditionally, two-level scheduling, as in Integrated
Modular Avionics (IMA) systems, has been used to provide temporal isolation. However, such hierarchical approaches introduce significant priority inversions across applications, especially
in multicore systems, ultimately leading to lower system utilization.
To address these issues, we have proposed a novel scheduling
mechanism called budgeted generalized rate monotonic analysis
(Budgeted GRMS) in which different applications’ tasks are globally
scheduled for avoiding unnecessary priority inversions, yet the CPU
resource is still partitioned for temporal isolation among applications. Incorporating the issues of unknown WCETs and I/O synchronization, this new scheduling paradigm enables the “safe” use of
multicore processors in safety-critical real-time systems.
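As background, the classical fixed-priority response-time test that GRMS-style analyses build on can be sketched as follows; this is the textbook rate-monotonic test, not the Budgeted GRMS algorithm itself, and the task set is illustrative:

```python
import math

def response_time(tasks, i):
    """Classical response-time analysis for fixed-priority (rate-monotonic)
    scheduling. tasks: list of (wcet, period) pairs sorted by priority
    (shortest period first). Returns the worst-case response time of task i,
    or None if it exceeds its period (unschedulable)."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # Interference from higher-priority tasks released during [0, r).
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        r_next = c_i + interference
        if r_next == r:
            return r          # fixed point: converged response time
        if r_next > t_i:
            return None       # deadline (= period) miss
        r = r_next

# Toy task set: (WCET, period), in rate-monotonic priority order.
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]
```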
Bio
Jung-Eun Kim is a PhD candidate advised by Prof. Lui Sha in the Department of Computer Science at the University of Illinois at Urbana-Champaign. She received her BS and MS degrees (advised by Prof. Chang-Gun Lee) from the Department of Computer Science and
Engineering of Seoul National University, Korea in 2007 and 2009,
respectively. Her current research interests include real-time scheduling (schedulability analysis, optimization, hierarchical scheduling)
and real-time multicore architecture. The main targeted application
is safety-critical hard real-time systems such as avionics systems (Integrated Modular Avionics (IMA) systems). She is a recipient of the
Richard T. Cheng Endowed Fellowship for 2015-2016.
Varada Kolhatkar
Postdoctoral Researcher
Privacy Analytics Inc.
Resolving Shell Nouns
Shell nouns are abstract nouns,
such as ‘fact’, ‘issue’, ‘idea’, and
‘problem’, which, among other
functions, facilitate efficiency
by avoiding repetition of long
stretches of text. Shell nouns encapsulate propositional content, and the process of identifying this
content is referred to as shell noun resolution.
My research presents the first computational work on resolving
shell nouns. The research is guided by three primary questions: first,
how an automated process can determine the interpretation of
shell nouns; second, the extent to which knowledge derived from
the linguistics literature can help in this process; and third, the extent to which speakers of English are able to interpret shell nouns.
I start with a pilot study to annotate and resolve occurrences of ‘this
issue’ in Medline abstracts. The results illustrate the feasibility of
annotating and resolving shell nouns, at least in this closed domain.
Next, I move to developing general algorithms to resolve a variety
of shell nouns in the newswire domain. The primary challenge was
that each shell noun has its own idiosyncrasies and there was no
annotated data available. I developed a number of computational methods for resolving shell nouns that do not rely on manually
annotated data. For evaluation, I developed annotated corpora for
shell nouns and their content using crowdsourcing. The annotation
results showed that the annotators agreed to a large extent on the
shell content. The evaluation of resolution methods showed that
knowledge derived from the linguistics literature helps in the process of shell noun resolution, at least for shell nouns with strict semantic and syntactic expectations.
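A crude baseline conveys what "resolving" a shell noun means in practice; this toy heuristic (previous sentence as candidate content) is an illustrative assumption of mine, far simpler than the methods developed in this research:

```python
import re

# Hypothetical toy resolver: for an anaphoric phrase such as "this issue",
# propose the immediately preceding sentence as the candidate shell content.
SHELL_NOUNS = {"fact", "issue", "idea", "problem"}

def resolve_shell_nouns(text):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    candidates = []
    for i, sent in enumerate(sentences):
        for noun in SHELL_NOUNS:
            if re.search(rf"\bthis {noun}\b", sent, re.IGNORECASE) and i > 0:
                candidates.append((noun, sentences[i - 1]))
    return candidates

text = ("Antibiotic resistance is spreading faster than new drugs appear. "
        "This problem has prompted calls for global surveillance.")
print(resolve_shell_nouns(text))
```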
Bio
Varada Kolhatkar’s broad research area in the past eight years has
been natural language processing and computational linguistics.
She recently completed her PhD in computational linguistics from
the University of Toronto. Her advisor was Dr. Graeme Hirst. Prior
to that, she did her Master’s with Dr. Ted Pedersen at the University of Minnesota Duluth. During her PhD she focused primarily on
the problem of anaphora resolution. Her Master’s thesis explores
all-words sense disambiguation, showing the effect of polysemy,
context window size, and sense frequency on disambiguation. At
the end of her Ph.D., Varada spent four months at the University of
Hamburg, Germany, where she worked with Dr. Heike Zinsmeister
on non-nominal anaphora resolution. Currently, Varada is working as a research analyst at Privacy Analytics Inc.,
where she focuses on the problem of text de-identification, i.e., the
process used to protect against inappropriate disclosure of personal information in unstructured data.
Parisa Kordjamshidi
Postdoctoral Research Associate
University of Illinois at Urbana-Champaign
Saul: Towards Declarative Learning Based Programming
Developing intelligent problem-solving systems for real
world applications requires addressing a range of scientific
and engineering challenges. I will present Saul, a learning-based programming language designed to address some of the shortcomings of existing programming languages, with the aim of advancing and simplifying the development of intelligent systems. Such languages need to interact with messy, naturally occurring data, to allow a
programmer to specify what needs to be done at an appropriate
level of abstraction rather than at the data level, to be developed on
a solid theory that supports moving to and reasoning at this level
of abstraction and, finally, to support flexible integration of these
learning and inference models within an application program. Saul
is an object-functional programming language written in Scala that
facilitates these by (1) allowing a programmer to learn, name and
manipulate named abstractions over relational data; (2) supporting
seamless incorporation of trainable (probabilistic or discriminative)
components into the program, and (3) providing a level of inference
over trainable models to support composition and make decisions
that respect domain and application constraints. Saul is developed
over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and
inference objectives, and it supports inference over probabilistic
models augmented with declarative knowledge-based constraints.
I will describe the key constructs of Saul and exemplify its use in
case studies of developing intelligent applications in the domains
of natural language processing and computational biology. I will
also argue that, apart from simplifying programming for complex
models, one main advantage of such a language is the reusability of
the designed inference, learning models, and features, thereby increasing the replicability of research results. Moreover, the models can be extended to use newly emerging algorithms, new data resources, and background knowledge with minimal effort.
Bio
Parisa Kordjamshidi is a postdoctoral researcher in the Cognitive Computation Group of the Computer Science Department at the University of Illinois at Urbana-Champaign. She obtained her PhD degree from
KU Leuven in July 2013. During her PhD research she introduced the
first Semantic Evaluation task and benchmark for Spatial Role Labeling (SpRL). She has worked on structured output prediction and
relational learning models to map natural language onto formal
spatial representations, appropriate for spatial reasoning as well as
to extract knowledge from biomedical text. She is also involved in
an NIH (National Institutes of Health) project, extending her research
experience on structured and relational learning to Declarative
Learning Based Programming (DeLBP) and performing biological
data analysis. DeLBP is a research paradigm in which the goal is to
facilitate programming for building systems that require a number
of learning and reasoning components that interact with each other. This would help experts in various domains who are not experts in machine learning to design complex intelligent systems. The results of her research have been published in several international
peer-reviewed conferences and journals including ACM-TSLP, JWS,
BMC-Bioinformatics, IJCAI.
Ramya Korlakai Vinayak
PhD Candidate
California Institute of Technology
Convex Optimization Based Graph Clustering: Theoretical Guarantees and Practical Applications
Today we are collecting huge amounts of data with the aim of extracting useful and relevant information. Clustering, a widely used
technique toward this quest, refers to the grouping of data points
that are similar to each other. In many problems, the observed data
has a network or graphical structure associated with it, as is the case in
social networks, bioinformatics, data mining and other fields. When
attempting to cluster massive data, making pairwise comparisons/
measurements between all data points is exorbitantly expensive. A major challenge, therefore, has been to identify clusters with only
partially-observed graphs and to design algorithms with provable
guarantees for this task.
In the case of unweighted graphs, we consider two algorithms
based on the popular convex optimization approach of the “low-rank plus sparse” decomposition of the adjacency matrix (Robust
Principal Component Analysis). We provide sharp performance
guarantees for successfully identifying clusters generated by the
commonly used Stochastic Block Model in terms of the size of the
clusters, the density of edges inside the clusters and the regularization parameter of the convex programs.
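One plausible instantiation of such a "low-rank plus sparse" program is sketched below; the exact formulations, constraints, and regularization analyzed in this work differ, and the toy graph and parameter value are illustrative assumptions (requires cvxpy):

```python
import cvxpy as cp
import numpy as np

def low_rank_plus_sparse(adjacency, lam):
    """Decompose the adjacency matrix A as L + S, where the low-rank part L
    captures dense within-cluster structure and the sparse part S absorbs
    deviations (missing/extra edges), as in Robust PCA."""
    n = adjacency.shape[0]
    L = cp.Variable((n, n))
    S = cp.Variable((n, n))
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S))
    constraints = [L + S == adjacency, L >= 0, L <= 1]
    cp.Problem(objective, constraints).solve()
    return L.value

# Toy graph: two planted 3-node cliques (Stochastic Block Model flavor).
A = np.kron(np.eye(2), np.ones((3, 3)))
print(np.round(low_rank_plus_sparse(A, lam=0.5), 2))
```

The recovered L ideally has a block structure whose blocks are exactly the clusters; the sharp guarantees above characterize when this recovery succeeds as a function of cluster size, edge density, and the regularization parameter lam.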
For weighted graphs, where each weighted edge represents the
similarity between its corresponding pair of points, we seek to recover a low-rank component of the adjacency matrix (also called
the similarity matrix). We use a convex-optimization-based algorithm which requires no prior knowledge of the number of clusters
and behaves in a robust way in the presence of outliers.
Using a generative stochastic model for the similarity matrix, we
obtain sharp bounds on the sizes of clusters, strength of similarity
compared to noise, number of outliers and the regularization parameter.
We corroborate our theoretical findings with simulated experiments. We also apply our algorithms to the problem of crowdsourcing inference using real data.
Bio
Ramya Korlakai Vinayak is a PhD candidate in the Department of
Electrical Engineering at Caltech. She works with Prof. Babak Hassibi. Her research interests are broadly in the intersection of Optimization and Machine Learning. She received the Schlumberger
Foundation Faculty of the Future fellowship for the academic years
2013-15. Prior to joining Caltech, Ramya obtained her undergraduate degree in Electrical Engineering from Indian Institute of Technology Madras.
Karla Kvaternik
Postdoctoral Research Associate
Princeton University
Consensus Optimization Based Coordination Control Strategies
Consensus-decentralized optimization (CDO) methods, originally
studied by Tsitsiklis et al., have
undergone significant theoretical development within the last decade. Much of this attention is motivated by the recognized utility
of CDO in large-scale machine learning and sensor network applications.
In contrast, we are interested in a distinct class of decentralized
coordination control problems (DCCPs) and we aim to investigate
the utility and limitations of CDO-based coordination control strategies. Unlike prototypical machine learning and sensor network
problems, DCCPs may involve a number of networked agents with
heterogeneous dynamics that couple to those of a CDO-based coordination control strategy, thereby affecting its performance. We
find that existing analytic techniques cannot easily accommodate
such a problem setting. Moreover, the final desired agent configuration in general DCCPs does not necessarily involve consensus.
This nuanced observation requires a re-interpretation of the variables updated in a standard CDO scheme, and exposes a limitation
of CDO-based coordination control strategies.
Starting from this re-interpretation, we address this limitation by
proposing the Reduced Consensus Optimization (RCO) method,
which is a streamlined variant of CDO particularly well suited to the
DCCP context. More importantly, we introduce a novel framework
for the analysis of general CDO methods, which is based on the
use of interconnected systems techniques, small-gain arguments
and the concept of semiglobal, practical, asymptotic stability. This
framework allows us to seamlessly study the performance of RCO,
as well as problem settings involving dynamic agents. In addition,
when applied to a general class of CDO methods themselves, this
analytic viewpoint allows us to relax several standard assumptions.
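For readers unfamiliar with CDO, the prototypical update (decentralized gradient descent) is sketched below; this is general background, not the RCO method or the interconnected-systems analysis described above, and the mixing matrix, objectives, and step size are illustrative assumptions:

```python
import numpy as np

# Each agent i holds a local estimate x_i, averages with its neighbors
# (consensus step), then descends along its own local gradient.
W = np.array([[0.50, 0.25, 0.25],     # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],     # encoding the communication graph
              [0.25, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 6.0])   # agent i minimizes f_i(x) = (x - t_i)^2 / 2

x = np.zeros(3)                        # local estimates, one per agent
alpha = 0.1                            # constant step size
for _ in range(200):
    x = W @ x - alpha * (x - targets)  # consensus + local gradient step

# Estimates cluster near the global minimizer mean(targets) = 3.0;
# exact agreement would require a diminishing step size.
print(x)
```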
Bio
Karla Kvaternik obtained her B.Sc. in Electrical and Computer Engineering at the University of Manitoba, her M.Sc. specializing in
control theory at the University of Alberta, and her Ph.D. in control
theory at the University of Toronto. She was the recipient of the
prestigious Vanier Canada Graduate Scholarship in 2010, and the
recipient of the Best Student Paper award at the 2009 Multiconference on Systems and Control in St. Petersburg, Russia. Her research
interests span nonlinear systems and control theory, Lyapunov
methods, nonlinear programming and extremum-seeking control,
but her main interest is the development and application of decentralized coordination control strategies for dynamic multiagent
systems. She is currently a Postdoctoral Research Associate at Princeton University, where her research focuses on the development of
optimal social foraging models.
Min Kyung Lee
Research Scientist
Carnegie Mellon University
Designing Human-Centered Algorithmic Technologies
Algorithms are everywhere,
acting as intelligent mediators
between people and the world
around them. Facebook algorithms decide what people see on their news feeds; Uber algorithms
assign customers to drivers; robots drive cars on our behalf. Algorithmic intelligence offers opportunities to transform the ways
people live and work for the better. Yet the opacity of these algorithms can introduce
bias into the worlds that people access through such technologies,
inadvertently provide unfair choices, blur accountability, or make
the technology seem incomprehensible or untrustworthy.
My research examines the social and decision-making implications
of intelligent technologies and facilitates more human-centered
design. I study how intelligent technologies change work practices, and devise design principles and interaction techniques that
give people appropriate control over intelligent technologies. In
the process, I create novel intelligent products that address critical
problems in the areas of on-demand work and robotic service.
In the first line of my research, I studied Uber and Lyft ridesharing
drivers to understand the impact of algorithms used to manage human workers in on-demand work. The results suggested that workers do not always cooperate with algorithmic management because
of the algorithms’ limited assumptions about worker behaviors and
the opacity of algorithmic mechanisms. I further examined people’s
perceptions of algorithmic decisions through an online experiment,
and created design principles around how we can use transparency,
anthropomorphization, and visualization to foster trust in algorithmic decisions and help people make better use of them.
In the second line of my research, I studied three service robots deployed in the field over long periods of time: a receptionist robot, a
telepresence robot for distributed teams, and an office delivery robot that I helped build from scratch using human-centered design
methods. The studies revealed individual and social factors that robots can personalize in order to be more successfully adopted into
a workplace.
Bio
Min Kyung Lee is a research scientist in human-computer interaction
at the Center for Machine Learning and Health at Carnegie Mellon
University. Her research examines the social and decision-making
implications of intelligent systems and supports the development
of more human-centered machine learning applications. Dr. Lee
is a Siebel Scholar and has received several best paper awards, as
well as an Allen Newell Award for Research Excellence. Her work has
been featured in media outlets such as the New York Times, New
Scientist, and CBS. She received a PhD in HCI in 2013 and an MDes
in Interaction Design from Carnegie Mellon, and a BS summa cum
laude in Industrial Design from KAIST.
Kun (Linda) Li
PhD Candidate
University of California at Berkeley
III-V Compound Semiconductor Lasers for Optical Communication and Imaging
My research projects focus on
III-V compound semiconductor
lasers to generate and manipulate light, with both bottom-up and top-down approaches, for applications in optical communications, biological imaging, ranging and
sensing. As microprocessors become progressively faster, chip-scale
data transport becomes progressively more challenging. Optical interconnects for inter- and intra-chip communications are required
to reduce power consumption and increase bandwidth. Lightwave
devices have traditionally relied on III-V compound semiconductors
due to their capacity for efficient optical processes. Growing III-V materials from the bottom up opens a pathway to integrating superior
optoelectronic properties with the massive existing silicon-based
infrastructure. Our approach of self-assembling III-V nanostructures
on silicon in a novel growth mode has bypassed several roadblocks
and achieved excellent single crystalline quality with GaAs and InP
based materials. I have developed a methodology to evaluate optical
properties of InP nanostructures, and demonstrated its superior surface quality, which are critical for optoelectronic devices. I also make
another type of micro-scale semiconductor lasers from the top down,
which is called vertical-cavity surface-emitting lasers (VCSELs). They
are key optical sources in optical communications, with the advantages of lower power consumption, lower-cost packaging, and ease
of fabrication and testing. Our group has demonstrated a revolutionary single-layer, high-index contrast sub-wavelength grating (HCG),
and implemented it as a reflection mirror in VCSELs. Compared with
conventional VCSEL mirrors (DBRs), the seemingly simple-structured HCG provides ultra-broadband high reflectivity, compact size, light weight, and a high-tolerance, cost-effective fabrication process.
I mainly work on the development of wavelength-tunable 850nm
and 1060nm HCG-VCSELs. These monolithic, continuously tunable
HCG-VCSELs will present extraordinary performance in applications
such as wavelength-division-multiplexed (WDM) optical networks and light detection and ranging. Their potentially wide reflection band and fast tuning speed will also be highly promising for high-resolution, real-time imaging in optical coherence tomography (OCT).
Bio
Kun (Linda) Li is a PhD candidate in the Department of Electrical Engineering and Computer Sciences at University of California Berkeley, advised by Prof. Connie Chang-Hasnain. Prior to joining graduate school, she received her B.S. degree from Optical Engineering
of Zhejiang University in China (2006-2010). She had one year of
exchange experience in University of Hong Kong (2008-2009). Kun’s
main research interests focus on III-V nanostructures directly grown
on silicon for integrated optoelectronics, and vertical-cavity surface
emitting laser (VCSEL) with high-contrast grating (HCG) structure for
optical communication and imaging. Her skills include optical characterization, semiconductor fabrication, and optoelectronic device
modeling. She received the Lam Research Graduate Fellowship (2014) in recognition of her work in the field of semiconductors. Besides
research, Kun is also active in a variety of education, outreach, and
mentoring programs, including Girl Scouts, Expanding Your Horizon,
and Girls in Engineering. Kun has won the Outstanding Graduate
Student Instructor Award at UC Berkeley (2014).
Hongjin Liang
Limited-Term Associate Researcher
University of Science and Technology of China
A Program Logic for Concurrent Objects under Fair Scheduling
Existing work on verifying concurrent objects is mostly concerned with safety only, e.g., partial
correctness or linearizability. Although there has been recent work
verifying lock-freedom of non-blocking objects, much less effort has been focused on deadlock-freedom and starvation-freedom, the progress properties of blocking objects. These properties are more challenging to verify than lock-freedom because they allow the progress of one thread to depend on the progress of another, assuming
fair scheduling.
We propose LiLi, a new rely-guarantee style program logic for
verifying linearizability and progress together for concurrent objects under fair scheduling. The rely-guarantee style logic unifies
thread-modular reasoning about both starvation-freedom and
deadlock-freedom in one framework. It also establishes progress-aware abstraction for concurrent objects, which can be applied
when verifying safety and liveness of client code. We have successfully applied the logic to verify starvation-freedom or deadlock-freedom of representative algorithms such as ticket locks, queue locks,
lock-coupling lists, optimistic lists and lazy lists.
This is joint work with Xinyu Feng at USTC.
Bio
Hongjin Liang is a limited-term associate researcher at University of
Science and Technology of China (USTC). She received her Ph.D. in
Computer Science from USTC in 2014, under the joint supervision of
Prof. Xinyu Feng (USTC) and Prof. Zhong Shao (Yale).
Hongjin is interested in program verification and concurrency theory. Her Ph.D. thesis is about refinement verification of concurrent
programs and its applications, in which she designed simulations
and Hoare-style program logics for concurrent program refinement,
and applied them to verify concurrent garbage collectors and prove
linearizability of concurrent objects and algorithms. She is currently
trying to extend her refinement verification techniques to also reason about liveness properties of concurrent algorithms.
For more information, please visit http://staff.ustc.edu.cn/~lhj1018.
Xi Ling
Postdoctoral Scholar
Massachusetts Institute of Technology
Seeding Promoter Assisted Chemical Vapor Deposition of MoS2 Monolayer
The synthesis of monolayer MoS2-based dichalcogenides is an attractive topic because of their promising properties in diverse fields, especially electronics and optoelectronics. Among the various methods to obtain monolayer MoS2, chemical vapor deposition (CVD) is considered the best because of its high efficiency, low cost, and large-area synthesis. So far, sulfur and MoO3 are the most widely used precursors to grow monolayer
MoS2 on the SiO2/Si substrate. Here, by loading organic aromatic molecules onto the SiO2/Si substrate as seeds, we found that large-area, high-quality MoS2 can be grown under much softer conditions, such as atmospheric pressure, lowering the growth temperature from 800°C or higher to 650°C. Raman spectra, photoluminescence
spectra and AFM (atomic force microscopy) are used to identify the
thickness and quality of the MoS2. Furthermore, other kinds of aromatic molecules were tried as seeds to grow MoS2. Towards the
applications in integrated circuits, we developed a method called
“selective sowing” of seeds to construct the basic building blocks of metal-semiconductor (e.g., graphene-MoS2), semiconductor-semiconductor (e.g., WS2-MoS2), and insulator-semiconductor (e.g., hBN-MoS2) heterostructures, through direct and controllable CVD synthesis at large scale.
Bio
Xi Ling is currently a Postdoctoral Associate in the Research Laboratory of Electronics at Massachusetts Institute of Technology (MIT)
since September 2012, under the supervision of Professors Mildred
Dresselhaus and Jing Kong. She obtained her PhD degree in physical chemistry from Peking University in July 2012, under the supervision of Professors Jin Zhang and Zhongfan Liu. She has a multidisciplinary background in chemistry, materials science, electrical
engineering and physics, with research experience on spectroscopy, chemical vapor deposition (CVD) and optoelectronic devices.
Fei Liu
Postdoctoral Fellow
Carnegie Mellon University
Summarizing Information in Big Data: Algorithms and Applications
Information floods the lives of
modern people, and we find it
overwhelming. Summarization systems that identify salient pieces of information and present them
concisely can help. I will discuss both algorithmic and application
perspectives of summarization. Algorithm-wise, I will describe keyword extraction, sentence extraction, and summary generation,
including a range of techniques from information extraction to
semantic representation of data sources; application-wise, I focus
on summarizing human conversations, social media contents, and
news articles.
The data sources range from low-quality speech recognizer outputs and
social media chats to high-quality content produced by professional writers. A special focus of my work is exploring multiple information sources. In addition to better integration across sources, this allows abstraction to shared research challenges for broader impact.
Finally, I try to identify the missing links in cross-genre summarization studies and discuss future research directions.
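One common building block of keyword and sentence extraction is TF-IDF scoring, sketched below on toy documents; this is standard background for illustration, not the richer information-extraction and semantic-representation techniques described above:

```python
import math
from collections import Counter

docs = [
    "the meeting moved the deadline to friday",
    "the deadline for the report is friday",
    "we discussed the budget at the meeting",
]

def tfidf_keywords(doc_index, docs, k=3):
    """Score words by term frequency times inverse document frequency,
    so words common everywhere (like 'the') score near zero."""
    words = docs[doc_index].split()
    tf = Counter(words)
    scores = {}
    for w, count in tf.items():
        df = sum(1 for d in docs if w in d.split())   # document frequency
        idf = math.log(len(docs) / df)                # rarer words score higher
        scores[w] = (count / len(words)) * idf
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(tfidf_keywords(0, docs))
```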
Bio
Fei worked as a Senior Research Scientist at Bosch Research, Palo
Alto, California, one of the largest German companies providing
intelligent car systems and home appliances. Fei received her PhD
in Computer Science from the University of Texas at Dallas in 2011,
supported by Erik Jonsson Distinguished Research Fellowship.
Prior to that, she obtained her Bachelors and Masters degrees in
Computer Science from Fudan University, Shanghai, China. Fei has published over twenty peer-reviewed articles, and she serves as a
referee for leading journals and conferences.
Yu-Hsin Liu
PhD Candidate
University of California at San Diego
Silicon p-n Junction Photodetectors
Yu-Hsin's research focuses on silicon p-n junction structures applied to photodetectors, which are compatible with the CMOS fabrication process and do not rely on defects. By using space confinement and heavy doping in nanoscale p-n junction structures to relax the k-selection rule for Si materials, efficient 1310 nm light detection has been demonstrated. The nanowire and waveguide devices show efficient, bias-dependent sub-bandgap photoresponse without involving any defects or surface states.
On the other hand, she also demonstrated high gain in heavily
doped and partially compensated p-n junction devices at visible
wavelength. Compared to avalanche photodiodes based on impact
ionization, her photodetectors using the Cycling Excitation Process
(CEP) for signal amplification, experience smaller excess noise and
can be operated at very low bias (<4V). CEP also possesses an intrinsic, phonon-mediated regulation process to keep the device
stable without the quenching components required in today’s Geiger-mode avalanche detectors.
Bio
Yu-Hsin Liu is a Ph.D. candidate in the Materials Science and Engineering program at UCSD. She received her Master's degree in Materials Science and Engineering from National Tsing Hua University (NTHU) in Taiwan in 2009. She worked as a research assistant at NTHU for one year and became a Ph.D. student at UCSD in 2010.
In the department of Electrical and Computer Engineering, she has
been working on optoelectronics and semiconductor devices and
has an extensive background in fabrication process development,
device characterization, simulation, and modeling. She also has experience in micro-fabrication and microfluidic device development from internships with Illumina and the Nano3 Facility
(Nanoscience, Nanoengineering, and Nanomedicine). Currently, her research interests are in the cycling excitation process, a new signal amplification process for Si photodetectors.
Kristen Lurie
PhD Candidate
Stanford University
New Optical Imaging Tools and Visualization Techniques for Bladder Cancer
Bladder cancer is the most costly
cancer to treat as the high rate of
recurrence necessitates lifelong surveillance in order to detect cancer as early as possible. White light cystoscopy (WLC) is the standard
tool used for these surveillance procedures, but this imaging technique has several limitations. First, WLC cannot accurately detect all
tumors, causing some — particularly early stage tumors — to go
untreated. Second, WLC cannot gauge the penetration depth of lesions, the criterion for cancer staging, which requires an excisional
biopsy. This follow-up procedure is costly and risky and may ultimately be unnecessary if the tumor is incorrectly classified. Third, it
is difficult to review the image data, making it easy to overlook signs
of cancer due to work flow challenges or insufficient annotations.
To overcome these limitations, I developed targeted techniques to improve the cystoscopy examination. Specifically, I augmented WLC with optical coherence tomography (OCT), a complementary imaging technique whose ability to visualize the subsurface appearance of the bladder wall reveals early stage tumors better than WLC alone and makes it possible to stage cancers. To this end, I developed a miniaturized, rapid-scanning OCT endoscope that facilitates tumor detection and classification during the initial cystoscopy. Finally, to improve the review of the cystoscopy data, I developed techniques that enable a more comprehensive review of WLC and OCT imaging data. These techniques include (1) a volumetric mosaicing algorithm to extend the field of view of OCT, (2) a 3D reconstruction technique to generate models with the shape and appearance of the bladder, and (3) a registration approach that registers OCT data to the 3D bladder model. Taken together, the new OCT endoscope and image reconstruction algorithms I describe can have a tremendous impact on the future of cystoscopy for improved management of bladder cancer patients.
Bio
Kristen is a PhD student in Electrical Engineering at Stanford University and is advised by Dr. Audrey Ellerbee. Kristen was awarded a Stanford Graduate Fellowship, an NSF Graduate Research Fellowship, and a National Defense Science and Engineering Graduate (NDSEG) Fellowship to pursue her doctoral studies. She received her A.B. and B.E. degrees in Engineering Sciences from Dartmouth College and an MS in Electrical Engineering from Stanford University. Her research interests lie at the intersection of optics and computer vision for medical applications. Her dissertation is primarily focused on developing endoscopes and 3D computer vision algorithms for applications in urology.
Jelena Marasevic
Links between Systems and Theory: Full-Duplex Wireless and Beyond
PhD Candidate
Columbia University
My research focuses on optimizing wireless network performance using analytical tools from optimization and algorithms research, while exploiting the structure of the problem at hand. From the theory perspective, the goal is to understand and describe the studied problems with realistic but tractable mathematical models, and ultimately to devise algorithms with provable performance guarantees. From the systems perspective, the goal is to implement the devised algorithms and demonstrate their performance experimentally.
For example, in a cross-disciplinary project on full-duplex wireless communication (simultaneous transmission and reception on the same frequency channel), we have been exploring the interactions between the hardware design and the algorithms for medium access control (MAC). Our work has resulted in a number of insightful analytical results that characterize and quantify the achievable throughput gains from full-duplex, based on realistic models of the full-duplex hardware. Moreover, we have obtained power allocation algorithms that are applicable to quite general hardware models and to both single-channel and multi-channel settings. Our algorithms maximize the sum of the rates over a full-duplex pair of users and over (possibly multiple) frequency channels, and are provably near-optimal. The algorithms' outputs on the modeled and on the measured full-duplex hardware profiles agree well.
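One common abstraction of this power allocation objective, sketched here generically rather than as the exact model in this work: for a full-duplex pair with transmit powers $p_1, p_2$, link gains $g_1, g_2$, noise power $N_0$, and hardware-dependent residual self-interference $\gamma_i(\cdot)$ at each receiver, choose the powers to maximize the sum rate

    \max_{p_1, p_2} \; \log_2\left(1 + \frac{g_1 p_1}{N_0 + \gamma_2(p_2)}\right) + \log_2\left(1 + \frac{g_2 p_2}{N_0 + \gamma_1(p_1)}\right),

where each direction's rate is degraded by the receiving node's imperfectly cancelled self-interference, which grows with that node's own transmit power.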
My work in the area of full-duplex currently involves the design of adaptive algorithms for self-interference cancellation in full-duplex circuits, the design and analysis of scheduling algorithms with fairness guarantees, and testbed development.
Apart from full-duplex, I am also working on the design and analysis
of fast iterative optimization methods for large-scale problems with
fairness objectives.
Ghita Mezzour
A Socio-Technical Approach to Cyber-Security
Assistant Professor
International University of Rabat
Cyber security has both technical and social dimensions. Understanding and leveraging the
interplay of these dimensions
can help design more secure systems and more effective policies.
However, the majority of cyber security research has only focused
on the technical dimension. In my work, I study cyber-security using
a socio-technical approach that combines data science techniques,
computational models, and network science techniques.
I will start by presenting my work on empirically identifying factors
behind international variation in cyber attack exposure and hosting. I use data from 10 million computers worldwide provided by a
key anti-virus vendor. The results of this work indicate that reducing
attack exposure and hosting in the most affected countries requires
addressing both social and technical issues such as corruption and
computer piracy. Then, I will present a computational methodology
to assess cyber warfare capabilities of all the countries in the world.
The methodology captures political factors that motivate countries
to develop these capabilities and technical factors that enable such
development. Together, these projects show that bridging the social and technical dimensions of cyber security can improve our understanding of its dynamics and have a real-world impact.
Bio
Ghita Mezzour received her PhD degree from Carnegie Mellon
University (CMU) in May 2015. She was part of both the School of
Computer Science and the Electrical and Computer Engineering
Department at CMU. Ghita is currently an Assistant Professor at the
International University of Rabat in Morocco. Her research interests
are at the intersection of cyber security, big data, and socio-technical systems. Ghita holds a master's and a bachelor's degree in Communication Systems from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.
Bio
Jelena Marasevic is a PhD student at Columbia University. Her research focuses on algorithms for fair and efficient resource allocation problems, with applications in wireless networks. She received her BSc degree from the University of Belgrade School of Electrical Engineering in 2011, and her MS degree in electrical engineering from Columbia University in 2012, for which she received the MS Award of Excellence.
In Spring 2012, Jelena organized the first cellular networking
hands-on lab for a graduate class in wireless and mobile networking. For this work, she received the Best Educational Paper Award at
the 2nd GENI Research and Educational Experimentation workshop
(GREE2013), and was also awarded the Jacob Millman Prize for Excellence in Teaching Assistance from Columbia University.
Earlier this year, Jelena was in a two-student team that won the
Qualcomm Innovation Fellowship for a cross-disciplinary project on
full-duplex wireless.
Jamie Morgenstern
Warren Postdoctoral Fellow
University of Pennsylvania
Approximately Stable, School Optimal, and Student-Truthful Many-to-One Matchings (via Differential Privacy)
We present a mechanism for computing asymptotically stable, school-optimal matchings, while guaranteeing that it is an asymptotic dominant strategy for every student to report their true preferences to the mechanism. Our main tool in this endeavor is differential privacy: we give an algorithm that coordinates a stable matching using differentially private signals, which leads to our truthfulness guarantee. This is the first setting in which it is known how to achieve nontrivial truthfulness guarantees for students when computing school-optimal matchings, assuming worst-case preferences (for schools and students) in large markets.
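For context, the standard privacy notion invoked here: a randomized mechanism $M$ is $\varepsilon$-differentially private if for all neighboring inputs $D, D'$ (here, preference profiles differing in a single student's report) and all outcome sets $S$,

    \Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S].

Intuitively, no single student's report can noticeably change the distribution of the coordinating signals, which is what yields the truthfulness guarantee.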
Bio
Jamie Morgenstern is a Warren Fellow at the University of Pennsylvania. She received her PhD in Computer Science at Carnegie
Mellon University, advised by Avrim Blum. Her research interests
include mechanism design, learning theory, and applications of
differential privacy to questions in economics. During her PhD, she
received a Simons Award for Graduate Students in Theoretical Computer Science, an NSF Graduate Research Fellowship, and a Microsoft Graduate Women's Scholarship.
Vaishnavi Nattar Ranganathan
PhD Candidate
University of Washington
Limb Reanimation with Fully
Wireless Brain Computer
Interface
The primary aim of my research is to develop a completely wireless, implantable brain-computer-spinal interface (BCSI) to reanimate the limbs of patients with paralysis caused by spinal cord injury. Advances in technology have led to state-of-the-art solutions for neural signal acquisition and for stimulation-based limb reanimation; the major challenge lies in combining them for autonomous stimulation. Another concern associated with long-term implantation of these devices is the power and communication cables that exit the skin surface and pose a risk of infection.
To address these two issues, my contributions are an algorithm that enables stimulation based on the recorded signal, and analog circuits for simultaneous wireless communication and power transfer that extend the implant's operational lifetime.
I have implemented a fully wireless printed circuit design of the BCSI system as part of an NSF-funded program. Using low-power wireless communication protocols, such as radio frequency (RF) backscatter at 915 MHz, high data-rate communication can be established between the implant and external devices. I have also designed and implemented a digital controller for this protocol in 65 nm CMOS technology.
With respect to wireless power transfer, we have successfully demonstrated inductive power delivery across tissue using optimally designed coupled resonators operating at 13.56 MHz. This system achieves efficiencies greater than 70% with a low temperature rise (less than 2°C).
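As a rough illustration of the resonator design (the component values below are hypothetical): an inductor-capacitor pair resonates at

    f_0 = \frac{1}{2\pi\sqrt{LC}},

so, for example, a 1 μH coil would be paired with $C = 1/\left((2\pi f_0)^2 L\right) \approx 138$ pF to resonate at 13.56 MHz.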
The current focus is to utilize inherently low-power on-chip digital computation to minimize dependence on external control, thereby closing the loop in the BCSI system. My goal is to contribute my experience to the advancement of engineering in medicine by developing affordable, miniaturized biomedical implants that can facilitate diagnosis and treatment and improve the quality of life for individuals with disabilities.
Bio
Vaishnavi Ranganathan is a PhD student in the Sensor Systems laboratory at the University of Washington (UW), Seattle, WA. Her main
research interests are wireless power transfer and brain-computer
interface applications, including low-power computation and communication solutions for implantable devices. She is a member of
the Center for Sensorimotor Neural Engineering, an NSF-funded research center at UW.
She received the BTech degree in Electronics and Instrumentation Engineering from Amrita Vishwavidyapeetham University, TN, India, in 2011, and an MS degree in Electrical Engineering, specializing in NEMS, from Case Western Reserve University, Cleveland, OH, in 2013. As an undergraduate she gained experience in robotics and sensor design. She has also worked as a research intern in the Nanobios lab at the Indian Institute of Technology, Mumbai, India, where her focus was MEMS for biomedical sensors.
Xiang Ni
PhD Candidate
University of Illinois at UrbanaChampaign
Mitigation of Failures in High
Performance Computing via
Runtime Techniques
Parallel computing is a powerful tool for solving large, complex problems in a timely manner. The most powerful supercomputer in the US today, Titan, consists of 300,000 cores along with over 18,000 general-purpose GPUs. At its peak, Titan can perform over 17 quadrillion floating-point operations per second! While the number of components assembled to create a supercomputer keeps increasing beyond these values, the reliability and capacity of each individual component have not increased proportionally. As a result, today's machines fail frequently and hamper the smooth execution of high performance applications, and the slow growth in memory capacity has thwarted efficient use of the state-of-the-art methods for containing such failures. My research strives to develop runtime system techniques that can be deployed to make large-scale parallel executions robust and fail-safe. In particular, I have worked on answering the following questions: How can a runtime system provide fault tolerance support efficiently with minimal application intervention? What are effective ways to detect and correct silent data corruptions? Given limited memory resources, how do we enable the execution and checkpointing of data-intensive applications?
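A minimal sketch of the in-memory checkpoint/restart idea behind these questions (hypothetical toy code, not the actual runtime system):

    import copy, random

    def run_with_checkpoints(state, steps, interval, fail_prob=0.05):
        # Advance a computation, keeping an in-memory checkpoint every
        # `interval` steps and rolling back when a simulated fault hits.
        checkpoint = copy.deepcopy(state)          # checkpoint at step 0
        step = 0
        while step < steps:
            if random.random() < fail_prob:        # simulated component failure
                state = copy.deepcopy(checkpoint)  # restore the last checkpoint
                step = (step // interval) * interval
                continue
            state["sum"] += step                   # one unit of "work"
            step += 1
            if step % interval == 0:               # periodic checkpoint
                checkpoint = copy.deepcopy(state)
        return state

    print(run_with_checkpoints({"sum": 0}, steps=100, interval=10))  # {'sum': 4950}

The research questions above concern making the analogous runtime-level machinery cheap: taking checkpoints asynchronously, shrinking their memory footprint, and detecting corruptions that a simple restart loop would silently propagate.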
Bio
Xiang Ni is a final-year PhD candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign. She is interested in developing runtime system techniques for scalable parallel executions on supercomputers. Performance optimization of irregular applications and application-oblivious fault tolerance are her primary areas of research. At Illinois, she is part of the Parallel Programming Laboratory, which develops Charm++ and its applications. She has worked closely with researchers from Disney Research on developing a first-of-its-kind parallel software system for cloth simulation, and has completed two summer internships at the Lawrence Livermore National Laboratory. Xiang received her master's degree at Illinois in 2012 for her work on asynchronous protocols for low-overhead checkpointing. Prior to that, she received a bachelor's degree in Computer Science from Beihang University in Beijing, China.
Dessislava Nikolova
Postdoctoral Research Scientist
Columbia University
Silicon Photonics for
Exascale Systems
Driven by the increasing use and adoption of cloud computing and big data applications, data centers and high-performance computers are envisioned to reach Exascale dimensions. This will require highly scalable and energy-efficient means for data exchange. Optical data movement is one of the most promising ways to address the bandwidth demands of these systems. Optical links are increasingly being deployed in data centers and high-performance computers, but to accommodate the increased bandwidth demand and provide flexibility and high link utilization, wavelength routing and optical switching are necessary. Silicon photonics is a particularly promising and applicable technology for realizing various optical components due to its small footprint, high bandwidth density, and potential for nanosecond-scale dynamic connectivity.
My research focuses on the design, modeling, and demonstration of novel silicon nanophotonic devices and systems for energy-efficient optical data movement. In particular, I aim to provide solutions for highly scalable silicon photonic switch fabrics. Within this work, several architectures for spatial and wavelength switches based on silicon photonic microrings are proposed and demonstrated. Analytical modeling and simulations show that the proposed architectures are highly scalable. By developing simple but accurate device models, the device parameters critical for switching performance are identified and their optimal values derived. Experimental demonstrations confirm the feasibility of these silicon photonic switches. The proposed devices can form the building blocks of future flexible, low-cost, and energy-efficient optical networks that can deliver large volumes of data with time-of-flight latencies.
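For context, the standard physics these switches exploit: a microring of radius $R$ and effective index $n_{\text{eff}}$ resonates at wavelengths satisfying

    m \, \lambda_{\text{res}} = 2\pi R \, n_{\text{eff}}, \qquad m = 1, 2, \dots,

so a small electrically or thermally induced change in $n_{\text{eff}}$ shifts $\lambda_{\text{res}}$ and steers a wavelength channel onto or off the ring's drop port, providing the fast switching element of such fabrics.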
Bio
Dessislava Nikolova is a postdoctoral scientist at Columbia University, where her current research is on photonic systems for optical interconnects, networks, and quantum communications. Prior to that, Dessislava was awarded the prestigious Europe-wide Marie Curie Fellowship to study magneto-plasmonics at University College London. She received her PhD in Computer Science from Antwerp University with research in the area of optical networks. Her work was recognized by a Best Paper award at the SPECTS symposium and led to a patent. She has also worked for Alcatel-Lucent, researching passive optical networks. Her future research goal is to design accessible, easily controllable, multifunctional photonic systems, thereby opening the field of active nanophotonics to computer scientists and network engineers.
Farnaz Niroui
PhD Candidate
Massachusetts Institute of Technology
Nanoscale Engineering with
Molecular Building Blocks
Mechanical properties of materials at the nanoscale can lead to unique physical phenomena and to devices with improved performance and novel functionalities. My research utilizes the mechanical behavior and structural deformations of few-nanometer-thin molecular films to achieve precise nanoscale force control. This, combined with deformation-dependent changes in the electrical and optical properties of matter, creates a platform for developing novel device concepts. Based on these principles, I have developed electromechanically tunable nanogaps composed of self-assembled compressive organic films sandwiched between conductive contacts. An applied voltage across these electrodes provides an electrostatic force that mechanically compresses the molecular layer to modulate the gap size. By modifying the molecular film through chemical synthesis and thin-film engineering, the nanogap dimensions and the extent of compression can be precisely controlled. The compressed molecules also provide the elastic force necessary to control surface adhesive forces and avoid permanent adhesion of the electrodes (known as stiction) as the gap is mechanically tuned. Utilizing these nanogaps, I have designed nanoelectromechanical (NEM) switches that operate through a tunneling switching mechanism. In this scheme, the electrostatic compression of the molecules leads to a decrease in the tunneling gap and an exponential increase in the tunneling current. With sub-5 nm switching gaps and nanoscale force control, these low-voltage, stiction-free devices overcome the challenges faced by current contact-based NEM switches, giving rise to promising applications in low-power electronics. Beyond electromechanical switches, these mechanically active nanogaps can serve as molecular metrological tools for probing nanoscale mechanical and electrical properties, and can enable dynamically tunable optical and plasmonic systems.
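The exponential sensitivity at the heart of this switching mechanism follows the standard tunneling relation (illustrated here with a simple rectangular-barrier model): the tunneling current through a gap of width $d$ scales as

    I \propto e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar},

where $\phi$ is the barrier height, so compressing the molecular layer by even a fraction of a nanometer multiplies the current by a large factor and yields a steep, low-voltage switch.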
Bio
Farnaz Niroui is currently a PhD candidate in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. She is a recipient of the Natural Sciences and Engineering Research Council of Canada Scholarship for graduate studies. Farnaz received her Master of Science degree in Electrical Engineering from MIT in 2013, working with Professors Vladimir Bulovic and Jeffrey Lang. She completed her undergraduate studies in Nanotechnology Engineering at the University of Waterloo in Canada. Her research interest lies at the interface of device physics, materials science, and nanofabrication, enabling the study, manipulation, and engineering of systems with unique functionalities at the nanoscale.
Idoia Ochoa
PhD Candidate
Stanford University
Genomic Data Compression
and Processing
One of the highest priorities of modern healthcare is to identify genomic changes that predispose individuals to debilitating diseases or make them more responsive to certain therapies. This effort is made possible by the generation of massive DNA sequencing data that must be stored, transmitted, and analyzed. Due to the large size of the data, storage and transmission represent a huge burden, as the current solutions are costly and demanding in space and time. Finally, due to imperfections in the data and the lack of theoretical models that describe it, the analysis tools generally lack theoretical guarantees, and different approaches exist for the same task.
Thus it is important to develop new tools and algorithms that facilitate the transmission and storage of the data and improve the inference performed on it. This is exactly the focus of my research. Part of it consists of developing compression schemes, which range from compression of single genomes to compression of the raw data output by the sequencing machines. Part of this data (the quality scores indicating the reliability of the output nucleotides) is normally compressed lossily, as it is inherently noisy and therefore difficult to compress. Moreover, it has been shown that lossy compression can potentially reduce the storage requirements while improving the inference performed on the data. Further understanding this effect is part of my ongoing research, together with characterizing the statistics of the noise, so that denoisers tailored to it can be designed.
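A minimal sketch of this kind of lossy quality-score compression (the bin boundaries below are hypothetical, chosen only for illustration):

    # Map Phred quality scores (0-93) to a few representative values.
    # Coarser bins mean fewer distinct symbols, hence better compressibility.
    BINS = [(0, 9, 6), (10, 19, 15), (20, 29, 25), (30, 93, 37)]  # (lo, hi, rep)

    def quantize(scores):
        return [next(rep for lo, hi, rep in BINS if lo <= q <= hi)
                for q in scores]

    print(quantize([2, 11, 24, 38, 40]))  # -> [6, 15, 25, 37, 37]

Because the quantized sequence uses only four symbols, a downstream entropy coder compresses it far better than the raw scores, at the cost of the distortion that the research above seeks to understand and control.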
I have also worked on developing compression schemes for databases such that similarity queries can still be performed in the compressed domain. This is of special interest for large biological databases, where retrieving genomic sequences similar to a query is necessary in several applications. Finally, I designed a tool for identifying disease driver genes associated with molecular processes in cancer patients.
Bio
Idoia is currently in her fifth year of the PhD program in the Electrical Engineering department at Stanford University, working with Prof. Tsachy Weissman. She received her MSc from the same department in 2012. Prior to Stanford, she received a BS and an MSc from the Telecommunications Engineering (Electrical Engineering) department at the University of Navarra, Spain. During her time at Stanford, she completed internships at Google and Genapsys, and she served as a technical consultant for the HBO TV show "Silicon Valley."
Her main interests are in the fields of compression, bioinformatics, information theory, coding, and signal processing. Her research focuses mainly on helping the bio community handle the massive amounts of genomic data being generated, for example by designing new and more effective compression programs for genomic data. Part of her effort also goes into understanding the statistics of the noise present in the data under consideration, so that denoisers tailored to it can be designed, thus improving the subsequent analysis performed on the data.
Her research has been funded by a La Caixa fellowship, a Basque Government fellowship, and a Stanford Graduate Fellowship.
Eleanor O’Rourke
Educational Systems for Maximizing Learning Online and in the Classroom
PhD Candidate
University of Washington
The goal of my research is to
create computing systems that
maximize student learning both
online and in the classroom. Specifically, my dissertation explores
the design, implementation, and evaluation of novel educational
systems that increase motivation, provide personalized learning experiences, and support formative assessment. As part of this work,
I have created an incentive structure that promotes the growth
mindset in an educational game, developed a framework for automatically generating instructional scaffolding, and evaluated a
system that visualizes student data in real-time to assist classroom
teachers. In the development of these systems, I combine ideas
from computer science, psychology, education, and the learning
sciences to develop novel technical methods of integrating learning theory into computational tools. In addition to evaluating my
work through classroom studies with students and teachers, I have
also conducted large-scale online experiments with tens of thousands of students. My findings provide new insights into how students learn and how computing systems can support the learning
process. The ultimate goal of my research is to build personalized
data-driven systems that transform how we teach, assess, communicate, and collaborate in learning environments.
Bio
Nell is a PhD candidate in Computer Science and Engineering at
the University of Washington, advised by Zoran Popović in the Center for Game Science. She received a B.A. in Computer Science and
Spanish at Colby College in 2007, and an M.S. in Computer Science
from the University of Washington in 2012. Her research lies at the
intersection of human-computer interaction and educational technology with a focus on creating novel learning systems to support
motivation, personalization, and formative assessment. Nell has
won several awards and scholarships, including the Google Anita
Borg Scholarship and the Microsoft Research Graduate Women’s
Scholarship.
Amanda Prorok
Heterogeneous Robot Swarms
Postdoctoral Researcher
University of Pennsylvania
As we harness swarms of autonomous robots to solve increasingly challenging tasks, we must find ways of distributing robot capabilities among distinct swarm members. My premise is that one robot type is not able to cater to all aspects of a given task, due to constraints at the single-platform level. Yet it is an open question how to engineer heterogeneous robot swarms, since we lack the foundational theories to help us make the right design choices and understand the implications of heterogeneity.
My approach to designing swarm robotic systems considers both top-down methodologies (macroscopic modeling) and bottom-up (single-robot-level) algorithmic design. My first research thrust targeted the specific problem of indoor localization for large robot teams, and employed a fusion of ultra-wideband and infrared signals to produce high accuracy. I developed the first ultra-wideband time-difference-of-arrival sensor model for mobile robot localization, which, when used collaboratively, achieved centimeter-level accuracy. Experiments with ten robots illustrated the effect of distributing the sensing capabilities heterogeneously throughout the team. This bottom-up approach highlighted the compromise between homogeneous teams that are very efficient yet expensive, and heterogeneous teams that are low-cost.
My second research thrust, which aims at formally understanding
this compromise, targets the general problem of distributing a
heterogeneous swarm of robots among a set of tasks. My strategy
is to model the swarm macroscopically, and subsequently extract
decentralized control algorithms that are optimal given the heterogeneous swarm composition and underlying task requirements. I
developed a dedicated diversity metric that identifies the relationship between performance and heterogeneity, and that provides
a means with which to control the composition of the swarm so
that performance is maximized. This top-down approach complements the bottom-up method by providing high-level abstraction
and foundational analyses, thus shaping a new way of exploiting
heterogeneity as a design paradigm.
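A minimal sketch of the macroscopic modeling idea (a generic rate-equation model, not necessarily the exact formulation used in this work): the fraction of robots engaged in each task evolves under a transition-rate matrix, and the steady-state allocation follows from the chosen rates.

    import numpy as np

    # Transition-rate matrix K for 3 tasks: K[i][j] is the rate at which
    # robots switch from task j to task i; each column sums to zero so the
    # total robot population is conserved.
    K = np.array([[-0.3,  0.1,  0.2],
                  [ 0.2, -0.1,  0.1],
                  [ 0.1,  0.0, -0.3]])

    x = np.array([1.0, 0.0, 0.0])    # all robots start at task 0
    dt = 0.01
    for _ in range(20000):           # Euler integration of dx/dt = K x
        x = x + dt * (K @ x)

    print(x.round(3), x.sum())       # steady-state task fractions; total stays 1.0

Designing the rates so that the steady state matches the task requirements, per robot type, is the kind of question the diversity metric and top-down analysis address.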
Bio
Amanda Prorok is a Postdoc in the General Robotics, Automation,
Sensing and Perception (GRASP) Lab at the University of Pennsylvania, where she works with Prof. Vijay Kumar on multi-robot systems. Prior to moving to UPenn, she spent one year working on
cutting-edge sensor technologies at Sensirion, during which period
her team launched the world’s first multi-pixel miniature gas sensor onto the market. She completed her PhD at EPFL, Switzerland,
where she addressed the topic of indoor localization for large-scale,
cooperative systems. Her dissertation was awarded the Asea Brown
Boveri (ABB) award for the best thesis at EPFL in the fields of Computer Sciences, Automatics and Telecommunications. Before starting her doctorate, she spent two years in Japan working for Mitsubishi in the robotics industry, as well as for the Swiss government in a
diplomatic role, on a full scholarship that was awarded to her by the
Swiss-Japanese Chamber of Commerce.
Elina Robeva
Super-Resolution without Separation
PhD Candidate
University of California at Berkeley
This is joint work with Benjamin
Recht and Geoffrey Schiebinger
at UC Berkeley.
We provide a theoretical analysis of diffraction-limited super-resolution, demonstrating that arbitrarily close point sources can be resolved in ideal situations. Given a low-resolution, blurred signal of M point sources of light, super-resolution imaging aims to recover the correct locations of the point sources and the intensity of light at each of them. Every point source of light is blurred by a point spread function determined by the imaging device (telescope, microscope, camera, or other). We assume that the incoming signal is a linear combination of M shifted copies (centered at the point sources) of a known point spread function with unknown shifts (the locations of the point sources) and intensities, and that one only observes a finite collection of evaluations of this signal.
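Concretely, with notation introduced for this summary, the observations take the form

    y(s_j) = \sum_{i=1}^{M} c_i \, \psi(s_j - t_i), \qquad j = 1, \dots, n,

where $\psi$ is the known point spread function, the $t_i$ are the unknown source locations, and the $c_i > 0$ are the unknown intensities; the task is to recover the pairs $(t_i, c_i)$ from the $n$ samples.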
To recover the locations and intensities, practitioners solve a convex program, which is a weighted version of basis pursuit over a continuous dictionary. Despite recent success in many empirical disciplines, the theory of super-resolution imaging remains limited. More precisely, our aim is to show that the true point source locations and intensities are the unique optimal solution to the above-mentioned convex program. Most existing proofs to date rely heavily on the assumption that the point sources are separated by more than some minimum amount. Building on polynomial interpolation techniques and tools from compressed sensing, we show that under some reasonable conditions on the point spread function, arbitrarily close point sources can be resolved by the above convex program from 2M+1 observations. Moreover, we show that the Gaussian point spread function satisfies these conditions.
Bio
Elina Robeva is a fourth-year graduate student in mathematics at UC Berkeley, advised by Bernd Sturmfels. Originally from Bulgaria, Elina's career as a mathematician started in middle school, when she took part in many competitions in mathematics and computer science. After winning two silver medals at the International Mathematical Olympiad in high school, she started her undergraduate degree at Stanford University in 2007. There she pursued her interests in mathematics and wrote two combinatorics papers with Professor Sam Payne. She received the Dean's Award, the Sterling Award, the undergraduate research award, and an honorable mention for the Morgan Prize. Elina completed software engineering internships at Facebook and Google and decided to pursue a PhD where she could apply her mathematical skills to problems in computer science and other applied disciplines. She commenced her PhD at Harvard University in 2011 and transferred to UC Berkeley in 2012 to work with Professor Bernd Sturmfels. Her papers focus on the interplay between algebraic geometry, statistics, and optimization. They include work on mixture models and the EM algorithm, orthogonal tensor decomposition, factorizations through the cone of positive semidefinite matrices, and super-resolution imaging.
Deblina Sarkar
2D Steep Transistor Technology: Overcoming Fundamental Barriers in Low-Power Electronics and Ultra-Sensitive Biosensors
Postdoctoral Associate, MIT
Aggressive technology scaling has resulted in an exponential increase in power dissipation levels, due to the degradation of device electrostatics as well as the fundamental thermionic limitation on the subthreshold swing of conventional Field-Effect Transistors (FETs). My research explores novel two-dimensional (2D) materials for obtaining improved electrostatic control, and Tunneling Field-Effect Transistors (TFETs), which employ a fundamentally different carrier transport mechanism in the form of band-to-band tunneling (BTBT), for overcoming the fundamental limitations of conventional FETs. This tailoring of both material and device technology can lead to transistors with super-steep turn-on characteristics, which is crucial for obtaining high energy efficiency and ultra-scalability. My research also establishes, for the first time, that material and device technology which evolved mainly with an aim of power reduction in digital electronics can revolutionize a completely different arena: bio/gas-sensor technology. The unique advantages of 2D semiconductors for electrical sensors are demonstrated, and it is shown that they lead to ultra-high sensitivity and provide an attractive pathway to single-molecule detectability, the holy grail of all biosensing research. Moreover, it is theoretically illustrated that steep turn-on, obtained through a novel technology such as BTBT, can result in unprecedented performance improvement over conventional electrical biosensors, with around 4 orders of magnitude higher sensitivity and 10x lower detection time. With the aim of building ultra-scaled low-power electronics as well as highly efficient sensors, my research achieves a significant milestone, furnishing the first experimental demonstration of TFETs based on a 2D channel material that beat the fundamental limitation on subthreshold swing (SS). This device, comprising an atomically thin channel, exhibits a record average SS at ultra-low supply voltages, thus cracking the long-standing issue of simultaneous dimensional and power-supply scalability, and hence can lead to a paradigm shift in information technology as well as healthcare.
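For reference, the thermionic limit referred to above is the standard result that a conventional FET's subthreshold swing satisfies

    SS \ge \frac{kT}{q} \ln 10 \approx 60\,\text{mV/decade at } T = 300\,\text{K},

i.e., at least 60 mV of gate voltage for every tenfold change in current; band-to-band tunneling devices are not bound by this limit, which is what makes sub-60 mV/decade switching at low supply voltages possible.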
Bio
Deblina Sarkar completed her MS and PhD in the ECE department at UCSB in 2010 and 2015, respectively. Her doctoral research, which combined the interdisciplinary fields of engineering, physics, and biology, included theoretical modeling and experimental demonstration of energy-efficient electronic devices and ultra-sensitive biosensors. She is currently a postdoctoral researcher in the Synthetic Neurobiology group at MIT and is interested in exploring novel technologies for mapping and controlling brain activity. Ms. Sarkar is the lead author of numerous publications, including in eminent journals such as Nature, Nano Letters, ACS Nano, and TED, as well as at prestigious conferences such as IEDM and DRC, and has authored or coauthored more than 30 papers to date. Several of her works have appeared in the popular press, and her research on novel biosensors has been highlighted by Nature Nanotechnology. She is the recipient of numerous awards and recognitions, including a Presidential Fellowship and an Outstanding Doctoral Candidate Fellowship for pursuing doctoral research (2008); selection as one of three researchers worldwide to receive the prestigious IEEE EDS PhD Fellowship Award (2011); selection as one of four young researchers from the USA honored as a "Bright Mind" and invited to speak at the KAUST-NSF Conference (2015); and selection as one of three winners of the Falling Walls Lab Young Innovator's competition at UC San Diego (2015).
Melanie Schmidt
Algorithmic Techniques for Solving the k-Means Problem on Big Data Sets
PostDoc
Carnegie Mellon University
Algorithm theory consists of designing and analyzing methods to solve computational problems. The k-means problem is a computational problem from geometry. The input consists of points from d-dimensional Euclidean space, i.e., vectors. The goal is to group these into k groups and to find a representative point for each group. Clustering is a major tool in machine learning: imagine that the vectors represent songs in a music collection or handwritten letters. The clustering can show which objects are similar, and the representatives can be used to classify newly arriving objects.
There are many clustering objectives, and the k-means objective might be the most popular among them. It is based on the Euclidean distance. The representative of a group is its centroid, i.e., the sum of the points in the group divided by their number. A grouping is evaluated by computing the squared Euclidean distance of every point to its representative and summing these up. The k-means problem consists of finding a grouping into k groups that minimizes this cost function.
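In symbols, this is the standard formulation of what the paragraph above describes: given points $x_1, \dots, x_n \in \mathbb{R}^d$, find a partition into groups $C_1, \dots, C_k$ minimizing

    \sum_{j=1}^{k} \sum_{x \in C_j} \lVert x - \mu_j \rVert^2, \qquad \mu_j = \frac{1}{|C_j|} \sum_{x \in C_j} x,

where the centroid $\mu_j$ is the representative that minimizes the summed squared Euclidean distance for its group.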
The algorithmic challenges connected to the k-means problem are numerous. The problem is NP-hard, but it can be solved approximately up to a constant factor. What is the best possible approximation factor? Can we prove lower bounds? A different approach is to fix a parameter to lower the complexity. If the number of groups k is fixed, then the problem can be approximated to arbitrary precision. This assumption also allows us to approximately solve the problem with algorithms that read the input data only once and in a given order, a main tool for dealing with big data. How small can we make the memory requirement of such a streaming algorithm, and will the algorithm be efficient in practice? We see different answers to these questions.
Bio
Melanie Schmidt obtained a master's degree with distinction in computer science (with a minor in mathematics) from TU Dortmund University in 2009. In her undergraduate studies, she focused on network flow theory, a topic that lies at the intersection of theoretical computer science and discrete mathematics. During her PhD, her main focus became clustering algorithms, in particular for large data sets. In 2012, Melanie Schmidt received the Google Anita Borg Memorial Scholarship, which supports women who excel in technology. She graduated with distinction with her PhD thesis on "Coresets and streaming algorithms for the k-means problem and related clustering objectives" in 2014. She was then awarded a merit scholarship by the German Academic Exchange Service (DAAD) to spend a year as a visiting PostDoc at Carnegie Mellon University in Pittsburgh, where she works with Anupam Gupta.
Claudia Schulz
Explaining Logic Programming with Argumentation
PhD Candidate
Imperial College London
Argumentation Theory and Logic Programming are two prominent approaches in the field of knowledge representation and reasoning, a sub-field of Artificial Intelligence. One application of such approaches is recommendation systems, used for example to support medical treatment decisions. The main difference between Argumentation Theory and Logic Programming is that the former focuses on human-like reasoning, thus sometimes neglecting the efficiency of the reasoning procedure, whereas the latter is concerned with the efficient computation of solutions to a reasoning problem, resulting in a less human-understandable process. In recent years, Logic Programming has frequently been applied to the computation of reasoning problems in Argumentation Theory and has been found to be an efficient method for determining solutions to those problems.
My research is concerned with the opposite direction, i.e. with using
ideas from Argumentation Theory to improve Logic Programming
techniques. One of the shortcomings of Logic Programming is that
it does not provide any explanation of the solution computed for a
given problem. For recommendation systems based on Logic Programming, this means that there is no explanation for a recommendation made by the system. I thus created a mechanism to explain
Logic Programming solutions in a human-like argumentative style
by applying ideas from the field of Argumentation Theory. A medical
treatment recommendation can thus be automatically explained in
the style of two physicians arguing about the best treatment.
Bio
Claudia Schulz received her BSc in Cognitive Science from the University of Osnabrück in 2011. She then decided to specialise in Artificial Intelligence, receiving an MSc in Artificial Intelligence from Imperial College London in 2012. Since 2012, Claudia has been a PhD candidate at Imperial College London, interested in logic-based formalisms in Artificial Intelligence used for the representation of knowledge and for decision making based on the represented knowledge. Claudia is a keen lecturer and teaching assistant, which won her Imperial College's Best Graduate Teaching Assistant Award in 2015. She was also involved in setting up the Imperial College ACM Student Chapter and served as its chair in 2014/15. Apart from academia, Claudia enjoys the outdoors and is an enthusiastic climber and runner.
Mahsa Shoaran
Low-Power Circuit and System Design for Epilepsy Diagnosis and Therapy
Postdoctoral Fellow
California Institute of Technology
Epilepsy is a common neurological disorder affecting over 50 million people in the world. Approximately one third of epileptic patients exhibit seizures that are not controlled by medication. The development of new devices capable of rapid and reliable seizure detection followed by brain stimulation holds great promise for improving the quality of life of millions of people with epileptic seizures worldwide.
In this context, the current research presents low-power circuit and system design techniques for data acquisition, compression, and seizure detection in multichannel cortical implants. Compressive sensing is utilized as the main data reduction method in the proposed system. Existing microelectronic implementations of compressive sensing operate on a single-channel basis and therefore incur high power consumption and large silicon area. As an alternative, a multichannel measurement scheme and an appropriate recovery scheme are proposed, which encode the entire array into a single compressed data stream.
Building on this, the first fully integrated circuit to address multichannel compressed-domain feature extraction for epilepsy diagnosis is proposed. This approach enables a real-time, compact, low-power, low-hardware-complexity implementation of the seizure detection algorithm, as part of an implantable neuroprosthetic device for the treatment of epilepsy. The methods developed in this research can also be employed in applications other than epilepsy diagnosis and neural recording that similarly require data recording and processing from multiple nodes.
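The data reduction step follows the usual compressed sensing template, stated here generically: the implant transmits $y = \Phi x$ with a wide measurement matrix $\Phi \in \mathbb{R}^{m \times n}$, $m \ll n$, and the receiver recovers the approximately sparse signal $x$ by solving

    \min_{x} \lVert x \rVert_1 \quad \text{subject to} \quad y = \Phi x,

with the multichannel scheme amortizing one such encoding across the entire electrode array rather than one per channel.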
Eva Song
A New Approach to Lossy Compression and Applications to Security
PhD Candidate
Princeton University
Rate-distortion theory is studied in the context of lossy compression networks with and without security concerns. A new source coding technique using the "likelihood encoder" is proposed that achieves the best known compression rate in various lossy compression settings. It is demonstrated
that the use of the likelihood encoder together with Wyner's soft-covering lemma yields simple achievability proofs for classical
source coding problems. The cases of the point-to-point rate-distortion function, the rate-distortion function with side information
at the decoder (i.e. the Wyner-Ziv problem), and the multi-terminal source coding inner bound (i.e. the Berger-Tung problem) are
examined. Furthermore, a non-asymptotic analysis is used for the
point-to-point case to examine the upper bound on the excess
distortion provided by this method. The likelihood encoder is also
compared, both in concept and performance, to a recent alternative
technique using properties of random binning. Also, the likelihood
encoder source coding technique is further used to obtain new results in rate-distortion based secrecy systems. Several secure source
coding settings, such as using shared secret key and correlated side
information, are investigated. It is shown mathematically that the
rate-distortion based formulation for secrecy fully generalizes the
traditional equivocation based secrecy formulation. The extension
to joint source-channel security is also considered using similar encoding techniques. The rate-distortion based secure source-channel analysis has been applied to optical communication for reliable
and secure delivery of an information source through an insecure
multimode fiber channel.
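The classical object underlying these results, included here for context, is the rate-distortion function

    R(D) = \min_{P_{\hat{X}|X} : \; \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X}),

the smallest compression rate achievable at average distortion $D$; the likelihood encoder yields new achievability proofs for this function and for its side-information and multi-terminal generalizations.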
Bio
Mahsa received her BSc and MSc degrees in Electrical Engineering and Microelectronics from Sharif University of Technology, Tehran, Iran, in 2008 and 2010, respectively. In April 2015, she received her PhD with honors from the Swiss Federal Institute of Technology in Lausanne (EPFL), working on implantable neural interfaces for epilepsy diagnosis. She is currently a postdoctoral scholar in the Mixed-mode Integrated Circuits and Systems Lab at Caltech. Her main research interests are low-power IC design for biomedical applications, innovative system design for the diagnosis and treatment of neurological disorders, implantable devices for cardiovascular diseases, and neuroscience. She received a silver medal in Iran's National Chemistry Olympiad competition in 2003.
Bio
Eva Song received her master's and PhD degrees in Electrical Engineering from Princeton University in 2012 and 2015, respectively.
She received her B.S. degree in Electrical and Computer Engineering from Carnegie Mellon University, Pittsburgh, PA, in 2010. In her
PhD work, she studied lossy compression and rate-distortion based
information-theoretic secrecy in communications. She is the recipient of the Wu Prize for Excellence in 2014. During 2012, she interned at Bell Labs, Alcatel-Lucent, NJ, to study secrecy in optical communications. Her general research interests include information theory, security, compression, and machine learning.
Veronika Strnadova-Neeley
Graduate Student
University of California at Santa Barbara
Efficient Clustering and
Data Reduction Methods for
Large-Scale Structured Data
The necessity for efficient algorithms in large-scale data analysis has become clear in the past few years, as unprecedented scales of information have become available in a variety of domains, from bioinformatics to social networks to signal processing. In many cases, even quadratic-time algorithms are too slow for such data, and much recent computer science research has focused on developing efficient methods to analyze vast amounts of information. My contribution to this line of research focuses on new algorithms for large-scale clustering and data reduction that exploit inherent low-dimensional structure to overcome the challenge of significant amounts of missing and erroneous entries. In particular, over the past few years, together with collaborators from Lawrence Berkeley National Lab, UC Santa Barbara, UC Berkeley, and the Joint Genome Institute, I have developed a fast algorithm for the linkage-group finding phase of genetic mapping, as well as a novel data reduction method for analyzing genetic mapping data. The efficiency of these algorithms has helped to produce accurate maps for large, complicated genomes, such as wheat, by relying on assumptions about the underlying ordered structure of the data. The efficiency and accuracy of these methods suggest that in order to further advance state-of-the-art clustering and data reduction methods, we should look more closely at the structure of the data in a given application of interest. Assumptions about this structure may lead to much faster algorithms without losing much in terms of solution quality, even with high amounts of missing or erroneous data entries. In ongoing and future research, I will explore algorithmic techniques that exploit inherent data structure for faster dimensionality reduction, in order to identify important and meaningful features of the data.
Bio
I am a PhD Candidate with a Computational Science and Engineering emphasis at UC Santa Barbara, working with adviser John R.
Gilbert in the Combinatorial Scientific Computing Lab. For the past
few years I have been collaborating with researchers at Lawrence
Berkeley National Lab, UC Berkeley and the Joint Genome Institute
to design scalable algorithms for genetic mapping. Broadly, my
research interests include scalable clustering algorithms, bioinformatics, graph algorithms, linear algebra and scientific computing. I
completed my BS in applied mathematics at the University of New
Mexico.
Huan Sun
PhD Candidate
University of California at Santa Barbara
Intelligent and Collaborative Question Answering
The paradigm of information search is undergoing a significant transformation with the popularity of mobile devices. Unlike traditional search engines, which retrieve numerous webpages, techniques that can precisely and directly answer user questions are becoming more desirable. We investigate two strategies. (1) Machine-intelligent query resolution, where we present two novel frameworks: (i) schema-less knowledge graph querying, which directly searches knowledge bases to answer user queries and successfully deals with the challenge that answers to user queries cannot simply be retrieved by exact keyword and graph matching, due to differing information representations; and (ii) combining knowledge bases with the Web. We recognized that knowledge bases are usually far from complete, and information required to answer questions may not always exist in them. This framework mines answers directly from large-scale web resources, while employing knowledge bases as a significant auxiliary to boost question answering performance. (2) Human collaborative query resolution. We made the first attempt to quantitatively analyze expert routing behaviors, i.e., how an expert decides where to transfer a question she cannot solve. A computational routing model was then developed to optimize team formation and team communication for more efficient problem solving. Future directions of my research include leveraging both machines and humans for better question answering and decision making in various domains such as healthcare and business intelligence.
Bio
Huan Sun is a PhD candidate in the Department of Computer Science at the University of California, Santa Barbara, and is expected
to graduate in September 2015. Her research interests lie in data
mining and machine learning, with emphasis on text mining, network analysis and human behavior understanding. Particularly, she
has been investigating how to model and combine machine and
human intelligence for question answering and knowledge discovery. Prior to UCSB, Huan received her BS in EE from the University
of Science and Technology of China in 2010. She received the UC
Regents’ Special Fellowship and the CS PhD Progress Award in 2014.
She did summer internships at Microsoft Research and IBM T.J. Watson Research Center. Huan will join the Department of Computer
Science at the Ohio State University as an assistant professor in July
2016.
Ewa Syta
Certificate Cothority: Towards Trustworthy Collective CAs
PhD Candidate
Yale University
Certificate Authorities (CAs) sign certificates attesting that the holder of a public key legitimately represents a name such as google.com, in order to authenticate SSL/TLS connections. Only if a server can produce a certificate signed by a trusted CA will the client's browser accept it and establish a secure connection.
Current web browsers directly or indirectly trust hundreds of CAs, any one of which can issue fake certificates for any domain. Consequently, it takes only one compromised or malicious CA to threaten the security of the entire PKI and, in turn, everyone on the Internet. Due to this "weakest-link" security, hackers have stolen the "master keys" of CAs such as DigiNotar and Comodo and successfully generated fake certificates for website spoofing and man-in-the-middle attacks.
We propose to replace current, high-value certificate authorities with a certificate cothority (CC): a practical system that embodies strongest-link security by allowing all participants to validate certificates before they are issued and endorsed, thereby proactively preventing their misuse.
We build certificate cothorities using an instantiation of a collective authority (cothority), an architecture we propose that enables thousands of participants to witness, validate, and co-sign an authority's public actions with moderate delays and costs.
Each of the potentially thousands of hosts comprising a certificate cothority independently validates each new batch of certificates, either contributing a share of a collective digital signature or withholding it and raising an alarm if misbehavior is detected. This collective signature attests to the client that not just one but many (ideally thousands of) well-known servers independently checked and signed off on a certificate. A certificate cothority therefore guarantees strongest-link security whose strength increases as the collective grows, instead of decreasing to weakest-link security as in today's CA system.
Bio
Ewa Syta is a PhD candidate in the Computer Science Department
at Yale University. She is co-advised by Professors Michael Fischer
and Bryan Ford. Prior to joining Yale, she earned her BS and MS in Computer Science and Cryptology from the Military University of Technology in Warsaw, Poland. Her research interests lie in computer security. She is particularly interested in the security and privacy issues users face as a result of engaging in online activities. She
has been working on developing stronger anonymous communication technologies, privacy-preserving biometric authentication
schemes, anonymous and deniable authentication methods as well
as different ways to generate good and verifiable randomness in a
distributed setting.
Rabia Tugce Yazicigil
Enabling 5/Next-G Wireless Communications with Energy-Efficient, Compressed Sampling Rapid Spectrum Sensors
PhD Candidate
Columbia University
Future 5G networks will drastically advance the way we interact with each other and with machines, and how machines interact with each other. The data storm driven by emerging technologies like the "Internet of Things", "Digital Health", machine-to-machine communications, and video over wireless leads to a pressing spectrum scarcity. Future cognitive radio systems employing multi-tiered, shared-spectrum access are expected to deliver superior spectrum efficiency over existing scheduled-access systems. We focus on lower-tier "smart" devices that evaluate the spectrum dynamically and opportunistically use the underutilized spectrum. These smart devices require spectrum sensing for interferer avoidance, and the integrated interferer detectors need to be fast, wideband, and energy efficient. We are developing quadrature analog-to-information converters (QAIC), a novel compressed sampling (CS) technique for bandpass signals. With a QAIC, the wideband spectrum can be sampled at a substantially lower rate set by the information bandwidth, rather than the much higher Nyquist rate set by the instantaneous bandwidth. As a result, innovative spectrum sensor RF ICs can be designed to simultaneously deliver a very short scan time, a very wide span, and a high frequency resolution, while requiring only modest hardware and energy resources. This is not possible with existing spectrum scanning solutions. Our first QAIC RF IC demonstration scans a wideband 1 GHz span with a 20 MHz resolution bandwidth in 4.4 μs, offering 50x faster scan time compared to traditional sweeping spectrum scanners and a 6.3x compressed aggregate sampling rate compared to traditional concurrent Nyquist-rate approaches. The unique QAIC bandpass architecture is 50x more energy efficient than traditional spectrum scanners and 10x more energy efficient than existing lowpass CS spectrum sensors.
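To put those figures in rough perspective (a back-of-the-envelope reading, assuming an I/Q Nyquist baseline of about 2 GS/s aggregate for a 1 GHz instantaneous bandwidth): a 6.3x compression corresponds to an aggregate sampling rate near 2 GS/s / 6.3 ≈ 320 MS/s, while still covering the full 1 GHz span at 20 MHz resolution in a single 4.4 μs scan.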
Bio
Rabia Tugce Yazicigil received the BS degree in electronics engineering from Sabanci University, Istanbul, Turkey, in 2009, and the MS degree in electrical and electronics engineering from École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2011. She is currently a PhD candidate in the electrical engineering department at Columbia University, New York, advised by Prof. Peter Kinget and co-advised by Prof. John Wright. Her interdisciplinary research focuses on developing and implementing novel spectrum sensing architectures exploiting compressed sampling for future cognitive radio systems. She presented her research on "A 2.7-3.7GHz Rapid Interferer Detector Exploiting Compressed Sampling with a Quadrature Analog-to-Information Converter", together with a live demo of the system, at the prestigious 2015 IEEE International Solid-State Circuits Conference (ISSCC); this work was supported by the National Science Foundation EARS program in collaboration with Interdigital Communications. She has received a number of awards, including second place at the Bell Labs Future X Days Student Research Competition (2015), the Analog Devices Inc. outstanding student designer award (2015), and the 2014 Millman Teaching Assistant Award of Columbia University.
Qi (Rose) Yu
Fast Multivariate Spatiotemporal Analysis via Low Rank Tensor Learning
PhD Candidate
University of Southern California
Many data are spatiotemporal by nature, such as climate measurements, road traffic, and user check-ins. Complex spatial and temporal dependencies pose new challenges for large-scale spatiotemporal data analysis. Existing models usually assume simple interdependence and are computationally expensive. In this work, we propose a unified low-rank tensor learning framework for multivariate spatiotemporal analysis, which can conveniently incorporate different properties of the data, such as spatial clustering, temporal periodicity, and shared structure among variables. We demonstrate how the framework can be applied to two central tasks in spatiotemporal analysis: cokriging and forecasting. We develop an efficient greedy algorithm to solve the resulting optimization problem with convergence guarantees. Empirical evaluation shows that our method is not only significantly faster than existing methods but also more accurate.
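As a rough sketch of the greedy idea, the Python snippet below solves a simplified matrix version of the problem: a multivariate regression weight matrix is grown one rank-1 component at a time from the top singular pair of the negative gradient, with an exact line search at each step. The data and dimensions are synthetic placeholders, and the actual framework operates on tensors with spatial and temporal structure that this toy omits.

# Toy matrix analogue of greedy low-rank learning: build the weight
# matrix W one rank-1 component at a time from the top singular pair
# of the negative gradient, with an exact line search per step.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_feats, n_targets, true_rank = 200, 30, 25, 3

# Synthetic low-rank ground truth and noisy observations.
W_true = (rng.standard_normal((n_feats, true_rank))
          @ rng.standard_normal((true_rank, n_targets)))
X = rng.standard_normal((n_samples, n_feats))
Y = X @ W_true + 0.01 * rng.standard_normal((n_samples, n_targets))

W = np.zeros((n_feats, n_targets))
for it in range(true_rank):
    # Gradient of 0.5 * ||Y - X W||_F^2 with respect to W.
    grad = X.T @ (X @ W - Y)
    # Top singular pair of -grad gives the best rank-1 descent direction.
    U, s, Vt = np.linalg.svd(-grad, full_matrices=False)
    direction = np.outer(U[:, 0], Vt[0, :])
    # Exact line search along the rank-1 direction.
    Xd = X @ direction
    step = np.sum(Xd * (Y - X @ W)) / np.sum(Xd * Xd)
    W += step * direction
    err = np.linalg.norm(Y - X @ W) / np.linalg.norm(Y)
    print(f"rank {it + 1}: relative error {err:.4f}")

Each pass adds exactly one rank to W, so the loop count directly controls model complexity, which is what makes this style of greedy solver fast on large problems.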
Bio
Qi (Rose) Yu is a fourth-year PhD candidate at the University of Southern California with a particular interest in machine learning and data mining. Rose's research focuses on large-scale spatiotemporal data analysis, where she designs algorithms for predictive tasks in applications including climate informatics, mobile intelligence, and social media. Her work is supported by the USC Annenberg Graduate Fellowship. She has interned at Microsoft R&D, Intel Labs, Yahoo Labs, and the IBM Watson Research Center. She was selected and funded as one of 200 outstanding young computer scientists and mathematicians from around the world to participate in the Heidelberg Laureate Forum.
Prior to enrolling at USC, Rose earned her bachelor's degree in computer science from the Chu Kochen Honors College at Zhejiang University. Before beginning her graduate studies, she was awarded the Microsoft Research Asia Young Fellowship. Outside the lab, she is the technical co-founder of NowMoveMe, a neighborhood discovery startup.
Zhou Yu
Engagement in Multimodal Interactive Conversational Systems
PhD Student
Carnegie Mellon University
Autonomous conversational systems, such as Apple Siri, Google Now, and Microsoft Cortana, act as personal assistants that set alarms, mark events on calendars, and so on. Some systems provide restaurant or transportation information to users. Despite their ability to complete these simple tasks through conversation, they still act according to pre-defined task structures and do not sense or react to their human interactants' nonverbal behaviors or internal states, such as their level of engagement. This problem can also be found in other interactive systems.
Drawing knowledge from human-human communication dynamics, I use multimodal sensors and computational methods to understand and model user behaviors when interacting with a system that has conversational abilities (e.g., spoken dialog systems, virtual avatars, humanoid robots). By modeling verbal and nonverbal behaviors, such as smiles, I infer high-level psychological states of the user, such as attention and engagement. I focus on maintaining engaging conversations by modeling users' engagement states in real time and making conversational systems adapt to their users via techniques such as adaptive conversational strategies and incremental speech production. I apply my multimodal engagement model in both a non-task-oriented social dialog framework and a task-oriented dialog framework that I designed. I developed an end-to-end, non-task-oriented multimodal virtual chatbot, TickTock, which serves as a framework for controlled multimodal conversation analysis. TickTock can carry on free-form everyday chat with users in both English and Chinese. Together with the ETS Speech and Dialog team, I developed a task-oriented, distributed web-based system, HALEF, which has both visual and audio sensing capabilities for human behavior understanding. Users can access the system via a web browser, which in turn reduces the cost and effort of data collection. HALEF can be easily adapted to different tasks; we implemented an application in which the system acts as an interviewer to help users prepare for job interviews.
For demos, please visit my webpage: http://www.cs.cmu.edu/~zhouyu/
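As a rough illustration of the real-time adaptation loop described above, the Python sketch below trains a small logistic model on per-turn multimodal features and switches dialog strategy when the predicted engagement score drops. Every feature name, threshold, and data point is a hypothetical placeholder; this is not the actual TickTock or HALEF engagement model.

# Toy real-time engagement loop: a small logistic model scores each
# conversational turn from multimodal features, and a strategy switch
# fires when predicted engagement drops. All features and data are
# hypothetical placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-turn features: [smile_intensity, gaze_on_agent,
# speech_rate, response_delay]; label 1 = engaged, 0 = disengaged.
X_train = rng.uniform(0.0, 1.0, size=(300, 4))
y_train = ((0.6 * X_train[:, 0] + 0.5 * X_train[:, 1]
            - 0.4 * X_train[:, 3]
            + 0.1 * rng.standard_normal(300)) > 0.4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fit by gradient descent.
w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X_train @ w + b)
    w -= lr * X_train.T @ (p - y_train) / len(y_train)
    b -= lr * float(np.mean(p - y_train))

def choose_strategy(turn_features, threshold=0.5):
    """Switch to a re-engagement strategy when the score drops."""
    score = float(sigmoid(turn_features @ w + b))
    return ("continue_topic" if score >= threshold else "re_engage"), score

strategy, score = choose_strategy(np.array([0.2, 0.3, 0.5, 0.9]))
print(f"engagement={score:.2f} -> strategy={strategy}")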
Bio
Zhou is a fifth-year PhD student in the Language Technologies Institute, School of Computer Science, Carnegie Mellon University, where she works with Prof. Alan Black and Prof. Alex Rudnicky. Zhou creates end-to-end interactive conversational systems that are aware of their physical situation and their human partners via real-time multimodal sensing and machine learning techniques. Zhou holds a B.S. in computer science and a B.A. in English language with a linguistics focus, both from Zhejiang University (2011). Zhou has also interned at Microsoft Research with Eric Horvitz and Dan Bohus, at Educational Testing Service with David Suendermann-Oeft, and at the Institute for Creative Technologies at USC with Louis-Philippe Morency. Zhou is also a recipient of the Quality of Life Fellowship.
http://risingstars15-eecs.mit.edu/
Rising Stars 2015 Committee
Workshop Chair
Anantha Chandrakasan, Workshop Chair
Vannevar Bush Professor of Electrical Engineering and Computer Science
Department Head, MIT Electrical Engineering and Computer Science
Program Chairs
Regina Barzilay, Workshop Technical Co-Chair
Professor of Electrical Engineering and Computer Science, MIT
Dina Katabi, Workshop Technical Co-Chair
Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and
Computer Science, MIT
Asu Ozdaglar, Workshop Technical Co-Chair
Professor of Electrical Engineering and Computer Science, MIT
Director, Laboratory for Information and Decision Systems
Workshop Administration
Debroah Hodges-Pabón
Workshop Administrative Manager
Microsystems Technology
Laboratories, MIT
617.253.5264
[email protected]
Dorothy Curtis
Research Scientist
Computer Science and Artificial
Intelligence Lab, MIT
617.253.0541
[email protected]
Audrey Resutek
Communications Officer
Electrical Engineering and
Computer Science, MIT
617.253.4642
[email protected]
Rising Stars 2015 Sponsors
“The Rising Stars workshop was an amazing
opportunity — to chat with absolutely top
professors about my research, to learn from
insiders about how to thrive in an academic
career, and to meet the next wave of world-class
researchers in EECS.”
— Tamara Broderick,
2013 Rising Stars alumna
Assistant Professor, Electrical Engineering and Computer Science
Massachusetts Institute of Technology
“The Rising Stars workshop helped me understand
thoroughly all the aspects of the job application
process, so I can do my best at every step of this
process.”
— Raluca Ada Popa,
Rising Stars 2013 alumna
Assistant Professor
University of California at Berkeley
Contact:
Anantha P. Chandrakasan, Department Head, EECS
Vannevar Bush Professor of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
77 Massachusetts Ave., 38-403
Cambridge, MA 02139-4307
Phone: 617.253.4601
617.258.7619
Fax: 617.253.0572
Email: [email protected]