The Logic-Based Route to Medical Nanobots

Transcription

Selmer Bringsjord
Rensselaer AI & Reasoning (RAIR) Laboratory
Department of Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute (RPI)
Troy NY 12180 US
@ Mt Orford 4.2.08
Since I’m not there, I care.
The Rensselaer AI & Reasoning (RAIR) Lab ...
http://www.cogsci.rpi.edu/research/rair
Today ...
Specifically ...
1. Engineering nanobots that, relative to their
programs, are known in advance to behave
correctly is possible only if this engineering
is based on formal logic.
2. Engineering nanobots that, relative to our
ethical codes, are known in advance to
behave correctly is possible only if this
engineering is based on formal logic.
Agents (AIMA2e)
Logic/Proof
Logic-Based Robotics
is the field devoted to designing and implementing
robots whose significant actions are a function of
logico-mathematical representations of what these
robots know and believe, where this function is
computed by mechanized reasoning.*
* Includes: object-level reasoning, reasoning that produces object-level reasoning (e.g., tactics, methods), and direct, “dirty,” purely computational procedures compiled from either of the first two.
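To make the definition concrete, here is a minimal Common Lisp sketch (not from the talk) of an agent whose action is computed from its logico-mathematical representations of what it believes; the two-rule reasoner below is a toy stand-in for genuine mechanized reasoning, and every predicate and action name is illustrative.

(defparameter *beliefs* '((low-glucose patient-7)))   ; illustrative belief base

(defparameter *rules*
  ;; Toy rule base, (belief-pattern . action-pattern); illustrative only.
  '(((low-glucose ?p)  . (release-glucagon ?p))
    ((high-glucose ?p) . (release-insulin ?p))))

(defun derive-action (beliefs)
  "Toy stand-in for mechanized reasoning: fire the first rule whose
antecedent predicate matches a current belief, and return the action."
  (loop for (ante . action) in *rules*
        for belief = (find (first ante) beliefs :key #'first)
        when belief
          return (list (first action) (second belief))))

;; (derive-action *beliefs*)  =>  (RELEASE-GLUCAGON PATIENT-7)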
Some Macroscopic Examples ...
Hunt the Wumpus
Situation Calculus
The situation calculus in the following layers works as follows:
The result function takes in a list of actions and returns a list representing a location.
There is a general definition of the result function, which SNARK uses to build up sequences of actions; it is defined as:
(= (result ?actions (result (list ?action) square))
   (result (append (list ?action) ?actions) square))
Results for single actions are then defined. Since the theory is used to plan a path consisting of visited squares, the result of a single action on a square is defined only if that square is visited; e.g., if the action is ‘up, the result function returns the square above it:
(= (result (list up) (list 1 0)) (list 1 1))
SNARK is then used to prove that there is a list of actions which, applied by the result function to the agent’s current location, yields the location of interest.
For example, if the agent is at (2,1) and wants to get to (0,0), SNARK would generate, assuming the appropriate squares are visited, (list down left left):
(= (result ?actions (list 2 1)) (list 0 0))
(= (result ?actions (result (list ?action) (list 2 1))) (list 0 0))   //general-result-defn
(= (result ?actions (list 2 0)) (list 0 0))                           //result-of-down
(= (result ?actions (result (list ?action) (list 2 0))) (list 0 0))   //general-result-defn
(= (result ?actions (list 1 0)) (list 0 0))                           //result-of-left
At this point ‘left solves it, and SNARK has remembered the list up to this point, so the answer is (list down left left).
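Below is a rough, self-contained sketch of how such a query might be posed to SNARK. It assumes a Lisp image with SNARK loaded; ASSERT and PROVE with an :ANSWER term are used just as in the false-belief session later in this talk, but the configuration switches and the PLAN wrapper in the answer term are illustrative, and the square argument of the general axiom is written here explicitly as the variable ?square.

(in-package :snark-user)

(initialize)
(use-resolution t)        ; illustrative configuration switches
(use-paramodulation t)    ; equality reasoning for the = axioms

;; General definition of RESULT (as above): build a plan one action at a time.
(assert '(= (result ?actions (result (list ?action) ?square))
            (result (append (list ?action) ?actions) ?square)))

;; Single-step results, defined only for visited squares (as above).
(assert '(= (result (list down) (list 2 1)) (list 2 0)))
(assert '(= (result (list left) (list 2 0)) (list 1 0)))
(assert '(= (result (list left) (list 1 0)) (list 0 0)))

;; Ask for a plan taking the agent from (2,1) to (0,0); the :ANSWER term
;; carries the instantiated ?actions back out of the proof.
(prove '(= (result ?actions (list 2 1)) (list 0 0))
       :answer '(plan ?actions))
;; Per the derivation above, the expected answer term is (PLAN (LIST DOWN LEFT LEFT)).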
Simulation Performance
• In the world situation on the right, it takes SNARK 2 seconds to generate (LIST UP RIGHT RIGHT DOWN DOWN RIGHT RIGHT UP UP UP UP UP UP LEFT LEFT LEFT LEFT LEFT LEFT DOWN DOWN DOWN DOWN DOWN DOWN) as a solution for the home layer – that’s 2 seconds for a list of length 25.
• Before much-needed efficiency enhancements and some slight theory adjustments, this proof would have taken well over a day.
• This shows the need for careful, terse theory and for taking full advantage of all appropriate efficiency options in SNARK.
• Here, a gray square represents a visited square and A represents the agent’s location.
Simulation Performance: Type I
[Chart: min, average, and max times (scale 0 to 150) for board sizes 2x2 through 10x10.]
“Feeding” WW-Cracking Robot
PERI ...
PERI “Cracked” Block Design*
*With much help from Sandia Labs’ Bettina Schimanski.
Cracking False-Belief Tasks ...
In SL, w/ real-time comm w/ ATP
Konstantine Arkoudas & Selmer Bringsjord
Full generality wrt time and change: includes event calculus — yet fast.
Cracking Wise Man Tests ...
Wise Men Puzzle
?            ?            ?
Wise man A   Wise man B   Wise man C
“I don’t know”
“I don’t know”
“I DO know”
Start of Reasoning in WMP3
(pov of truly wise man; easy for smart humans)
Arkoudas-Proved-Sound Algorithm for
Generating Proof-Theoretic Solution to WMPn
All our human-authored proofs machine-checked.
“Life and Death” Wise Man Test (3)
* Again: Object-level reasoning, reasoning that produces object-level reasoning
(e.g., methods), and direct, “dirty,” purely computational procedures.
To return:
1. Engineering nanobots that, relative to their
programs, are known in advance to behave
correctly is possible only if this engineering
is based on formal logic.
2. Engineering nanobots that, relative to our
ethical codes, are known in advance to
behave correctly is possible only if this
engineering is based on formal logic.
Theorem: 1.
Behavior of nanobot R is known to be correct only if
the behavior of the agent (function) fR in question is
known to be correct.
Agent (function) fR is known to be correct only if
the program PfR is known to be correct.
The program PfR is known to be correct only if the
engineering of that program is based on formal logic.
Therefore (by hypothetical syllogism x2):
Behavior of nanobot R is known to be correct only if
the engineering of PfR is based on formal logic.
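Rendered symbolically (the abbreviations are mine): let K(R) abbreviate “the behavior of nanobot R is known to be correct,” K(fR) “agent function fR is known to be correct,” K(PfR) “program PfR is known to be correct,” and L “the engineering of PfR is based on formal logic.” The argument is then:

K(R) → K(fR)
K(fR) → K(PfR)
K(PfR) → L
∴ K(R) → L   (two applications of hypothetical syllogism)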
And #2. ...
1. The only way to (premeditatedly) engineer
nanobots that, relative to their programs,
can be known in advance to behave
correctly, is to base this engineering on
formal logic.
2. The only way to (premeditatedly) engineer
nanobots that, relative to our ethical codes,
can be known in advance to behave
correctly, is to base this engineering on
formal logic.
H̄
“In the relatively near future, and certainly
sooner or later, the human species will be
destroyed by advances in robotics technology
that we can foresee from our current vantage
point, at the start of the new millennium.”
—Bill Joy
“Why the Future Doesn’t Need Us”
(catalyzed by Kurzweil)
Argument #1: Unabomber
We — to use the Unabomber's words — “postulate that the computer scientists
succeed in developing intelligent machines that can do all things better than human
beings can do them. In that case presumably all work will be done by vast, highly
organized systems of machines and no human effort will be necessary.” From here, we
are to infer that there are two alternatives: the machines are allowed to make their
decisions autonomously, without human oversight; or human control is retained. If the
former possibility obtains, humans will lose all control, for before long, turning the
machines off will end the human race (because by that time, as the story goes, our very
metabolisms will be entirely dependent upon the machines). On the other hand, if the
latter alternative materializes, “the machines will be in the hands of a tiny elite — just
as it is today, but with two differences. Due to improved techniques the elite will have
greater control over the masses; and because human work will no longer be necessary
the masses will be superfluous, a useless burden on the system.” In this scenario, the
Unabomber tells us, the elite may decide either to exterminate the masses, or to
essentially turn them into the equivalent of domestic animals. The conclusion: if AI
continues, humans are doomed. We ought therefore to halt the advance of AI.
Argument #1: Unabomber
Part of my specific discomfort with this argument is that
it's supposed to entail that robots have autonomy. I very
much doubt that robots can have this property, in anything
like the sense corresponding to the fact that, at the
moment, I can decide whether to keep talking, or head
outside and grab a bite to eat, and return to the
symposium thereafter.
A “simple” engineering goal:
A microworld in which one robot — PERI (or Utilbot)
— freely performs one morally permissible action.
(defun peris-choice ()
  ;; Draw a pseudo-random integer from 0 to 9: hold the (model) earth on a
  ;; draw of 6 through 9, otherwise drop it.
  (cond ((> (random 10) 5) (hold-earth))
        (t (drop-earth))))
vid1
? (peris-choice)
"I will drop earth"
? (peris-choice)
"I will hold onto earth"
? (peris-choice)
"I will hold onto earth"
vid2
If it happens out of the blue that I drop x, then I didn’t drop x.
Argument #3:
Speed + Thirst for Immortality = Death
Humans will find it irresistible to download themselves
into robotic bodies, because doing so will ensure
immortality. When this happens (and Moore's Law, that
magically efficacious mechanism, will soon enough see to it
that such downloading is available), the human race will
cease to exist. A new race, a race of smart and durable
machines, will supersede us. And indeed the process will
continue ad indefinitum, because when race R1, the one that
directly supplants ours, realizes that they can extend their
lives by downloading to even more long-lived hardware,
they will take the plunge, and so to R2, and R3, ... we go.
Argument #3, Explicit
i. Advances in robotics, combined with Moore's Law,
will make it possible in about 30 years for humans to
download themselves out of their bodies into more
durable robotic brains/bodies.
ii. Humans will find this downloading to be irresistible.
iii. Ergo: H̄ = In about 30 years, humans will cease to
exist as a species.
enthymematic, at best
Argument #3, Explicit
1. Advances in robotics, combined with Moore's Law,
will make it possible in about 30 years for humans to
download themselves out of their bodies into more
durable robotic brains/bodies.
2. Humans will find this downloading to be irresistible.
3. If this downloading takes place, humans will cease to
exist as a species.
4. Ergo: H̄ = In about 30 years, humans will cease to
exist as a species.
This premise presupposes that human minds are
standard computing machines — which is provably false.
The problem is that Joy (and Moravec, and
Kurzweil, and ...) suffers from some sort of
speed fetish. Speed is great, but however fast
(standard) computation may be, it's still by
definition at or below the Turing Limit. It
doesn’t follow from Moore’s Law that human
mentation can be identified with the
computing of functions at or below this limit.
The amazing thing to me is that we in AI
know that speed isn't a panacea.
The Mathematical Landscape
{f | f : N → N}   (Information Processing)
Turing Limit
Σ1
Φ ⊢ φ ?
∃k H(n, k, u, v)
H(n, k, u, v)
Π2
∀u∀v[∃k H(n, k, u, v) ↔ ∃k′ H(m, k′, u, v)]
Argument #2: Self-Replication
“Accustomed to living with almost routine scientific
breakthroughs, we have yet to come to terms with the fact
that the most compelling 21st-century technologies —
robotics, genetic engineering, and nanotechnology — pose a
different threat than the technologies that have come before.
Specifically, robots, engineered organisms, and nanobots share
a dangerous amplifying factor: They can self-replicate.”
(Joy)
But what's the threat, exactly?
Is it that some company in the business of building humanoid robots is going to lose control of their manufacturing facility, and the robots are going to multiply out of control, so that they end up squeezing us out of our office buildings, crushing our houses, our cars, so that we race to higher ground as if running from a flood? It sounds like a B-grade horror movie. I really do hope that Joy has something just a tad more serious in mind. But what?
Idea is N+R: self-replicating small robots.
Solution Steps
1. Human overseers select the ethical theory, principles, and rules.
2. The selection is formalized in a deontic logic, revolving around what is permissible, forbidden, obligatory (etc.).
3. The deontic logic is mechanized.
4. Every action that is to be performed by robot R must be provably ethically permissible relative to this mechanization (with all proofs expressible in smooth English).
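A toy Common Lisp sketch of step 4 follows; a simple lookup stands in for the mechanized deontic logic (in the approach described here that query would instead be discharged by an automated prover over the formalized ethical code), and every action name is illustrative.

(defparameter *forbidden-actions* '(exceed-dose withhold-consent))   ; illustrative

(defun provably-permissible-p (action)
  "Stand-in for the ATP query: here an action counts as permissible just in
case it is not on the forbidden list."
  (not (member action *forbidden-actions*)))

(defun perform (action)
  ;; Step 4: refuse to execute any action not provably permissible.
  (if (provably-permissible-p action)
      (format t "~&Executing ~a (permissibility established).~%" action)
      (format t "~&Refusing ~a (no proof of permissibility).~%" action)))

;; (perform 'administer-dose)  prints: Executing ADMINISTER-DOSE ...
;; (perform 'exceed-dose)      prints: Refusing EXCEED-DOSE ...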
Again ...
1. Engineering nanobots that, relative to their
programs, are known in advance to behave
correctly is possible only if this engineering
is based on formal logic.
2. Engineering nanobots that, relative to our
ethical codes, are known in advance to
behave correctly is possible only if this
engineering is based on formal logic.
The End
“The ultimate goal of AI, which we are
very far from achieving, is to build a
person, or, more humbly, an animal.”
Charniak & McDermott 1985
Cognitive Carpentry:
A Blueprint for How to Build a Person
by John Pollock 1995
“The ultimate goal of AI, which, courtesy of Oscar, we are
very close to achieving, is to build a person.”
John Pollock, @ River Street Café, 2004
We are well on the way toward
completing Newell’s Program ...
John Anderson BBS 2003
Really?
The Mirage of Mechanical Mind
(forthcoming)
• “Deep and Ancient Roots of the Myth”
• “The Argument from Creativity”
• “The Argument from Free Will”
• “The Zombie Argument Against SAI”
• “The Chinese Room Remodeled”
• “The Argument from Infinitary Reasoning”
• “The Modalized Gödelian Argument”
• ...
AI will always be mired in
three anemic, wheel-spinning options...
• trick
• pray
• relax ‘smart’ — & test
The Trick Approach
The Avowed Trick Approach
Brutus.1
Bringsjord & Ferrucci
Golems: The Pray Approach
The Pray Approach has other “distinguished” fans:
“Instead of trying to produce a programme to
simulate the adult mind, why not rather try to
produce one which simulates the child's? If this were
then subjected to an appropriate course of education
one would obtain the adult brain. Presumably the
child-brain is something like a note-book as one buys
it from the stationers. Rather little mechanism, and
lots of blank sheets. (Mechanism and writing are from
our point of view almost synonymous.) Our hope is
that there is so little mechanism in the child-brain
that something like it can be easily programmed. The
amount of work in the education we can assume, as a
first approximation, to be much the same as for the
human child.”
Turing 1950
The Relax ‘Smart’ & Test Approach
x is a person iff x has the capacity ...
• to “will,” to make choices and decisions, set plans and projects — autonomously;
• for consciousness, for experiencing pain and sorrow and happiness, and a thousand other emotions — love, passion, gratitude, and so on;
• for self-consciousness, for being aware of his/her states of mind, inclinations, preferences, etc., and for grasping the concept of him/herself;
• to know things and believe things, and to believe things about what others believe (and so on);
• to communicate through a language;
• to desire not only particular objects and events, but also changes in his or her character;
• to reason (for example, in the fashion needed to prove the correctness of responses in false-belief, wise man, ... tests).
x is a person iff x has the capacity ...
• to “will,” to make choices and decisions, set plans and projects — autonomously;
• for consciousness, for experiencing pain and sorrow and happiness, and a thousand other emotions — love, passion, gratitude, and so on; [unsearchably difficult; ignore real p-consciousness]
• for self-consciousness, for being aware of his/her states of mind, inclinations, preferences, etc., and for grasping the concept of him/herself; [unsearchably difficult; ignore real s-consciousness]
• to know things and believe things, and to believe things about what others believe (and so on); [sharp toddlers]
• to communicate through a language; [still whipped by machines]
• to desire not only particular objects and events, but also changes in his or her character;
• to reason (for example, in the fashion needed to prove the correctness of responses in false-belief, wise man, ... tests).
operationalize the relaxation: test
Definition of PAI
• Psychometric AI is the field devoted to
building information-processing entities
capable of at least solid performance on all
established, validated tests of intelligence and
mental ability, a class of tests that includes IQ
tests, tests of reasoning, of creativity,
mechanical ability, and so on.
Bringsjord (Dedicated issue of JETAI)
Bringsjord & Schimanski (IJCAI 2001)
Which tests?
E.g.,
“Raven’s” (IQ)
“The Wumpus World”
“Block Design” (IQ)
Raven’s Progressive Matrices
Hunt the Wumpus
“Feeding” WW-Cracking Robot
Block Design
(WAIS IQ)
PERI “Cracked” Block Design*
*With much help from Sandia Labs’ Bettina Schimanski.
Which tests?
Story Arrangement
(Sample images from WAIS, compliments of Psychological Corporation)
(Schimanski dissertation)
Floridi’s Continuum, and Claims
(“Consciousness, Agents, and the Knowledge Game,” Minds & Machines)

                              False Belief   Wise Man   Deafening   Torture      Ultimate
                              Task           Test (n)   Test        Boots Test   Sifter
Cutting-Edge AI               Yes            Yes        No          No           No
Zombies                       Yes            Yes        Yes         Yes          No
Human Persons                 Yes            Yes        Yes         Yes          Yes
(s-conscious! p-conscious!)
Cracking False-Belief Tasks ...
In SL, w/ real-time comm w/ ATP
SNARK-USER 14 >
(in-immature-scenario
 (prove '(t-retrieve subject teddybear ?c)
        :answer '(looks-in ?c)))
(Refutation
 (Row 1
  (or (not (person ?x)) (not (object ?y)) (not (container ?z)) (not (in ?y ?z)) (bel-in ?x ?y ?z))
  assertion)
 (Row 2
  (or (not (person ?x)) (not (container ?y)) (not (object ?z)) (not (w-retrieve ?x ?z)) (not (bel-in ?x ?z ?y)) (t-retrieve ?x ?z ?y))
  assertion)
 (Row 4
  (person subject)
  assertion)
 (Row 6
  (container c2)
  assertion)
 (Row 7
  (object teddybear)
  assertion)
 (Row 8
  (in teddybear c2)
  assertion)
 (Row 9
  (w-retrieve subject teddybear)
  assertion)
 (Row 10
  (not (t-retrieve subject teddybear ?x))
  negated_conjecture
  Answer (looks-in ?x))
 (Row 11
  (or (not (person ?x)) (bel-in ?x teddybear c2))
  (rewrite (resolve 1 8) 6 7))
 (Row 25
  (bel-in subject teddybear c2)
  (resolve 11 4))
 (Row 28
  (t-retrieve subject teddybear c2)
  (rewrite (resolve 2 25) 9 7 6 4))
 (Row 30
  false
  (resolve 10 28)
  Answer (looks-in c2)))
:PROOF-FOUND
SNARK-USER 15 > (answer t)
(LOOKS-IN C2)
SNARK-USER 12 >
(in-mature-scenario
 (prove '(t-retrieve subject teddybear ?c)
        :answer '(looks-in ?c)))
(Refutation
 (Row 1
  (or (not (person ?x)) (not (container ?y)) (not (object ?z)) (not (w-retrieve ?x ?z)) (not (bel-in ?x ?z ?y)) (t-retrieve ?x ?z ?y))
  assertion)
 (Row 2
  (or (not (person ?x)) (not (object ?y)) (not (container ?z)) (not (p-in ?x ?y ?z)) (bel-in ?x ?y ?z))
  assertion)
 (Row 4
  (person subject)
  assertion)
 (Row 5
  (container c1)
  assertion)
 (Row 7
  (object teddybear)
  assertion)
 (Row 8
  (p-in subject teddybear c1)
  assertion)
 (Row 9
  (w-retrieve subject teddybear)
  assertion)
 (Row 10
  (not (t-retrieve subject teddybear ?x))
  negated_conjecture
  Answer (looks-in ?x))
 (Row 11
  (bel-in subject teddybear c1)
  (rewrite (resolve 2 8) 5 7 4))
 (Row 25
  (t-retrieve subject teddybear c1)
  (rewrite (resolve 1 11) 9 7 5 4))
 (Row 26
  false
  (resolve 10 25)
  Answer (looks-in c1)))
:PROOF-FOUND
SNARK-USER 13 > (answer t)
(LOOKS-IN C1)
“The present account of the false belief transition is incomplete
in important ways. After all, our agent had only to choose the
best of two known models. This begs an understanding of the
dynamics of rational revision near threshold and when the
space of possible models is far larger. Further, a single formal
model ought ultimately to be applicable to many false belief
tasks, and to reasoning about mental states more generally.
Several components seem necessary to extend a particular
theory of mind into such a framework theory: a richer
representation for the propositional content and attitudes in
these tasks, extension of the implicit quantifier over trials to
one over situations and people, and a broader view of the
probability distributions relating mental state variables. Each of
these is an important direction for future research.”
“Intuitive Theories of Mind: A Rational Approach to False Belief”
Goodman et al.
Done.
Konstantine Arkoudas & Selmer Bringsjord
Full generality wrt time and change: includes event calculus — yet fast.
Cracking Wise Man Tests ...
Wise Men Puzzle
?            ?            ?
Wise man A   Wise man B   Wise man C
“I don’t know”
“I don’t know”
“I DO know”
Start of Reasoning in WMP3
(pov of truly wise man; easy for smart humans)
Arkoudas-Proved-Sound Algorithm for
Generating Proof-Theoretic Solution to WMPn
All our human-authored proofs machine-checked.
“Life and Death” Wise Man Test (3)
* Again: Object-level reasoning, reasoning that produces object-level reasoning
(e.g., methods), and direct, “dirty,” purely computational procedures.
Now harder, and confessions ...
Floridi’s Continuum, and Claims

                              False Belief   Wise Man   Deafening   Torture      Ultimate
                              Task           Test (n)   Test        Boots Test   Sifter
Cutting-Edge AI               Yes            Yes        No          No           No
Zombies                       Yes            Yes        Yes         Yes          No
Human Persons                 Yes            Yes        Yes         Yes          Yes
(s-conscious! p-conscious!)
Floridi’s “Ultimate (s- and p-consciousness) Sifter”
?            ?            ?
Wise man A   Wise man B   Wise man C
poison / innocuous
Poison pill strikes the taker dumb.
“Have you been struck dumb?”
Heaven knows!
Two possibilities:
Subsequent silence: failure/death.
Or ...
NO!!
“Had I taken the dumbing tablet I would not have
been able to report orally my state of ignorance
about my dumb/non-dumb state, but I have been,
and I know that I have been, as I have heard myself
speaking and saw the guard reacting to my
speaking, but this (my oral report) is possible only if
I did not take the dumbing tablet, so I know I know
I am in the non-dumb state, hence I know that ...”
—Luciano Floridi
Three Kinds of Belief
• de dicto beliefs
  Bertrand believes that there’s a depressed diner in the joint.
  Bb[∃x(Di(x) ∧ De(x) ∧ In(x, diner76))]
• de re beliefs
  There’s someone Harvey believes to be in the joint, and a depressed diner.
  ∃x Bh[Di(x) ∧ De(x) ∧ In(x, diner76)]
• de se beliefs
  Perry believes that he, himself, is in the joint, and depressed.
  BI*[Di(I*) ∧ De(I*) ∧ In(I*, diner76)]
Three Challenges
1.  Bb[∃x(Di(x) ∧ De(x) ∧ In(x, diner76))]  ⊬  ∃xDi(x)
2.  ∃x Bh[Di(x) ∧ De(x) ∧ In(x, diner76)]  ⊢  ∃xDi(x)
3.  BI*[Di(I*) ∧ De(I*) ∧ In(I*, diner76)]  ⊬  Bt[Di(t) ∧ De(t) ∧ In(t, diner76)], ∀t
The personal pronoun has no descriptive content. In fact, even its perfectly correct use doesn’t entail that the user is physical, and certainly doesn’t entail that the user has any particular physical attributes.
?            ?            ?
Wise man A   Wise man B   Wise man C
So, there’s work to be done ... but despite the fact we can’t build persons, we can build AIs that pass any short test. That’s why Blade Runner is our future.