The Turing Test:
Computing Machinery and Intelligence by Alan Turing,
published in "Mind", vol. LIX, N 236, pages 433-460, October, 1950.
Because of advances in software agent technology,
nowadays this test has commercial applications, discussed in
this New York Times article (NY Times, December 10, 2002) and in
several other articles.
The recent CAPTCHA project aims to develop
electronic tests that can tell humans and computers apart.
Patrick Hayes and Kenneth Ford
Turing Test Considered Harmful.
This paper was published in the proceedings of the
International Joint Conference on AI (IJCAI-1995), Volume 1,
Montreal, Canada, August 20-25, 1995.
Kenneth M. Ford, Patrick J. Hayes, Clark Glymour, James Allen
(from the Florida Institute for Human and Machine Cognition, IHMC).
Cognitive Orthoses: Toward Human-Centered AI. Published in
AI MAGAZINE, Winter 2015, Vol 36, No 4, pages 5-8.
Building Watson (a computer that defeated human champions in the Jeopardy! competition):
"An overview of the DeepQA project" by
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David
Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric
Nyberg, John Prager, Nico Schlaefer, Chris Welty from IBM Research.
Published in "AI Magazine", vol 31, N3, 2010, pp. 59-79.
A related resource:
Watson's Jeopardy! Challenge Web site at IBM Research.
These papers
discuss some of the technical issues
(linked from the IBM Research Web site).
The paper
Natural Language Processing With Prolog in the IBM Watson System, written by
Adam Lally (IBM) and Paul Fodor (Stony Brook University), was published on March 31, 2011
(here is a
PDF version of this article).
In Memoriam
Alain Colmerauer: 1941-2017, by Lawrence M. Fisher. Published in Communications of the ACM, May 22, 2017.
Alain Colmerauer,
a French computer scientist and the father of the logic programming language
Prolog, passed away on May 12, 2017, at the age of 76. A
documentary movie
about Alain Colmerauer and PROLOG is also available.
Think you have solved question answering? Try the
AI2 Reasoning Challenge (ARC)!
The ARC dataset contains 7,787 genuine grade-school level, multiple-choice
science questions, assembled to encourage research in advanced question-answering.
This extensive benchmark was developed by a research team from
the Allen Institute for Artificial Intelligence (AI2).
This research is described in the paper written by
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord
published in arXiv:1803.05457.
The same research group (Peter Clark, Oren Etzioni, and others) published a related paper
From 'F' to 'A' on the N.Y. Regents Science Exams:
An Overview of the Aristo Project, arXiv:1909.01958, 11 Sep 2019. They wrote:
"Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge.
This paper reports unprecedented success on the Grade 8 New York Regents Science Exam,
where for the first time a system scores more than 90% on the exam's non-diagram,
multiple choice (NDMC) questions. In addition, our Aristo system, building upon
the success of recent language models, exceeded 83% on the corresponding Grade 12
Science Exam NDMC questions."
Gideon Lewis-Kraus wrote an article,
The Great A.I. Awakening, about
recent improvements in Google Translate and the so-called
"neural network" technology that helped make translation better.
Published in the New York Times Magazine on December 14, 2016.
In a more recent article, also published in the New York Times,
Gary Marcus claims that
Artificial Intelligence Is Stuck and then proposes
what needs to be done to move it forward (New York Times, July 29, 2017).
In spring 2020, OpenAI released GPT-3, a neural-network system with 175 billion parameters.
In response, Gary Marcus and Ernest Davis wrote the following article:
GPT3: OpenAI’s language generator has no idea what it’s talking about.
Published in MIT Technology Review on August 22, 2020.
They reach the following conclusion about GPT-3:
"It’s a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it’s not a reliable interpreter of the world."
Gary Marcus is the founder and CEO of Robust.AI. He is also a professor emeritus at NYU and the author of five books.
Ernest Davis is a professor of computer science at New York University. He has authored four books.
(Here is a local copy for students
to study and laugh. Yes, it is funny. But it is also sad. You decide.)
October 1, 2020.
Is GPT-3 Intelligent?
A conversation between John Etchemendy, co-director of the Stanford Institute for Human-Centered
Artificial Intelligence, and Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2),
company founder, and professor emeritus of computer science at the University of Washington (Seattle, USA).
March 10, 2021.
Emily Bender, Timnit Gebru,
Angelina McMillan-Major, and Shmargaret Shmitchell
raise doubts about whether the direction taken by BERT and related large language models such as GPT-2/3 is
the right research direction to pursue:
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
This paper was published in the proceedings of the 2021 ACM Conference on
Fairness, Accountability, and Transparency. In particular, they comment that
large language models "are not performing natural language understanding (NLU), and
only have success in tasks that can be approached by manipulating linguistic form".
An article about the linguist Emily Bender, a professor at the University of Washington:
You Are Not a Parrot And a Chatbot is Not a Human (New York Magazine, Intelligencer, March 1, 2023).
This article is written by Elizabeth Weil, a features writer for New York Magazine.
Yejin Choi
investigates whether (and how) AI systems can learn commonsense knowledge and reasoning.
A talk presented by Yejin Choi at TED2023 on Tuesday, April 18, 2023:
"Why AI is incredibly smart and shockingly stupid".
She is a computer science professor at the University of Washington,
a distinguished research fellow at the Institute for Ethics in AI at the University of Oxford and
a senior research director at the nonprofit Allen Institute for AI,
where she oversees the commonsense-focused Mosaic project.
Barry Smith, SUNY Distinguished Professor in the Department of Philosophy at
the University at Buffalo, and Jobst Landgrebe, founder of Cognotekt, a German AI company,
have co-authored a new book,
Why Machines Will Never Rule the World,
with the subtitle "Artificial Intelligence without Fear".
Published August 12, 2022 by
Routledge, ISBN 9781032309934.
SHRDLU, a program for understanding natural language, written by Terry
Winograd at the M.I.T. Artificial Intelligence Laboratory in 1968-70.
SHRDLU carried on a simple dialog with a user about a small world of objects
(the BLOCKS world).
Terry Winograd is a professor of computer science at
Stanford University.
SHRDLU resurrection:
this Web site collects information about subsequent versions and updates.
Thinking machines: Can there be? Are we?, Terry Winograd, Stanford University.
Noam Chomsky on Where Artificial Intelligence Went Wrong.
An extended conversation with the legendary linguist,
by Yarden Katz. Published in The Atlantic on November 1, 2012. A few excerpts from the
video of this interview are available on YouTube.
Related to this:
Steven Pinker asks Noam Chomsky a question
about doing lots of statistical analysis. This is from the panel at MIT:
A Look at the Original Roots of Artificial Intelligence, Cognitive Science, and Neuroscience.
Here is a reply from Peter Norvig:
On Chomsky and the Two Cultures of Statistical Learning.
Geoffrey Hinton
(Canadian Institute for Advanced Research, University of Toronto and Google)
received the Research Excellence Award at IJCAI-2005.
This is one of the highest honors for research in artificial intelligence.
The very gentle after-dinner version of his lecture,
"Can computer simulations of the brain allow us to see into the mind?",
is available as PowerPoint slides, but you also need to download six .avi movies
to the same directory as the PowerPoint file, keeping the same names
they currently have:
moviebuildup.avi
movierecon2.avi
movierecon3.avi
mov2.avi
mov4.avi
mov8.avi
In June 2015, Jürgen Schmidhuber published an online
Critique of Paper by "Deep Learning Conspiracy",
in which he critically discussed the article "Deep Learning" written by
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton
(published in the journal Nature, vol. 521, pages 436-444, 28 May 2015).
He also published his own
deep learning overview, which has been widely discussed and verified
by the machine learning community. His overview provides an unbiased
historical review of the research that led to the success of
deep learning in neural networks.
Some researchers believe deep learning, with its back-propagation algorithm, still
has a core role in AI's future. But on September 15, 2017,
Professor Geoffrey Hinton said that, to push materially ahead,
entirely new methods will probably have to be invented. Hinton quoted
the great German physicist Max Planck, who said that
"Science advances one funeral at a time",
and then Hinton added:
"The future depends on some graduate student who is deeply suspicious of everything I have said".
Gary Marcus (New York University) published a related paper:
Deep Learning: A Critical Appraisal,
arXiv:1801.00631 (submitted on 2 Jan 2018).
On Wednesday, 19 Feb 2020, Gary Marcus published the paper
"The Next Decade in AI:
Four Steps Towards Robust Artificial Intelligence". It appears as arXiv:2002.06177.
More recently, Doug Lenat and Gary Marcus published a paper
Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc.
Doug Lenat (September 13, 1950 - August 31, 2023) was an American businessman and researcher
in artificial intelligence who was the founder and CEO of Cycorp, Inc. of Austin, Texas.
Cycorp developed Cyc, a large-scale knowledge base that focuses on implicit knowledge.
Dr. Geoffrey Hinton's 2019 public lecture at the 13th Annual Meeting of the
Canadian Association for Neuroscience,
Does the brain do backpropagation?,
was delivered on Tuesday, May 21, 2019, 6:30-8:00 pm, in the SickKids auditorium.
John Launchbury, the Director of DARPA's Information Innovation Office (I2O), discusses the
"three waves of AI" and the capabilities required for AI
to reach its full potential. He outlines three waves of AI research
and explains what AI can do, what it can't do, and where it is headed.
Published on February 15, 2017.
Defense Advanced Research Projects Agency (DARPA) Announces $2 Billion Campaign to
Develop Next Wave of AI Technologies, September 7, 2018.
DARPA’s multi-year strategy seeks contextual reasoning in AI systems
to create more trusting, collaborative partnerships between humans and machines.
One of the programs is called
Machine Common Sense.
M. Mitchell Waldrop wrote:
"The much-ballyhooed artificial intelligence approach boasts impressive feats
but still falls short of human brainpower. Researchers are determined
to figure out what’s missing."
He published the article
What are the limits of deep learning?
in the Proceedings of the National Academy of Sciences of the USA,
January 22, 2019, volume 116(4), pages 1074-1077.
DOI: 10.1073/pnas.1821594116.
Adnan Darwiche,
a professor and former chairman of the computer science department at
the University of California, Los Angeles (UCLA), published the paper
Human-Level Intelligence or Animal-Like Abilities?
in the Communications of the ACM, October 2018, Vol. 61, No. 10, pages 56-67.
It is also available as a
PDF file.
Professor Darwiche directs the Automated Reasoning Group at UCLA.
His research interests span probabilistic and symbolic reasoning,
and their applications including machine learning.
Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI)
is a nonprofit scientific society devoted to advancing the scientific understanding of
the mechanisms underlying thought and intelligent behavior and their embodiment in machines.
AAAI-2020 Chat with Daniel Kahneman.
After the Turing event of the previous evening (Sunday, February 9, 2020), there was
a chat with Nobel laureate Daniel Kahneman (on Monday, Feb 10, 2020)
to discuss the present and future of AI and human decision making.
Provided by
AAAI Livestreaming.
Hector Geffner,
a professor at the Universitat Pompeu Fabra (Barcelona, Spain)
talks about
General Solvers for General AI. Published on June 17, 2016.
Reasoning with Cause and Effect,
a research excellence lecture by
Judea Pearl, Univ. of California,
Los Angeles.
Konstantine Arkoudas wrote an article
GPT-4 Can’t Reason
that appeared in Medium on August 7, 2023. A continuation of his article
GPT-4 Can’t Reason: Addendum appeared on September 7, 2023.
Professor
Michael Jordan,
Univ. of California, Berkeley (USA), published the paper
Artificial Intelligence—The Revolution Hasn’t Happened Yet
in the Harvard Data Science Review (HDSR), 2019, Issue 1.1.
In particular, he writes the following:
"Of course, classical human-imitative AI problems remain of
great interest as well. However, the current focus on doing
AI research via the gathering of data, the deployment of deep learning
infrastructure, and the demonstration of systems that mimic certain
narrowly-defined human skills—with little in the way of emerging
explanatory principles—tends to deflect attention from major open
problems in classical AI. These problems include the need to bring
meaning and reasoning into systems that perform natural language processing,
the need to infer and represent causality, the need to develop
computationally-tractable representations of uncertainty and
the need to develop systems that formulate and pursue long-term goals."
For those of you who want to learn more about games:
you can read about the
General Game Playing Project and also about
Artificial Intelligence and
Interactive Entertainment.
1975 ACM Turing Award Lecture
"Computer science as empirical inquiry: symbols and search" by
Allen Newell and Herbert A. Simon, Carnegie-Mellon Univ., Pittsburgh, PA.
Published in Communications of the ACM, Volume 19 Issue 3, March 1976, Pages 113-126.
Provided by the
ACM Digital Library.
Knowledge-based model of mind and its contribution to sciences.
An interview with Ed Feigenbaum, a professor at
Stanford University.
Published in "Communications of the ACM", 2010, Vol. 53, No. 6, pages 41-45.
DOI: 10.1145/1743546.1743564
(full text in PDF).
What is a Systematic Method of Scientific Discovery? by
Herbert A. Simon, Carnegie Mellon University. Published in
Systematic Methods of Scientific Discovery:
Papers from the 1995 Spring Symposium, ed. Raul Valdes-Perez, pages 1-2.
Technical Report SS-95-03. Association for the Advancement of Artificial Intelligence,
Menlo Park, California.
Where is AI Heading?
"Eye on the Prize" by
Nils Nilsson,
Stanford University. Published in "AI Magazine", vol. 16, No. 2, 1995, pp. 9-17.
How do you teach a computer common sense? Researchers at a company
called Cycorp in Austin, Texas, are trying to find out. Since 1984, they
have been incorporating a huge collection of everyday knowledge into an AI
project named
Cyc.
The Cyc project aims to develop a comprehensive common sense knowledge base,
and associated reasoning systems. They are now being used to
enable the development of knowledge-intensive applications for industry
and government.
Why people think computers can't, written by
Marvin Minsky,
Massachusetts Institute of Technology.
Published in
"AI Magazine", vol. 3, N4, Fall 1982, p. 3-15.
Programs with Common Sense (1958), John McCarthy, Stanford University.
How Intelligent is Deep Blue?, by Drew
McDermott, Yale University.
[This is the original, long version of an article that appeared in
the May 14, 1997 New York Times under a more flamboyant title.]
If the link above fails, download
a local copy.
A Gamut of Games. This article reviews the past successes,
current projects, and future research directions for AI using computer games
as a research test bed. Written by
Jonathan Schaeffer,
University of Alberta, Canada.
Published in "AI Magazine", volume 22, number 3, pp. 29-46, 2001.
Allen Newell:
The Scientific Relevance of Robotics. Remarks at the Dedication of
the CMU Robotics Institute.
Published in the AI Magazine, Vol 2, No 1, Spring 1981.
When Robots Meet People: Research Directions In Mobile Robotics
written by
Sebastian Thrun, Stanford University. He was the head of the team that
built Stanley, the robotic car. Stanley was judged to be the "Best Robot
Of All Time" by Wired Magazine, and
NOVA shot a great
documentary about Stanley and the race, which is available online.
Sebastian Thrun:
Lifelong Learning Algorithms, published in the book
"Learning to Learn", edited by Sebastian Thrun and Lorien Pratt,
Springer, 1998.
DOI.
Sebastian Thrun and Tom Mitchell.
Lifelong robot learning. Robotics and Autonomous Systems, 15:25-46, 1995.
This is one of the publications from a
large collection of research papers.
Tom Mitchell et al. on
NELL:
Never-Ending Language Learner.
Communications of the ACM, May 2018, Volume 61, Issue 5.
DOI
Robots, Re-Evolving Mind written by
Hans Moravec,
Carnegie Mellon University. He also provides a photo of
Shakey, the robot.
The Robot and the Baby, an amusing story written by
John McCarthy (September 4, 1927 - October 24, 2011), the person who coined the term
"Artificial Intelligence". He was a professor at Stanford University.
Here is
a local copy in PDF
of this story written on June 28, 2001.
"A logical framework for depiction and image interpretation",
R. Reiter (Univ. of Toronto), and A. Mackworth (Univ. of British Columbia).
Published in:
Artificial Intelligence, vol. 41, No. 2, 1989, pp. 125-155.
Logical vs. Analogical or Symbolic vs. Connectionist or Neat vs. Scruffy, written by
Marvin Minsky,
Massachusetts Institute of Technology. Published in
"AI Magazine", vol 12, N 2, 1991, pp. 34-51.
From here to human-level AI, John McCarthy, Stanford University.
An invited talk at the Knowledge Representation conference, 1996.
Oliver Sacks (1933–2015) was a physician and the author of over ten books.
Speak, Memory
published in the New York Review of Books, in the February 21, 2013 issue.
The Mental Life of Plants and Worms, Among Others,
published in the New York Review of Books, in the April 24, 2014 issue.
In the River of Consciousness,
published in the New York Review of Books, in the January 15, 2004 issue.
Christof Koch is the Chief Scientist and President of the Allen Institute for Brain Science.
From 1987 until 2013, he worked as a Professor of Cognitive and Behavioral Biology
at the California Institute of Technology. His lecture
The Quest for Consciousness: A Neurobiological Approach
was delivered on March 22, 2006, on the UC Berkeley campus.
Wikipedia provides
more information about his research.
He is the author of
The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed
(MIT Press, 2019).
His earlier book,
"The Quest for Consciousness: A Neurobiological Approach",
was published by Roberts and Co. (2004), ISBN 0-9747077-0-8.
Dr Alex Taylor sets a difficult problem-solving task:
will the crow defeat the puzzle?
Are crows the ultimate problem solvers? -- Inside the Animal Mind: Episode 2.
BBC Two Programme website.
Computer programs as empirical models in cognitive psychology.
Herbert Simon, Psychology Department, Carnegie Mellon University.
Human beings use symbolic processes to solve problems,
reason, speak and write, learn and invent. Over the past 45 years,
cognitive psychology has built and tested empirical models of these processes.
The models take the form of computer programs that
simulate human behavior.
Herbert Simon Collection at CMU.
What has AI in Common with Philosophy?,
John McCarthy, Stanford University.
Mathematical Intuition vs. Mathematical Monsters,
Synthese, 2000, pp. 317-332, written by
Solomon Feferman, Stanford University. See also his paper
The Logic of Mathematical Discovery vs. the Logical Structure of Mathematics, reprinted as
Chapter 3 in the book "In the Light of Logic". Author: Solomon
Feferman. (Oxford University Press, 1998, ISBN 0-19-508030-0,
Logic and Computation in Philosophy series).
"
Where Mathematics Comes From", written by George Lakoff and Rafael Nunez,
published by "Basic Books".
Book review: "Where Mathematics Come From, Reviewed by
James J. Madden,
Department of Mathematics, Louisiana State University. Professor
Ernest Davis published his review
Mathematics as Metaphor
in the Journal of Experimental and Theoretical AI, vol. 17, no. 3, 2005, pp. 305-315.
What is Artificial Intelligence? (HTML version),
John McCarthy, Stanford University.
An important introductory paper for undergraduate students.
This Web page has links to other versions of John McCarthy's paper.
If the above link is not operational, then you can read
a local copy of the March 29, 2003 version in
"ps" (PostScript) and
"pdf" (Acrobat Reader) formats.
A local copy of the November 12, 2007 version:
Part 1: Basic Questions
Part 2: Branches of AI
Part 3: Applications
Part 4: More Questions
Part 5: Bibliography
Asimov, Isaac: "Robot Visions" and "Robot Dreams";
there are several paperback editions.
Raymond Smullyan (1919–2017) was a mathematician, logician, magician,
creator of extraordinary puzzles, philosopher, and pianist.
One of his best-known collections of recreational logic puzzles is
"What is the name of this book?". There are several paperback editions;
recent ones are published by Dover.