Kenneth D. Forbus and Paul J. Feltovich (Eds.). (2001). Smart
Machines in Education: The Coming Revolution in Educational
Technology. Menlo Park, CA: AAAI Press/MIT Press.
Pp. 483. $37.95. ISBN 0-262-56141-7.
Reviewed by Bryan R. Warnick
University of Illinois at Urbana-Champaign
January 22, 2003
Smart Machines in Education: The Coming Revolution in
Educational Technology is clearly a book for those curious
about advances in cutting edge educational technology. The
editors of the volume, Kenneth D. Forbus and Paul J. Feltovich,
have brought together an illuminating collection of essays that
show how research in cognitive science and artificial
intelligence (AI) is shaping technological development in
education. The essays reveal what the next generation of
educational technology might look like, and help us grasp some of
the reasons why the technology might look that way.
It is also a book that can be read on different
levels. Certain essays have something of a historical twist,
tracing the negotiation that often occurs as educational
technologies are developed, evaluated, and inserted into
classroom environments. Other essays focus more on the theories
of learning that undergird the technological applications;
indeed, some of the essays do not mention technology at all, and
instead focus on ideas about learning that might be more fully
incorporated into future technological artifacts. Still other
essays are of a more technical bent, relaying in some detail how
their "smart machines" work. Thus, sections of this book should
be of interest to a fairly broad segment of the educational
research community.
In what follows, I give a summary of the essays
contained in this volume, and outline what I find new and
intriguing about each technology. This is a lengthy book of
almost 500 pages, so much more could be said about each article.
After surveying the contents of the book, I will turn to some of
the major themes that pervade the work, and discuss some of its
underlying assumptions.
Smart Machines: A Brief Summary
The first chapter, "Representational and Advisory
Guidance for Students Learning Scientific Inquiry," by Dan
Suthers and others, reviews the authors' attempts to help
students gain a feel for scientific argumentation through an
application called BELVEDERE. In BELVEDERE, students are
presented with a scientific problem (e.g., the extinction of
dinosaurs) together with some information about research relating
to the problem. The students then use this information to build
a representation, or an "evidence map," of the scientific debate,
by organizing the scientific statements and relations between the
statements (e.g., showing which data support which hypothesis).
A computer advisor helps students to reason through their
evidence maps when help is needed. The authors relate in some
detail how they developed this advisor and how through "evidence
pattern strategies" and "expert-path advice strategies" the
appropriate advice is offered.
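The chapter describes the evidence map only in prose, but the structure is essentially a set of statements joined by typed links. The following is a hypothetical sketch of that structure; the statements, relation names, and function names are invented for illustration and are not taken from BELVEDERE itself.

```python
# Hypothetical sketch of a BELVEDERE-style evidence map: statements connected
# by typed links such as "supports" or "conflicts_with". All names and example
# statements below are illustrative, not the actual system's.

links = []  # (source_statement, relation, target_statement)

def add_link(source, relation, target):
    links.append((source, relation, target))

def support_for(hypothesis):
    """List the statements linked as supporting a given hypothesis."""
    return [s for s, rel, t in links if rel == "supports" and t == hypothesis]

# A fragment of a dinosaur-extinction debate, the chapter's example domain.
add_link("iridium layer in sediments", "supports", "asteroid impact")
add_link("gradual fossil decline", "conflicts_with", "asteroid impact")
print(support_for("asteroid impact"))  # ['iridium layer in sediments']
```

An advisor component could then walk such links to find, say, hypotheses with no supporting data, which is roughly the kind of pattern the "evidence pattern strategies" would need to detect.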
The second essay is entitled "Motivation and
Failure in Educational Simulation Design" and is the work of
Roger Schank and Adam Neaman. The authors describe simulations
that present "Goal Based Scenarios" aimed at helping students
gain expert skills in a "learning by doing" fashion. They first
describe a simulation called Wolf, in which students play
the part of wildlife biologists attempting to determine the cause
of a declining wolf population. The idea is that students will
learn science by actually going through the steps of scientific
research. As the students develop their hypotheses, they can, at
any time, select a question to ask a "wolf expert" and watch a
video clip that shows an expert wildlife biologist offering a
"war story" from his or her experience. Also included are
exciting clips of a wolf being captured for examination and maps
of prey populations. The most impressive thing about this
application, however, and the others described by Schank and
Neaman, is the care that the developers have taken to make user
mistakes meaningful. Specifically, they try to simulate those
conditions under which novices tend to make errors and then offer
the appropriate just-in-time expert advice. Schank and Neaman
include in their essay a thoughtful discussion of how to use
failure to make errors both educative and motivational, and make
a strong case that computer simulations can help students make
educational mistakes in a non-threatening atmosphere.
The next essay, "Technology Support for Complex
Problem Solving: From SAD Environments to AI," by Gautam Biswas
and others, discusses technology created to help improve
students' problem solving abilities. Their approach to
technological development is to begin with fairly simple
technologies (SAD, or "stone age," technologies) and add more
sophistication as needed, a fruitful way, it seems, to avoid
needless technological complexity. As an example of this
developmental process, they show how they developed the original
Jasper Woodbury problem solving adventure series, a fairly
simple simulation in which students try to solve various
problems. They found that students were having a difficult time
transferring their newly acquired problem solving skills gained
from this single, video-based activity to other problem domains.
To solve this difficulty, they created Adventure Player,
which, with the help of more advanced technologies, attempts to
create an environment that more fully facilitates the transfer of
skills. They do this by providing additional features such as
"what-if scenarios" that ask the students to use their new
problem solving skills in new ways. Students can also see
simulated results of their work, toil with a "coach" if they get
stuck, or create simulated problems for other students to
solve.
Most provocative, though, is the second part of the essay,
which builds on the idea that we learn best when we teach. After
some reflection on what, exactly, may be educative about
teaching, the authors describe several ways learning is
actualized through teaching, and then describe programs that
allow students to teach a computer character some knowledge or
skill. In one simulation called Betty's Brain, for
example, students are asked to construct a brain for Betty that
will then be able to answer questions about a river ecosystem.
The students build a concept map with nodes (fish, bacteria,
etc.) and relationships between the nodes (fish eats
bacteria), and then test Betty's knowledge for accuracy. Betty
can even be given different learning attitudes and attention
levels to help students reflect on their own dispositions as
learners. The idea is that if students can see as they teach
what sort of learner characteristics are helpful, they can
improve their own learning dispositions; it will be interesting
to see how well this idea works in practice. The authors
conclude their essay by pointing to the evidence showing that the
learning-by-teaching strategy is having a positive effect on
student learning.
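The concept map the students build for Betty is, in essence, a small directed graph of causal links that the agent can chain through to answer questions. The sketch below is a hypothetical illustration of that idea only; the class, method names, and relations are invented and do not reflect the actual Betty's Brain implementation.

```python
# Hypothetical sketch of a Betty's Brain-style concept map: the student
# "teaches" causal links, then quizzes the agent, which answers by chaining
# through the links. All names here are illustrative assumptions.

class ConceptMap:
    def __init__(self):
        # edges[node] = list of (relation, target) pairs asserted by the student
        self.edges = {}

    def teach(self, source, relation, target):
        self.edges.setdefault(source, []).append((relation, target))

    def ask(self, start, target):
        """Answer 'how does start affect target?' by tracing a causal chain."""
        path, seen = [start], {start}
        def walk(node):
            for relation, nxt in self.edges.get(node, []):
                if nxt in seen:
                    continue
                path.append(f"{relation} {nxt}")
                seen.add(nxt)
                if nxt == target or walk(nxt):
                    return True
                path.pop()
            return False
        return " -> ".join(path) if walk(start) else None

betty = ConceptMap()
betty.teach("fish", "eat", "bacteria")
betty.teach("bacteria", "decompose", "waste")
print(betty.ask("fish", "waste"))  # fish -> eat bacteria -> decompose waste
```

If Betty answers wrongly, the flaw is by construction somewhere in the links the student taught, which is what makes testing her knowledge a test of the student's own understanding.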
Chapter 4 is entitled "Growth and Maturity of
Intelligent Tutoring Systems: A Status Report," and is the work
of Beverly Park Woolf and her group. The article begins by
discussing some of the potential advantages of AI-based tutoring
systems. Intelligent technologies, the authors argue, may be
able to customize themselves to the needs of individual learners.
In addition,
AI applications can also make asynchronous
learning effective. . . . AI techniques will enable students to
learn about selected material at their own rate before it is
taught or as a supplement to course activities. Such systems
might become routine supporting group collaborations of
students-at-a-distance, exploration of hypothetical worlds, and
the making and testing of hypotheses. Learning need not be
constrained to place and time. (p. 100)
After arguing for the desirability of AI-based technologies,
the authors then tackle the question of what sorts of machines
qualify as "intelligent" and thus able to produce these benefits.
First, a machine might be intelligent if it is "generative," that
is, if it independently generates instructional materials,
problems, and hints appropriate to the student. Second, an
intelligent machine might make use of modeling. It can model
either the student-user, assessing the current state of knowledge
and doing something based on the model, or it can use expert
models of a particular knowledge domain. Third, an intelligent
machine can perform "instructional modeling," that is, it can
tailor the instruction to the changing level of the learner.
Next, a machine is intelligent if it allows for "mixed
initiative" interaction: a smart machine can initiate needed
interaction with a student and can respond appropriately to the
student’s action. Finally, a machine might qualify as
"intelligent" if it is self-improving, and thus able to gradually
improve its own teaching performance. While eventually AI-based
tutors may have all of these abilities, currently most are much
more limited. The authors, however, offer case studies of
machines that are pushing the boundaries. They discuss the
Cardiac Tutor, and various mathematics and engineering
tutors. These applications are generative, they perform student
and expert modeling, and, in the case of the mathematics tutors,
are self-improving. The authors conclude their essay with an
evaluation of these intelligent tutoring systems. They write,
"Formal evaluation of student performance with intelligent tutors
has shown that tutors can achieve performance improvement that
approaches the improvement by one-on-one human tutoring compared
to classroom teaching" (p. 138).
Kenneth Koedinger discusses the famous Pump
Algebra Tutor (PAT) in his article "Cognitive Tutors as
Modeling Tools and Instructional Models." One focus of PAT is
helping students develop multiple representations of algebra
problems. With PAT, students can create tabular, graphical, and
symbolic models of problems. The cognitive tutor chimes in as
needed with just-in-time feedback. The tutor highlights possible
errors with outline text, errors that the software recognizes
by comparing student actions with its database of common errors.
The student can also request a "context sensitive" hint. The
tutor tracks the learner's progress on a problem and then, in a
fairly complex process, selects the appropriate advice:
The tutor chooses the hint message by using the
production system to identify the set of possible next strategic
decisions and ultimate external actions. It chooses among these
based on the student's current focus of activity, what tool and
interface objects the student has selected, the overall status of
the student's solution, and internal knowledge of relative
utility of alternative strategies. Successive levels of
assistance are provided in order to maximize the students'
opportunities to construct or generate knowledge of their own.
(p. 158)
PAT also supports learning through "knowledge tracing," which
tracks a student's problem solving skills, identifies areas of
difficulty, and presents problems in areas that have not yet been
mastered. PAT seems to work well: field studies show a 15-25
percent difference between classes that used PAT and
control groups. The most interesting part of the essay, however,
is the final section in which Koedinger discusses PAT as a vehicle
for teacher change. PAT offers a student-centered model of
learning that teachers, once they are exposed to it, often tend
to replicate in other areas of instruction. Koedinger observes
that, sometimes, teachers begin to borrow the problems, the
representational tools, and feedback strategies embedded in the
cognitive tutors. He writes, "Teachers began to use PAT problems
in their regular classes and began to experiment with more
student-centered learning by doing outside of the computer lab"
(p. 165). The computer becomes a pedagogical role model.
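Koedinger's chapter does not spell out the knowledge-tracing arithmetic, but knowledge tracing in this family of cognitive tutors is standardly described in the literature as a Bayesian update over a hidden mastered/unmastered state for each skill. The sketch below illustrates that standard formulation under that assumption; the parameter values are made up for the example and are not PAT's.

```python
# Illustrative Bayesian knowledge-tracing update (the standard published
# formulation; slip/guess/learn parameter values are invented for the example).

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(skill mastered) after one observed attempt at the skill."""
    if correct:
        evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        posterior = p_known * (1 - p_slip) / evidence
    else:
        evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
        posterior = p_known * p_slip / evidence
    # Account for the chance of learning on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.4  # prior probability the student has mastered the skill
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
# A tutor keeps presenting problems for a skill until the estimate crosses
# a mastery threshold, then moves on to unmastered skills.
print(round(p, 3))  # 0.975
```

Correct answers raise the mastery estimate (discounted by the chance of a lucky guess), errors lower it (discounted by the chance of a slip), which is what lets the tutor "present problems in areas that have not yet been mastered."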
Chapter 6 deals with using intelligent tutors in
reading education. In their essay, "Evaluating Tutors that
Listen: An Overview of Project LISTEN," Jack Mostow and Gregory
Aist discuss the development of computer tutors that listen as
students read and that offer corrections if they hear students
make mistakes, an impressive feat of technical engineering.
Sometimes the tutor helps with only problematic words: the tutor
highlights the word, asks the student to read the word, and then
echoes the correct word after the child's response. At other
times, the tutor rereads the sentence and lets the student try
again. The tutor then listens to the student and offers
additional feedback. Later versions of the tutors possess still
more options. Students can, for example, listen to themselves
reading a passage by selecting the "play back" option. The tutor
can decompose a word (sounding it out while highlighting the
appropriate letters) or offer rhymes for problematic words (e.g.,
if there is a problem with the word "wheat" the tutor may suggest
that it "sounds like feat"). Finally, the authors discuss how
the computer tutor allows them to do "invisible experiments."
The tutor can vary its tactics in different situations and keep
track of the results, thus creating some idea of what
interventions are most helpful without disrupting the learning
process with more intrusive research. The authors conclude their
essay by pointing to empirical evidence suggesting that the
listening tutor helps students read more effectively.
In the next contribution to the volume,
"Articulate Software for Science and Engineering Education,"
Kenneth Forbus attempts to show how articulate virtual
laboratories (AVLs) and active illustrations can increase
technical skill and general scientific literacy. Forbus's goal
is to increase students' knowledge of qualitative physics.
Qualitative physics deals with the conceptual underpinnings of
the natural world, or, as Forbus puts it, questions dealing with
"what happens, when does it happen, what affects it, and what
does it affect" (p. 236). One way to increase knowledge of
qualitative physics is to experiment with relationships, but in
some domains, like engineering thermodynamics, this is difficult
and dangerous. Virtual laboratories like Forbus's
CyclePad can perhaps solve this problem. In
CyclePad, engineering students can manipulate a variety of
parts (compressors, turbines, pumps, etc.) in a thermodynamic
system. As a student designs a thermodynamic system, the program
derives consequences from the design. In designing a system,
students can ask questions that link back to an explanation
database, ask to see diagrams of the systems they are developing,
or calculate the economic costs. Also included is an on-board
coach that works from a database of common contradictions,
providing analysis and advice. The goal of CyclePad is to
help students to get a clearer picture of the consequences of
design decisions in a realistic environment.
Active illustrations offer another technique for
helping students grasp relationships between scientific concepts. As
an example of an active illustration, Forbus discusses the
Evaporation Laboratory. In this active illustration,
students can see how changes in an environment, temperature, and
container can change the rate of the evaporation of water, thus
making more feasible experiments that would take a long time to
do in the real world, and that would require travel from one
climate to the next.
In Chapter 8, "Animated Pedagogical Agents in
Knowledge-Based Learning Environments," James C. Lester and his
group describe their work that attempts to exploit the emotional
connection we seem to feel to computerized characters and alter
egos. They write,
Because of the immediate and deep affinity that
children seem to develop for these characters, the potential
pedagogical benefits they provide are perhaps even exceeded by
their motivational benefits. By creating the illusion of life,
lifelike computer characters may significantly increase the time
that children seek to spend with educational software, and recent
advances in affordable graphic hardware are beginning to make the
widespread distribution of realtime animation technology a
reality. (p. 269)
While the stated goal of "significantly increas[ing] the time
children seek to spend with educational software" is certainly
odd, it reveals a faith that experience with educational software
can lead to learning. After discussing the reasons why animated
pedagogical agents may be effective in motivating students, the
authors discuss several of the agents they have developed
including Herman the Bug, who helps students design plants
and trees appropriate for given environments, Cosmo, a
"helpful antenna-bearing creature with a hint of a British
accent" that helps students think through computer network
routing mechanisms, and Whizlow, an explanatory lifelike
avatar that offers help in the domain of computer architecture.
These agents serve both to represent the student in the virtual
world and to offer advice when needed. When the student pauses
for a long time or makes a mistake, the animated agents step in
to offer appropriate help. The authors offer some details
relating to how the agent is programmed to act in a smooth and
helpful way, and also describe how the animated agents determine
whether students have made mistakes and how the agents determine
the correct response. Sometimes, a student misconception is
exposed when a model of the student's past behavior is compared
to a database of common novice misconceptions. Once the
misconception has been identified, the appropriate response to
that misconception is analyzed as it relates to the particular
student's current context, and an appropriate agent response is
generated. If the student continues having difficulty acting
successfully, the agent can model the appropriate actions. The
authors conclude their essay with a summary of some research
evaluating the effectiveness of animated pedagogical agents.
Chapter 9 is entitled "Exploiting Model-Based
Reasoning in Educational Systems: Illuminating the Learner Model
Problem." In this essay, Kees de Koning and Bert Bredeweg
attempt to apply what is known about model-based reasoning to the
design of educational software. The goal is to develop
applications with deep subject matter knowledge that can interact
intelligently with a learner. The interaction with the software
is individualized as the program builds a model of the learner
and compares it to a normative model in the subject matter
domain; where the behavior deviates from the norm, the software
develops an appropriate diagnosis and applies an appropriate
pedagogical intervention. Now, in most other intelligent
learning applications, the diagnosis of the problem is based on
elaborate, detailed, and expensive "bug catalogues," which try to
take all known ways in which people make mistakes in a given
domain, and then compare the items in the catalogue to the
learner's mistake. In contrast, de Koning and Bredeweg want to
base the diagnosis on models that seek to "identify those sets of
primitive reasoning steps whose correct behavior is inconsistent
with the observations" (p. 308). There is no attempt to build a
learner's mental fault model here; the diagnosis is only at the
level of problem solving behavior. After discussing in some
technical detail how the subject matter model is generated (the
primitive steps of reasoning are generated on the basis of
qualitative simulation models) and how the behavior diagnosis is
created, the authors then describe their STARlight
educational application which asks students to predict what will
happen in an example situation from physics. When learners make
a false prediction, the software probes their knowledge with
several multiple choice questions designed to find out where the
exact problem is. Is this probing effective? During a small
trial run, the program was often successful in finding the error
and correcting it (or, more often, the learners realized their
own mistakes during the probing process). It could be said,
then, that in this application, the diagnostic instrument
fruitfully drives the dialogue with the learner. The "help"
option does not stand apart from the application, only to be
called upon in rare circumstances; rather, the application is
built around finding and correcting errors.
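The contrast with bug catalogues can be made concrete with a toy sketch: execute each primitive reasoning step correctly and flag the steps whose correct output disagrees with the learner's recorded answers, rather than matching the answers against a library of known errors. The steps, domain, and values below are invented for illustration and are not taken from STARlight.

```python
# Toy sketch of behavior-level diagnosis in the style described here: compare
# each correctly executed primitive step against the learner's trace. All
# step names and example values are invented assumptions.

def diagnose(steps, learner_trace, initial):
    """Return names of primitive steps inconsistent with the learner's trace."""
    suspects = []
    value = initial
    for name, apply_correctly in steps:
        value = apply_correctly(value)
        observed = learner_trace.get(name)
        if observed is not None and observed != value:
            suspects.append(name)
            value = observed  # continue from what the learner actually had
    return suspects

# Invented example: predicting heat flow between two contacting objects.
steps = [
    ("compute_difference", lambda s: s["t_hot"] - s["t_cold"]),
    ("determine_direction", lambda d: "hot_to_cold" if d > 0 else "cold_to_hot"),
]
trace = {"compute_difference": 60, "determine_direction": "cold_to_hot"}
print(diagnose(steps, trace, {"t_hot": 80, "t_cold": 20}))
# ['determine_direction']: the learner subtracted correctly but reversed the
# direction, so only that primitive step is flagged for follow-up probing.
```

The flagged steps then give the system exactly the places to aim its multiple choice probes, which is what it means for the diagnostic instrument to drive the dialogue.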
The next two chapters are different from the rest
of the collection in that they do not discuss particular
technologies at all; rather, they discuss ideas about learning
that may be included in the development of technological
artifacts. The first of these two articles, "The Case for
Considering Cultural Entailments and Genres of Attachment in the
Design of Educational Technologies," by Lisa M. Bouillion and
Louis M. Gomez, discusses cultural factors that must be
considered when technologies actually enter
classrooms. They argue that,
as a function of design, artifacts come with a set
of cultural entailments, representing the goals, expectations,
histories, values, and practices associated with a particular
community. Left unquestioned these entailments may clash with
the entailments of a community targeted for use of the
innovation. (p. 347)
As an example of this clash, they investigate the case of a
particular educational innovation designed to break down the
walls between the school and society. This innovation attempted
to involve students in "real world" problems faced by community
institutions. Community groups were approached and asked what
real help they could receive from children, and the students were
then to be involved in filling this need. In a suburban school,
the cultural entailments of this innovation, which included
preparing students to contribute to an established community,
meshed well with the culture of the students. The students were
able to contribute to an awareness campaign on behalf of United
Way. In an inner city school, however, the entailments of the
innovation did not mesh with those of a more suspicious minority
community. In this community, the innovational entailments would
need to change from fitting in to the established
community to changing it. For this reason, the inner-city
teachers suggested that their students work on their own
projects, and this eventually became an activist movement to
clean part of the Chicago River. The innovation was, then, molded
to the requirements of the community. The authors argue that this
negotiation between educational innovation and cultural
entailments must always take place, a fact that must be
incorporated into the design of educational artifacts and
technologies. It would have been interesting to see the authors
use this framework to examine the technologies described in the
rest of the book.
The second article to discuss considerations that
should be used in designing technologies is that of Paul J.
Feltovich, Richard L. Coulson, and Rand J. Spiro: "Learners'
(Mis)Understanding of Important and Difficult Concepts: A
Challenge to Smart Machines in Education." In this contribution
to the volume, the authors explore the question of why some
misconceptions about the world are so hard to change, while
others are not. Entrenched or "stable" misconceptions, they
argue, tend to be simpler and more easily understood than their
more correct counterparts. The elements that make up an
entrenched misconception tend to fit together nicely and in such
a way as to provide mutual support. An entrenched
misconception "is also more concrete, carries with it more
salient examples and analogies, and is more congruent with the
intuition" (p. 366). The authors take a notorious example from
medical education relating to blood circulation, show how the
example manifests the features of being a stable misconception,
and construct a list of "knowledge shields" that allow students
to evade changing their minds when faced with arguments showing
the old idea to be false. The authors end their article by
arguing that the content most filled with these sorts of
misconceptions lies in those areas of knowledge that are "the
continuous, the simultaneous, the interactive, [and] the
conditional" (p. 375). The authors think that this type of
subject matter tends to be neglected by educational technologies,
including, with a few exceptions, the smart machines described in
the present volume.
Kirstie L. Bellman's contribution to the book,
"Building the Right Stuff: Some Reflections on the CAETI Program
and the Challenge of Educational Technology," is a retrospective
look at the Computer-Aided Education and Training Initiative
(CAETI), one of the most important R&D programs in
educational technology ever sponsored by the government. In the
article, Bellman gives a brief history of CAETI research. She
also discusses the program's technology insertion experiments in
the Department of Defense's K-12 schools for overseas
dependents. The development of technology for education is
difficult, she argues, because it requires competence in a wide
variety of fields, for example, in academic subject matter,
learning processes, cultural and small group processes, and
engineering and product design. Educational technology
implementation is also difficult because it requires a "tedious
conversation" among parents, students, and all those who
develop, use, fund, and research educational technology.
Bellman's essay, I believe, does a fine job giving the reader the
flavor of these conversations. CAETI's tedious conversations
took place within and among several clusters of projects. CAETI
aimed at developing (1) "smart navigators to access and integrate
resources" (SNAIR) to help students retrieve and use appropriate
online information, (2) "collaborative applications for
project-based educational resources" (CAPER) to create MUD-like
spaces for intellectual interaction, and (3) "expert associates
to guide individualized learning" (EAGIL) to devise intelligent
tutors, some of which are described in other chapters of the book
(e.g., BELVEDERE and the PUMP algebra tutors).
The initial idea was that the different participants, each
working on a separate project, would eventually work together
and combine their applications to create truly complex and subtle
applications. It was not feasible, however, given the vast
differences among the stakeholders and levels of commitment, that
this cooperation be forced from the top down. Thus, it was only
required of the participants that they make their programs open
(that they "grow ears and a nervous system"). With this minimal
compliance standard, it was hoped that curiosity among
participants would take over, an organizational strategy
that has apparently fostered many fruitful collaborations.
Indeed, after discussing the difficult issues involved in
evaluating the products CAETI eventually produced, Bellman spells
out what she thinks were CAETI's successes and failures. With
regard to the successes, Bellman mentions the collaboration that
was fostered. She also counts as a success many aspects of
CAETI's technology insertion program:
Many of our decisions were sound: don't throw the
technology over the wall, use methods like TIP [a technology
insertion plan which works with the teacher to combine the
technology with the teacher's curriculum objectives and
instruction and assessment strategies], sponsor a diversity of
subjects and grades, and use technology to support a variety of
learner and teacher styles. We also made good decisions with
regard to teacher training and involvement. We worked with the
unions to make sure that the teachers' foray into technology was a
safe experiment. We tried to develop in each school a critical
mass, build up mentoring and advocates. . . . We also made the
decision to pay the teachers in at least some fashion for their
extra time and costs to train. We also made the good decision to
start with teachers who were interested and enthusiastic. (p.
408)
With regard to the things that should have been done
differently, Bellman writes that CAETI did not take full
advantage of students in both running the technology
infrastructure and having input on developing their own
educational experience. Bellman also regrets not taking fuller
advantage of wireless technology (including wearable and mobile
devices like palm pilots). She ends her essay by pointing to
the challenges that will face subsequent initiatives aimed at
developing educational technology: continuing the momentum from
one program to the next, helping teachers become active
researchers in the model of medical personnel, and facing the
ever-present questions of the aims of education (one of the
biggest problems in developing technology is deciding what,
exactly, should be the point of the technology).
Comments and Questions
The tone of this book is generally more guarded
than in many books about educational technology. True, the
authors are all optimistic about the possibility of technology
revolutionizing education, but there is little of the inflated
rhetoric that can so often accompany books about educational
technology. The authors are generally aware of the difficulties
that come when new technology enters the classroom, and many of
the essays are stories of the attempts to overcome these
obstacles. The tone of the articles thus tends to be somewhat
"older and wiser" than that of other books in the genre, and this
experience shows. For example, many of the authors admit that
people who develop smart machines should listen to teachers,
students, and parents as much as they need to talk (see pages
163, 231, 408). Compare this to the rhetoric of other technology
advocates, like Don Tapscott (1999), who see teachers as more of
an impediment than anything to the advance of the technological
steamroller. Effective technology insertion is much more than
just throwing the new machines "over the wall" into a bewildered
school that never asked for them. The authors seem to realize
this.
The authors also seem generally motivated to
provide real answers to the question of why their
technologies are needed. A skeptic can and should ask, with
every technological innovation in education, if the new
innovation is really necessary, or if it is just more technology
for technology's sake. As school closets fill up with
barely-used and now obsolete computers, these questions press
even harder: Is a new educational technology really filling a
need, or is it just a way for developers to express their
technical virtuosity or for corporations to line their pockets?
Why is the technology necessary, and what can it do that a
teacher (or even a relatively untrained TA) cannot? Many of
the authors try, with varying degrees of success, to answer these
questions.
Some have tried to justify advanced technology in
education by simply invoking the "economics of scale": greater
numbers of students can be served through these smart machines
for a lesser cost (i.e., it is hoped that technology will reduce
educational labor costs). Invoking the economics of scale,
however, is often suspect when it comes to educational
technology. The ever changing infrastructure, the support staff,
and the applications themselves are not cheap; far from it.
Often, by the time an institution reaches the point at which it could
start saving money on a technological implementation (the so-called
"cross-over point"), both the hardware and software have become
obsolete. In this and in many other ways, the rapid development
of technology is the worst enemy of actually implementing
technology in education. There is no time to perfect it, nor is
there any time to save money with it, before the next new thing
comes along.
If technology rarely delivers on its promise to
save money relative to human teachers, perhaps there are pedagogical
benefits that make the technologies worthwhile in spite of the
cost. And indeed, there are some powerful pedagogical arguments
presented for smart machines in the book. For example, computers
allow for the "invisible experiments" of the type described by
Mostow and Aist, allowing an educator to carefully track
pedagogical effectiveness in inconspicuous ways beyond what any
teacher could previously do. This, together with the idea of
"automated experiments," could revolutionize educational
research, while at the same time raising ethical concerns about
the constant surveillance such technology makes possible.
Another example is that offered by de Koning and Bredeweg, whose
application traces errors in a student's problem-solving process
back to the basic reasoning steps that were not applied
correctly. This process is so intricate, they argue, that it would be
"difficult or even impossible for human teachers" (p. 326). This
claim is debatable, of course, but if true it would serve as a
justification for the smart machine in its domain. In addition,
some of the contributors argue that their smart machines create a
world in which students are not afraid to fail (see, for example,
p. 59). The computer does not laugh at you, look exasperated, or
make snide remarks. The very social decontextualization,
bemoaned by some technological critics, makes possible an
environment where it is now tolerable to learn from mistakes.
Granted, the critic could still ask when students would learn to
deal with situations where social pressure exists to perform. As
Quintilian pointed out long ago, schools are in a public setting
because we want students to learn how to act in a public sphere,
with all the pressure to perform: “let him who . . . must
live among the greatest publicity, and in the full daylight of
public affairs, accustom himself, from his boyhood, not to be
ashamed at the sight of men” (1965, p. 22). But this
ability to let students make mistakes freely, away from the
threat of social scorn, could still turn out to be one of the
most important educational possibilities these new smart machines
offer.
This book shows how the new generation of
educational technology has made the student-computer interaction
much richer. There is much more to these programs than simply
typing in a numerical answer and being told whether the answer is
right or wrong. I must admit, though, I still found some of the
interaction to be quite shallow. The applications described in
the book can handle only a fixed set of pre-set responses. If
a student says something off-the-wall or otherwise unexpected,
the programs would be unable to respond.
When the simulations and tutors described in the book are
trying to teach a clearly rule-governed procedure, they do quite
well. But there is still no smart machine, apparently, that can
help students to think creatively, or otherwise outside of
rule-governed frameworks. A tutor can be developed to help
students solve math problems, but not to help them become
artists. Science, which can be alternately rule-governed and
creative, generally reduces, in the hands of these smart machines,
to a merely rule-governed procedure. None of the scientific
simulations allow students to develop their own research
questions in depth, and even in the research questions that are
given, there seems to be little room to develop one's own
hypothesis if it is not a pre-set option.
This problem of interaction is compounded because
many of the applications described in the book rely on comparing
expert models with student-generated models. This method of
using expert models reveals both the brilliance and baggage of
bringing AI into educational technology. Success in such
applications is defined as living up to the expert model. While much
educational mileage can be traveled on this road, it does not
lend itself to developing creative, revolutionary thought. If a
thought is not within the expert model, it is deemed a failure.
The problem, then, even in these smart machines, is that students
are treated as rule-following machines: the better they follow
the rules, the more they will replicate what an expert would do,
and the more successful their computer interactions
become.
In 1986, Michael Streibel argued that computerized
tutorials demand that human beings act as data-based,
rule-following, symbol-manipulating information processors. By
demanding pre-set responses, creating a narrow model of the
learner (as inputter of data and not as a full human being), and
comparing this model to expert standards, the educational
discourse contracts too tightly. He writes:
Uniformity in education is enforced not only
because the instructional system attempts to shape a uniform
product (i.e., pre-specified learning outcomes) but also because
the very conceptualization of the individual places semantic and
syntactical constraints on acceptable language for the discussion
of human beings. This in turn makes it impossible to express and
legitimize other conceptions of human beings, educational goals,
and methods outside of the technological framework. (pp. 150-151)
I do not see that, some 15 years later, we have come any
closer to making the computer/human discourse any richer.
Although it is now dressed up in fancier clothes, your AI date
still doesn't know how to carry on a very good conversation.
I have other worries about many of the
applications described in the book. One of the best things about
simulations, for example, is that they can speed up the
natural processes of the world. The learner does
not have to wait for days or travel vast distances to test the
difference in the evaporation rates in Forbus's Evaporation
Laboratory. The simulations break down the walls of time and
space and permit an inquiry that would otherwise be impossible.
It is important to realize, however, what this speeded-up world
does not teach about the process of inquiry. It does not teach
students to be patient, for example, and to wait for data that
may be difficult and painstaking to accumulate.
Moreover, in their fear of student boredom, the
simulations often become interlaced with flash and glamour. The
science simulation Wolf, for instance, discussed by Schank
and Neaman, contains exciting sequences of scientists leaning
from helicopters, tranquilizing and examining a large wolf. It
does not have scenes of the scientist reading quietly in the
library for hours on end, or playing the political cards
necessary to get government funding. The exciting and glamorous
tend to be emphasized, and thus the simulation may give a false
impression about how inquiry actually proceeds. Some of the
authors write about trying to imitate the excitement and passion
of video games (see, for example, pages 97 and 266) in
their educational applications. I worry that this is being a bit
disingenuous: real inquiry is often quite humdrum and
labored.
Although I worry about some of these points, most
can be answered by simply adding other types of experiences
("real-life" dialogue to supplement the shallow computer
interaction and “real-world” inquiry that requires
the development of scholarly patience) to those of the computer
tutors and simulations. To the extent that these additions are
made, my worries will largely be unfounded.
One especially impressive aspect of the book is
the way it reveals an important reason for taking educational
technology seriously, namely, developing educational technology
demands that we continually rethink educational questions. As we
develop and evaluate educational technologies, old questions are
infused with new life: What is the relationship between a teacher
and student? What should happen when students make an error?
What is the best way to teach math? How do students learn? What
is the purpose of education? As the authors wrestle with their
technologies, they do some fine thinking about questions such as
these. For example, some of the pure educational thinking in this
book relating to student mistakes and misconceptions was
exceptional (see Schank and Neaman, pp. 59-66, de Koning and
Bredeweg, pp. 319-326, Feltovich, Coulson, and Spiro, pp.
349-375). Biswas and his group present a wonderful discussion of
how learning-by-teaching may, or may not, be an effective
strategy (pp. 79-84), and Lester's group present some interesting
ideas about how we may learn through alter-egos (pp. 270-272).
Perhaps this is the real legacy of the smart machines in
education: we get smarter about education as we try to develop
them.
References
Quintilian. (1965). On the Early Education of the
Citizen-Orator. Indianapolis: The Bobbs-Merrill Company.
Streibel, M. (1986). A Critical Analysis of the Use of
Computers in Education. Education Communications and
Technology Journal, 34(3), 150.
Tapscott, D. (1999). Growing up Digital: The Rise of the
Net Generation. New York: McGraw-Hill.
About the Reviewer
Bryan R. Warnick is a doctoral student in Philosophy of
Education at the University of Illinois at Urbana-Champaign. His
research interests include philosophy of technology, ethics, and
philosophy of mind as these disciplines relate to educational
theory and practice. He holds a B.S. degree in Philosophy and
Psychology from the University of Utah, and an M.A. in Philosophy
of Education from the University of Illinois at
Urbana-Champaign.